How can I copy files between pods or between nodes in a Kubernetes cluster?
Is this possible inside the Kubernetes cluster?
All examples I've found copy from a local disk to a pod or vice versa.
Or is the only option to copy from node to node, for example over SSH with scp or other utilities?
Thanks for the answers.
It's not possible to copy directly from one pod to another. You'd need to use
kubectl cp to copy the file locally, then copy it back to the other pod:

kubectl cp <source-pod>:/tmp/test /tmp/test
kubectl cp /tmp/test <target-pod>:/tmp/test
If you are trying to share files between pods, and only one pod needs write access, you probably want to mount a read-only volume on multiple pods (see the sketch below), or use an object store like S3. Copying files to and from pods shouldn't be something you do often; it's an anti-pattern.
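For the shared-volume approach, a minimal sketch (the PVC name shared-data is hypothetical, and ReadOnlyMany requires a storage backend that supports it, e.g. NFS):

apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
      readOnly: true          # reader pods mount the volume read-only
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-data  # hypothetical PVC; must support ReadOnlyMany for multiple pods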
See also questions close to this topic
Nginx Ingress Controller Installation Error, "dial tcp 10.96.0.1:443: i/o timeout"
I'm trying to set up a Kubernetes cluster with kubeadm and Vagrant. While installing the nginx ingress controller I hit an error: a timeout when the pod tries to retrieve the ConfigMap through the Kubernetes API. I have looked around and tried the suggested solutions, still no luck, which is why I'm writing this post.
I'm using Vagrant to set up 2 nodes with the ubuntu/xenial image.
kmaster
-------------------------------------------
network:
  Adapter1: NAT
  Adapter2: HostOnly-network, IP: 192.168.2.71

kworker1
-------------------------------------------
network:
  Adapter1: NAT
  Adapter2: HostOnly-network, IP: 192.168.2.72
I followed the kubeadm guide to set up the cluster.
My cluster init command was as follows:
kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.2.71
and applied the Calico network plugin:

kubectl apply -f \
  https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f \
  https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
(Calico is the plugin I currently have installed successfully; I will write another post about the flannel plugin, with which pods were unable to access the service.)
I'm using Helm to install the ingress controller, following the tutorial at https://kubernetes.github.io/ingress-nginx/deploy/
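For reference, the install command was roughly the following (a sketch from the Helm 2-era stable repo; the release name and namespace are my own choices and may differ from the current tutorial):

helm install stable/nginx-ingress --name nginx-ingress --namespace ingress-nginx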
This is the error that occurred once I ran the Helm deploy command and described the pod (the "dial tcp 10.96.0.1:443: i/o timeout" from the title).
I'd appreciate it if someone could help. As far as I can tell, the cause is that the pod is unable to access the Kubernetes API, but shouldn't that already be enabled by Kubernetes by default?
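In the meantime, this is how I've been checking API reachability from inside a pod (a diagnostic sketch; 10.96.0.1:443 is the default kubernetes Service ClusterIP, and curlimages/curl is just a convenient image choice):

kubectl run api-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -k -m 5 https://10.96.0.1:443/version

A working cluster returns the apiserver version JSON; hanging here reproduces the i/o timeout above.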
The official Kubernetes website suggests two other solutions:

1) Run kube-proxy as a sidecar. I'm still new to Kubernetes and I'm looking for an example of how to do this; I'd appreciate it if someone could provide one.

2) Use client-go. I'm very confused by this one: it seems to involve using the go command to pull a Go script, and I have no clue how that works with Kubernetes pods.
GKE with gcloud sql postgres: the sidecar proxy setup does not work
I am trying to set up a Node.js app on GKE with a Cloud SQL Postgres database using the sidecar proxy pattern. I am following the docs but can't get it working. The proxy does not seem to be able to start (the app container does start). I have no idea why the proxy container can't start, and also no idea how to debug this (e.g. how do I get an error message?).
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: [base64_username]
  password: [base64_password]
kubectl get secrets:
NAME                  TYPE                                  DATA   AGE
default-token-tbgsv   kubernetes.io/service-account-token   3      5d
mysecret              Opaque                                2      7h
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: gcr.io/myproject/firstapp:v2
        ports:
        - containerPort: 8080
        env:
        - name: POSTGRES_DB_HOST
          value: 127.0.0.1:5432
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy", "-instances=myproject:europe-west4:databasename=tcp:5432", "-credential_file=/secrets/cloudsql/mysecret.json"]
        securityContext:
          runAsUser: 2
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: mysecret
kubectl create -f ./kubernetes/app-deployment.json:
kubectl get deployments:
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
myapp   1         1         1            0           5s
kubectl get pods:
NAME                     READY   STATUS             RESTARTS   AGE
myapp-5bc965f688-5rxwp   1/2     CrashLoopBackOff   1          10s
kubectl describe pod/myapp-5bc955f688-5rxwp -n default:
Name:               myapp-5bc955f688-5rxwp
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-standard-cluster-1-default-pool-1ec52705-186n/10.164.0.4
Start Time:         Sat, 15 Dec 2018 21:46:03 +0100
Labels:             app=myapp
                    pod-template-hash=1675219244
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container app; cpu request for container cloudsql-proxy
Status:             Running
IP:                 10.44.1.9
Controlled By:      ReplicaSet/myapp-5bc965f688
Containers:
  app:
    Container ID:   docker://d3ba7ff9c581534a4d55a5baef2d020413643e0c2361555eac6beba91b38b120
    Image:          gcr.io/myproject/firstapp:v2
    Image ID:       docker-pullable://gcr.io/myproject/firstapp@sha256:80168b43e3d0cce6d3beda6c3d1c679cdc42e88b0b918e225e7679252a59a73b
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 15 Dec 2018 21:46:04 +0100
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:
      POSTGRES_DB_HOST:      127.0.0.1:5432
      POSTGRES_DB_USER:      <set to the key 'username' in secret 'mysecret'>  Optional: false
      POSTGRES_DB_PASSWORD:  <set to the key 'password' in secret 'mysecret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
  cloudsql-proxy:
    Container ID:  docker://96e2ed0de8fca21ecd51462993b7083bec2a31f6000bc2136c85842daf17435d
    Image:         gcr.io/cloudsql-docker/gce-proxy:1.11
    Image ID:      docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
    Port:          <none>
    Host Port:     <none>
    Command:
      /cloud_sql_proxy
      -instances=myproject:europe-west4:databasename=tcp:5432
      -credential_file=/secrets/cloudsql/mysecret.json
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 15 Dec 2018 22:43:37 +0100
      Finished:     Sat, 15 Dec 2018 22:43:37 +0100
    Ready:          False
    Restart Count:  16
    Requests:
      cpu:  100m
    Environment:  <none>
    Mounts:
      /secrets/cloudsql from cloudsql-instance-credentials (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  cloudsql-instance-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mysecret
    Optional:    false
  default-token-tbgsv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tbgsv
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                                                        Message
  ----     ------     ----                   ----                                                        -------
  Normal   Scheduled  59m                    default-scheduler                                           Successfully assigned default/myapp-5bc955f688-5rxwp to gke-standard-cluster-1-default-pool-1ec52705-186n
  Normal   Pulled     59m                    kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Container image "gcr.io/myproject/firstapp:v2" already present on machine
  Normal   Created    59m                    kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Created container
  Normal   Started    59m                    kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Started container
  Normal   Started    59m (x4 over 59m)      kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Started container
  Normal   Pulled     58m (x5 over 59m)      kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
  Normal   Created    58m (x5 over 59m)      kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Created container
  Warning  BackOff    4m46s (x252 over 59m)  kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n  Back-off restarting failed container
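Since my main question was how to get an error message at all: the per-container logs are the closest I've found (a sketch; -c selects the sidecar container and --previous shows output from the last crashed run):

kubectl logs myapp-5bc955f688-5rxwp -c cloudsql-proxy
kubectl logs myapp-5bc955f688-5rxwp -c cloudsql-proxy --previous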
Why http and not https? kubectl cluster-info "kubernetes master running"
I've googled for hours and I can't seem to find the answer. This might be a simple question; here it is:

I have a big script to start up Kubernetes. When everything is up and running and I run "kubectl cluster-info", I get "Kubernetes master is running at http://...". Every example I read online shows "https://...".
My question is: what file/YAML/property/etc. makes the Kubernetes master serve HTTP vs HTTPS?

I have both ports (80/443) defined in my kube-apiserver.yaml file. Do I have to get things working with --insecure-port=0 in the apiserver, or can the master serve HTTPS without that?
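For context, these are the apiserver flags I understand to control this (a sketch based on my kube-apiserver.yaml; --insecure-port and --insecure-bind-address are deprecated in newer Kubernetes releases, so check your version):

--secure-port=443                    # the HTTPS listener that cluster-info should report
--insecure-port=0                    # 0 disables the plain-HTTP listener entirely
--insecure-bind-address=127.0.0.1    # if the HTTP listener is kept, bind it to localhost only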
How does SGE (Sun Grid Engine) Monitor VMEM (Virtual Memory) Usage for Jobs?
SGE enables users to set limits on virtual memory/vmem usage (e.g. the h_vmem argument for a job submission).
But how exactly does SGE monitor vmem usage and send a kill signal if the limit is exceeded? Does it poll at some frequency? Does it add up some kernel-provided value across a process tree? How does this work mechanistically? Even an incomplete explanation or a simple pointer to source code would be greatly appreciated.
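For concreteness, this is the kind of limit I mean (h_vmem is a standard SGE resource; whether it is enforced per slot or per job depends on the queue configuration):

qsub -l h_vmem=4G myjob.sh
# if the job's virtual memory usage exceeds 4G, SGE should kill it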
Apache Ignite. How to create a timer task (cron-based) without executing it on every node?

How do I create a timer task (cron-based) that does not execute on each node, i.e. one execution per time point in the timetable, using Apache Ignite?
I have a cluster consisting of 2 nodes and an application (war) with a timer task. In non-cluster mode the application works well, but it contains a timer task (e.g. run every 5 minutes) that works with shared resources.

I tried to do this, but IgniteScheduler#scheduleLocal deploys and runs the task on each node when both application instances are started (each instance tries to start the same timer task).

I assume Ignite has a mechanism to deploy a task with an ID for it...
iframe is not loading in cluster environment
In our application we use an iframe in a cluster environment. The iframe loads and works fine on an individual node, but the same code hits the exception below in the cluster environment.

We use this line to load the iframe:

document.getElementById("id").src = "/iframe/url"

The JS exception in the cluster environment is:

VM48:1 Uncaught SyntaxError: Unexpected token u in JSON at position 0
    at JSON.parse (<anonymous>)
    at Function.n.parseJSON (jquery-1.12.4.min.js?v=188.8.131.52:4)
    at setupValidationsForElement (merchant-html.js?v=184.108.40.206:417)
    at SetupValidation (merchant-html.js?v=220.127.116.11:382)
    at HTMLDocument.<anonymous> (merchant-html.js?v=18.104.22.168:47)
    at i (jquery-1.12.4.min.js?v=22.214.171.124:2)
    at Object.fireWith [as resolveWith] (jquery-1.12.4.min.js?v=126.96.36.199:2)
    at Function.ready (jquery-1.12.4.min.js?v=188.8.131.52:2)
    at HTMLDocument.K (jquery-1.12.4.min.js?v=184.108.40.206:2)
Benefit to placing Database and Application in same Kubernetes pod
I know just the bare minimum of Kubernetes. However, I wanted to know if there would be any benefit in running 2 containers in a single pod:
- 1 Container running the application (e.g. a NodeJS app)
- 1 Container running the corresponding local database (e.g. a PouchDB database)
Would this increase performance, or would the downsides of coupling the two containers outweigh any benefits?
How to remove LimitRange from default namespace in kubernetes?
For one of my requirements, I created a LimitRange in my default namespace using the YAML file below:
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-min-max-demo-lr1
spec:
  limits:
  - max:
      memory: 5Gi
    min:
      memory: 900Mi
    type: Container
Now I need to remove this LimitRange from the default namespace in Kubernetes. How do I do that?
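My guess, by analogy with deleting other resource types (untested):

kubectl delete limitrange mem-min-max-demo-lr1 --namespace=default
# verify it is gone:
kubectl get limitrange --namespace=default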
Copy command crashes pod startup on Kubernetes

I'm new to Kubernetes and I'm trying to understand the commands. Basically what I'm trying to do is create a Tomcat deployment, add an NFS mount, and after that copy the war file to the Tomcat webapps directory.

But it's failing:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp11-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp11
    spec:
      volumes:
      - name: www-persistent-storage
        persistentVolumeClaim:
          claimName: claim-webapp11
      containers:
      - name: webapp11-pod
        image: tomcat:8.0
        volumeMounts:
        - name: www-persistent-storage
          mountPath: /apps/build
        command: ["sh","-c","cp /apps/build/v1/sample.war /usr/local/tomcat/webapps"]
        ports:
        - containerPort: 8080
As far as I understand, when an image has its own command, like catalina.sh run in the Tomcat image, it conflicts with a command supplied from Kubernetes.

Is that correct?

Is there any way to run a command after the pod starts?
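Two sketches of what I'm considering, based on the container-lifecycle docs (untested against this deployment): either chain the copy before Tomcat's normal startup command, or keep the image's entrypoint and do the copy in a postStart hook.

# Option 1: chain the copy, then start Tomcat explicitly
command: ["sh", "-c", "cp /apps/build/v1/sample.war /usr/local/tomcat/webapps/ && catalina.sh run"]

# Option 2: leave the image's command intact and copy in a postStart hook
lifecycle:
  postStart:
    exec:
      command: ["sh", "-c", "cp /apps/build/v1/sample.war /usr/local/tomcat/webapps/"]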