How can I copy files between pods or between nodes in a Kubernetes cluster?
Is this possible from inside the cluster?
All examples I've found copy from a local disk to a pod or vice versa.
Or is the only option to copy from node to node, for example over SSH with scp or other utilities?
Thanks for the answers.
It's not possible to copy directly from one pod to another with kubectl cp. You'd need to copy the file locally first, then copy it back up to the other pod:

kubectl cp <pod>:/tmp/test /tmp/test
kubectl cp /tmp/test <pod>:/tmp/test
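If you'd rather avoid the intermediate local file, you can also stream the file between pods through kubectl exec. This is a sketch that assumes a tar binary exists in both container images (kubectl cp itself depends on tar):

# stream /tmp/test from one pod to another without writing a local copy
kubectl exec <source-pod> -- tar cf - /tmp/test | kubectl exec -i <dest-pod> -- tar xf - -C /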
If you are trying to share files between pods, and only one pod needs write access, you probably want to mount a read-only volume on multiple pods, or use an object store like S3. Copying files to and from pods really shouldn't be something you're doing often; that's an anti-pattern.
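As a minimal sketch of the read-only volume approach (the pod name, image, and PVC are assumptions, and the underlying storage must support being mounted by multiple pods):

apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
  - name: app
    image: alpine
    volumeMounts:
    - name: shared
      mountPath: /data
      readOnly: true              # each reader mounts the files read-only
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-files     # hypothetical PVC populated by the single writer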
See also questions close to this topic
AzureFunctions AppInsights Logging does not work in Azure AKS
I've been using Azure Functions (non-static, with proper DI) for a short while now. I recently added Application Insights by using the APPINSIGHTS_INSTRUMENTATIONKEY key. When debugging locally it all works fine.
It also works fine if I publish the function and run it locally on Docker using the following Dockerfile.
FROM mcr.microsoft.com/azure-functions/dotnet:2.0-alpine
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY ./publish/ /home/site/wwwroot
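For reference, this is roughly how I build and run it locally (the image tag matches the deployment below; the key is passed by hand):

docker build -t mytestfunction:1.1 .
docker run -p 5000:5000 -e APPINSIGHTS_INSTRUMENTATIONKEY=<your-key> mytestfunction:1.1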
However, if I go a step further and try to deploy it to Kubernetes (in my case Azure AKS) using the following YAML files, the function starts fine, and the log output shows the Application Insights parameter being loaded. However, it does not log to Insights.
apiVersion: v1
kind: Secret
metadata:
  name: mytestfunction-secrets
  namespace: "testfunction"
type: Opaque
data:
  ApplicationInsights: YTljOTA4ZDgtMTkyZC00ODJjLTkwNmUtMTI2OTQ3OGZhYjZmCg==
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
      - image: mytestfunction:1.1
        name: mytestfunction
        ports:
        - containerPort: 5000
        imagePullPolicy: Always
        env:
        - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
          value: 'true'
        - name: ASPNETCORE_ENVIRONMENT
          value: PRODUCTION
        - name: ASPNETCORE_URLS
          value: http://+:5000
        - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
          value: '5'
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          valueFrom:
            secretKeyRef:
              name: mytestfunction-secrets
              key: ApplicationInsights
      imagePullSecrets:
      - name: imagepullsecrets
However, when I altered the YAML so that the key is not stored as a secret, it did work.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mytestfunction
  namespace: "testfunction"
  labels:
    app: mytestfunction
spec:
  replicas: 1
  template:
    metadata:
      namespace: "testfunction"
      labels:
        app: mytestfunction
    spec:
      containers:
      - image: mytestfunction:1.1
        name: mytestfunction
        ports:
        - containerPort: 5000
        imagePullPolicy: Always
        env:
        - name: AzureFunctionsJobHost__Logging__Console__IsEnabled
          value: 'true'
        - name: ASPNETCORE_ENVIRONMENT
          value: PRODUCTION
        - name: ASPNETCORE_URLS
          value: http://+:5000
        - name: WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT
          value: '5'
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          value: a9c908d8-192d-482c-906e-1269478fab6f
      imagePullSecrets:
      - name: imagepullsecrets
I'm kind of surprised that this difference in notation causes Azure Functions not to log to Insights. My impression was that the running application does not care or know whether the value came from a secret or from a plain value in Kubernetes. Even though it might be debatable whether the instrumentation key is a secret or not, I would prefer to store it as one. Does anyone have an idea why this might happen?
# Not working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  valueFrom:
    secretKeyRef:
      name: mytestfunction-secrets
      key: ApplicationInsights

# Working
- name: APPINSIGHTS_INSTRUMENTATIONKEY
  value: a9c908d8-192d-482c-906e-1269478fab6f
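One detail worth checking (an observation, easy to verify): the base64 sample above ends in Cg==, which decodes to a trailing newline, so the secret-based value may not be byte-for-byte identical to the plain value. The decoded bytes can be inspected like this:

# decode the secret exactly as the container receives it
kubectl get secret mytestfunction-secrets -n testfunction \
  -o jsonpath='{.data.ApplicationInsights}' | base64 -d | od -c
# a trailing \n here (e.g. from creating the secret with plain echo
# instead of echo -n) would corrupt the instrumentation key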
These are the versions I'm using:
- Azure Functions Core Tools (2.4.419)
- Function Runtime Version: 2.0.12332.0
- Azure AKS: 1.12.x
Also, the instrumentation key above is a fake one for sharing purposes, not an actual one.
Kubernetes VPA: issue with targetRef selector + minimal resources
I have 2 issues:
- my Vertical Pod Autoscaler doesn't follow my minimal resource policy:
Spec:
  Resource Policy:
    Container Policies:
      Min Allowed:
        Cpu:     50m        <==== min allowed for CPU
        Memory:  75Mi
      Mode:      auto
  Target Ref:
    API Version:  extensions/v1beta1
    Kind:         Deployment
    Name:         hello-world
  Update Policy:
    Update Mode:  Auto
Status:
  Conditions:
    Last Transition Time:  2019-03-19T19:11:36Z
    Status:                True
    Type:                  RecommendationProvided
  Recommendation:
    Container Recommendations:
      Container Name:  hello-world
      Lower Bound:
        Cpu:     25m
        Memory:  262144k
      Target:
        Cpu:     25m        <==== actual CPU configured by the VPA
        Memory:  262144k
- I configured my VPA to use the new style of selector via targetRef, but the recommender logs say I'm using the legacy one:
Error while fetching legacy selector. Reason: v1beta1 selector not found
Here is my deployment configuration:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello-world
  labels:
    name: hello-world
spec:
  selector:
    matchLabels:
      name: hello-world
  replicas: 2
  template:
    metadata:
      labels:
        name: hello-world
    spec:
      securityContext:
        fsGroup: 101
      containers:
      - name: hello-world
        image: xxx/hello-world:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 150Mi
        volumeMounts:
        - mountPath: /u/app/www/images
          name: nfs-volume
      volumes:
      - name: nfs-volume
        persistentVolumeClaim:
          claimName: hello-world
Here is my VPA configuration:
--- apiVersion: "autoscaling.k8s.io/v1beta2" kind: VerticalPodAutoscaler metadata: name: hello-world namespace: hello-world spec: targetRef: apiVersion: "extensions/v1beta1" kind: Deployment name: hello-world resourcePolicy: containerPolicies: - minAllowed: cpu: 50m memory: 75Mi mode: auto updatePolicy: updateMode: "Auto"
I'm running Kubernetes v1.13.2 and VPA v0.4; here is its configuration:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vpa-recommender
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vpa-recommender
    spec:
      serviceAccountName: vpa-recommender
      containers:
      - name: recommender
        image: k8s.gcr.io/vpa-recommender:0.4.0
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 200m
            memory: 1000Mi
          requests:
            cpu: 50m
            memory: 500Mi
        ports:
        - containerPort: 8080
        command:
        - ./recommender
        - --alsologtostderr=false
        - --logtostderr=false
        - --prometheus-address=http://prometheus-service.monitoring:9090/
        - --prometheus-cadvisor-job-name=cadvisor
        - --v=10
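For reference, the legacy-selector error quoted above shows up in the recommender's logs, which can be pulled with:

kubectl logs -n kube-system deployment/vpa-recommender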
k8s - Pod restart time
I was running this to see how job restarts work in k8s.
kubectl run alpine --image=alpine --restart=OnFailure -- exit 1
The alpine image was already there. The first failure happened almost within a second, yet k8s takes 5 minutes to do 5 restarts! Why does it not retry immediately? Is there a way to reduce the time between two restarts?
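For context, this timing matches the kubelet's documented behavior: failed containers are restarted with an exponential back-off (10s, 20s, 40s, ...) capped at five minutes. The back-off is visible in the pod's status and events:

# watch the pod cycle through Error/CrashLoopBackOff and note the timestamps
kubectl get pods -w
# the delay also shows up in events as "Back-off restarting failed container"
kubectl describe pod <alpine-pod>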
How to store a net socket in a database in order to keep a Node.js app stateless?
My application uses net (https://nodejs.org/api/net.html) sockets and accepts connections from clients that cannot communicate any way other than over raw TCP sockets.
I'm trying to use pm2 with my app and clusterize it across all my CPUs. Unfortunately this does not work, since one process cannot use the sockets stored in the memory of any of the other processes.
I'm looking for a way to save each connection to a database; even an in-memory store would fit my needs, as long as I can look a connection up and use it again from any other process.
Somebody else asked pretty much the same question, and he was told that he should just use Redis to store the sockets. But the person who answered apparently had no idea how to actually do that.
Question was asked here: Will PM2 work with Node.js net API?
My question is: how could I do that?
Actually, I think this is just impossible. What I really need is a way to "recreate" the socket object from its file descriptor number, which could be saved as an integer in the database, but again, this is getting really tricky. I'm kind of stuck here. The documentation says nothing about it.
Maybe there's another way to keep my app stateless ?
Thanks a lot for reading; I'll be glad if someone can help.
Running scraping jobs in parallel on cluster
I would like to split the scraping URLs among many crawling processes and run them on separate Google Cloud instances. I could do this by hand (the same spider, just with different input data), but it is very annoying to manage 10-20 instances. Is there a way to run an instance group and specify which process should be executed on which instance? I am using a Scrapy spider, and right now I split the input data manually (a sketch of that split is below); the next step will be to use a Redis queue.
I have past experience with MPI and cluster computing. I remember there was an option to specify the maximal number of processes per node. I would like to do a similar thing in this case.
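A minimal sketch of the manual split, assuming GNU split is available and that the spider accepts a hypothetical urls_file argument:

# shard the url list into 10 roughly equal pieces: shard_aa, shard_ab, ...
split -n l/10 urls.txt shard_
# each instance then crawls its own shard (urls_file is an assumed spider arg)
scrapy crawl myspider -a urls_file=shard_aa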
getting "ERROR p2p-gorpc: stream reset" after running ipfs-cluster-service daemon using VagrantFile
I am trying to join my slave node to existing cluster master-node, my master node is already up but, I am getting "ERROR p2p-gorpc: stream reset" after running below command through vagrantFile.
$ ipfs-cluster-service daemon --bootstrap $MULTI_ADDRESS
The output on the slave side shows this stream reset error; in addition, after this command my master node is also disconnected from the cluster network.
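A way to check which peers actually joined, using the ipfs-cluster-ctl companion tool (assuming it is installed alongside the service):

# list the peers the cluster currently knows about, run from the master side
ipfs-cluster-ctl peers ls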
Please help me out.
Why is my PSP (Pod Security Policy) failing for my Deployment
I'm running a GKE (Google Kubernetes Engine) cluster and have enabled Pod Security Policies (PSP) on it. My deployment is failing, stating that it does not meet the PSP requirements.
From the Kubernetes Pod Security Policy - Policy Order documentation, I understand that each PSP is tested in alphabetical order (at creation), and when one passes, the deployment is good to go.
Is there a utility I could use to run a deployment manifest (e.g. YAML) against a Pod Security Policy (PSP) definition and then get a report on any violations?
It feels like this would make for a highly valuable development tool. Do tell me if I'm missing the mark in how I should be thinking about PSPs.
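The closest check I know of covers only the authorization half of the problem, i.e. whether a service account may use a given PSP at all, not whether a pod spec complies with it:

# placeholders: substitute the PSP name, namespace, and service account
kubectl auth can-i use podsecuritypolicy/<psp-name> \
  --as=system:serviceaccount:<namespace>:<serviceaccount>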
How can we restart a Kubernetes pod if its readiness probe fails
A quick question: I know that if the Kubernetes liveness probe fails, Kubernetes will restart the pod and try again. But what if the readiness probe fails? How can I ask Kubernetes to restart the pod in that case too?
api-group-0 0/1 Running 0 6h35m
Restarting this pod makes it work again. Thanks all!
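For context, restarts are driven by the liveness probe; a failing readiness probe only removes the pod from Service endpoints. A minimal liveness sketch, assuming the container exposes a hypothetical /healthz endpoint on port 8080:

livenessProbe:
  httpGet:
    path: /healthz            # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3         # restart after 3 consecutive failures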
How do pod replicas sync with each other in Kubernetes?
I have a MySQL database pod with 3 replicas.
Now I'm making some changes in one pod (pod data, not pod configuration); say I'm adding a table.
How will that change be reflected in the other replicas of the pod?
I'm using Kubernetes v1.13 with 3 worker nodes.
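For reference, a quick way to compare the replicas directly (pod names, database, and credentials are placeholders):

# if the replicas don't share storage or run MySQL replication, the new
# table will only exist in the pod where it was created
kubectl exec <mysql-pod-1> -- mysql -uroot -p<password> -e 'SHOW TABLES IN mydb'
kubectl exec <mysql-pod-2> -- mysql -uroot -p<password> -e 'SHOW TABLES IN mydb'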