How can I copy files between pods or between nodes in a Kubernetes cluster?
Is this possible from inside the Kubernetes cluster?
All the examples I've found copy from a local disk to a pod or vice versa.
Or is the only option to copy from node to node, for example over ssh, scp or other utilities?
Thanks for the answers.
It's not possible to copy directly from pod to pod. You'd need to use
kubectl cp to copy the file locally, then copy it back into the other pod:
kubectl cp <source-pod>:/tmp/test /tmp/test
kubectl cp /tmp/test <target-pod>:/tmp/test
If you are trying to share files between pods, and only one pod needs write access, you probably want to mount a read-only volume into multiple pods, or use an object store like S3. Copying files to and from pods really shouldn't be something you're doing often; that's an anti-pattern.
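For illustration, here is a minimal sketch of the read-only volume approach, assuming a pre-existing PersistentVolumeClaim named shared-data; the pod name, image and mount path are placeholders, not something given in the answer above:

# Sketch only: mounts an existing PVC read-only into a pod.
# "shared-data", the image and the mount path are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
          readOnly: true
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-data
        readOnly: true

A single writer pod would mount the same claim without readOnly; whether several pods can mount it at the same time depends on the volume's access mode (ReadOnlyMany / ReadWriteMany).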
See also questions close to this topic
Best strategy for securing API calls within a Kubernetes cluster
I have 10 different microservices in a Kubernetes cluster. There are 3 different front-end applications based on ReactJS, and all the other applications are in NodeJS. To make secure calls from the front end to the back end, I have only one application handling requests from the front end, which then forwards them to the other apps within the cluster. For the front end I am using a JWT token to authenticate all calls to the single NodeJS app that exposes the API endpoint to the outside world.
My question is: is it necessary to secure all the API calls within the Kubernetes cluster?
If backend Node APP 1 makes a call to Node APP 2, should this be secured as well? Does it have any potential downside or threat? Also, if it is necessary, what would be the best strategy?
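For context, one common building block for restricting in-cluster traffic is a NetworkPolicy; a minimal sketch follows, where all labels and names are assumptions made for illustration:

# Illustrative only: lets Node APP 2 accept traffic solely from Node APP 1.
# The "app: node-app-1" / "app: node-app-2" labels are made-up placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app1-to-app2
spec:
  podSelector:
    matchLabels:
      app: node-app-2
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: node-app-1

Note that this only restricts which pods can reach which; authenticating the calls themselves (for example forwarding the JWT, or mutual TLS via a service mesh) is a separate decision.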
Requirements to install Kubernetes
I want to install Kubernetes.
What are the recommended requirements for installing it?
I will install Jenkins on it.
Kubernetes Node Failure Query
How can I know if a node failed at some point in Kubernetes? I just want to know whether a node failed, which node it was, and when it failed.
Because as far as I know, if a node fails, a new node takes its place.
I'm using AKS (Azure)
Kubernetes CPU requests/limits in a heterogeneous cluster
Kubernetes allows specifying the CPU limit and/or request for a pod.
Limits and requests for CPU resources are measured in cpu units. One cpu, in Kubernetes, is equivalent to:
- 1 AWS vCPU
- 1 GCP Core
- 1 Azure vCore
- 1 IBM vCPU
- 1 Hyperthread on a bare-metal Intel processor with Hyperthreading
Unfortunately, when using a heterogeneous cluster (for instance one with different processors), the appropriate CPU limit/request depends on the node to which the pod is assigned; this matters especially for real-time applications.
If we assume that:
- we can compute a fine-tuned CPU limit for the pod for each kind of hardware in the cluster
- we want to let the Kubernetes scheduler choose a matching node in the whole cluster
Is there a way to launch the pod so that the CPU limit/request depends on the node chosen by the Kubernetes scheduler (or on a node label)?
The desired behavior would be (or something equivalent to it):
- Before assigning the pod, the scheduler chooses the node by checking different CPU requests depending on the node (or a node label)
- At runtime, the kubelet enforces a specific CPU limit depending on the node (or a node label)
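For context, a minimal sketch of the usual workaround (as far as I know, a single pod spec's CPU values cannot vary per node): one pod variant per hardware class, pinned with a node label. The cpu-type label and the numbers below are placeholders, not values from this question:

# Illustrative sketch: a pod variant tuned for one hardware class.
# "cpu-type: fast-cores" and the CPU figures are made-up placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: rt-app-fast-cores
spec:
  nodeSelector:
    cpu-type: fast-cores
  containers:
    - name: rt-app
      image: example/rt-app:latest
      resources:
        requests:
          cpu: "1500m"
        limits:
          cpu: "2000m"

A second variant with a different nodeSelector and different CPU figures would cover the other hardware class; the scheduler then picks among nodes that match the selector rather than adjusting the request itself.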
JGroups ReplicatedHashMap in a cluster
My Spring-based web app is deployed to production in a Tomcat cluster (4+ nodes) with sticky sessions. The maximum number of nodes will not exceed 8-10 in a few years' time.
I need to cache some data (mostly configuration) to avoid hitting Oracle. Since the nature of this data is mostly configuration, I would say the ratio of reads to writes is 999999/1.
I don't want to use a full-blown caching solution such as Infinispan/Hazelcast/Redis, as it adds operational complexity to the product, and the requirement is to cache some small, mostly read-only data (let's say a few hundred kilobytes at most).
At first, I wanted to implement a simple replicating map myself, then I saw that JGroups ships with a ReplicatedHashMap. I think it suits my needs, but I'm not sure whether I'm missing something.
What else should I consider? Has anyone used it in production?
Setting up a VerneMQ cluster (VBox)
My idea is to install a VerneMQ cluster in a test environment (no need for security here). I installed VerneMQ (1.4.1) on 2 different VMs in VirtualBox (Ubuntu 16). I started the 2 instances of VerneMQ and their status is "Active". I try to run (on both sides):
sudo vmq-admin cluster join discovery-node=192.168.56.103:44000
But I get this error every time:
Couldn't join cluster due to not_reachable
As there is no need for security at the moment, I flushed the iptables rules with
sudo iptables -F
sudo iptables -X
sudo iptables -t nat -F
sudo iptables -t nat -X
sudo iptables -t mangle -F
sudo iptables -t mangle -X
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
and even stopped the firewall
sudo ufw disable
I also tried to ping and to check the port with nmap (I changed the port to 44000 just in case):
sudo nmap 192.168.56.103 -p 44000
I got this result (from nmap):
PORT      STATE SERVICE
44000/tcp open  unknown
Despite all that, I continue to get the error
Couldn't join cluster due to not_reachable
Thanks to anyone who has an idea.
Squid proxy connectivity from a Kubernetes pod
I was able to successfully set up the proxy server in Azure and am using it in my browser to verify it. I have an application running in a Kubernetes pod to which I have passed the proxy URL as an environment variable. The pod is configured as a DaemonSet. As soon as I put in the proxy URL, below is the error I get:
/usr/lib/ruby/2.3.0/net/http/response.rb:120:in `error!': 503 "Service Unavailable" (Net::HTTPFatalError)
	from /usr/lib/ruby/2.3.0/net/http/response.rb:129:in `value'
	from /usr/lib/ruby/2.3.0/net/http.rb:920:in `connect'
	from /usr/lib/ruby/2.3.0/net/http.rb:863:in `do_start'
	from /usr/lib/ruby/2.3.0/net/http.rb:852:in `start'
	from /var/lib/gems/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:715:in `transmit'
	from /var/lib/gems/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:145:in `execute'
	from /var/lib/gems/2.3.0/gems/rest-client-2.0.2/lib/restclient/request.rb:52:in `execute'
	from /var/lib/gems/2.3.0/gems/rest-client-2.0.2/lib/restclient/resource.rb:51:in `get'
	from /var/lib/gems/2.3.0/gems/kubeclient-1.1.4/lib/kubeclient/common.rb:328:in `block in api'
	from /var/lib/gems/2.3.0/gems/kubeclient-1.1.4/lib/kubeclient/common.rb:58:in `handle_exception'
	from /var/lib/gems/2.3.0/gems/kubeclient-1.1.4/lib/kubeclient/common.rb:327:in `api'
	from /var/lib/gems/2.3.0/gems/kubeclient-1.1.4/lib/kubeclient/common.rb:322:in `api_valid?'
	from /var/lib/gems/2.3.0/gems/fluent-plugin-kubernetes_metadata_filter-2.1.2/lib/fluent/plugin/filter_kubernetes_metadata.rb:234:in `configure'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/plugin.rb:164:in `configure'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/agent.rb:152:in `add_filter'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/agent.rb:70:in `block in configure'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/agent.rb:64:in `each'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/agent.rb:64:in `configure'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/root_agent.rb:112:in `configure'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/engine.rb:131:in `configure'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/engine.rb:96:in `run_configure'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/supervisor.rb:795:in `run_configure'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/supervisor.rb:579:in `dry_run'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/supervisor.rb:597:in `supervise'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/supervisor.rb:502:in `run_supervisor'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/lib/fluent/command/fluentd.rb:310:in `<top (required)>'
	from /usr/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /usr/lib/ruby/2.3.0/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /var/lib/gems/2.3.0/gems/fluentd-1.2.0/bin/fluentd:8:in `<top (required)>'
	from /usr/local/bin/fluentd:22:in `load'
	from /usr/local/bin/fluentd:22:in `<main>'
The pod at this juncture does not start with the error of
Any help would be great. Also, if any other information is required, please mention it and I'll add it.
What are the benefits/drawbacks of deploying my database and application into separate containers in the same pod?
I am trying to understand the benefits and drawbacks of the following architectures when it comes to deploying my application and database containers using Kubernetes.
A little background: The application sits behind an Nginx proxy. All requests flow from the proxy to the web server. The web server is the only thing that has access to the (read only) database.
Architecture 1:
- Pod#1 - Database container only
- Pod#2 - Application container only
Architecture 2:
- Pod#1 - Database container & Application container
From my research so far, I have found comments recommending Architecture 1 for scaling reasons. https://linchpiner.github.io/k8s-multi-container-pods.html
Does anyone have insight into which of these approaches would be better suited to my situation?
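For reference, a minimal sketch of what Architecture 2 (application and database containers in one pod) could look like; the image names, ports and pod name are illustrative assumptions, not from the question:

# Illustrative sketch of Architecture 2: both containers share one pod.
# Images, ports and names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-db
spec:
  containers:
    - name: web
      image: example/web-app:latest
      ports:
        - containerPort: 8080
    - name: database
      image: postgres:10
      ports:
        - containerPort: 5432

In Architecture 1, each container would instead run in its own pod (typically its own Deployment), and the application would reach the database through a Service, which lets the two scale independently.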
Committing Kubernetes pods to some new instance of pods
I've deployed Kubernetes infrastructure in my organization and have scaled up my application, but at times it requires custom modification, so I need to change and commit my images again and again and then pull that image to see my changes. Is there any way I can commit my pods and use them as an instance to replicate to other pods? Can someone please advise how to overcome this problem without committing the base images again and again?