K3S: Do I need a virtual IP for the server nodes when using HA mode?
I'm experimenting with k8s and tried out its HA mode with kubeadm (Reference). Then I discovered k3s and started to play with k3s' HA mode (the one with embedded etcd), but the docs don't mention anything related to this. So I would just like some insight into this topic: is a virtual IP needed or not, and when would I need one?
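For context: with kubeadm HA, a virtual IP or load balancer in front of the control plane is part of the recipe, while with k3s' embedded-etcd mode the equivalent is an optional "fixed registration address" that joining nodes use so they don't depend on any single server. A minimal sketch, where 192.168.1.10 and k3s-vip.example.local are placeholders:

# first server initialises the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# additional servers join through an existing server (or the fixed address)
curl -sfL https://get.k3s.io | sh -s - server \
    --server https://192.168.1.10:6443 --token <token>

# agents register via the fixed address (VIP, LB, or DNS round-robin)
# so they survive the loss of any single server
curl -sfL https://get.k3s.io | sh -s - agent \
    --server https://k3s-vip.example.local:6443 --token <token>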
See also questions close to this topic
-
Connect to HDFS HA (High Availability) from Scala
I have Scala code that can currently connect to HDFS through a single namenode (non-HA). The namenode, location, conf.location, and Kerberos parameters are specified in a .conf file inside the Scala project. However, there is now a new cluster with HA (involving standby and primary namenodes). Do you know how to configure the client in Scala so it supports both environments, non-HA and HA (with automatic switching of namenodes)?
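For context, the standard way to make a Hadoop client (and therefore Scala code built on it) HA-aware is through the HA nameservice properties on the client's classpath rather than code changes; the same binary then works against either cluster depending on which config it loads. A sketch, where mycluster, nn1/nn2, and the hostnames are placeholders (the properties belong inside the <configuration> element of hdfs-site.xml):

cat >> hdfs-site.xml <<'EOF'
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>namenode1:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>namenode2:8020</value></property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
EOF
# fs.defaultFS (in core-site.xml) then points at hdfs://mycluster,
# and the client fails over between nn1 and nn2 automatically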
-
RabbitMQ Fetch from Closest Replica
In a cluster scenario with mirrored queues, is there a way for consumers to consume/fetch data from a mirrored queue/slave node instead of always reaching out to the master node?
If you think about scalability, having all consumers call the single node that is the master of a specific queue means all traffic goes to that one node.
Kafka allows consumers to fetch data from the closest node if that node holds a replica of the leader; is there something similar in RabbitMQ?
-
Kafka scalability if consuming from slave node
In a cluster scenario with a replication factor > 1, why is it that we must always consume from the master/leader of a partition instead of being able to consume from a slave/follower node that holds a replica of it?
I understand that Kafka will always route the request to the master node (of that particular partition/topic), but doesn't this hurt scalability, since all requests go to a single node? Wouldn't it be better if we could read from any node containing the replica and not necessarily the master?
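For what it's worth, Kafka only gained this with KIP-392 (follower fetching, Kafka 2.4+); before that, all consumption did go through the leader. A sketch of the opt-in configuration, with placeholder rack names:

# broker side (server.properties): label the broker and enable the
# rack-aware replica selector
broker.rack=us-east-1a
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

# consumer side: advertise the client's location so the broker can
# direct fetches to the closest in-sync replica
client.rack=us-east-1a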
-
How to create .kube directory inside the Home directory on Mac?
We usually add the Kubernetes configuration file inside the .kube directory in our home directory when we use either the Windows or Linux operating systems. But when I try to create a .kube directory on Mac OS, it says: "You can't use a name that begins with the dot because these names are reserved for the system. Please choose another name."
How can I create the directory so I can put the k8s configuration file into it?
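For context, that "reserved for the system" message comes from Finder; the shell has no such restriction. A minimal sketch from Terminal, assuming the kubeconfig was downloaded to ~/Downloads/config (a hypothetical path):

mkdir -p ~/.kube                       # dot-directories are fine from the shell
mv ~/Downloads/config ~/.kube/config   # adjust the source path as needed
# to see dotfiles in Finder afterwards, press Cmd+Shift+. in a Finder window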
-
Can we run sonobuoy for k8s conformance on a Rancher cluster?
We set up a Rancher cluster with 3 nodes for testing, and I would like to apply for k8s conformance using this cluster. However, running sonobuoy returns an error:
ERRO[0000] could not create sonobuoy client: failed to get rest config: invalid configuration: no configuration has been provided
It seems like Rancher does not have any Kubernetes binaries built in (kubectl, kubeadm, etc.). May I know if it is possible to run the k8s conformance tests on a Rancher cluster?
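For reference, that error usually just means sonobuoy can't find a kubeconfig (it needs one in the same way kubectl does); Rancher lets you download a kubeconfig for each cluster from its UI. A sketch, with a hypothetical file path:

export KUBECONFIG=~/Downloads/rancher-test-cluster.yaml   # hypothetical path
sonobuoy run --mode=certified-conformance
sonobuoy status      # poll until the run completes
sonobuoy retrieve .  # download the results tarball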
-
Possible to remove cluster from 2.4.8 server and import it on a fresh 2.5 server?
Due to a wrong installation method (single node), we would like to migrate our existing Kubernetes cluster to a newer, HA Rancher Kubernetes cluster.
Can someone tell me if it's safe to do the following:
- remove the (previously imported) cluster from our 2.4.8 single-node Rancher installation
- register this cluster again on our new Kubernetes-managed 2.5 Rancher cluster?
We already tried this with our development cluster and it worked fine; the only things we had to do were:
- create user/admin accounts again
- reassign all namespaces to the corresponding rancher projects
It would be nice to get some more opinions on this; right now it looks more or less safe :smiley:
Also, does someone know what happens if one Kubernetes cluster is registered/imported into two Rancher instances at the same time (like 2.4.8 and 2.5 simultaneously)? I know it's probably a really bad idea; I just want to get a better understanding in case I'm wrong :D
-
How to setup private registry with containerd and k3s?
I managed to run a registry without any certificates and do docker login ip from my local machine. When I tried to push an image, there was some x509... error.
I can post the YAML file, but it's too big. There's just a service exposed as a LoadBalancer, a deployment, and a PVC.
There's the following link, but it feels confusing: https://github.com/containerd/cri/blob/release/1.4/docs/registry.md
Is it possible to use an IP without a domain?
With Docker I would do the following and edit the daemon.json config for an insecure registry:
docker run \
  --name registry \
  --publish published=5000,target=5000 \
  --env REGISTRY_AUTH=htpasswd \
  --env REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
  --env REGISTRY_AUTH_HTPASSWD_PATH=/var/apps/registry/auth/registry.password \
  --mount type=bind,source=/auth,destination=/auth \
  --mount type=volume,source=registry,destination=/var/lib/registry \
  registry:2
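For context, with k3s the containerd registry configuration normally lives in /etc/rancher/k3s/registries.yaml rather than in a daemon.json. A sketch for a plain-HTTP registry reachable on a bare IP (192.168.1.100:5000 is a placeholder):

cat > /etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  "192.168.1.100:5000":
    endpoint:
      - "http://192.168.1.100:5000"
EOF
# restart k3s so its bundled containerd picks up the change
systemctl restart k3s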
-
k3s - networking between pods not working
I'm struggling with cross-communication between pods even though ClusterIP services are set up for them. All the pods are on the same master node and in the same namespace. In summary:
$ kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP           NODE          NOMINATED NODE   READINESS GATES
nginx-744f4df6df-rxhph     1/1     Running   0          136m   10.42.0.31   raspberrypi   <none>           <none>
nginx-2-867f4f8859-csn48   1/1     Running   0          134m   10.42.0.32   raspberrypi   <none>           <none>

$ kubectl get svc -o wide
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE    SELECTOR
nginx-service    ClusterIP   10.43.155.201   <none>        80/TCP    136m   app=nginx
nginx-service2   ClusterIP   10.43.182.138   <none>        85/TCP    134m   app=nginx-2
where I can't curl http://nginx-service2:85 from within the nginx container, or vice versa, while I validated that this works on my Docker Desktop installation:
# docker desktop
root@nginx-7dc45fbd74-7prml:/# curl http://nginx-service2:85
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

# k3s
root@nginx-744f4df6df-rxhph:/# curl http://nginx-service2.pwk3spi-vraptor:85
curl: (6) Could not resolve host: nginx-service2.pwk3spi-vraptor
After googling the issue (and please correct me if I'm wrong), it seems like a CoreDNS issue, because looking at the logs I see the timeout errors:
$ kubectl get pods -n kube-system
NAME                                     READY   STATUS      RESTARTS   AGE
helm-install-traefik-qr2bd               0/1     Completed   0          153d
metrics-server-7566d596c8-nnzg2          1/1     Running     59         148d
svclb-traefik-kjbbr                      2/2     Running     60         153d
traefik-758cd5fc85-wzjrn                 1/1     Running     20         62d
local-path-provisioner-6d59f47c7-4hvf2   1/1     Running     72         148d
coredns-7944c66d8d-gkdp4                 1/1     Running     0          3m47s

$ kubectl logs coredns-7944c66d8d-gkdp4 -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 1c648f07b77ab1530deca4234afe0d03
CoreDNS-1.6.9
linux/arm, go1.14.1, 1766568
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:50482->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:34160->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:53485->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:46642->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:55329->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:44471->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:49182->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:54082->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:48151->192.168.8.109:53: i/o timeout
[ERROR] plugin/errors: 2 1898797220.1916943194. HINFO: read udp 10.42.0.38:48599->192.168.8.109:53: i/o timeout
where people recommended:
- changing the CoreDNS ConfigMap to forward to your master node's IP:

  ... other Corefile stuff
  forward . <host server IP>
  ... other Corefile stuff
- or adding your CoreDNS ClusterIP as a nameserver in /etc/resolv.conf:

  search default.svc.cluster.local svc.cluster.local cluster.local
  nameserver 10.42.0.38
  nameserver 192.168.8.1
  nameserver fe80::266:19ff:fea7:85e7%wlan0
However, I didn't find that these solutions worked.
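For concreteness, the first suggestion usually amounts to something like this (a sketch; 8.8.8.8 is just an example upstream):

kubectl -n kube-system edit configmap coredns
# in the Corefile, replace the default
#     forward . /etc/resolv.conf
# with an upstream resolver that is actually reachable from the pod network:
#     forward . 8.8.8.8
kubectl -n kube-system rollout restart deployment coredns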
Details for reference:
$ kubectl get nodes -o wide
NAME          STATUS   ROLES    AGE    VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
raspberrypi   Ready    master   153d   v1.18.9+k3s1   192.168.8.109   <none>        Raspbian GNU/Linux 10 (buster)   5.10.9-v7l+      containerd://1.3.3-k3s2

$ kubectl get svc -n kube-system -o wide
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE    SELECTOR
kube-dns             ClusterIP      10.43.0.10      <none>          53/UDP,53/TCP,9153/TCP       153d   k8s-app=kube-dns
metrics-server       ClusterIP      10.43.205.8     <none>          443/TCP                      153d   k8s-app=metrics-server
traefik-prometheus   ClusterIP      10.43.222.138   <none>          9100/TCP                     153d   app=traefik,release=traefik
traefik              LoadBalancer   10.43.249.133   192.168.8.109   80:31222/TCP,443:32509/TCP   153d   app=traefik,release=traefik

$ kubectl get ep kube-dns -n kube-system
NAME       ENDPOINTS                                     AGE
kube-dns   10.42.0.38:53,10.42.0.38:9153,10.42.0.38:53   153d
I have no idea where I'm going wrong, whether I focused on the wrong things, or how to continue. Any help would be much appreciated.
-
Is there any way to bind K3s / flannel to another interface?
I have a K3s (v1.20.4+k3s1) cluster with 3 nodes, each with two interfaces. The default interface has a public IP; the second one has a 10.190.1.0 address. I installed K3s with and without the --flannel-backend=none option and then deployed flannel via "kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml", having first bound the kube-flannel container to the internal interface via the "--iface=" arg. In this setup the kube-flannel pods get the node IP of the internal interface, but I can't reach the pods on the other nodes via ICMP. If I deploy flannel without the --iface arg, the kube-flannel pods get an address from the 10.42.0.0 network. Then I can reach the pods on the other hosts, but the traffic is routed through the public interfaces, which I want to avoid. Does anyone have a tip for me?
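For what it's worth, k3s' bundled flannel can be pointed at a specific interface at install time, which avoids replacing it with an external flannel deployment. A sketch assuming the internal NIC is named eth1 and 10.190.1.1 is this node's internal address (both placeholders):

# per node, at install time (or added to the k3s systemd unit args)
curl -sfL https://get.k3s.io | sh -s - server \
    --node-ip 10.190.1.1 \
    --flannel-iface eth1
# agents take the same two flags; repeat on each node with its own internal IP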