kubectl get nodes - The connection was refused

  • Running Ubuntu 18.04.1 LTS in a VirtualBox virtual machine
  • I seem to have the same issue as reported here on SO

I installed this a few days ago and all was well. I could connect via kubectl no problems. However now when I do the following:

$ kubectl get nodes
The connection to the server was refused - did you specify the right host or port?

UPDATE: Added environment settings.


$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get pods

Even if I explicitly set the KUBECONFIG variable to the config file in my home directory, which is:

$ ls -l .kube/config
-rw------- 1 someuser someuser 5450 Oct 15 21:58 .kube/config

it makes no difference: 'kubectl config view' still returns the same data (with no KUBECONFIG variable set, kubectl looks for a config file at that location by default).
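For reference, this is what I mean by setting the variable explicitly (the path is the default location shown above):

```shell
# Point kubectl at the default config file location explicitly and
# confirm the variable took effect before re-running kubectl.
export KUBECONFIG="$HOME/.kube/config"
echo "KUBECONFIG=$KUBECONFIG"
```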

The firewall is also off:

$ sudo ufw status
Status: inactive

I can see kubelet is still OK:

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
   Active: active (running) since Mon 2018-10-15 21:46:55 AEDT; 1 weeks 1 days ago
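Since kubelet is a systemd unit, its recent logs can also be pulled from the journal (a sketch; the unit name matches the systemctl output above):

```shell
# Tail the kubelet's systemd journal for recent errors; adjust the
# time window as needed.
sudo journalctl -u kubelet --no-pager --since "1 hour ago" | tail -n 50
```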

It does not appear the apiserver is running:

$ ps aux | grep kube
root      10304  9.4  1.5 1380412 136776 ?      Ssl  Oct15 1093:57 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
root      11104  0.7  0.3  43168 32476 ?        Ssl  Oct15  92:07 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
donovan   11757  0.0  0.0  14428  1044 pts/1    S+   22:39   0:00 grep --color=auto kube
root     159921  0.0  0.1  16252  8824 ?        Ssl  Oct19   5:02 /chart-repo sync --mongo-url=kubeapps-mongodb --mongo-user=root stable https://kubernetes-charts.storage.googleapis.com

~$ sudo lsof -i
systemd-r   516 systemd-resolve   12u  IPv4    28394      0t0  UDP localhost:domain
systemd-r   516 systemd-resolve   13u  IPv4    28395      0t0  TCP localhost:domain (LISTEN)
avahi-dae   627           avahi   12u  IPv4    31555      0t0  UDP *:mdns
avahi-dae   627           avahi   13u  IPv6    31556      0t0  UDP *:mdns
avahi-dae   627           avahi   14u  IPv4    31557      0t0  UDP *:47611
avahi-dae   627           avahi   15u  IPv6    31558      0t0  UDP *:35014
xrdp-sesm   750            root    7u  IPv6    33682      0t0  TCP ip6-localhost:3350 (LISTEN)
sshd       2018            root    3u  IPv4  8211858      0t0  TCP *:ssh (LISTEN)
sshd       2018            root    4u  IPv6  8211860      0t0  TCP *:ssh (LISTEN)
sshd       2161            root    3u  IPv4    44589      0t0  TCP KUBE-01:ssh-> (ESTABLISHED)
sshd       2254         donovan    3u  IPv4    44589      0t0  TCP KUBE-01:ssh-> (ESTABLISHED)
sshd       6348            root    3u  IPv4    57332      0t0  TCP KUBE-01:ssh-> (ESTABLISHED)
sshd       6429         donovan    3u  IPv4    57332      0t0  TCP KUBE-01:ssh-> (ESTABLISHED)
kubelet   10304            root    9u  IPv4    98081      0t0  TCP localhost:38077 (LISTEN)
kubelet   10304            root   19u  IPv4   118188      0t0  TCP localhost:10248 (LISTEN)
kubelet   10304            root   20u  IPv6   117597      0t0  TCP *:10250 (LISTEN)
cupsd     19145            root    6u  IPv6 21711266      0t0  TCP ip6-localhost:ipp (LISTEN)
cupsd     19145            root    7u  IPv4 21711267      0t0  TCP localhost:ipp (LISTEN)
cups-brow 19146            root    7u  IPv4 21710056      0t0  UDP *:ipp

but for the life of me I cannot figure out how to check whether the kube-apiserver is running (via a service check or similar), as I guess this is what is causing the issue?
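My current understanding (an assumption, based on the kubeadm-style paths in the kubelet command line above) is that the control plane runs as static pods rather than systemd services, so there is no kube-apiserver unit to query. Something like this should show whether the container is up:

```shell
# kubeadm runs the control plane as static pods managed by the kubelet,
# so check the container runtime rather than systemd.
sudo docker ps --filter name=kube-apiserver

# The static pod manifests live here on a kubeadm install:
ls /etc/kubernetes/manifests/

# The API server's default secure port on kubeadm is 6443:
curl -k https://localhost:6443/healthz
```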

Update: It seems the API server is failing because it cannot reach etcd.

Digging into the Docker logs:

sudo less /var/log/containers/kube-apiserver-kube-01_kube-system_kube-apiserver-00c9e483c6f0f84520d0f6b41cfb8e6489ef030aac91c8d6ac30c88bde44e9f1.log
{"log":"Flag --insecure-port has been deprecated, This flag will be removed in a future version.\n","stream":"stderr","time":"2018-10-24T10:32:08.316846636Z"}
{"log":"I1024 10:32:08.316937       1 server.go:681] external host was not specified, using\n","stream":"stderr","time":"2018-10-24T10:32:08.317214326Z"}
{"log":"I1024 10:32:08.317252       1 server.go:152] Version: v1.12.1\n","stream":"stderr","time":"2018-10-24T10:32:08.317368622Z"}
{"log":"I1024 10:32:09.025904       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.\n","stream":"stderr","time":"2018-10-24T10:32:09.026105478Z"}
{"log":"I1024 10:32:09.025981       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.\n","stream":"stderr","time":"2018-10-24T10:32:09.026159677Z"}
{"log":"I1024 10:32:09.026595       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.\n","stream":"stderr","time":"2018-10-24T10:32:09.026704563Z"}
{"log":"I1024 10:32:09.026625       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.\n","stream":"stderr","time":"2018-10-24T10:32:09.026717163Z"}
{"log":"F1024 10:32:29.031135       1 storage_decorator.go:57] Unable to create storage backend: config (\u0026{ /registry [] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true true 1000 0xc420ba1cb0 \u003cnil\u003e 5m0s 1m0s}), err (dial tcp connect: connection refused)\n","stream":"stderr","time":"2018-10-24T10:32:29.032482723Z"}
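Given that the storage backend dial is refused, the next step I tried (again assuming etcd is a kubeadm static pod on the Docker runtime, consistent with the log paths above) was to inspect the etcd container directly:

```shell
# etcd is also a static pod under kubeadm; list its container even if
# it has exited, then pull its logs for the underlying failure.
sudo docker ps -a --filter name=etcd
sudo docker logs --tail 50 $(sudo docker ps -a -q --filter name=etcd | head -n 1)

# Log files for dead containers also survive on disk:
sudo ls /var/log/containers/ | grep etcd
```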


  1. Why is etcd failing inside Docker?
  2. On an Ubuntu machine how do I figure out if all the k8s bits are running?
  3. How do I further troubleshoot this issue? (so I can get kubectl talking to the cluster again)