Kubernetes - kubeadm join - connection refused after new control plane joins

Problem

I am trying to join a second control-plane node to a K8s cluster. The first node's IP is 10.1.50.4 and the second node's IP is 10.1.50.5. The control-plane nodes share a load-balancer virtual IP at 10.1.50.250.

K8s version: 1.20.1-00

Command

kubeadm join 10.1.50.4:6443 --token ozlhby.pbi2v5kp0x8ix9cl --discovery-token-ca-cert-hash sha256:7aff9979cace02a9f1e98d82253ef9a8c1594c80ea0860aba6ef628xdx7103fb --control-plane --certificate-key 3606aa528cd7d730efafcf535625577d6fx77x7cb6f90e5a8517a807065672d --v=5
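One thing worth noting about this command: both the bootstrap token and the certificate key are short-lived (by default the token expires after 24 hours and the uploaded certificates after 2 hours). If the join is retried later, fresh values can be generated on the existing control plane; a sketch, assuming kubeadm 1.20 defaults:

```shell
# Run on the existing control plane (10.1.50.4).

# Create a new bootstrap token and print a ready-made worker join command
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new --certificate-key
sudo kubeadm init phase upload-certs --upload-certs
```

Appending `--control-plane --certificate-key <new key>` to the printed join command reproduces the control-plane join shown above.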

Output

I0112 02:20:39.801195   30603 join.go:395] [preflight] found NodeName empty; using OS hostname as NodeName
I0112 02:20:39.801669   30603 join.go:399] [preflight] found advertiseAddress empty; using default interface's IP address as advertiseAddress
I0112 02:20:39.802091   30603 initconfiguration.go:104] detected and using CRI socket: /var/run/dockershim.sock
I0112 02:20:39.802715   30603 interface.go:400] Looking for default routes with IPv4 addresses
I0112 02:20:39.802998   30603 interface.go:405] Default route transits interface "ens160"
I0112 02:20:39.803501   30603 interface.go:208] Interface ens160 is up
I0112 02:20:39.803739   30603 interface.go:256] Interface "ens160" has 2 addresses :[10.1.50.5/24 fe80::20c:29ff:fe2d:674d/64].
I0112 02:20:39.803903   30603 interface.go:223] Checking addr  10.1.50.5/24.
I0112 02:20:39.804074   30603 interface.go:230] IP found 10.1.50.5
I0112 02:20:39.804230   30603 interface.go:262] Found valid IPv4 address 10.1.50.5 for interface "ens160".
I0112 02:20:39.804356   30603 interface.go:411] Found active IP 10.1.50.5 
[preflight] Running pre-flight checks
I0112 02:20:39.804727   30603 preflight.go:90] [preflight] Running general checks
I0112 02:20:39.804935   30603 checks.go:249] validating the existence and emptiness of directory /etc/kubernetes/manifests
I0112 02:20:39.805227   30603 checks.go:286] validating the existence of file /etc/kubernetes/kubelet.conf
I0112 02:20:39.805375   30603 checks.go:286] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I0112 02:20:39.805501   30603 checks.go:102] validating the container runtime
I0112 02:20:39.957746   30603 checks.go:128] validating if the "docker" service is enabled and active
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0112 02:20:40.118312   30603 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0112 02:20:40.118439   30603 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0112 02:20:40.118525   30603 checks.go:649] validating whether swap is enabled or not
I0112 02:20:40.118634   30603 checks.go:376] validating the presence of executable conntrack
I0112 02:20:40.118786   30603 checks.go:376] validating the presence of executable ip
I0112 02:20:40.118920   30603 checks.go:376] validating the presence of executable iptables
I0112 02:20:40.118991   30603 checks.go:376] validating the presence of executable mount
I0112 02:20:40.119140   30603 checks.go:376] validating the presence of executable nsenter
I0112 02:20:40.119218   30603 checks.go:376] validating the presence of executable ebtables
I0112 02:20:40.119310   30603 checks.go:376] validating the presence of executable ethtool
I0112 02:20:40.119369   30603 checks.go:376] validating the presence of executable socat
I0112 02:20:40.119434   30603 checks.go:376] validating the presence of executable tc
I0112 02:20:40.119508   30603 checks.go:376] validating the presence of executable touch
I0112 02:20:40.119601   30603 checks.go:520] running all checks
I0112 02:20:40.274926   30603 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0112 02:20:40.275311   30603 checks.go:618] validating kubelet version
I0112 02:20:40.459593   30603 checks.go:128] validating if the "kubelet" service is enabled and active
I0112 02:20:40.489282   30603 checks.go:201] validating availability of port 10250
I0112 02:20:40.489826   30603 checks.go:432] validating if the connectivity type is via proxy or direct
I0112 02:20:40.490313   30603 join.go:465] [preflight] Discovering cluster-info
I0112 02:20:40.490582   30603 token.go:78] [discovery] Created cluster-info discovery client, requesting info from "10.1.50.4:6443"
I0112 02:20:40.511725   30603 token.go:116] [discovery] Requesting info from "10.1.50.4:6443" again to validate TLS against the pinned public key
I0112 02:20:40.527163   30603 token.go:133] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.1.50.4:6443"
I0112 02:20:40.527277   30603 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I0112 02:20:40.527323   30603 join.go:479] [preflight] Fetching init configuration
I0112 02:20:40.527372   30603 join.go:517] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I0112 02:20:40.561702   30603 interface.go:400] Looking for default routes with IPv4 addresses
I0112 02:20:40.561742   30603 interface.go:405] Default route transits interface "ens160"
I0112 02:20:40.562257   30603 interface.go:208] Interface ens160 is up
I0112 02:20:40.562548   30603 interface.go:256] Interface "ens160" has 2 addresses :[10.1.50.5/24 fe80::20c:29ff:fe2d:674d/64].
I0112 02:20:40.562680   30603 interface.go:223] Checking addr  10.1.50.5/24.
I0112 02:20:40.562745   30603 interface.go:230] IP found 10.1.50.5
I0112 02:20:40.562774   30603 interface.go:262] Found valid IPv4 address 10.1.50.5 for interface "ens160".
I0112 02:20:40.562800   30603 interface.go:411] Found active IP 10.1.50.5 
I0112 02:20:40.576707   30603 preflight.go:101] [preflight] Running configuration dependant checks
[preflight] Running pre-flight checks before initializing the new control plane instance
I0112 02:20:40.577061   30603 checks.go:577] validating Kubernetes and kubeadm version
I0112 02:20:40.577369   30603 checks.go:166] validating if the firewall is enabled and active
I0112 02:20:40.598127   30603 checks.go:201] validating availability of port 6443
I0112 02:20:40.598485   30603 checks.go:201] validating availability of port 10259
I0112 02:20:40.598744   30603 checks.go:201] validating availability of port 10257
I0112 02:20:40.598987   30603 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0112 02:20:40.599271   30603 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0112 02:20:40.599481   30603 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0112 02:20:40.599533   30603 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0112 02:20:40.599686   30603 checks.go:432] validating if the connectivity type is via proxy or direct
I0112 02:20:40.599762   30603 checks.go:471] validating http connectivity to first IP address in the CIDR
I0112 02:20:40.600028   30603 checks.go:471] validating http connectivity to first IP address in the CIDR
I0112 02:20:40.600350   30603 checks.go:201] validating availability of port 2379
I0112 02:20:40.600510   30603 checks.go:201] validating availability of port 2380
I0112 02:20:40.600840   30603 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0112 02:20:40.699836   30603 checks.go:839] image exists: k8s.gcr.io/kube-apiserver:v1.20.1
I0112 02:20:40.796995   30603 checks.go:839] image exists: k8s.gcr.io/kube-controller-manager:v1.20.1
I0112 02:20:40.889726   30603 checks.go:839] image exists: k8s.gcr.io/kube-scheduler:v1.20.1
I0112 02:20:40.977887   30603 checks.go:839] image exists: k8s.gcr.io/kube-proxy:v1.20.1
I0112 02:20:41.072019   30603 checks.go:839] image exists: k8s.gcr.io/pause:3.2
I0112 02:20:41.164679   30603 checks.go:839] image exists: k8s.gcr.io/etcd:3.4.13-0
I0112 02:20:41.255987   30603 checks.go:839] image exists: k8s.gcr.io/coredns:1.7.0
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0112 02:20:41.270660   30603 certs.go:45] creating PKI assets
I0112 02:20:41.271129   30603 certs.go:474] validating certificate period for ca certificate
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master-1 kube.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.50.5 10.1.50.250]
I0112 02:20:42.284014   30603 certs.go:474] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0112 02:20:42.412481   30603 certs.go:474] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [10.1.50.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master-1 localhost] and IPs [10.1.50.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
I0112 02:20:44.631172   30603 certs.go:76] creating new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0112 02:20:45.370294   30603 manifests.go:96] [control-plane] getting StaticPodSpecs
I0112 02:20:45.370640   30603 certs.go:474] validating certificate period for CA certificate
I0112 02:20:45.370743   30603 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0112 02:20:45.370767   30603 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0112 02:20:45.370779   30603 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0112 02:20:45.370790   30603 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0112 02:20:45.370802   30603 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0112 02:20:45.381917   30603 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0112 02:20:45.381975   30603 manifests.go:96] [control-plane] getting StaticPodSpecs
I0112 02:20:45.382292   30603 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0112 02:20:45.382324   30603 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0112 02:20:45.382336   30603 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0112 02:20:45.382347   30603 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0112 02:20:45.382357   30603 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0112 02:20:45.382367   30603 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0112 02:20:45.382377   30603 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0112 02:20:45.383243   30603 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0112 02:20:45.383285   30603 manifests.go:96] [control-plane] getting StaticPodSpecs
I0112 02:20:45.383551   30603 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0112 02:20:45.384124   30603 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I0112 02:20:45.391793   30603 local.go:80] [etcd] Checking etcd cluster health
I0112 02:20:45.391826   30603 local.go:83] creating etcd client that connects to etcd pods
I0112 02:20:45.391841   30603 etcd.go:177] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0112 02:20:45.436952   30603 etcd.go:101] etcd endpoints read from pods: https://10.1.50.4:2379
I0112 02:20:45.467237   30603 etcd.go:247] etcd endpoints read from etcd: https://10.1.50.4:2379
I0112 02:20:45.467292   30603 etcd.go:119] update etcd endpoints: https://10.1.50.4:2379
I0112 02:20:45.497258   30603 kubelet.go:110] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0112 02:20:45.499069   30603 kubelet.go:139] [kubelet-start] Checking for an existing Node in the cluster with name "k8s-master-1" and status "Ready"
I0112 02:20:45.506135   30603 kubelet.go:153] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0112 02:20:50.940170   30603 cert_rotation.go:137] Starting client certificate rotation controller
I0112 02:20:50.946669   30603 kubelet.go:188] [kubelet-start] preserving the crisocket information for the node
I0112 02:20:50.946719   30603 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master-1" as an annotation
I0112 02:21:01.078081   30603 local.go:148] creating etcd client that connects to etcd pods
I0112 02:21:01.078135   30603 etcd.go:177] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0112 02:21:01.130781   30603 etcd.go:101] etcd endpoints read from pods: https://10.1.50.4:2379
I0112 02:21:01.240220   30603 etcd.go:247] etcd endpoints read from etcd: https://10.1.50.4:2379
I0112 02:21:01.240255   30603 etcd.go:119] update etcd endpoints: https://10.1.50.4:2379
I0112 02:21:01.240812   30603 local.go:156] [etcd] Getting the list of existing members
I0112 02:21:01.282237   30603 local.go:164] [etcd] Checking if the etcd member already exists: https://10.1.50.5:2380
I0112 02:21:01.282791   30603 local.go:175] [etcd] Adding etcd member: https://10.1.50.5:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I0112 02:21:01.370283   30603 local.go:182] Updated etcd member list: [{k8s-master-1 https://10.1.50.5:2380} {k8s-master-0 https://10.1.50.4:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I0112 02:21:01.372930   30603 etcd.go:488] [etcd] attempting to see if all cluster endpoints ([https://10.1.50.4:2379 https://10.1.50.5:2379]) are available 1/8
I0112 02:21:03.455137   30603 etcd.go:468] Failed to get etcd status for https://10.1.50.5:2379: failed to dial endpoint https://10.1.50.5:2379 with maintenance client: context deadline exceeded
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Network tests - 10.1.50.4

kubectl get nodes

(screenshot)

10.1.50.4>lsof -i -P -n | grep LISTEN

(screenshot)

Installed etcd-client and ran etcdctl member list (after 10.1.50.5 tried to join)

(screenshot)

etcdctl cluster-health (after 10.1.50.5 tried to join)

(screenshot)

systemctl restart network (after 10.1.50.5 tried to join)

(screenshot)

etcdctl --version (after 10.1.50.5 tried to join)

(screenshot)

kubeadm version (after 10.1.50.5 tried to join)

(screenshot)

kubectl get nodes (after 10.1.50.5 tried to join)

(screenshot)

Network tests - 10.1.50.5 - before join

route -n

(screenshot)

nmap -p 6443 10.1.50.4

(screenshot)

ping 10.1.50.4

(screenshot)

ping 10.1.50.250

(screenshot)

Network tests - 10.1.50.5 - after join

route -n

Same as before

nmap -p 6443 10.1.50.4

(screenshot)

ping 10.1.50.4

Same as before

ping 10.1.50.250

Same as before

Edit

Command: kubectl get pods --all-namespaces

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-744cfdf676-vf6fw   1/1     Running   0          47h
kube-system   calico-node-plsv4                          1/1     Running   0          47h
kube-system   coredns-74ff55c5b-btdsr                    1/1     Running   0          47h
kube-system   coredns-74ff55c5b-q66c9                    1/1     Running   0          47h
kube-system   etcd-k8s-master-0                          1/1     Running   0          47h
kube-system   kube-apiserver-k8s-master-0                1/1     Running   0          47h
kube-system   kube-controller-manager-k8s-master-0       1/1     Running   0          47h
kube-system   kube-proxy-7jqx9                           1/1     Running   0          47h
kube-system   kube-scheduler-k8s-master-0                1/1     Running   0          47h

Command: docker exec -it k8s_POD_etcd-k8s-master-0_kube-system_a9f805c0eb22e024f35cb6a5e3768516_0 etcdctl --endpoints=https://10.1.50.4:2379 --key=/etc/kubernetes/pki/etcd/peer.key --cert=/etc/kubernetes/pki/etcd/peer.crt --cacert=/etc/kubernetes/pki/etcd/ca.crt member list

OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"etcdctl\": executable file not found in $PATH": unknown

Answer 1

Command

docker exec -it k8s_POD_etcd-<nodename>_kube-system_<docker container id> etcdctl --endpoints=https://<node ip>:2379 --key=/etc/kubernetes/pki/etcd/peer.key --cert=/etc/kubernetes/pki/etcd/peer.crt --cacert=/etc/kubernetes/pki/etcd/ca.crt member list

Response

OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"etcdctl\": executable file not found in $PATH": unknown

This other question is a better match for your problem:

Kubernetes OCI runtime exec failed - starting container process caused "exec: \"etcdctl\": executable file not found in $PATH": unknown
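The root cause of that error: `k8s_POD_…` is the pause (sandbox) container, which contains no binaries at all; etcdctl lives in the actual etcd container, whose Docker name starts with `k8s_etcd_…`. A sketch of targeting the right container, assuming the Docker runtime shown in the question's logs:

```shell
# List containers belonging to the etcd pod; the sandbox appears as
# k8s_POD_etcd-..., the real etcd process as k8s_etcd_etcd-...
docker ps --format '{{.Names}}' | grep etcd

# Exec into the k8s_etcd_... container (not k8s_POD_...), where etcdctl exists
docker exec -it $(docker ps -q -f name=k8s_etcd_etcd) \
  etcdctl --endpoints=https://10.1.50.4:2379 \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt member list
```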

Answer 2

Did node 10.1.50.5, or a node with that IP/name, previously join the cluster and then get removed? If so, you have to delete the 10.1.50.5 entry (or the node name) from etcd directly, not via kubeadm/kubectl. Alternatively, change the 10.1.50.5 address to 10.1.50.55 and try joining again.

To reconfigure etcd you have to log in to etcd, for example with this command (it can also be done with kubectl exec):

docker exec -it k8s_etcd_etcd-10.1.50.4...COMPLETEPODNAME \
  etcdctl --endpoints=https://10.1.50.4:2379 \
  --key=/etc/kubernetes/pki/etcd/peer.key  \
  --cert=/etc/kubernetes/pki/etcd/peer.crt  \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt member list

etcdctl member remove MEMBERUUID
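As noted above, the same can be done with kubectl exec instead of docker exec; a sketch, assuming the etcd pod name `etcd-k8s-master-0` from the question's pod listing:

```shell
# List the etcd members via the etcd static pod on the healthy node
kubectl -n kube-system exec etcd-k8s-master-0 -- \
  etcdctl --endpoints=https://10.1.50.4:2379 \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt member list

# Remove the stale member using the hex ID printed in the first column
kubectl -n kube-system exec etcd-k8s-master-0 -- \
  etcdctl --endpoints=https://10.1.50.4:2379 \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt member remove <MEMBER_ID>
```

After the stale member is removed, run `kubeadm reset` on 10.1.50.5 before attempting the join again, so its local etcd state does not conflict.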
