kube-system pods in CrashLoopBackOff

EDIT: I followed exactly the same steps on an Ubuntu 20.04 server and it works fine ...

I created a new kube cluster on an Ubuntu 22.04 server, but I'm having several problems: the kube-system pods keep going up and down. I checked the logs, but I can't find the problem.

kubectl get po -A

kubectl describe po calico-kube-controllers-7bdbfc669-kdts2 -n kube-system
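For the crashing containers I also pulled the previous logs, something like this (pod names taken from the output below):

kubectl logs -n kube-system kube-controller-manager-ubuntuserver --previous
kubectl logs -n kube-system kube-proxy-lc5nm --previous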

Sometimes I can't use kubectl at all; I think that's because the kube-apiserver pod is down.

Connection refused
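When that happens, the containers can still be inspected directly through containerd with crictl (assuming crictl is installed and pointed at containerd's default socket), for example:

sudo crictl ps -a               # list all containers, including the crashed ones
sudo crictl logs <container-id> # dump the logs of one crashed container (ID from the command above)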

rbo@ubuntuserver:~$ kubectl get po -A
NAMESPACE     NAME                                      READY   STATUS             RESTARTS         AGE
kube-system   calico-kube-controllers-7bdbfc669-kdts2   1/1     Running            7 (6m13s ago)    16m
kube-system   calico-node-jz5xb                         1/1     Running            7 (7m9s ago)     16m
kube-system   coredns-787d4945fb-l4bf5                  1/1     Running            6 (5m59s ago)    5h26m
kube-system   coredns-787d4945fb-nt8lh                  1/1     Running            4 (93s ago)      5h26m
kube-system   etcd-ubuntuserver                         1/1     Running            16 (6m40s ago)   5h26m
kube-system   kube-apiserver-ubuntuserver               1/1     Running            15 (4m29s ago)   5h26m
kube-system   kube-controller-manager-ubuntuserver      0/1     CrashLoopBackOff   17 (2m21s ago)   5h25m
kube-system   kube-proxy-lc5nm                          0/1     CrashLoopBackOff   15 (44s ago)     5h26m
kube-system   kube-scheduler-ubuntuserver               1/1     Running            17 (5m40s ago)   5h25m
rbo@ubuntuserver:~$ journalctl -n 30
Dec 13 21:17:12 ubuntuserver kubelet[662]: E1213 21:17:12.362890     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:13 ubuntuserver kubelet[662]: I1213 21:17:13.366892     662 scope.go:115] "RemoveContainer" containerID="4621fe31f41ed1c053e77f495ed215271c4dd12b080405c533dc84e1185680d4"
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.368653147Z" level=info msg="CreateContainer within sandbox \"41195dbd058b802b9812da2ce092a6298580768d9c42401a808f0f5d02342ba5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:17,}"
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.379867865Z" level=info msg="CreateContainer within sandbox \"41195dbd058b802b9812da2ce092a6298580768d9c42401a808f0f5d02342ba5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:17,} returns container id \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\""
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.380737267Z" level=info msg="StartContainer for \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\""
Dec 13 21:17:13 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6-runc.Jtkdkn.mount: Deactivated successfully.
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.450275779Z" level=info msg="StartContainer for \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\" returns successfully"
Dec 13 21:17:14 ubuntuserver kubelet[662]: I1213 21:17:14.329726     662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:14 ubuntuserver kubelet[662]: E1213 21:17:14.330141     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
Dec 13 21:17:18 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.VJwxXb.mount: Deactivated successfully.
Dec 13 21:17:18 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.SXK3sR.mount: Deactivated successfully.
Dec 13 21:17:19 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-56191d51b9dfe7b56c320ba85c897d7deafdf070364777a698782d6685ffe256-runc.KeqqBx.mount: Deactivated successfully.
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083816298Z" level=info msg="StopPodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083907197Z" level=info msg="TearDown network for sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083949736Z" level=info msg="StopPodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" returns successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084731070Z" level=info msg="RemovePodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084757189Z" level=info msg="Forcibly stopping sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084814886Z" level=info msg="TearDown network for sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.088007063Z" level=info msg="RemovePodSandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" returns successfully"
Dec 13 21:17:25 ubuntuserver kubelet[662]: I1213 21:17:25.349409     662 scope.go:115] "RemoveContainer" containerID="23c70b66c62384bca25329d3a7ab5c24e209cb791339edff70f4016235ea5dea"
Dec 13 21:17:25 ubuntuserver kubelet[662]: E1213 21:17:25.349754     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:25 ubuntuserver kubelet[662]: I1213 21:17:25.366261     662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:25 ubuntuserver kubelet[662]: E1213 21:17:25.366602     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
Dec 13 21:17:28 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.UWyAsT.mount: Deactivated successfully.
Dec 13 21:17:29 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.X1afEe.mount: Deactivated successfully.
Dec 13 21:17:36 ubuntuserver kubelet[662]: I1213 21:17:36.366771     662 scope.go:115] "RemoveContainer" containerID="23c70b66c62384bca25329d3a7ab5c24e209cb791339edff70f4016235ea5dea"
Dec 13 21:17:36 ubuntuserver kubelet[662]: E1213 21:17:36.367009     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:39 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-56191d51b9dfe7b56c320ba85c897d7deafdf070364777a698782d6685ffe256-runc.yA3w8Y.mount: Deactivated successfully.
Dec 13 21:17:40 ubuntuserver kubelet[662]: I1213 21:17:40.362760     662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:40 ubuntuserver kubelet[662]: E1213 21:17:40.363111     662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
lines 1-30/30 (END)

Answer 1

Using kubeadm and containerd. A kubeadm reset, including manually deleting $HOME/.kube/config and /etc/cni/net.d, seemed to fix this for me.

Reset kubeadm.

$ sudo kubeadm reset
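kubeadm reset also warns that it does not clean up iptables/IPVS rules. If you want a completely clean slate you can flush them manually (use with care, this drops every iptables rule on the node):

$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X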

Remove this config file, or replace it later when prompted after the new init.

$ sudo rm $HOME/.kube/config

Clean up the CNI configuration (I think this was the key part for me). Use at your own risk (I have little idea of what lives there, as I'm still fairly new to this); it wouldn't hurt to back these files up first!
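For example, a quick backup before deleting them (the destination path is just a suggestion):

$ sudo cp -a /etc/cni/net.d /etc/cni/net.d.bak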

$ sudo rm -rf /etc/cni/net.d

kubeadm init should now bring up a working cluster without the kube-system pods 'spontaneously' going into CrashLoopBackOff.

$ sudo kubeadm init
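After the init finishes, kubeadm prints the usual instructions for recreating the kubeconfig that was removed above; the standard snippet is:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

The CNI (Calico in the question) also needs to be reapplied, since its config under /etc/cni/net.d was deleted.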

Answer 2

Following this tutorial works fine on 22.04. I had skipped the kernel settings and the containerd configuration, which aren't mandatory on 20.04.

Kube installation tutorial on Ubuntu 22.04 server
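For reference, the kernel and containerd steps that tend to be the missing piece on 22.04 look roughly like this (a sketch of the standard kubeadm prerequisites, not copied from the tutorial; file names are the usual defaults):

# kernel modules needed by containerd and the CNI
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl settings required for pod networking
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system

# containerd: regenerate the default config and switch to the systemd cgroup driver
containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd

On Ubuntu 22.04 (cgroup v2) the SystemdCgroup = true setting in particular is often what stops the control-plane pods from crash-looping.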
