
EDIT: I followed exactly the same steps on Ubuntu Server 20.04 and everything works fine there...
I created a new Kubernetes cluster on Ubuntu Server 22.04 but I'm running into several problems: the kube-system pods keep going up and down. I checked the logs but can't find the cause.
kubectl describe po calico-kube-controllers-7bdbfc669-kdts2 -n kube-system
At times I can't use kubectl at all; I think that's because the kube-apiserver pod isn't running.
rbo@ubuntuserver:~$ kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS             RESTARTS         AGE
kube-system   calico-kube-controllers-7bdbfc669-kdts2    1/1     Running            7 (6m13s ago)    16m
kube-system   calico-node-jz5xb                          1/1     Running            7 (7m9s ago)     16m
kube-system   coredns-787d4945fb-l4bf5                   1/1     Running            6 (5m59s ago)    5h26m
kube-system   coredns-787d4945fb-nt8lh                   1/1     Running            4 (93s ago)      5h26m
kube-system   etcd-ubuntuserver                          1/1     Running            16 (6m40s ago)   5h26m
kube-system   kube-apiserver-ubuntuserver                1/1     Running            15 (4m29s ago)   5h26m
kube-system   kube-controller-manager-ubuntuserver       0/1     CrashLoopBackOff   17 (2m21s ago)   5h25m
kube-system   kube-proxy-lc5nm                           0/1     CrashLoopBackOff   15 (44s ago)     5h26m
kube-system   kube-scheduler-ubuntuserver                1/1     Running            17 (5m40s ago)   5h25m
rbo@ubuntuserver:~$ journalctl -n 30
Dec 13 21:17:12 ubuntuserver kubelet[662]: E1213 21:17:12.362890 662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:13 ubuntuserver kubelet[662]: I1213 21:17:13.366892 662 scope.go:115] "RemoveContainer" containerID="4621fe31f41ed1c053e77f495ed215271c4dd12b080405c533dc84e1185680d4"
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.368653147Z" level=info msg="CreateContainer within sandbox \"41195dbd058b802b9812da2ce092a6298580768d9c42401a808f0f5d02342ba5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:17,}"
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.379867865Z" level=info msg="CreateContainer within sandbox \"41195dbd058b802b9812da2ce092a6298580768d9c42401a808f0f5d02342ba5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:17,} returns container id \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\""
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.380737267Z" level=info msg="StartContainer for \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\""
Dec 13 21:17:13 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6-runc.Jtkdkn.mount: Deactivated successfully.
Dec 13 21:17:13 ubuntuserver containerd[672]: time="2022-12-13T21:17:13.450275779Z" level=info msg="StartContainer for \"4f86b5e615ba3f0b987024bf64c3063275c3090a4c3b3689172de592908aedc6\" returns successfully"
Dec 13 21:17:14 ubuntuserver kubelet[662]: I1213 21:17:14.329726 662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:14 ubuntuserver kubelet[662]: E1213 21:17:14.330141 662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
Dec 13 21:17:18 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.VJwxXb.mount: Deactivated successfully.
Dec 13 21:17:18 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.SXK3sR.mount: Deactivated successfully.
Dec 13 21:17:19 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-56191d51b9dfe7b56c320ba85c897d7deafdf070364777a698782d6685ffe256-runc.KeqqBx.mount: Deactivated successfully.
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083816298Z" level=info msg="StopPodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083907197Z" level=info msg="TearDown network for sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.083949736Z" level=info msg="StopPodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" returns successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084731070Z" level=info msg="RemovePodSandbox for \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084757189Z" level=info msg="Forcibly stopping sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\""
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.084814886Z" level=info msg="TearDown network for sandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" successfully"
Dec 13 21:17:24 ubuntuserver containerd[672]: time="2022-12-13T21:17:24.088007063Z" level=info msg="RemovePodSandbox \"9a253fdf76b2614b7a8280d8cdcae43ee9e94736fe309982771e7d82b86118cd\" returns successfully"
Dec 13 21:17:25 ubuntuserver kubelet[662]: I1213 21:17:25.349409 662 scope.go:115] "RemoveContainer" containerID="23c70b66c62384bca25329d3a7ab5c24e209cb791339edff70f4016235ea5dea"
Dec 13 21:17:25 ubuntuserver kubelet[662]: E1213 21:17:25.349754 662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:25 ubuntuserver kubelet[662]: I1213 21:17:25.366261 662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:25 ubuntuserver kubelet[662]: E1213 21:17:25.366602 662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
Dec 13 21:17:28 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.UWyAsT.mount: Deactivated successfully.
Dec 13 21:17:29 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-da16c36612417ba0f5ab81357a0a2452cf4ae38b47cb9eab860e2d7bbe0de637-runc.X1afEe.mount: Deactivated successfully.
Dec 13 21:17:36 ubuntuserver kubelet[662]: I1213 21:17:36.366771 662 scope.go:115] "RemoveContainer" containerID="23c70b66c62384bca25329d3a7ab5c24e209cb791339edff70f4016235ea5dea"
Dec 13 21:17:36 ubuntuserver kubelet[662]: E1213 21:17:36.367009 662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-lc5nm_kube-system(0f9da167-6b5b-4530-8ae4-067bcfd88098)\"" pod="kube-system/kube-proxy-lc5nm" podUID=0f9da167-6b5b-4530-8ae4-067bcfd88098
Dec 13 21:17:39 ubuntuserver systemd[1]: run-containerd-runc-k8s.io-56191d51b9dfe7b56c320ba85c897d7deafdf070364777a698782d6685ffe256-runc.yA3w8Y.mount: Deactivated successfully.
Dec 13 21:17:40 ubuntuserver kubelet[662]: I1213 21:17:40.362760 662 scope.go:115] "RemoveContainer" containerID="d90c8c392ef73770f2161ab12a98cdbdc3e3f7937239f8b10763f760d6091201"
Dec 13 21:17:40 ubuntuserver kubelet[662]: E1213 21:17:40.363111 662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-ubuntuserver_kube-system(f8f5737540cef58793e773c366765eac)\"" pod="kube-system/kube-controller-manager-ubuntuse>
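While the API server is flapping I can still inspect the failing containers directly through containerd. For reference, this is roughly what I run to pull their logs (assuming crictl is pointed at containerd's CRI socket):
$ sudo crictl ps -a | grep -E 'kube-proxy|kube-controller-manager'   # list containers, including exited ones
$ sudo crictl logs <container-id>                                    # logs of a specific failed container
$ kubectl -n kube-system logs kube-proxy-lc5nm --previous            # when the API server does respond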
Answer 1
Using kubeadm and containerd. A kubeadm reset, including manually deleting $HOME/.kube/config and /etc/cni/net.d, seemed to fix it.
Reset kubeadm:
$ sudo kubeadm reset
Delete this kubeconfig file, or overwrite it later when prompted:
$ sudo rm $HOME/.kube/config
Delete the CNI configuration (I think this was the key part for me). Use at your own risk: I'm fairly new to this and don't know much about what lives in there, so it wouldn't hurt to back those files up first!
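A plain copy of the directory works as a backup (the .bak destination is just an example):
$ sudo cp -a /etc/cni/net.d /etc/cni/net.d.bak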
$ sudo rm -rf /etc/cni/net.d
kubeadm init should now bring up a working cluster without the kube-system pods 'spontaneously' going into CrashLoopBackOff:
$ sudo kubeadm init
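Since the old kubeconfig was deleted, the usual post-init steps are needed again before kubectl works, and the CNI (Calico in my case) has to be reinstalled. This is just the standard sequence that kubeadm init prints; the CNI manifest below is a placeholder, use whatever your Calico version's docs point to:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl apply -f calico.yaml   # re-apply your CNI manifest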
Answer 2
Following this tutorial works fine on 22.04. I had skipped the kernel configuration and the containerd configuration, which are not mandatory on 20.04.
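I can't tell which tutorial is meant, but the kernel and containerd settings it refers to are most likely the standard kubeadm prerequisites: loading the overlay and br_netfilter modules, enabling the bridge/forwarding sysctls, and switching containerd's runc runtime to the systemd cgroup driver (a common cause of control-plane pods crash-looping on 22.04, which uses cgroup v2). A sketch of those steps, under that assumption:
# Kernel prerequisites: load the modules kubeadm expects and enable bridge/forwarding sysctls
$ sudo modprobe overlay
$ sudo modprobe br_netfilter
$ printf "overlay\nbr_netfilter\n" | sudo tee /etc/modules-load.d/k8s.conf
$ printf "net.bridge.bridge-nf-call-iptables = 1\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.ipv4.ip_forward = 1\n" | sudo tee /etc/sysctl.d/k8s.conf
$ sudo sysctl --system
# containerd: regenerate the default config and switch runc to the systemd cgroup driver
$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$ sudo systemctl restart containerd kubelet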