
I deployed a basic cluster onto two virtual machines (kvm), with one of them designated as the master running the control plane via kubeadm init.
Everything appears to start correctly, but as soon as I try to perform even the most basic check with kubectl,
I get a connection refused error.
Update 2:
It worked, but I don't understand why. Originally I was running everything as a dedicated user, using sudo. As soon as I switched to root (su root) and repeated the steps, everything started working. What caused the change? Is it something about being in the root environment rather than the user's? A different home directory? The working directory? I'm lost here.
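(For reference, the standard post-kubeadm-init kubeconfig steps look roughly like this; which home directory $HOME points to, the dedicated user's or root's, decides which ~/.kube/config kubectl ends up reading:)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config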
Update 1: minimal failing example:
This time I created yet another virtual machine running Ubuntu 20.04 and followed this tutorial, staying as close as possible to the original example, but the last step fails just like in my original problem. Has anyone actually completed this tutorial successfully?
Step-by-step run:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt update
apt install -y kubelet=1.20.0-00 kubeadm=1.20.0-00 kubectl=1.20.0-00
apt-mark hold kubelet kubeadm kubectl
export VERSION=19.03 && curl -sSL get.docker.com | sh
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
systemctl enable docker
systemctl daemon-reload
systemctl restart docker
kubeadm init --ignore-preflight-errors=all
>>> At this point everything fails - some service is missing?
Additional information about the environment:
- The host is Ubuntu 20.04 LTS running kvm.
- The guests (where I am installing k8s) are Ubuntu Server 22.04.
- The host network is on 192.168.x.y.
- The bridge for the VMs is on 10.0.1.x/24.
- UFW is disabled.
Network:
ip route show
- default via 10.0.1.254 dev ens3 proto static # this is part of the bridge that connects my VMs to the outside world
- 10.0.1.0/24 dev ens3 proto kernel scope link src 10.0.1.10 # this is the machine's IP; the machines hosting the other nodes have IPs ending in 11, 12, 13, and so on
- 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown # auto-created by docker; I'm not up to date on netmask conventions, but I assume docker's /16 is fine. I haven't touched it.
cat /etc/hosts
I added the primary entry myself; the rest was auto-generated:
127.0.0.1 localhost
127.0.0.1 primary
127.0.1.1 k8-vm
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Docker daemon configuration:
cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
me@k8-vm:~$ kubectl get pods
E0117 09:10:43.476675 13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
E0117 09:10:43.477525 13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
E0117 09:10:43.479198 13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
E0117 09:10:43.479754 13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
E0117 09:10:43.481822 13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
The connection to the server 10.0.1.10:6443 was refused - did you specify the right host or port?
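(A quick check along these lines shows whether anything on the node is even listening on port 6443:)
sudo ss -tlnp | grep 6443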
I set up the configuration in ~/.kube/config:
cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: secret_stuff
server: https://10.0.1.10:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: secret_stuff
client-key-data: secret_stuff
I tried flushing iptables, but something keeps restoring the rules. Suspicious entries in iptables:
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- !localhost/8 localhost/8 /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
Chain KUBE-SERVICES (2 references)
target prot opt source destination
REJECT tcp -- anywhere 10.96.0.1 /* default/kubernetes:https has no endpoints */ reject-with icmp-port-unreachable
REJECT udp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns has no endpoints */ reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns-tcp has no endpoints */ reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:metrics has no endpoints */ reject-with icmp-port-unreachable
Could kubectl be blocked by iptables?
Logs and output:
me@primary:~$ sudo kubeadm init --apiserver-advertise-address=172.105.148.95 --apiserver-cert-extra-sans=172.105.148.95 --pod-network-cidr=10.13.13.0/16 --node-name primary
[init] Using Kubernetes version: v1.29.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0119 09:35:48.771333 2949 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local primary] and IPs [10.96.0.1 172.105.148.95]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost primary] and IPs [172.105.148.95 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost primary] and IPs [172.105.148.95 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
@user23204747 - to answer your question:
journalctl -fu kubelet
Jan 19 09:42:58 primary kubelet[3063]: E0119 09:42:58.312695 3063 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.105.148.95:6443/api/v1/nodes\": dial tcp 172.105.148.95:6443: connect: connection refused" node="primary"
Jan 19 09:42:59 primary kubelet[3063]: E0119 09:42:59.476195 3063 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.105.148.95:6443/api/v1/namespaces/default/events\": dial tcp 172.105.148.95:6443: connect: connection refused" event="&Event{ObjectMeta:{primary.17abb5f60468180b default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:primary,UID:primary,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node primary status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:primary,},FirstTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,LastTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:primary,}"
Jan 19 09:43:02 primary kubelet[3063]: E0119 09:43:02.208913 3063 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"primary\" not found"
Jan 19 09:43:05 primary kubelet[3063]: E0119 09:43:05.289531 3063 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.105.148.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp 172.105.148.95:6443: connect: connection refused" interval="7s"
Jan 19 09:43:05 primary kubelet[3063]: I0119 09:43:05.315765 3063 kubelet_node_status.go:73] "Attempting to register node" node="primary"
Jan 19 09:43:05 primary kubelet[3063]: E0119 09:43:05.424960 3063 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.105.148.95:6443/api/v1/nodes\": dial tcp 172.105.148.95:6443: connect: connection refused" node="primary"
Jan 19 09:43:05 primary kubelet[3063]: W0119 09:43:05.450439 3063 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.105.148.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:05 primary kubelet[3063]: E0119 09:43:05.450569 3063 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.105.148.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:09 primary kubelet[3063]: I0119 09:43:09.129122 3063 scope.go:117] "RemoveContainer" containerID="af6057fbd0e43b4628685f316cffef08e6ea4a6236223120ce825681b53a45ad"
Jan 19 09:43:09 primary kubelet[3063]: E0119 09:43:09.129843 3063 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-primary_kube-system(b8faf6a4553601b5610c06ed8c93f9e3)\"" pod="kube-system/etcd-primary" podUID="b8faf6a4553601b5610c06ed8c93f9e3"
Jan 19 09:43:09 primary kubelet[3063]: E0119 09:43:09.585849 3063 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.105.148.95:6443/api/v1/namespaces/default/events\": dial tcp 172.105.148.95:6443: connect: connection refused" event="&Event{ObjectMeta:{primary.17abb5f60468180b default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:primary,UID:primary,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node primary status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:primary,},FirstTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,LastTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:primary,}"
Jan 19 09:43:12 primary kubelet[3063]: I0119 09:43:12.128675 3063 scope.go:117] "RemoveContainer" containerID="a9185c62d88f3192935626d7213b4378816563580b4afa1ac14b74f4029b548c"
Jan 19 09:43:12 primary kubelet[3063]: E0119 09:43:12.209179 3063 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"primary\" not found"
Jan 19 09:43:12 primary kubelet[3063]: E0119 09:43:12.398807 3063 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.105.148.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp 172.105.148.95:6443: connect: connection refused" interval="7s"
Jan 19 09:43:12 primary kubelet[3063]: I0119 09:43:12.427248 3063 kubelet_node_status.go:73] "Attempting to register node" node="primary"
Jan 19 09:43:12 primary kubelet[3063]: E0119 09:43:12.551820 3063 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.105.148.95:6443/api/v1/nodes\": dial tcp 172.105.148.95:6443: connect: connection refused" node="primary"
Jan 19 09:43:14 primary kubelet[3063]: W0119 09:43:14.542312 3063 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.105.148.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:14 primary kubelet[3063]: E0119 09:43:14.542441 3063 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.105.148.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:17 primary kubelet[3063]: W0119 09:43:17.648986 3063 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.105.148.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:17 primary kubelet[3063]: E0119 09:43:17.649087 3063 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.105.148.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:19 primary kubelet[3063]: E0119 09:43:19.507898 3063 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.105.148.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp 172.105.148.95:6443: connect: connection refused" interval="7s"
Jan 19 09:43:19 primary kubelet[3063]: I0119 09:43:19.554658 3063 kubelet_node_status.go:73] "Attempting to register node" node="primary"
Jan 19 09:43:19 primary kubelet[3063]: E0119 09:43:19.679757 3063 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.105.148.95:6443/api/v1/nodes\": dial tcp 172.105.148.95:6443: connect: connection refused" node="primary"
Jan 19 09:43:19 primary kubelet[3063]: E0119 09:43:19.694847 3063 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.105.148.95:6443/api/v1/namespaces/default/events\": dial tcp 172.105.148.95:6443: connect: connection refused" event="&Event{ObjectMeta:{primary.17abb5f60468180b default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:primary,UID:primary,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node primary status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:primary,},FirstTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,LastTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:primary,}"
Jan 19 09:43:20 primary kubelet[3063]: W0119 09:43:20.761463 3063 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.105.148.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dprimary&limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:20 primary kubelet[3063]: E0119 09:43:20.761607 3063 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.105.148.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dprimary&limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:21 primary kubelet[3063]: E0119 09:43:21.126858 3063 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.105.148.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:22 primary kubelet[3063]: E0119 09:43:22.209975 3063 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"primary\" not found"
Jan 19 09:43:24 primary kubelet[3063]: I0119 09:43:24.129352 3063 scope.go:117] "RemoveContainer" containerID="af6057fbd0e43b4628685f316cffef08e6ea4a6236223120ce825681b53a45ad"
Jan 19 09:43:24 primary kubelet[3063]: E0119 09:43:24.130066 3063 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-primary_kube-system(b8faf6a4553601b5610c06ed8c93f9e3)\"" pod="kube-system/etcd-primary"
podUID="b8faf6a4553601b5610c06ed8c93f9e3"
CRI output:
sudo crictl ps --all
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] validate service connection: validate CRI v1 runtime API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory"
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory"
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
58dcdbbbbdb57 53b148a9d1963 About a minute ago Exited kube-apiserver 17 aa7a744e3afca kube-apiserver-primary
af6057fbd0e43 a0eed15eed449 3 minutes ago Exited etcd 19 12ffe5fa72b5c etcd-primary
a20b25ee9f45e 406945b511542 3 minutes ago Running kube-scheduler 5 507b8c9d8b7ab kube-scheduler-primary
049545de2ba82 79d451ca186a6 3 minutes ago Running kube-controller-manager 4 75616efa726fd kube-controller-manager-primary
8fbf3e41714a7 79d451ca186a6 8 minutes ago Exited kube-controller-manager 3 ae5e4208a01ef kube-controller-manager-primary
4b13c766ae730 406945b511542 8 minutes ago Exited kube-scheduler 4 7e4f0405277e4 kube-scheduler-primary
Container logs: a bit messy, but it ends up retrying, failing, and spitting out the same error messages over and over.
W0119 09:43:29.388741 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0119 09:43:30.103643 1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
F0119 09:43:33.102160 1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
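(The etcd container ID from the crictl listing above can be used to pull the etcd logs the same way, e.g.:)
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs af6057fbd0e43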
kubelet status:
sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2024-01-19 13:36:51 UTC; 12min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 8539 (kubelet)
Tasks: 12 (limit: 37581)
Memory: 33.6M
CPU: 19.659s
CGroup: /system.slice/kubelet.service
└─8539 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/p>
Jan 19 13:49:18 primary kubelet[8539]: W0119 13:49:18.366543 8539 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://master-node:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup master-node on 127.0.0.53:53: server m>
Jan 19 13:49:18 primary kubelet[8539]: E0119 13:49:18.366695 8539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://master-node:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup master-no>
Jan 19 13:49:19 primary kubelet[8539]: E0119 13:49:19.481598 8539 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://master-node:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp: lookup master-node on 127.0.0.53:>
Jan 19 13:49:19 primary kubelet[8539]: I0119 13:49:19.902400 8539 kubelet_node_status.go:70] "Attempting to register node" node="primary"
Jan 19 13:49:19 primary kubelet[8539]: E0119 13:49:19.904293 8539 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://master-node:6443/api/v1/nodes\": dial tcp: lookup master-node on 127.0.0.53:53: server misbehaving" node="primary"
Jan 19 13:49:21 primary kubelet[8539]: E0119 13:49:21.741936 8539 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"primary\" not found"
Jan 19 13:49:26 primary kubelet[8539]: E0119 13:49:26.261529 8539 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"primary.17abc31ca21e74b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:>
Jan 19 13:49:26 primary kubelet[8539]: E0119 13:49:26.483811 8539 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://master-node:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp: lookup master-node on 127.0.0.53:>
Jan 19 13:49:26 primary kubelet[8539]: I0119 13:49:26.907141 8539 kubelet_node_status.go:70] "Attempting to register node" node="primary"
Jan 19 13:49:26 primary kubelet[8539]: E0119 13:49:26.909431 8539 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://master-node:6443/api/v1/nodes\": dial tcp: lookup master-node on 127.0.0.53:53: server misbehaving" node="primary"
Answer 1
Start by ruling out the obvious: is the apiserver pod actually running? Try the following on the control plane node.
First, is there anything suspicious in the kubelet logs?
journalctl -fu kubelet
Then look at the container runtime logs for the pods:
sudo crictl ps --all # get container_id
sudo crictl logs container_id
You didn't include the full kubeadm init command you ran, so make sure --apiserver-advertise-address is set correctly.
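For example (a sketch only; based on the routing table you posted, the node's own IP appears to be 10.0.1.10, so something like):
sudo kubeadm init --apiserver-advertise-address=10.0.1.10 --node-name primary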
Answer 2
A connection refused error generally means the request reached the server, but no service is listening on the specified port. Has the API server started on the master node? Check the status of the kube-apiserver with the following command:
systemctl status kube-apiserver
Since your kubeconfig file is at ~/.kube/config, make sure the KUBECONFIG environment variable points to the correct file. Also verify that there is network connectivity between the client machine and the master node, and that the master node's IP address is reachable from your machine.
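For example (assuming the file shown above):
export KUBECONFIG=$HOME/.kube/config
kubectl config view --minify   # confirms which server kubectl will talk to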
Check your firewall and security groups, and make sure the ports required for Kubernetes API communication are open. By default, the Kubernetes API server listens on port 6443.
If you suspect there are problematic entries in iptables, inspect the rules with:
iptables -L -n
- Look for rules that could interfere with Kubernetes API communication. You can temporarily flush the rules with iptables -F.
- Check the kube-proxy status on the master node with:
kubectl -n kube-system get pods -l k8s-app=kube-proxy
Edit
This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
As the error message above says, a connection refused error can be caused by a node misconfiguration, such as required cgroups being disabled, which will certainly cause problems for the kubelet.
- Check the status of the kubelet service on the node by running:
systemctl status kubelet
- This command will show whether the kubelet has failed or is unhealthy. If the kubelet is not running, try starting it with systemctl start kubelet.
- Validate the node configuration to make sure it is set up correctly and that no required cgroup is disabled or misconfigured.
A. If you are using containerd as the CRI, change SystemdCgroup = false to SystemdCgroup = true in the containerd configuration (see the sketch after these commands), then run:
sudo systemctl restart containerd
sudo systemctl restart kubelet
Reboot the machine (this may be optional).
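One common way to make the SystemdCgroup change (note: regenerating the default config overwrites any customizations already in /etc/containerd/config.toml, so back it up first):
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml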
B. If you are using docker as the CRI, open /etc/docker/daemon.json (e.g. vi /etc/docker/daemon.json) and change "exec-opts": ["native.cgroupdriver=systemd"] to "exec-opts": ["native.cgroupdriver=cgroupfs"], then run:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Reboot the machine (this may be optional).
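Afterwards, a quick way to confirm which cgroup driver docker is actually using:
docker info | grep -i cgroup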