Kubeadm and kubectl - connection refused

I deployed a basic cluster on two virtual machines (kvm), designated one of them as the master, and ran kubeadm init on it to set up the control plane. Everything appears to come up correctly, but when I try the most basic check with kubectl, I get a connection refused error.

Update 2:

It now works, but I don't understand why. Originally I was running everything as a dedicated user with sudo. When I switched to root (su root) and repeated the steps, everything started working. What caused the change? Does it have something to do with being in root's environment rather than the user's? A different home directory? The working directory? I don't know.
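
For reference, the per-user kubeconfig setup that kubeadm init prints at the end is below; whether I ran it correctly as the non-root user is exactly the part I'm unsure about:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config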

Update 1: a minimal failing example:

This time I created another virtual machine running Ubuntu 20.04 and tried to make it a replica. I am staying as close as possible to this tutorial and the original example, but the last step fails in the same way as the initial problem. Has anyone actually completed this tutorial successfully?

Running it step by step:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt update
apt install -y kubelet=1.20.0-00 kubeadm=1.20.0-00 kubectl=1.20.0-00
apt-mark hold kubelet kubeadm kubectl
export VERSION=19.03 && curl -sSL get.docker.com | sh

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

systemctl enable docker
systemctl daemon-reload
systemctl restart docker
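
To confirm the cgroup driver change actually took effect (my own check, not part of the tutorial), something like this should work:

sudo docker info 2>/dev/null | grep -i cgroup    # expect "Cgroup Driver: systemd"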

kubeadm init --ignore-preflight-errors=all

>>> At this point everything fails - some service is missing?
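
To see whether anything is actually listening where kubectl expects the API server (assuming the default port 6443), a quick check is:

sudo systemctl status kubelet      # is the kubelet running at all?
sudo ss -tlnp | grep 6443          # is anything listening on the API server port?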

More details about the environment:

  • The host is Ubuntu 20.04 LTS running kvm.
  • The guest (where k8s is being installed) is Ubuntu Server 22.04.
  • The host network is on 192.168.x.y.
  • The VM bridge is on 10.0.1.x/24.
  • UFW is inactive.

Network:

ip route show
  • default via 10.0.1.254 dev ens3 proto static # this is part of the bridge that connects the VMs to the outside world
  • 10.0.1.0/24 dev ens3 proto kernel scope link src 10.0.1.10 # this is the machine's IP; the machines hosting the other nodes have IPs ending in 11, 12, 13, and so on
  • 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown # auto-generated by docker - I'm not great with modern netmask notation, but I trust the docker /16 is fine; I haven't touched it
cat /etc/hosts

I added primary to the hosts file; the rest was auto-generated.

127.0.0.1 localhost
127.0.0.1 primary
127.0.1.1 k8-vm

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Docker daemon configuration:
cat /etc/docker/daemon.json 
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

me@k8-vm:~$ kubectl get pods

E0117 09:10:43.476675   13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
E0117 09:10:43.477525   13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
E0117 09:10:43.479198   13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
E0117 09:10:43.479754   13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
E0117 09:10:43.481822   13492 memcache.go:265] couldn't get current server API group list: Get "https://10.0.1.10:6443/api?timeout=32s": dial tcp 10.0.1.10:6443: connect: connection refused
The connection to the server 10.0.1.10:6443 was refused - did you specify the right host or port?

I set up the configuration in ~/.kube/config:

cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: secret_stuff
    server: https://10.0.1.10:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: secret_stuff
    client-key-data: secret_stuff
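
To double-check which endpoint kubectl actually uses (in case an environment variable overrides this file), I also look at:

echo $KUBECONFIG
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'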

I tried flushing iptables, but something keeps restoring the rules. Suspicious entries in iptables:

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  -- !localhost/8          localhost/8          /* block incoming     localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             ctstate INVALID
Chain KUBE-SERVICES (2 references)
target     prot opt source               destination         
REJECT     tcp  --  anywhere             10.96.0.1            /* default/kubernetes:https has no endpoints */ reject-with icmp-port-unreachable
REJECT     udp  --  anywhere             10.96.0.10           /* kube-system/kube-dns:dns has no endpoints */ reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             10.96.0.10           /* kube-system/kube-dns:dns-tcp has no endpoints */ reject-with icmp-port-unreachable
REJECT     tcp  --  anywhere             10.96.0.10           /* kube-system/kube-dns:metrics has no endpoints */ reject-with icmp-port-unreachable

Could kubectl be blocked by iptables?
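
To see whether any of those KUBE-* rules actually match my kubectl traffic, the packet counters can be inspected like this:

sudo iptables -L KUBE-FIREWALL -v -n    # the pkts column shows whether the DROP rule is being hit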


Logs and output:

me@primary:~$ sudo kubeadm init --apiserver-advertise-address=172.105.148.95 --apiserver-cert-extra-sans=172.105.148.95 --pod-network-cidr=10.13.13.0/16 --node-name primary

[init] Using Kubernetes version: v1.29.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0119 09:35:48.771333    2949 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local primary] and IPs [10.96.0.1 172.105.148.95]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost primary] and IPs [172.105.148.95 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost primary] and IPs [172.105.148.95 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key


   [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "super-admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

@user23204747 - to answer your questions:

journalctl -fu kubelet
Jan 19 09:42:58 primary kubelet[3063]: E0119 09:42:58.312695    3063 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.105.148.95:6443/api/v1/nodes\": dial tcp 172.105.148.95:6443: connect: connection refused" node="primary"
Jan 19 09:42:59 primary kubelet[3063]: E0119 09:42:59.476195    3063 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.105.148.95:6443/api/v1/namespaces/default/events\": dial tcp 172.105.148.95:6443: connect: connection refused" event="&Event{ObjectMeta:{primary.17abb5f60468180b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:primary,UID:primary,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node primary status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:primary,},FirstTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,LastTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:primary,}"
Jan 19 09:43:02 primary kubelet[3063]: E0119 09:43:02.208913    3063 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"primary\" not found"
Jan 19 09:43:05 primary kubelet[3063]: E0119 09:43:05.289531    3063 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.105.148.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp 172.105.148.95:6443: connect: connection refused" interval="7s"
Jan 19 09:43:05 primary kubelet[3063]: I0119 09:43:05.315765    3063 kubelet_node_status.go:73] "Attempting to register node" node="primary"
Jan 19 09:43:05 primary kubelet[3063]: E0119 09:43:05.424960    3063 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.105.148.95:6443/api/v1/nodes\": dial tcp 172.105.148.95:6443: connect: connection refused" node="primary"
Jan 19 09:43:05 primary kubelet[3063]: W0119 09:43:05.450439    3063 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.105.148.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:05 primary kubelet[3063]: E0119 09:43:05.450569    3063 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.105.148.95:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:09 primary kubelet[3063]: I0119 09:43:09.129122    3063 scope.go:117] "RemoveContainer" containerID="af6057fbd0e43b4628685f316cffef08e6ea4a6236223120ce825681b53a45ad"
Jan 19 09:43:09 primary kubelet[3063]: E0119 09:43:09.129843    3063 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-primary_kube-system(b8faf6a4553601b5610c06ed8c93f9e3)\"" pod="kube-system/etcd-primary" podUID="b8faf6a4553601b5610c06ed8c93f9e3"
Jan 19 09:43:09 primary kubelet[3063]: E0119 09:43:09.585849    3063 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.105.148.95:6443/api/v1/namespaces/default/events\": dial tcp 172.105.148.95:6443: connect: connection refused" event="&Event{ObjectMeta:{primary.17abb5f60468180b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:primary,UID:primary,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node primary status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:primary,},FirstTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,LastTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:primary,}"
Jan 19 09:43:12 primary kubelet[3063]: I0119 09:43:12.128675    3063 scope.go:117] "RemoveContainer" containerID="a9185c62d88f3192935626d7213b4378816563580b4afa1ac14b74f4029b548c"
Jan 19 09:43:12 primary kubelet[3063]: E0119 09:43:12.209179    3063 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"primary\" not found"
Jan 19 09:43:12 primary kubelet[3063]: E0119 09:43:12.398807    3063 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.105.148.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp 172.105.148.95:6443: connect: connection refused" interval="7s"
Jan 19 09:43:12 primary kubelet[3063]: I0119 09:43:12.427248    3063 kubelet_node_status.go:73] "Attempting to register node" node="primary"
Jan 19 09:43:12 primary kubelet[3063]: E0119 09:43:12.551820    3063 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.105.148.95:6443/api/v1/nodes\": dial tcp 172.105.148.95:6443: connect: connection refused" node="primary"
Jan 19 09:43:14 primary kubelet[3063]: W0119 09:43:14.542312    3063 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.105.148.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:14 primary kubelet[3063]: E0119 09:43:14.542441    3063 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.105.148.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:17 primary kubelet[3063]: W0119 09:43:17.648986    3063 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.105.148.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:17 primary kubelet[3063]: E0119 09:43:17.649087    3063 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.105.148.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:19 primary kubelet[3063]: E0119 09:43:19.507898    3063 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.105.148.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp 172.105.148.95:6443: connect: connection refused" interval="7s"
Jan 19 09:43:19 primary kubelet[3063]: I0119 09:43:19.554658    3063 kubelet_node_status.go:73] "Attempting to register node" node="primary"
Jan 19 09:43:19 primary kubelet[3063]: E0119 09:43:19.679757    3063 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.105.148.95:6443/api/v1/nodes\": dial tcp 172.105.148.95:6443: connect: connection refused" node="primary"
Jan 19 09:43:19 primary kubelet[3063]: E0119 09:43:19.694847    3063 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.105.148.95:6443/api/v1/namespaces/default/events\": dial tcp 172.105.148.95:6443: connect: connection refused" event="&Event{ObjectMeta:{primary.17abb5f60468180b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:primary,UID:primary,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node primary status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:primary,},FirstTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,LastTimestamp:2024-01-19 09:35:52.130377739 +0000 UTC m=+0.287533886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:primary,}"
Jan 19 09:43:20 primary kubelet[3063]: W0119 09:43:20.761463    3063 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.105.148.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dprimary&limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:20 primary kubelet[3063]: E0119 09:43:20.761607    3063 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.105.148.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dprimary&limit=500&resourceVersion=0": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:21 primary kubelet[3063]: E0119 09:43:21.126858    3063 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.105.148.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.105.148.95:6443: connect: connection refused
Jan 19 09:43:22 primary kubelet[3063]: E0119 09:43:22.209975    3063 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"primary\" not found"
Jan 19 09:43:24 primary kubelet[3063]: I0119 09:43:24.129352    3063 scope.go:117] "RemoveContainer" containerID="af6057fbd0e43b4628685f316cffef08e6ea4a6236223120ce825681b53a45ad"
Jan 19 09:43:24 primary kubelet[3063]: E0119 09:43:24.130066    3063 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=etcd pod=etcd-primary_kube-system(b8faf6a4553601b5610c06ed8c93f9e3)\"" pod="kube-system/etcd-primary" podUID="b8faf6a4553601b5610c06ed8c93f9e3"

CRI output:

sudo crictl ps --all
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
ERRO[0000] validate service connection: validate CRI v1 runtime API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory" 
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
ERRO[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory" 
CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
58dcdbbbbdb57       53b148a9d1963       About a minute ago   Exited              kube-apiserver            17                  aa7a744e3afca       kube-apiserver-primary
af6057fbd0e43       a0eed15eed449       3 minutes ago        Exited              etcd                      19                  12ffe5fa72b5c       etcd-primary
a20b25ee9f45e       406945b511542       3 minutes ago        Running             kube-scheduler            5                   507b8c9d8b7ab       kube-scheduler-primary
049545de2ba82       79d451ca186a6       3 minutes ago        Running             kube-controller-manager   4                   75616efa726fd       kube-controller-manager-primary
8fbf3e41714a7       79d451ca186a6       8 minutes ago        Exited              kube-controller-manager   3                   ae5e4208a01ef       kube-controller-manager-primary
4b13c766ae730       406945b511542       8 minutes ago        Exited              kube-scheduler            4                   7e4f0405277e4       kube-scheduler-primary

Container logs: a bit messy, but they just keep printing the same error messages as things retry and fail.

W0119 09:43:29.388741       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0119 09:43:30.103643       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
F0119 09:43:33.102160       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
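
For completeness, this is roughly how I pull the logs of a specific failing container (the container IDs come from the crictl listing above):

sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs af6057fbd0e43    # etcd
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs 58dcdbbbbdb57    # kube-apiserver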

kubelet status:
sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Fri 2024-01-19 13:36:51 UTC; 12min ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 8539 (kubelet)
      Tasks: 12 (limit: 37581)
     Memory: 33.6M
        CPU: 19.659s
     CGroup: /system.slice/kubelet.service
             └─8539 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/p>

Jan 19 13:49:18 primary kubelet[8539]: W0119 13:49:18.366543    8539 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://master-node:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup master-node on 127.0.0.53:53: server m>
Jan 19 13:49:18 primary kubelet[8539]: E0119 13:49:18.366695    8539 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://master-node:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp: lookup master-no>
Jan 19 13:49:19 primary kubelet[8539]: E0119 13:49:19.481598    8539 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://master-node:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp: lookup master-node on 127.0.0.53:>
Jan 19 13:49:19 primary kubelet[8539]: I0119 13:49:19.902400    8539 kubelet_node_status.go:70] "Attempting to register node" node="primary"
Jan 19 13:49:19 primary kubelet[8539]: E0119 13:49:19.904293    8539 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://master-node:6443/api/v1/nodes\": dial tcp: lookup master-node on 127.0.0.53:53: server misbehaving" node="primary"
Jan 19 13:49:21 primary kubelet[8539]: E0119 13:49:21.741936    8539 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"primary\" not found"
Jan 19 13:49:26 primary kubelet[8539]: E0119 13:49:26.261529    8539 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"primary.17abc31ca21e74b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:>
Jan 19 13:49:26 primary kubelet[8539]: E0119 13:49:26.483811    8539 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://master-node:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/primary?timeout=10s\": dial tcp: lookup master-node on 127.0.0.53:>
Jan 19 13:49:26 primary kubelet[8539]: I0119 13:49:26.907141    8539 kubelet_node_status.go:70] "Attempting to register node" node="primary"
Jan 19 13:49:26 primary kubelet[8539]: E0119 13:49:26.909431    8539 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://master-node:6443/api/v1/nodes\": dial tcp: lookup master-node on 127.0.0.53:53: server misbehaving" node="primary"

Answer 1

First, check whether the apiserver pod is up and running, to confirm the problem is not on that side. Try the following on the control plane node.

To start with, is there anything suspicious in the kubelet logs?

journalctl -fu kubelet

Next, the container runtime logs for the pods:

sudo crictl ps --all  # get container_id
sudo crictl logs container_id

You haven't included the full kubeadm init command you ran, so make sure --apiserver-advertise-address is set correctly.
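
For example, something along these lines (just a sketch; 10.0.1.10 is the VM address from your ip route output, and you would add whatever other flags your setup needs):

sudo kubeadm init --apiserver-advertise-address=10.0.1.10 --node-name primary    # advertise the node's own reachable IP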

Answer 2

A connection refused error usually means the request reached the server but no service is listening on the specified port. Make sure the api-server has started on the master node. To check the status of the kube-apiserver, use the following command:

systemctl status kube-apiserver

  • The kubeconfig file lives at ~/.kube/config, so make sure the KUBECONFIG environment variable points at the correct file.

  • Verify there is network connectivity between the client machine and the master node: confirm you can reach the master node's IP address from your machine.

  • Check firewalls and security groups, and make sure the ports required for Kubernetes API communication are open. By default, the Kubernetes API server listens on port 6443 (see the sketch after this list).

  • If you suspect there are problematic entries in iptables, inspect the rules with the following command:

iptables -L -n - with this you can look for rules that might interfere with Kubernetes API communication. You can temporarily flush the rules with iptables -F.

  • To check the status of kube-proxy on the master node, run kubectl -n kube-system get pods -l k8s-app=kube-proxy
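
Putting the network checks above together, a quick sketch (hypothetical commands; replace 10.0.1.10 with your master node's address):

sudo ss -tlnp | grep 6443          # on the master: is the API server listening?
nc -vz 10.0.1.10 6443              # from the client: is the port reachable?
sudo iptables -L -n | grep 6443    # any rule touching the API server port?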

Edit

This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

From the error message above, the connection refused error is most likely caused by a misconfiguration on the node (for example, required cgroups being disabled), which means the problem is most likely with the kubelet.

  1. Check the status of the kubelet service on the node by running the following command:

systemctl status kubelet - this shows whether the kubelet has failed or is unhealthy. If the kubelet is not running, try starting it with systemctl start kubelet.

  2. Validate the node configuration to make sure it is set up correctly, and check whether any required cgroups are disabled or misconfigured.

A. If you are using containerd as the CRI, change SystemdCgroup = false to SystemdCgroup = true in the containerd configuration, and then run the following commands:

sudo systemctl restart containerd
sudo systemctl restart kubelet

Reboot the machine (this may be optional).
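
As a rough sketch, the relevant part of the containerd configuration (typically /etc/containerd/config.toml; a default file can be generated with containerd config default) ends up looking like this:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true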

B. If you are using docker as the CRI: open /etc/docker/daemon.json with vi, change "exec-opts": ["native.cgroupdriver=systemd"] to "exec-opts": ["native.cgroupdriver=cgroupfs"], and then run:

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Reboot the machine (this may be optional).
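
Under option B, the daemon.json from the question would end up roughly like this (only the exec-opts line changes):

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}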
