Question

What do the kubelet (1.8.3 on CentOS 7) error messages below actually mean, and how can they be fixed? Any references explaining what is going on would be appreciated. I suppose the kubelet is trying to talk to Docker but somehow that is not happening; I would like to know which configuration is relevant, and which Linux-level concepts/mechanisms I need to understand here.

Nov 19 22:32:24 master kubelet[4425]: E1119 22:32:24.269786    4425 summary.go:92] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get con

Nov 19 22:32:24 master kubelet[4425]: E1119 22:32:24.269802    4425 summary.go:92] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get conta
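Both errors are about reading cgroup statistics for the systemd slices the services run in. As a starting point on the Linux-level mechanism, a process's cgroup membership is recorded under /proc; this diagnostic sketch inspects the current shell to show the format (substitute the kubelet's PID, e.g. `$(pidof kubelet)`, on the affected host):

```shell
# Each line maps a cgroup hierarchy (e.g. cpu, memory, systemd) to the path
# this process belongs to within it; for the kubelet the systemd entry should
# point at the slice named in the error, such as /system.slice/kubelet.service.
cat /proc/self/cgroup
```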

Research

Found reports of the same error, which suggested resolving it by updating the kubelet's service unit as below, but without success.

/etc/systemd/system/kubelet.service

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
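One thing to double-check: edits to a systemd unit file only take effect after a daemon reload and a service restart. A sketch of the steps (note that on kubeadm installs the kubelet's flags may instead live in a drop-in under /etc/systemd/system/kubelet.service.d/, which takes precedence over the unit file):

```shell
# Reload systemd's view of the unit files, restart the kubelet, then check
# whether the cgroup-stats errors persist in the journal.
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet --no-pager      # confirm the service came back up
journalctl -u kubelet -n 20 --no-pager   # look for the Failed-to-get-stats lines
```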

Background

The Kubernetes cluster was set up by installing with kubeadm. The "Installing Docker" section of the documentation gives the following instruction about aligning the cgroup drivers.

Note: Make sure that the cgroup driver used by the kubelet is the same as the one used by Docker. To ensure compatibility you can update Docker, like so:

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
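One failure mode worth ruling out before blaming the option itself is malformed JSON in daemon.json. The file can be sanity-checked with Python's json.tool, which is available on a stock CentOS 7 host; this sketch validates the exact content from the snippet above via stdin (which is portable between Python 2.7 and 3):

```shell
# python -m json.tool exits non-zero on a JSON syntax error, so this command
# succeeding means the daemon.json content itself is well-formed.
python -m json.tool << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```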

But doing so causes the docker service to fail to start:

unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file
Nov 19 16:55:56 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
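The "specified both as a flag" wording suggests the same option is already being passed on dockerd's command line, typically in the distribution's unit file; Docker refuses an option set both there and in /etc/docker/daemon.json. Where the flag comes from can be checked with systemd itself (a diagnostic sketch for the affected host):

```shell
# Show the effective docker unit file(s), including any drop-ins, and look
# for a conflicting command-line option such as --exec-opt.
systemctl cat docker | grep -n -- 'exec-opt' || echo "no exec-opt flag in unit files"
journalctl -u docker --no-pager -n 20   # full text of the startup failure
```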

The master node is ready and all system pods are running.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          39m
kube-system   kube-apiserver-master            1/1       Running   0          39m
kube-system   kube-controller-manager-master   1/1       Running   0          39m
kube-system   kube-dns-545bc4bfd4-mqqqk        3/3       Running   0          40m
kube-system   kube-flannel-ds-fclcs            1/1       Running   2          13m
kube-system   kube-flannel-ds-hqlnb            1/1       Running   0          39m
kube-system   kube-proxy-t7z5w                 1/1       Running   0          40m
kube-system   kube-proxy-xdw42                 1/1       Running   0          13m
kube-system   kube-scheduler-master            1/1       Running   0          39m

Environment

Kubernetes 1.8.3 on CentOS 7 with Flannel.

$ kubectl version -o json | python -m json.tool
{
    "clientVersion": {
        "buildDate": "2017-11-08T18:39:33Z",
        "compiler": "gc",
        "gitCommit": "f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd",
        "gitTreeState": "clean",
        "gitVersion": "v1.8.3",
        "goVersion": "go1.8.3",
        "major": "1",
        "minor": "8",
        "platform": "linux/amd64"
    },
    "serverVersion": {
        "buildDate": "2017-11-08T18:27:48Z",
        "compiler": "gc",
        "gitCommit": "f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd",
        "gitTreeState": "clean",
        "gitVersion": "v1.8.3",
        "goVersion": "go1.8.3",
        "major": "1",
        "minor": "8",
        "platform": "linux/amd64"
    }
}


$ kubectl describe node master
Name:               master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=master
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data={"VtepMAC":"86:b6:7a:d6:7b:b3"}
                    flannel.alpha.coreos.com/backend-type=vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager=true
                    flannel.alpha.coreos.com/public-ip=10.0.2.15
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             node-role.kubernetes.io/master:NoSchedule
CreationTimestamp:  Sun, 19 Nov 2017 22:27:17 +1100
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:27:13 +1100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Sun, 19 Nov 2017 23:04:56 +1100   Sun, 19 Nov 2017 22:32:24 +1100   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.99.10
  Hostname:    master
Capacity:
 cpu:     1
 memory:  3881880Ki
 pods:    110
Allocatable:
 cpu:     1
 memory:  3779480Ki
 pods:    110
System Info:
 Machine ID:                 ca0a351004604dd49e43f8a6258ddd77
 System UUID:                CA0A3510-0460-4DD4-9E43-F8A6258DDD77
 Boot ID:                    e9060efa-42be-498d-8cb8-8b785b51b247
 Kernel Version:             3.10.0-693.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.8.3
 Kube-Proxy Version:         v1.8.3
PodCIDR:                     10.244.0.0/24
ExternalID:                  master
Non-terminated Pods:         (7 in total)
  Namespace                  Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                              ------------  ----------  ---------------  -------------
  kube-system                etcd-master                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-dns-545bc4bfd4-mqqqk         260m (26%)    0 (0%)      110Mi (2%)       170Mi (4%)
  kube-system                kube-flannel-ds-hqlnb             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-t7z5w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-master             100m (10%)    0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  810m (81%)    0 (0%)      110Mi (2%)       170Mi (4%)
Events:
  Type    Reason                   Age                From                Message
  ----    ------                   ----               ----                -------
  Normal  Starting                 38m                kubelet, master     Starting kubelet.
  Normal  NodeAllocatableEnforced  38m                kubelet, master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    37m (x8 over 38m)  kubelet, master     Node master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  37m (x8 over 38m)  kubelet, master     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    37m (x7 over 38m)  kubelet, master     Node master status is now: NodeHasNoDiskPressure
  Normal  Starting                 37m                kube-proxy, master  Starting kube-proxy.
  Normal  Starting                 32m                kubelet, master     Starting kubelet.
  Normal  NodeAllocatableEnforced  32m                kubelet, master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    32m                kubelet, master     Node master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  32m                kubelet, master     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    32m                kubelet, master     Node master status is now: NodeHasNoDiskPressure
  Normal  NodeNotReady             32m                kubelet, master     Node master status is now: NodeNotReady
  Normal  NodeReady                32m                kubelet, master     Node master status is now: NodeReady
