I have a bare-metal server hosting a Kubernetes master node, and I need to move that master node to a new bare-metal server. How do we move or migrate it?
I have done my research, but most of it relates to GCP clusters, where four directories are moved from the old node to the new one and the IP is changed as well. That question was asked five years ago and is now outdated:
/var/etcd
/srv/kubernetes
/srv/sshproxy
/srv/salt-overlay
Assuming we are on a recent k8s version, 1.17, what is the correct way to move it?
Answer 1
As mentioned in the comments of the related GitHub issue and in "Kubernetes master node IP address change":
1. Verify your etcd data directory by inspecting the etcd pod in the kube-system namespace
(defaults for a k8s v1.17.0 cluster created with kubeadm):
volumeMounts:
- mountPath: /var/lib/etcd
  name: etcd-data
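The data directory can be confirmed directly on Master1; a minimal sketch, assuming the default kubeadm static manifest path and a node named master1 (adjust both to your environment):

```shell
# Print the volume mounts of the etcd static pod from its manifest
grep -A 2 'volumeMounts:' /etc/kubernetes/manifests/etcd.yaml

# Alternatively, query the running pod; the pod name suffix is the node name
kubectl get pod -n kube-system etcd-master1 \
  -o jsonpath='{.spec.containers[0].volumeMounts}'
```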
2. Preparation:
- Copy /etc/kubernetes/pki from Master1 to the new Master2:
#create a backup directory on Master2
mkdir ~/backup
#from Master1, copy all the key and crt files to Master2
sudo scp -r /etc/kubernetes/pki [email protected]:~/backup
- On Master2, delete the certificates and keys bound to the old IP address (the apiserver and etcd peer certificates):
./etcd/peer.crt
./apiserver.crt
rm ~/backup/pki/{apiserver.*,etcd/peer.*}
- Move the pki directory to /etc/kubernetes:
cp -r ~/backup/pki /etc/kubernetes/
3. On Master1, create an etcd snapshot:
- Verify your API version:
kubectl exec -it etcd-Master1 -n kube-system -- etcdctl version
etcdctl version: 3.4.3
API version: 3.4
- using the current etcd container:
kubectl exec -it etcd-master1 -n kube-system -- etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot save /var/lib/etcd/snapshot1.db
- or using the etcdctl binary:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot save /var/lib/etcd/snapshot1.db
4. Copy the created snapshot from Master1 into the backup directory on Master2:
scp ./snapshot1.db [email protected]:~/backup
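Before restoring on Master2, the copied snapshot can optionally be checked for integrity; a sketch, assuming the etcdctl binary is available on Master2:

```shell
# Print the hash, revision and total key count of the snapshot;
# a corrupted or truncated copy fails here with a checksum error
ETCDCTL_API=3 etcdctl snapshot status ~/backup/snapshot1.db --write-out=table
```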
5. Prepare a kubeadm configuration that reflects the Master1 configuration:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: x.x.x.x
  bindPort: 6443
nodeRegistration:
  name: master2
  taints: [] # Removing all taints from Master2 node.
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.0.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
6. Restore the snapshot:
- using the etcd:3.4.3-0 docker image:
docker run --rm \
-v $(pwd):/backup \
-v /var/lib/etcd:/var/lib/etcd \
--env ETCDCTL_API=3 \
k8s.gcr.io/etcd:3.4.3-0 \
/bin/sh -c "etcdctl snapshot restore /backup/snapshot1.db ; mv /default.etcd/member/ /var/lib/etcd/"
- or using the etcdctl binary:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot restore './snapshot1.db' ; mv ./default.etcd/member/ /var/lib/etcd/
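Whichever variant is used, it can be worth confirming that the restored keyspace actually landed in etcd's data directory before running kubeadm init; a quick sanity check:

```shell
# The restored keyspace lives in the member/ subdirectory;
# expect snap/ and wal/ inside it
ls -l /var/lib/etcd/member
```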
7. Initialize Master2:
sudo kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd --config kubeadm-config.yaml
# kubeadm-config.yaml prepared in step 5.
- Note the output:
[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 master2_IP]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master2 localhost] and IPs [master2_ip 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master2 localhost] and IPs [master2_ip 127.0.0.1 ::1]
.
.
.
Your Kubernetes control-plane has initialized successfully!
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Afterwards, verify the k8s objects (a short example):
kubectl get nodes
kubectl get pods -o wide
kubectl get pods -n kube-system -o wide
systemctl status kubelet
- Once all the deployed k8s objects (pods, deployments, etc.) have moved to the new Master2 node:
kubectl drain Master1
kubectl delete node Master1
Note:
In addition, consider creating a Highly Available cluster. In that setup you can have more than one master node, and you can add and remove additional control-plane nodes in a much safer way.
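For the HA route, the cluster is initialized against a stable control-plane endpoint (a load balancer or DNS name) instead of a single node IP, so control-plane nodes can later be added or removed without a migration like the one above. A hedged sketch of the relevant config fragment, where lb.example.com is a placeholder for your own load balancer address:

```yaml
# ClusterConfiguration fragment for an HA control plane (kubeadm v1beta2);
# lb.example.com:6443 is a placeholder, not a real endpoint
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: "lb.example.com:6443"
```

Additional control-plane nodes then join with kubeadm join and the --control-plane flag, rather than being moved by hand.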