Single-master cluster -- the master node IP has changed
Hostname | Old IP | New IP |
---|---|---|
Master01 | 100.100.137.200 | 100.100.137.210 |
Node01 | 100.100.137.201 | 100.100.137.211 |
Node02 | 100.100.137.202 | 100.100.137.212 |
The error looks like this:
kubectl get all
Unable to connect to the server: dial tcp 100.100.137.200:6443: connect: no route to host
The recovery procedure is as follows:
- Update the hosts file (/etc/hosts) on all machines
sed -i 's#100.100.137.200#100.100.137.210#g' /etc/hosts
sed -i 's#100.100.137.201#100.100.137.211#g' /etc/hosts
sed -i 's#100.100.137.202#100.100.137.212#g' /etc/hosts
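A quick sanity check (not part of the original steps) is to confirm that only the new addresses remain in the hosts file, for example:
grep '100.100.137.2' /etc/hosts   # should list only the new .210/.211/.212 entries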
- Replace the old IP with the new IP in the files under /etc/kubernetes (the find below covers the *.conf kubeconfigs as well as the static Pod manifests)
cd /etc/kubernetes
find . -type f | xargs sed -i "s/100.100.137.200/100.100.137.210/g"
- Replace the old IP with the new IP in $HOME/.kube/config
cd $HOME/.kube/
find . -type f | xargs sed -i "s/100.100.137.200/100.100.137.210/g"
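Before moving on, it is worth checking that no plain-text reference to the old master IP is left in the kubeconfig files; a possible check:
grep -r "100.100.137.200" /etc/kubernetes $HOME/.kube/config || echo "no old IP references left"
(The certificates under pki/ encode addresses in DER form, so they will not show up in a text grep; they are regenerated in a later step.)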
- Rename the directory under $HOME/.kube/cache/discovery/ to the new IP
cd $HOME/.kube/cache/discovery/
mv 100.100.137.200_6443/ 100.100.137.210_6443/
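An equivalent, arguably simpler option is to drop the kubectl cache entirely; kubectl rebuilds it on the next request:
rm -rf $HOME/.kube/cache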
- Regenerate the apiserver certificate
cd /etc/kubernetes/pki
mv apiserver.key apiserver.key.bak
mv apiserver.crt apiserver.crt.bak
kubeadm init phase certs apiserver --apiserver-advertise-address 100.100.137.210
I0513 17:27:16.733946 24860 version.go:255] remote version is much newer: v1.33.0; falling back to: stable-1.23
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 100.100.137.210]
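To double-check that the regenerated certificate really carries the new IP in its Subject Alternative Names (assuming openssl is available on the master), something like this works:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
# expect to see: IP Address:10.96.0.1, IP Address:100.100.137.210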
- Restart kubelet, then edit the following ConfigMaps, replacing the old IP with the new one
systemctl restart kubelet
kubectl -n kube-system edit cm kube-proxy
kubectl -n kube-public edit cm cluster-info
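If you prefer not to edit the ConfigMaps interactively, a non-interactive sketch that should have the same effect (assuming the old IP only appears as the apiserver address in these objects) is:
kubectl -n kube-system get cm kube-proxy -o yaml | sed 's/100.100.137.200/100.100.137.210/g' | kubectl apply -f -
kubectl -n kube-public get cm cluster-info -o yaml | sed 's/100.100.137.200/100.100.137.210/g' | kubectl apply -f -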
- Re-join all worker nodes to the cluster
After rebooting the master01 machine, generate the join command on the master node:
reboot
kubeadm token create --print-join-command
Then, on each worker node (node01 and node02), reset and rejoin:
kubeadm reset
kubeadm join 100.100.137.210:6443 --token jfekda.agtcoz9ib95ua0m1 --discovery-token-ca-cert-hash sha256:759d3df4caeaa1c3bd85f18249cbd45b56828d9a88b77c715aae63e57a43e139
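Once the join completes, a quick way to confirm that each node is talking to the new apiserver address (assuming the default kubeadm paths) is:
grep 'server:' /etc/kubernetes/kubelet.conf   # should show https://100.100.137.210:6443
systemctl status kubelet --no-pager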
- Verify
The recovery is now complete:
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox1-857448d9ff-shdfz 1/1 Running 4 5d11h
default busybox2-5c8f48d797-kk8mx 1/1 Running 4 5d11h
default busybox3-c997b9cc4-tlvrq 1/1 Running 3 5d10h
default nginx-deployment-6b9d659f5f-qbchs 1/1 Running 3 5d8h
ingress-nginx ingress-nginx-admission-create-mlft8 0/1 Completed 0 5d8h
ingress-nginx ingress-nginx-admission-patch-z5zss 0/1 Completed 0 5d8h
ingress-nginx ingress-nginx-controller-7464b7f559-82dsg 1/1 Running 2 5d8h
kube-system calico-kube-controllers-5bb5d4f7f4-zzhwz 1/1 Running 9 4d11h
kube-system calico-node-44chg 1/1 Running 0 3m46s
kube-system calico-node-9hnfx 1/1 Running 0 3m46s
kube-system calico-node-r5gpv 1/1 Running 0 3m46s
kube-system coredns-64897985d-rxkhg 1/1 Running 3 (5m54s ago) 5d11h
kube-system coredns-64897985d-zcgtf 1/1 Running 3 (5m54s ago) 5d11h
kube-system etcd-master01 1/1 Running 1 (5m59s ago) 19m
kube-system kube-apiserver-master01 1/1 Running 1 (5m57s ago) 19m
kube-system kube-controller-manager-master01 1/1 Running 4 (5m59s ago) 5d11h
kube-system kube-proxy-5lv5m 1/1 Running 2 5d11h
kube-system kube-proxy-9khvt 1/1 Running 3 5d11h
kube-system kube-proxy-vxp2g 1/1 Running 3 (5m59s ago) 5d11h
kube-system kube-scheduler-master01 1/1 Running 4 (5m59s ago) 5d11h
test nginx-deployment1-79df89b8bb-2wgrx 1/1 Running 2 4d7h
test nginx-deployment2-645594b99c-g2p8z 1/1 Running 1 4d7h
kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 6d3h v1.23.9
node01 Ready <none> 6d3h v1.23.9
node02 Ready <none> 6d3h v1.23.9
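As a final check, kubectl cluster-info should now report the control plane at the new address, for example:
kubectl cluster-info
# Kubernetes control plane is running at https://100.100.137.210:6443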