Preparing the deployment environment
IP | Hostname
---|---
100.100.137.200 | master01
100.100.137.201 | node01
100.100.137.202 | node02
Host configuration
The following configuration is required on all hosts.
Install required software
cd /etc/yum.repos.d/
mkdir bak
mv CentOS-Base.repo CentOS-CR.repo CentOS-Debuginfo.repo CentOS-fasttrack.repo CentOS-Media.repo CentOS-Sources.repo CentOS-Vault.repo bak/
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y wget
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install tree nmap dos2unix lrzsz nc lsof wget tcpdump htop iftop iotop sysstat nethogs -y
yum install psmisc net-tools bash-completion vim-enhanced -y
Hostname configuration
Run the command that matches each host:
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02
Hostname and IP address resolution
cat >> /etc/hosts << EOF
100.100.137.200 master01
100.100.137.201 node01
100.100.137.202 node02
EOF
Firewall configuration
systemctl stop firewalld
systemctl disable firewalld
SELinux configuration
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#' /etc/selinux/config
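To confirm the runtime change took effect (the full disable applies after reboot):
getenforce   # expected output: Permissive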
Time synchronization
timedatectl set-timezone Asia/Shanghai
yum install -y ntpdate
ntpdate ntp1.aliyun.com
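ntpdate performs a one-shot sync; scheduling it periodically is a common companion step (this cron entry is an addition, not part of the original walkthrough, and it replaces root's crontab):
echo '0 * * * * /usr/sbin/ntpdate ntp1.aliyun.com' | crontab -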
Configure kernel forwarding and bridge filtering
cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl -p /etc/sysctl.conf
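A module loaded with modprobe does not survive a reboot; persisting br_netfilter via systemd's modules-load.d is one option (an addition to the original; the filename is an arbitrary choice), and sysctl can confirm the values:
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables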
Install ipset and ipvsadm
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
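Loading these modules only makes IPVS available; kube-proxy still defaults to iptables mode. Switching it to IPVS once the cluster is up is optional and not done in this walkthrough; a sketch:
kubectl -n kube-system edit configmap kube-proxy        # set mode: "ipvs"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy  # recreate kube-proxy pods to pick up the change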
Disable swap
swapoff -a
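swapoff -a only disables swap until the next reboot; commenting out the swap entry in /etc/fstab makes it persistent (a common companion step, assumed here):
sed -ri 's/.*swap.*/#&/' /etc/fstab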
Install containerd
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y containerd.io
mkdir /etc/containerd -p
containerd config default > /etc/containerd/config.toml
vim /etc/containerd/config.toml
Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "registry.k8s.io/pause:3.6" to sandbox_image = "registry.k8s.io/pause:3.9"
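The same two edits can be made non-interactively with sed (a sketch; verify the resulting file afterwards):
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause:3.6#registry.k8s.io/pause:3.9#' /etc/containerd/config.toml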
systemctl enable containerd
systemctl start containerd
systemctl status containerd
Kubernetes 1.27.0 cluster deployment
Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm, kubelet, and kubectl
yum install -y kubectl-1.27.0-0 kubelet-1.27.0-0 kubeadm-1.27.0-0
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
Enable kubelet to start on boot. Because its configuration file has not been generated yet, the service will only start automatically after cluster initialization.
systemctl enable kubelet
Prepare the images required by Kubernetes 1.27.0
kubeadm config images list --kubernetes-version=v1.27.0
registry.k8s.io/kube-apiserver:v1.27.0
registry.k8s.io/kube-controller-manager:v1.27.0
registry.k8s.io/kube-scheduler:v1.27.0
registry.k8s.io/kube-proxy:v1.27.0
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/coredns/coredns:v1.10.1
These images cannot be pulled directly from within mainland China; pull them on a machine with unrestricted access, then transfer them to the servers.
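One way to do this is with containerd's ctr tool; the archive name below is illustrative, and the image list must match the output of kubeadm config images list above:
# On a machine with access to registry.k8s.io:
for img in kube-apiserver:v1.27.0 kube-controller-manager:v1.27.0 \
           kube-scheduler:v1.27.0 kube-proxy:v1.27.0 pause:3.9 \
           etcd:3.5.7-0 coredns/coredns:v1.10.1; do
  ctr images pull registry.k8s.io/$img
done
ctr images export k8s-v1.27.0.tar $(ctr images ls -q | grep registry.k8s.io)
# On each cluster node, import into containerd's k8s.io namespace (the one kubelet uses):
ctr -n k8s.io images import k8s-v1.27.0.tar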
Importing the archives produces output like the following:
unpacking registry.k8s.io/kube-apiserver:v1.27.0 (sha256:89b8d9dbef2b905b7d028ca8b7f79d35ebd9baa66b0a3ee2ddd4f3e0e2804b45)...done
unpacking registry.k8s.io/kube-controller-manager:v1.27.0 (sha256:ddcd5cd96a3fbb109515a303c93cd245568311febc649a86c66caa4f05202aa7)...done
unpacking registry.k8s.io/kube-scheduler:v1.27.0 (sha256:939d0c6675c373639f53f05d61b5035172f95afb47ecffee6baf4e3d70543b66)...done
unpacking registry.k8s.io/kube-proxy:v1.27.0 (sha256:a08d09f394e20e78ad47b0060797652865f9f01eb5ee04b07b6526a66d0df7df)...done
unpacking registry.k8s.io/pause:3.9 (sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097)...done
unpacking registry.k8s.io/etcd:3.5.7-0 (sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83)...done
unpacking registry.k8s.io/coredns/coredns:v1.10.1 (sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e)...done
Cluster initialization
Note: --pod-network-cidr must match the pod CIDR configured in the network plugin later (Calico below uses 10.244.0.0/16).
[root@master01 ~]# kubeadm init --kubernetes-version=v1.27.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=100.100.137.200
[init] Using Kubernetes version: v1.27.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0501 19:22:01.370981 17390 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd version (3.5.7-0)
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 100.100.137.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [100.100.137.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [100.100.137.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0501 19:22:10.653715 17390 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.0, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.506256 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: nnvnut.q6tk40fl736s5bou
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 100.100.137.200:6443 --token nnvnut.q6tk40fl736s5bou \
--discovery-token-ca-cert-hash sha256:9ecc804d1a9202831bcd450fa6339c82184e90d67e6c837cca5ad50d6cbb876f
If initialization fails, examine the error output carefully.
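If you need to rerun kubeadm init, resetting the node first is one option (an addition, not in the original walkthrough):
kubeadm reset -f
On success, configure kubectl access as shown in the init output: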
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add the worker nodes
Run the join command on each worker node:
kubeadm join 100.100.137.200:6443 --token nnvnut.q6tk40fl736s5bou \
--discovery-token-ca-cert-hash sha256:9ecc804d1a9202831bcd450fa6339c82184e90d67e6c837cca5ad50d6cbb876f
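If the bootstrap token has expired (tokens are valid for 24 hours by default), generate a fresh join command on the control plane:
kubeadm token create --print-join-command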
Deploy the network plugin
wget https://docs.tigera.io/archive/v3.24/manifests/calico.yaml
vim calico.yaml
Uncomment CALICO_IPV4POOL_CIDR and set it to the same CIDR passed to --pod-network-cidr:
...
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
...
kubectl apply -f calico.yaml
Verification
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6849cf9bcf-bjq6x 1/1 Running 0 6m53s
kube-system calico-node-9wtqg 1/1 Running 0 6m53s
kube-system calico-node-cxk4m 1/1 Running 0 6m53s
kube-system calico-node-m5lkp 1/1 Running 0 6m53s
kube-system coredns-5d78c9869d-cxrz2 1/1 Running 0 14m
kube-system coredns-5d78c9869d-gf4c5 1/1 Running 0 14m
kube-system etcd-master01 1/1 Running 0 14m
kube-system kube-apiserver-master01 1/1 Running 0 14m
kube-system kube-controller-manager-master01 1/1 Running 0 14m
kube-system kube-proxy-4jw64 1/1 Running 0 11m
kube-system kube-proxy-mvkgp 1/1 Running 0 14m
kube-system kube-proxy-xhmmb 1/1 Running 0 8m45s
kube-system kube-scheduler-master01 1/1 Running 0 14m
kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane 14m v1.27.0
node01 Ready <none> 11m v1.27.0
node02 Ready <none> 8m56s v1.27.0
Deployment complete.
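As an optional smoke test (the image and service type here are arbitrary choices, not part of the original walkthrough):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc -l app=nginx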