The hosts used are as follows:
Role | HOSTNAME | IP | CPU | Memory | System Disk | CPU Arch | OS
---|---|---|---|---|---|---|---
Control plane | k8s-master | 192.168.0.101 | 2 | 4G | 64G | x86_64 | openSUSE Leap 15.6
Worker | k8s-worker-1 | 192.168.0.102 | 2 | 4G | 64G | x86_64 | openSUSE Leap 15.6
Worker | k8s-worker-2 | 192.168.0.103 | 4 | 8G | 64G | x86_64 | openSUSE Leap 15.6
1) Update the host operating system
zypper ref
zypper up -y
2) Reboot to apply the updates
reboot
Install wget (needed to download the binaries; if you obtain the files some other way, this can be skipped):
zypper install -y wget
Set the hostname (run the matching command on the corresponding host):
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-worker-1
hostnamectl set-hostname k8s-worker-2
Add the cluster hosts to /etc/hosts on every node:
vim /etc/hosts
192.168.0.101 k8s-master
192.168.0.102 k8s-worker-1
192.168.0.103 k8s-worker-2
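To confirm the names resolve on every host:
getent hosts k8s-master k8s-worker-1 k8s-worker-2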
1) Disable temporarily
swapoff -a
2) Disable permanently
vim /etc/fstab
Comment out the line(s) containing swap.
3) Verify that swap is disabled
free -h
The Swap row showing all zeros means it is disabled.
cat /proc/swaps
Empty output means swap is already disabled.
1) Load immediately
modprobe overlay
modprobe br_netfilter
2) Load automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
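To confirm both modules are loaded:
lsmod | grep -E 'overlay|br_netfilter'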
1) Create the configuration file
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
2) Apply the settings
sysctl --system
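To verify the values took effect (each should report 1):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward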
1) Open the required ports
On the control plane:
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10259/tcp
firewall-cmd --permanent --add-port=10257/tcp
On the worker nodes:
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
When using the Calico CNI:
firewall-cmd --permanent --add-port=179/tcp
firewall-cmd --permanent --add-port=4789/udp
When using the Flannel CNI:
firewall-cmd --permanent --add-port=8472/udp
2) Reload the firewall rules
firewall-cmd --reload
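To confirm the ports are open after reloading:
firewall-cmd --list-ports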
Alternatively, disable the firewall entirely:
1) Stop the firewall
systemctl stop firewalld.service
2) Disable it at boot
systemctl disable firewalld.service
3) Check the firewall status
systemctl status firewalld.service
mkdir -p /opt/containerd/{bin,conf,run,data,ocicrypt}
mkdir -p /opt/containerd/image-verifier/bin
Here:
bin holds the binaries
conf holds the configuration files
run holds the runtime socket and related files
data holds the data
ocicrypt holds the certificates
mkdir -p /opt/cdi/{conf,data}
mkdir -p /opt/cni/{bin,conf}
mkdir -p /opt/nri/{run,conf,plugins}
cd /opt/containerd/bin
wget https://github.com/opencontainers/runc/releases/download/v1.2.6/runc.amd64
mv runc.amd64 runc
chmod +x runc
The example hosts use the x86_64 CPU architecture, so the amd64 build is used; for other architectures, download the matching file.
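A quick sanity check (runc is not yet on PATH at this point, so use the full path):
/opt/containerd/bin/runc --version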
cd /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
tar zxvf cni-plugins-linux-amd64-v1.6.2.tgz -C /opt/cni/bin --strip-components 1
rm -rf cni-plugins-linux-amd64-v1.6.2.tgz
The example hosts use the x86_64 CPU architecture, so the amd64 build is used; for other architectures, download the matching file.
cd /opt/containerd/bin
wget https://github.com/containerd/containerd/releases/download/v2.0.5/containerd-2.0.5-linux-amd64.tar.gz
tar zxvf containerd-2.0.5-linux-amd64.tar.gz -C /opt/containerd/bin --strip-components 1
rm -rf containerd-2.0.5-linux-amd64.tar.gz
The example hosts use the x86_64 CPU architecture, so the amd64 build is used; for other architectures, download the matching file.
Option 1: configure the environment for the current user only (~/.bashrc):
vim ~/.bashrc
# containerd Env Begin
export CONTAINERD_HOME=/opt/cni/bin:/opt/containerd/bin
export PATH=$PATH:$CONTAINERD_HOME
# containerd Env End
source ~/.bashrc
Here /opt/containerd/bin is the binaries directory.
Option 2: configure the environment for all users (/etc/profile):
vim /etc/profile
# containerd Env Begin
export CONTAINERD_HOME=/opt/cni/bin:/opt/containerd/bin
export PATH=$PATH:$CONTAINERD_HOME
# containerd Env End
source /etc/profile
Here /opt/containerd/bin is the binaries directory.
1) Generate the configuration file
containerd config default | sudo tee /opt/containerd/conf/config.toml > /dev/null
2) Back up the initial configuration (optional)
cp -rpf /opt/containerd/conf/config.toml /opt/containerd/conf/config.toml.bak.init
3) Set SystemdCgroup to true
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /opt/containerd/conf/config.toml
Or, by line number (verify first that line 105 of your file really is the runc options section, since the generated file may differ):
sed -i '105a SystemdCgroup = true' /opt/containerd/conf/config.toml
sed -i '106s#^# #' /opt/containerd/conf/config.toml
Find the SystemdCgroup entry under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] and set it to true.
If it is not present, add it:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
In containerd 2.x (as installed here), the section is named differently:
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
  [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
    SystemdCgroup = true
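To confirm the change took effect:
grep -n 'SystemdCgroup' /opt/containerd/conf/config.toml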
4) Set the sandbox image mirror (needed in some regions)
sed -i 's#registry.k8s.io/pause:3.10#registry.aliyuncs.com/google_containers/pause:3.10#' /opt/containerd/conf/config.toml
Find the registry.k8s.io/pause:3.10 reference (under [plugins."io.containerd.grpc.v1.cri"] in containerd 1.x; in 2.x it lives in the pinned_images section) and set it to a reachable mirror such as registry.aliyuncs.com/google_containers/pause:3.10.
5) Customize the related directories
sed -i 's#/var/lib/containerd#/opt/containerd/data#' /opt/containerd/conf/config.toml
sed -i 's#/etc/containerd#/opt/containerd/conf#' /opt/containerd/conf/config.toml
sed -i 's#/run/containerd#/opt/containerd/run#' /opt/containerd/conf/config.toml
sed -i 's#/etc/cdi#/opt/cdi/conf#' /opt/containerd/conf/config.toml
sed -i 's#/var/run/cdi#/opt/cdi/run#' /opt/containerd/conf/config.toml
(The CNI bin_dir already defaults to /opt/cni/bin, so no replacement is needed for it.)
sed -i 's#/etc/cni/net.d#/opt/cni/conf#' /opt/containerd/conf/config.toml
sed -i 's#/var/run/nri#/opt/nri/run#' /opt/containerd/conf/config.toml
sed -i 's#/etc/nri/conf.d#/opt/nri/conf#' /opt/containerd/conf/config.toml
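To confirm no default paths remain (the command should print nothing):
grep -nE '/var/lib/containerd|/etc/containerd|/run/containerd|/etc/cni/net.d|/var/run/nri' /opt/containerd/conf/config.toml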
1) Create the service file
vim /usr/lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target dbus.service
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/opt/containerd/bin/containerd --config /opt/containerd/conf/config.toml
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment out TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
Here /opt/containerd/bin is the binaries directory and /opt/containerd/conf/config.toml is the configuration file.
2) Reload systemd and enable the service
systemctl daemon-reload
systemctl enable --now containerd
1) Check the service status
systemctl status containerd
Verify that Active shows running and that the log contains no error messages. A healthy start looks like this:
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: disabled)
Active: active (running) since Mon 2025-04-21 14:33:32 CST; 5min ago
Docs: https://containerd.io
Process: 21878 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 21880 (containerd)
Tasks: 7
CPU: 329ms
CGroup: /system.slice/containerd.service
└─21880 /opt/containerd/bin/containerd --config /opt/containerd/conf/config.toml
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181160880+08:00" level=info msg="Start recovering state"
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181217956+08:00" level=info msg="Start event monitor"
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181226832+08:00" level=info msg="Start cni network conf syncer for default"
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181231661+08:00" level=info msg="Start streaming server"
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181237101+08:00" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181241630+08:00" level=info msg="runtime interface starting up..."
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181245146+08:00" level=info msg="starting plugins..."
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181262438+08:00" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 21 14:33:32 k8s-master containerd[21880]: time="2025-04-21T14:33:32.181329533+08:00" level=info msg="containerd successfully booted in 0.022739s"
Apr 21 14:33:32 k8s-master systemd[1]: Started containerd container runtime.
2) Check the version
Using the containerd command:
containerd --version
The output varies with the installed version, for example:
containerd github.com/containerd/containerd/v2 v2.0.5 fb4c30d4ede3531652d86197bf3fc9515e5276d9
Using the ctr command (note the custom socket path):
ctr --address /opt/containerd/run/containerd.sock version
Example output:
Client:
Version: v2.0.5
Revision: fb4c30d4ede3531652d86197bf3fc9515e5276d9
Go version: go1.23.8
Server:
Version: v2.0.5
Revision: fb4c30d4ede3531652d86197bf3fc9515e5276d9
UUID: c32beb26-aa39-4749-bdcf-61d14f75f4a1
mkdir -pv /opt/kubernetes/{bin,conf,data,manifests,pki,etcd,logs}
Here:
bin holds the binaries
conf holds the configuration files
data holds the data
manifests holds the static Pod manifests
pki holds the certificates and keys
etcd holds the etcd data
logs holds the logs
cd /opt/kubernetes/bin
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-amd64.tar.gz
tar zxvf crictl-v1.32.0-linux-amd64.tar.gz -C /opt/kubernetes/bin
rm -rf crictl-v1.32.0-linux-amd64.tar.gz
The example hosts use the x86_64 CPU architecture, so the amd64 build is used; for other architectures, download the matching file.
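crictl looks for the runtime socket at its default path, so with the custom layout used here it helps to point it at the right endpoint explicitly. A minimal sketch:
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///opt/containerd/run/containerd.sock
image-endpoint: unix:///opt/containerd/run/containerd.sock
EOF
crictl version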
1) Create the directories
mkdir -pv /opt/flannel/{bin,conf}
2) Install
cd /opt/flannel/bin
wget https://github.com/flannel-io/flannel/releases/download/v0.26.7/flannel-v0.26.7-linux-amd64.tar.gz
tar zxvf flannel-v0.26.7-linux-amd64.tar.gz -C /opt/flannel/bin
rm -rf flannel-v0.26.7-linux-amd64.tar.gz
3) Configure
vim /opt/cni/conf/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      },
      "subnetFile": "/opt/flannel/conf/custom-subnet.env"
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
vim /opt/flannel/conf/custom-subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
4) Create the systemd service
vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
Documentation=https://github.com/flannel-io/flannel
After=network.target network-online.target
Requires=network-online.target
After=etcd.service
Before=containerd.service kubelet.service
[Service]
Type=notify
ExecStart=/opt/flannel/bin/flanneld --kube-subnet-mgr --kubeconfig-file=/opt/kubernetes/conf/flanneld.kubeconfig -iface=eth0 -ip-masq=true -subnet-file=/opt/flannel/conf/custom-subnet.env
Restart=on-failure
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
systemctl daemon-reload
(Adjust -iface=eth0 in ExecStart if your host's network interface has a different name.)
The example hosts use the x86_64 CPU architecture, so the amd64 build is used; for other architectures, download the matching file.
1) Download the binaries
cd /opt/kubernetes/bin
wget https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubelet
wget https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubeadm
wget https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubectl
The example hosts use the x86_64 CPU architecture, so the amd64 build is used; for other architectures, download the matching file.
2) Make them executable
chmod +x /opt/kubernetes/bin/*
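A quick sanity check (the binaries are not on PATH yet, so use full paths):
/opt/kubernetes/bin/kubeadm version
/opt/kubernetes/bin/kubectl version --client
/opt/kubernetes/bin/kubelet --version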
Option 1: configure the environment for the current user only (~/.bashrc):
vim ~/.bashrc
# Kubernetes Env Begin
export KUBERNETES_HOME=/opt/flannel/bin:/opt/kubernetes/bin
export PATH=$PATH:$KUBERNETES_HOME
# Kubernetes Env End
source ~/.bashrc
Here /opt/kubernetes/bin is the binaries directory.
Option 2: configure the environment for all users (/etc/profile):
vim /etc/profile
# Kubernetes Env Begin
export KUBERNETES_HOME=/opt/flannel/bin:/opt/kubernetes/bin
export PATH=$PATH:$KUBERNETES_HOME
# Kubernetes Env End
source /etc/profile
Here /opt/kubernetes/bin is the binaries directory.
1) Create the kubeadm configuration file
cd /opt/kubernetes/conf
vim /opt/kubernetes/conf/kubeadm-init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///opt/containerd/run/containerd.sock
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///opt/containerd/run/containerd.sock"
podLogsDir: /opt/kubernetes/logs
staticPodPath: /opt/kubernetes/manifests
healthzBindAddress: 192.168.0.101
2) Use the kubeadm configuration to generate the default kubelet files
kubeadm init phase kubelet-start --config /opt/kubernetes/conf/kubeadm-init-config.yaml
The output names the generated file paths; copy them to the custom directory:
cp -rpf /var/lib/kubelet/kubeadm-flags.env /opt/kubernetes/conf
cp -rpf /var/lib/kubelet/config.yaml /opt/kubernetes/conf
3) Adjust the file contents
In kubeadm-flags.env:
sed -i 's#/var/run/containerd/containerd.sock#/opt/containerd/run/containerd.sock#' /opt/kubernetes/conf/kubeadm-flags.env
sed -i 's#registry.k8s.io/pause:3.10#registry.aliyuncs.com/google_containers/pause:3.10#' /opt/kubernetes/conf/kubeadm-flags.env
In config.yaml:
sed -i 's#/etc/kubernetes/pki/ca.crt#/opt/kubernetes/pki/ca.crt#' /opt/kubernetes/conf/config.yaml
sed -i 's#/etc/kubernetes/manifests#/opt/kubernetes/manifests#' /opt/kubernetes/conf/config.yaml
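To confirm the substitutions:
grep -n '/opt/' /opt/kubernetes/conf/kubeadm-flags.env /opt/kubernetes/conf/config.yaml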
1) Create the service file
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
#ExecStart=/opt/kubernetes/bin/kubelet --config /opt/kubernetes/conf/config.yaml
ExecStart=/opt/kubernetes/bin/kubelet --container-runtime-endpoint unix:///opt/containerd/run/containerd.sock
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
Here /opt/kubernetes/bin is the binaries directory.
2) Reload systemd and enable kubelet
systemctl daemon-reload
systemctl enable --now kubelet
1) Check that containerd and kubelet are running
systemctl status containerd
systemctl status kubelet
2) Generate the configuration file
cd /opt/kubernetes/conf
kubeadm config print init-defaults > kubeadm-config.yaml
3) Back up the initial configuration (optional)
cp -rpf kubeadm-config.yaml kubeadm-config.yaml.bak.init
4) Configure
Apply the following changes:
sed -i 's#unix:///var/run/containerd/containerd.sock#unix:///opt/containerd/run/containerd.sock#' /opt/kubernetes/conf/kubeadm-config.yaml
sed -i 's#1.2.3.4#192.168.0.101#' /opt/kubernetes/conf/kubeadm-config.yaml
sed -i 's#/etc/kubernetes/pki#/opt/kubernetes/pki#' /opt/kubernetes/conf/kubeadm-config.yaml
sed -i 's#/var/lib/etcd#/opt/kubernetes/etcd#' /opt/kubernetes/conf/kubeadm-config.yaml
sed -i 's#registry.k8s.io#registry.aliyuncs.com/google_containers#' /opt/kubernetes/conf/kubeadm-config.yaml
sed -i 's#1.32.0#1.32.3#' /opt/kubernetes/conf/kubeadm-config.yaml
Add the following (note that in kubeadm's v1beta4 API, kubeletExtraArgs is a list of name/value pairs):
nodeRegistration:
  kubeletExtraArgs:
    - name: root-dir
      value: /opt/kubernetes/data
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
enableServer: true
staticPodPath: /opt/kubernetes/manifests
5) Initialize the cluster with the configuration file
kubeadm init --config kubeadm-config.yaml
1) Check that containerd and kubelet are running
systemctl status containerd
systemctl status kubelet
2) Initialize the cluster
kubeadm init --cri-socket=unix:///opt/containerd/run/containerd.sock --kubernetes-version=v1.32.3 --apiserver-advertise-address=192.168.0.101 --apiserver-bind-port 6443 --pod-network-cidr=10.244.0.0/16 --service-cidr 10.96.0.0/12 --image-repository=registry.aliyuncs.com/google_containers
Where:
--cri-socket unix:///opt/containerd/run/containerd.sock specifies the container runtime (containerd) socket
--kubernetes-version v1.32.3 specifies the Kubernetes version to deploy
--apiserver-advertise-address 192.168.0.101 specifies the API server advertise IP
--apiserver-bind-port 6443 specifies the API server bind port
--pod-network-cidr 10.244.0.0/16 specifies the Pod network CIDR
--service-cidr 10.96.0.0/12 specifies the Service CIDR
--image-repository registry.aliyuncs.com/google_containers specifies a custom image registry (the default is registry.k8s.io)
Other optional flags:
--control-plane-endpoint [IP:port] specifies a shared control-plane endpoint, for HA clusters
--service-dns-domain [domain] specifies the service DNS domain, default cluster.local
--cert-dir [directory] specifies the certificate directory, default /etc/kubernetes/pki
--certificate-key [key] specifies the key used to encrypt the certificates transferred between control-plane nodes in HA clusters
--upload-certs uploads the certificates to the cluster, for HA clusters
--dry-run prints what would be done without actually executing
A successful initialization logs roughly the following:
[init] Using Kubernetes version: v1.32.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
W0418 15:46:07.413258 10598 checks.go:846] detected that the sandbox image "" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.aliyuncs.com/google_containers/pause:3.10" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.101]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.101 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.515944ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 4.001286229s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 4qudfv.6y16otmgi1jkblrj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.101:6443 --token 123456 \
--discovery-token-ca-cert-hash sha256:123456
Note the kubeadm join command at the end; run it on each worker node to join it to the cluster.
kubectl reads the $HOME/.kube/config file by default; set it up before using kubectl to manage the cluster.
Simply copy admin.conf (default path /etc/kubernetes; here the custom directory /opt/kubernetes/conf is used) and fix its ownership:
mkdir -pv $HOME/.kube
cp /opt/kubernetes/conf/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
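A quick check that the kubeconfig works (the node will stay NotReady until a CNI is deployed):
kubectl get nodes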
Until a Pod network add-on (CNI) is deployed, Pods and Services cannot communicate, the CoreDNS Pods stay Pending, and the cluster is not fully functional.
The CNI's network must match the --pod-network-cidr passed to kubeadm init. Conventionally, Calico uses 192.168.0.0/16 and Flannel uses 10.244.0.0/16, so this cluster uses Flannel.
1) Create the directory
mkdir -pv /opt/kubernetes/flannel
2) Download the Flannel manifest
cd /opt/kubernetes/flannel
wget https://github.com/flannel-io/flannel/releases/download/v0.26.7/kube-flannel.yml
3) Edit the Flannel manifest as needed
vim /opt/kubernetes/flannel/kube-flannel.yml
4) Deploy Flannel
systemctl stop flannel
systemctl enable --now flannel
systemctl status flannel
kubectl apply -f kube-flannel.yml
5) Check the Flannel deployment status
kubectl get nodes
kubectl get pods -n kube-system
Check the service and cluster state:
systemctl status containerd
systemctl status kubelet
kubectl get nodes
kubectl cluster-info
kubectl get pods -n kube-system
kubectl get componentstatuses
(componentstatuses has been deprecated since v1.19 but is still informative.)
Test in-cluster DNS resolution:
kubectl run busybox --image=busybox:1.28 -- sleep 3600
kubectl exec -it busybox -- nslookup kubernetes.default
Check the API server health:
kubectl get --raw='/healthz'
Check etcd health from inside the etcd Pod:
kubectl -n kube-system exec -it etcd-<control-plane-node-name> -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint health
After the control plane initializes successfully, it prints a join command; the easiest approach is to run that command as-is on each worker node:
kubeadm join 192.168.0.101:6443 --cri-socket unix:///opt/containerd/run/containerd.sock --token=123456 --discovery-token-ca-cert-hash=sha256:123456
Where:
192.168.0.101:6443 is the control-plane address
--token=123456 is the bootstrap token, generated by the control plane
--discovery-token-ca-cert-hash=sha256:123456 is the CA certificate hash, generated by the control plane
Other optional flags:
--node-name=[name] specifies the node name, defaulting to the hostname
--control-plane joins this node as an additional control plane
--certificate-key=[key] specifies the key used to decrypt the certificates, matching the --certificate-key passed to kubeadm init
--apiserver-advertise-address=[IP:port] specifies a shared control-plane endpoint, for HA clusters
--dry-run prints what would be done without actually executing
This guide uses containerd as the container runtime; if you use a different runtime, consult its documentation.
If something goes wrong, reset with kubeadm reset (or kubeadm reset -f), delete $HOME/.kube/config, and initialize again.
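A minimal reset sketch, assuming the custom paths used in this guide (this wipes the node's cluster state, so double-check before running):
kubeadm reset -f --cri-socket unix:///opt/containerd/run/containerd.sock
rm -rf $HOME/.kube/config
# Optionally clear the custom data directories as well (verify the paths first):
# rm -rf /opt/kubernetes/etcd/* /opt/kubernetes/data/*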