The example Kubernetes cluster here consists of four nodes: one management (master) node and three worker nodes. In real test and production environments, configure multiple management nodes to improve fault tolerance.
Example nodes:
IP | CPU | Memory | OS | Role |
---|---|---|---|---|
192.168.10.100 | 2 | 4GB | openSUSE Leap 15.5 | Management node (Master) |
192.168.10.101 | 2 | 8GB | openSUSE Leap 15.5 | Worker node (Worker) |
192.168.10.102 | 2 | 8GB | openSUSE Leap 15.5 | Worker node (Worker) |
192.168.10.103 | 2 | 8GB | openSUSE Leap 15.5 | Worker node (Worker) |
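Before starting, it can help to confirm that every node is reachable from the machine the installer will run on. A minimal sketch using the example IPs above:

```shell
# Check reachability of the example nodes (IPs are this article's examples).
out=$(
  for ip in 192.168.10.100 192.168.10.101 192.168.10.102 192.168.10.103; do
    if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
      echo "$ip reachable"
    else
      echo "$ip unreachable"
    fi
  done
)
echo "$out"
```

Any unreachable node should be fixed (network, firewall, IP assignment) before continuing.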
1. Update the system
openSUSE example:
zypper update -y
2. Install system patches
openSUSE example:
zypper patch -y
3. Disable the firewall
systemctl stop firewalld.service
systemctl status firewalld.service
(To keep it off after a reboot, also run systemctl disable firewalld.service.)
4. Disable swap
sysctl -w vm.swappiness=0
Make the setting persistent by adding the following line to /etc/sysctl.d/99-swappiness.conf:
vim /etc/sysctl.d/99-swappiness.conf
vm.swappiness=0
Verify:
sysctl vm.swappiness
If the output shows vm.swappiness = 0, the setting has been applied.
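Note that vm.swappiness=0 only discourages swapping; kubeadm-based installs expect swap to be turned off entirely. A sketch of the full procedure and a check of the current state:

```shell
# vm.swappiness=0 discourages swapping but does not turn swap off.
# To disable swap entirely (as kubelet expects), you would run:
#   swapoff -a                          # turn off all swap now
#   sed -i '/ swap / s/^/#/' /etc/fstab # keep it off across reboots
# Verify: /proc/swaps should list no devices below its header line.
active=$(tail -n +2 /proc/swaps | wc -l)
echo "active swap devices: $active"
```

A result of 0 means no swap device is active.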
5. Disable SELinux
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
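A default openSUSE install ships AppArmor rather than SELinux, so the SELinux tools may be absent entirely; that also satisfies the requirement. A quick check (a sketch):

```shell
# Report the SELinux mode; on systems without SELinux tooling
# (e.g. a default openSUSE install, which uses AppArmor) the
# getenforce command is simply absent.
if command -v getenforce >/dev/null 2>&1; then
  mode=$(getenforce)
else
  mode="SELinux tools not installed"
fi
echo "$mode"
```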
Install the following dependency packages on all servers:
openSUSE example:
zypper install -y socat conntrack-tools ebtables ipset ipvsadm
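A quick sketch to confirm the tools the installer relies on are now on PATH (note that package names and binary names differ: conntrack-tools provides the conntrack binary):

```shell
# Verify that the required binaries are present after installation.
out=$(
  for bin in socat conntrack ebtables ipset ipvsadm; do
    if command -v "$bin" >/dev/null 2>&1; then
      echo "$bin: ok"
    else
      echo "$bin: missing"
    fi
  done
)
echo "$out"
```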
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
Here v3.0.13 is the KubeKey version; available versions are listed on the GitHub releases page ( https://github.com/kubesphere/kubekey/releases ).
1. Check the Kubernetes versions supported by the current KubeKey
./kk version --show-supported-k8s
2. Generate the configuration file
./kk create config --with-kubernetes v1.26.5
Here v1.26.5 is the Kubernetes version to install; available versions are listed on the official site ( https://kubernetes.io/zh-cn/releases ) or the GitHub releases page ( https://github.com/kubernetes/kubernetes/releases ).
3. Edit the configuration file
vim config-sample.yaml
- spec.hosts: the hosts on which the cluster is installed
- spec.roleGroups: the roles assigned to the cluster hosts
  - etcd: the hosts that run etcd; reference them by the name values from spec.hosts
  - control-plane: the hosts that run the control plane; reference them by the name values from spec.hosts
  - worker: the worker hosts; reference them by the name values from spec.hosts
Example:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master, address: 192.168.10.100, internalAddress: 192.168.10.100, user: root, password: "123456"}
  - {name: k8s-worker-1, address: 192.168.10.101, internalAddress: 192.168.10.101, user: root, password: "123456"}
  - {name: k8s-worker-2, address: 192.168.10.102, internalAddress: 192.168.10.102, user: root, password: "123456"}
  - {name: k8s-worker-3, address: 192.168.10.103, internalAddress: 192.168.10.103, user: root, password: "123456"}
  roleGroups:
    etcd:
    - k8s-master
    control-plane:
    - k8s-master
    worker:
    - k8s-worker-1
    - k8s-worker-2
    - k8s-worker-3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.26.5
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
export KKZONE=cn
./kk create cluster -f config-sample.yaml
When the installation finishes, KubeKey prints:
Installation is complete.
Please check the result using the command:
kubectl get pod -A
Check the nodes and Pods:
kubectl get nodes
kubectl get pods -A
Docker is used here; for installation steps see: Docker Engine (1): Installing Docker Engine
vim /etc/zypp/repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
This installs v1.29.0; to install a different version, replace v1.29 above with the target version.
zypper refresh
# Note: exclude= and --disableexcludes are yum/dnf conventions that zypper ignores, so install directly:
zypper install -y kubelet kubeadm kubectl
systemctl enable --now kubelet
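Before running kubeadm init, it is worth confirming that the three tools are installed and report the expected version. A sketch (output formats vary between the tools):

```shell
# Print installed versions (or note missing tools) for kubelet/kubeadm/kubectl.
out=$(
  for tool in kubelet kubeadm kubectl; do
    if command -v "$tool" >/dev/null 2>&1; then
      case "$tool" in
        kubelet) kubelet --version ;;
        kubeadm) kubeadm version -o short ;;
        kubectl) kubectl version --client ;;
      esac
    else
      echo "$tool: not installed"
    fi
  done
)
echo "$out"
```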
kubeadm init parameters
Parameter description:
- --apiserver-advertise-address <IP>: the IP address the API server advertises
- --apiserver-bind-port <port>: the port the API server listens on
- --control-plane-endpoint <IP>: the control-plane endpoint address
- --pod-network-cidr <CIDR>: the IP range used by Pods
- --service-cidr <CIDR>: the IP range used by Services
- --node-name <name>: the name of this node
- --image-repository <registry URL>: the image registry to pull control-plane images from
- --kubernetes-version <version>: the Kubernetes version to deploy
Example:
kubeadm init --apiserver-advertise-address "192.168.10.100" --apiserver-bind-port "6443" --control-plane-endpoint "192.168.10.100" --pod-network-cidr "10.244.0.0/16" --service-cidr "10.96.0.0/12" --node-name "master" --image-repository "m.daocloud.io/registry.k8s.io" --kubernetes-version "v1.29.0"
Error:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-12-29T16:06:18+08:00" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
Solution:
vim /etc/containerd/config.toml
# Comment out the following line:
disabled_plugins = ["cri"]
systemctl restart containerd
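Once kubeadm init completes successfully, the usual next step (kubeadm itself prints these instructions) is to copy the admin kubeconfig so kubectl works for the current user; kubeadm init also prints a kubeadm join command to run on each worker node. A sketch with the path parameterized:

```shell
# Copy the admin kubeconfig created by kubeadm init (default path shown)
# so that kubectl can talk to the new cluster.
ADMIN_CONF="${ADMIN_CONF:-/etc/kubernetes/admin.conf}"
mkdir -p "$HOME/.kube"
if [ -f "$ADMIN_CONF" ]; then
  cp "$ADMIN_CONF" "$HOME/.kube/config"
  chown "$(id -u):$(id -g)" "$HOME/.kube/config"
  echo "kubeconfig installed to $HOME/.kube/config"
else
  echo "admin.conf not found at $ADMIN_CONF (run kubeadm init first)"
fi
```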
KubeOperator supports only fairly old Kubernetes versions and appears to be unmaintained; use it with caution.
1. Install Docker and Docker Compose
Omitted here; see the official Docker documentation or this site's article Docker Engine (1): Installing Docker Engine.
2. Create the deployment directories /opt/kubeoperator and /tmp/kubeoperator:
mkdir -pv /opt/kubeoperator /tmp/kubeoperator
3. Download the KubeOperator offline packages
This example deploys KubeOperator v3.16.4; to deploy a different version, substitute the corresponding version number.
- Installer package (used to install KubeOperator itself): on the KubeOperator releases page on GitHub ( https://github.com/KubeOperator/KubeOperator/releases ), download the installer.tar.gz for the target version (here installer-v3.16.4.tar.gz).
- Ansible package (used when KubeOperator deploys Kubernetes clusters): on the same releases page, download the ansible.tar.gz for the target version (here ansible-v3.16.4.tar.gz).
- Nexus package: https://kubeoperator.fit2cloud.com/nexus/nexus-<KubeOperator version>.tar.gz (here nexus-v3.16.4.tar.gz).
Example commands:
cd /tmp/kubeoperator
wget https://github.com/KubeOperator/KubeOperator/releases/download/v3.16.4/installer-v3.16.4.tar.gz
wget https://github.com/KubeOperator/KubeOperator/releases/download/v3.16.4/ansible-v3.16.4.tar.gz
wget https://kubeoperator.fit2cloud.com/nexus/nexus-v3.16.4.tar.gz
4. Extract the KubeOperator offline packages
cd /tmp/kubeoperator
tar zxvf installer-v3.16.4.tar.gz
tar zxvf ansible-v3.16.4.tar.gz
tar zxvf nexus-v3.16.4.tar.gz
5. Copy the KubeOperator files into the deployment directory
cp -rpf /tmp/kubeoperator/installer/kubeoperator/* /opt/kubeoperator
cp -rpf /tmp/kubeoperator/ansible /opt/kubeoperator/data/kobe/project/ko
cp -rpf /tmp/kubeoperator/nexus-data /opt/kubeoperator/data/
chown -R 200:200 /opt/kubeoperator/data/nexus-data/
6. Edit the docker-compose.yml file
cd /opt/kubeoperator
vim docker-compose.yml
Changes to make:
- ${KO_TAG}: replace with the KubeOperator version (here v3.16.4)
- ${OS_ARCH}: replace with the CPU architecture of the deployment machine, amd64 or arm64 (here amd64)
- ${KP_TAG}: replace with the KubePi version (see KP_TAG in kubeoperator.conf; here v1.6.4)
Before:
registry.cn-qingdao.aliyuncs.com/kubeoperator/neeko:${KO_TAG}-${OS_ARCH}
registry.cn-qingdao.aliyuncs.com/kubeoperator/server:${KO_TAG}-${OS_ARCH}
registry.cn-qingdao.aliyuncs.com/kubeoperator/kobe:${KO_TAG}-${OS_ARCH}
registry.cn-qingdao.aliyuncs.com/kubeoperator/kotf:${KO_TAG}-${OS_ARCH}
registry.cn-qingdao.aliyuncs.com/kubeoperator/nginx:1.23.1-${OS_ARCH}
registry.cn-qingdao.aliyuncs.com/kubeoperator/mysql-server:8.0.29-${OS_ARCH}
registry.cn-qingdao.aliyuncs.com/kubeoperator/webkubectl:v2.10.6-${OS_ARCH}
registry.cn-qingdao.aliyuncs.com/kubeoperator/nexus3:3.38.1-02-${OS_ARCH}
registry.cn-qingdao.aliyuncs.com/kubeoperator/kubepi-server:${KP_TAG}-${OS_ARCH}
After:
registry.cn-qingdao.aliyuncs.com/kubeoperator/neeko:v3.16.4-amd64
registry.cn-qingdao.aliyuncs.com/kubeoperator/server:v3.16.4-amd64
registry.cn-qingdao.aliyuncs.com/kubeoperator/kobe:v3.16.4-amd64
registry.cn-qingdao.aliyuncs.com/kubeoperator/kotf:v3.16.4-amd64
registry.cn-qingdao.aliyuncs.com/kubeoperator/nginx:1.23.1-amd64
registry.cn-qingdao.aliyuncs.com/kubeoperator/mysql-server:8.0.29-amd64
registry.cn-qingdao.aliyuncs.com/kubeoperator/webkubectl:v2.10.6-amd64
registry.cn-qingdao.aliyuncs.com/kubeoperator/nexus3:3.38.1-02-amd64
registry.cn-qingdao.aliyuncs.com/kubeoperator/kubepi-server:v1.6.4-amd64
- ${KO_PORT}: replace with the KubeOperator console port (here 80)
Before:
ports:
- ${KO_PORT}:80
After:
ports:
- 80:80
- ${KO_REPO_PORT}: replace with the Nexus repo port (see KO_REPO_PORT in kubeoperator.conf; here 8081)
- ${KO_REGISTRY_PORT}: replace with the Nexus registry port (see KO_REGISTRY_PORT in kubeoperator.conf; here 8082)
- ${KO_REGISTRY_HOSTED_PORT}: replace with the Nexus hosted registry port (see KO_REGISTRY_HOSTED_PORT in kubeoperator.conf; here 8083)
Before:
ports:
- ${KO_REPO_PORT}:8081
- ${KO_REGISTRY_PORT}:8082
- ${KO_REGISTRY_HOSTED_PORT}:8083
After:
ports:
- 8081:8081
- 8082:8082
- 8083:8083
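Instead of editing every placeholder by hand, the substitutions can be scripted. A sketch using sed, demonstrated on a single sample line (the values are this article's examples, taken from kubeoperator.conf; the same -e expressions can be run against docker-compose.yml itself, adding -i.bak to edit in place with a backup):

```shell
# Example values for the docker-compose.yml placeholders.
KO_TAG=v3.16.4
OS_ARCH=amd64
KP_TAG=v1.6.4

# Demonstrated on one sample line; point sed at docker-compose.yml for real use.
sample='registry.cn-qingdao.aliyuncs.com/kubeoperator/server:${KO_TAG}-${OS_ARCH}'
result=$(printf '%s\n' "$sample" | sed \
  -e "s/\${KO_TAG}/${KO_TAG}/g" \
  -e "s/\${OS_ARCH}/${OS_ARCH}/g" \
  -e "s/\${KP_TAG}/${KP_TAG}/g")
echo "$result"
# prints registry.cn-qingdao.aliyuncs.com/kubeoperator/server:v3.16.4-amd64
```

The port placeholders (${KO_PORT}, ${KO_REPO_PORT}, ${KO_REGISTRY_PORT}, ${KO_REGISTRY_HOSTED_PORT}) can be handled with additional -e expressions of the same shape.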
Complete docker-compose.yml example:
version: "3.9"
services:
  ui:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/neeko:v3.16.4-amd64
    container_name: kubeoperator_ui
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - kubeoperator_network
    healthcheck:
      test: ["CMD", "test", "-f", "/var/run/nginx.pid"]
      interval: 10s
      timeout: 10s
      retries: 20
  server:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/server:v3.16.4-amd64
    container_name: kubeoperator_server
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./conf/app.yaml:/etc/ko/app.yaml
      - ./data/backup:/var/ko/data/backup
      - ./data/ko:/var/ko/data
    networks:
      - kubeoperator_network
    healthcheck:
      test: ["CMD","curl","-f","http://localhost:8080/api/v1/health"]
      interval: 10s
      timeout: 10s
      retries: 30
    depends_on:
      mysql:
        condition: service_healthy
  kobe:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/kobe:v3.16.4-amd64
    container_name: kubeoperator_kobe
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./conf/kobe.yml:/etc/kobe/app.yml
      - ./data/kobe:/var/kobe/data
      - ./data/backup:/var/ko/data/backup
    networks:
      - kubeoperator_network
    healthcheck:
      test: ["CMD","kobe-inventory"]
      interval: 10s
      timeout: 10s
      retries: 20
  kotf:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/kotf:v3.16.4-amd64
    container_name: kubeoperator_kotf
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data/kotf:/var/kotf/data
    networks:
      - kubeoperator_network
    healthcheck:
      test: ["CMD","ps", "-ef", "|", "grep","kotf-server"]
      interval: 10s
      timeout: 10s
      retries: 20
  nginx:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/nginx:1.23.1-amd64
    container_name: kubeoperator_nginx
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./conf/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./conf/wait.sh:/etc/nginx/conf.d/wait.sh
    networks:
      - kubeoperator_network
    ports:
      - 80:80
    command: ["/bin/bash","/etc/nginx/conf.d/wait.sh","-t","10","server:8080","--","nginx","-g","daemon off;"]
    healthcheck:
      test: ["CMD", "test", "-f", "/var/run/nginx.pid"]
      interval: 10s
      timeout: 10s
      retries: 30
    depends_on:
      - server
  mysql:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/mysql-server:8.0.29-amd64
    container_name: kubeoperator_mysql
    env_file:
      - ./conf/mysql.env
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./conf/my.cnf:/etc/my.cnf
      - ./conf/init.sql:/docker-entrypoint-initdb.d/init.sql
      - ./data/mysql:/var/lib/mysql
    networks:
      - kubeoperator_network
    healthcheck:
      test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
      interval: 10s
      timeout: 10s
      retries: 20
  webkubectl:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/webkubectl:v2.10.6-amd64
    container_name: kubeoperator_webkubectl
    restart: always
    privileged: true
    volumes:
      - /etc/localtime:/etc/localtime:ro
    networks:
      - kubeoperator_network
    healthcheck:
      test: ["CMD","curl","localhost:8080"]
      interval: 10s
      timeout: 10s
      retries: 20
  nexus:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/nexus3:3.38.1-02-amd64
    container_name: kubeoperator_nexus
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./data/nexus-data/:/nexus-data
    networks:
      - kubeoperator_network
    ports:
      - 8081:8081
      - 8082:8082
      - 8083:8083
    healthcheck:
      test: ["CMD","curl","localhost:8081"]
      interval: 10s
      timeout: 10s
      retries: 20
  kubepi:
    image: registry.cn-qingdao.aliyuncs.com/kubeoperator/kubepi-server:v1.6.4-amd64
    container_name: kubeoperator_kubepi
    restart: always
    privileged: true
    volumes:
      - ./data/kubepi:/var/lib/kubepi
    networks:
      - kubeoperator_network
    healthcheck:
      test: ["CMD","curl","localhost"]
      interval: 10s
      timeout: 10s
      retries: 20
networks:
  kubeoperator_network:
    name: kubeoperator_network
    driver: bridge
    #driver: overlay
    driver_opts:
      encrypted: 'true'
    ipam:
      driver: default
      config:
        - subnet: 10.21.25.1/24
7. Edit the KubeOperator configuration files
app.yaml:
- db.password: the MySQL root password
- jwt.secret: the JWT secret
mysql.env:
- MYSQL_ROOT_PASSWORD: the MySQL root password; must match db.password in app.yaml
8. Start KubeOperator
cd /opt/kubeoperator
docker-compose up -d
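The containers can take a while to become healthy after docker-compose up. A sketch that polls the console port before you open the browser (assumes curl is available and the console port is the example value 80; increase the retry count for slow machines):

```shell
# Poll an HTTP URL until it answers or retries run out.
wait_for_http() {
  url=$1; retries=${2:-30}; delay=${3:-2}
  i=0
  until curl -sf -o /dev/null "$url"; do
    i=$((i + 1))
    if [ "$i" -ge "$retries" ]; then
      echo "timed out waiting for $url"
      return 1
    fi
    sleep "$delay"
  done
  echo "$url is up"
}

# Example: short wait for the KubeOperator console on the example port.
wait_for_http "http://127.0.0.1:80" 5 1 || true
```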
9. Open the KubeOperator console
In a browser, visit port 80. The default username is admin and the default password is kubeoperator@admin123.
1. Log in to the KubeOperator console
2. Under System Settings - Repositories, add the repositories
Official documentation: https://kubeoperator.io/docs/user_manual/system_management
Configuration reference:
- Username: admin
- Password: admin123
- Repo port: KO_REPO_PORT (here 8081)
- Registry port: KO_REGISTRY_PORT (here 8082)
- Hosted registry port: KO_REGISTRY_HOSTED_PORT (here 8083)
1. Log in to the KubeOperator console
2. Under Hosts, add the hosts
Official documentation:
Configuration reference:
- Project: use the default kubeoperator project, or a custom project (added under Project Management)
1. Log in to the KubeOperator console
2. Under Clusters, create a cluster
Official documentation:
Configuration reference:
- Name: the cluster name
- Provider: the provider of the cluster hosts; this example uses local hosts, so choose "bare metal"
- Node naming rule: the naming rule for cluster hosts
- Project: the project the cluster belongs to; use the default kubeoperator project, or a custom project (added under Project Management)
- Version: the Kubernetes version of the cluster
- Architecture: the CPU architecture of the cluster hosts
- Yum repository: the YUM repository configured on the cluster hosts
- Node IP count: the number of IPs for cluster nodes
- Pod network CIDR: the network range used by Pods; must not conflict with the host network
- Service network CIDR: the network range used by Services; must not conflict with the host network or the Pod CIDR
This example installs the binary; for other installation methods see the official documentation ( https://kind.sigs.k8s.io/docs/user/quick-start/#installation ).
wget https://github.com/kubernetes-sigs/kind/releases/download/v0.20.0/kind-linux-amd64
Here v0.20.0 is the kind version; available versions are listed on the releases page.
Copy the binary into the /usr/bin directory and grant it execute permission:
cp kind-linux-amd64 /usr/bin/kind
chmod +x /usr/bin/kind
zypper install -y bash-completion
kind completion bash > ~/.kind-completion
source ~/.kind-completion
This example installs Kubernetes 1.29.0 (to install another version, replace the image specified in each node's image field; the images are listed on kind's GitHub releases page);
the cluster name is hty1024-test;
the cluster consists of one control-plane node and three worker nodes.
vim ~/kind-config.yaml
# four node (three workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: hty1024-test
nodes:
- role: control-plane
image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9227b8f201df25e55e268e5d619312131292e324d570
- role: worker
image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9227b8f201df25e55e268e5d619312131292e324d570
- role: worker
image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9227b8f201df25e55e268e5d619312131292e324d570
- role: worker
image: kindest/node:v1.29.0@sha256:eaa1450915475849a73a9227b8f201df25e55e268e5d619312131292e324d570
kind create cluster --config ~/kind-config.yaml
Output on successful cluster creation:
Creating cluster "hty1024-test" ...
✓ Ensuring node image (kindest/node:v1.29.0) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-hty1024-test"
You can now use your cluster with:
kubectl cluster-info --context kind-hty1024-test
Thanks for using kind! 😊
After the cluster is installed, inspect it with kubectl; the context name is "kind-" plus the cluster name (for the default cluster this is kind-kind, as in the sample below; for the cluster created above it would be kind-hty1024-test):
kubectl cluster-info --context kind-kind
Example output:
Kubernetes control plane is running at https://127.0.0.1:42561
CoreDNS is running at https://127.0.0.1:42561/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl get nodes
Example output:
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 20h v1.29.0
kind-worker Ready <none> 20h v1.29.0
kind-worker2 Ready <none> 20h v1.29.0
kind-worker3 Ready <none> 20h v1.29.0
kind delete cluster --name [cluster name (default: kind)]
Example:
kind delete cluster --name hty1024-test