Linux Containers Series, Chapter 2_01: Full Walkthrough of a Highly Available KubeSphere Container Platform on Ubuntu 22

Published: 2025-06-07

Linux_k8s series

Welcome to the world of Linux. Study the notes carefully and practice hands-on; everyone can become a master!

Topic: Full Walkthrough of a Highly Available KubeSphere Container Platform on Ubuntu 22

Version: 1.0.0
Author: @老王要学习
Date: 2025-06-05
Environment: Ubuntu 22

About This Document

This article walks through building the KubeSphere container platform on Ubuntu 22. It covers environment preparation in detail (hardware and software requirements, system updates, VM cloning, and hostname changes), then the creation of the Kubernetes cluster and the installation of KubeSphere, including downloads, configuration, installation, and storage-volume setup, so that readers can complete the platform build end to end.

Environment Preparation

Hardware Requirements

  • Servers: 2 CPU cores, 2 GB RAM, and 20 GB of disk space
  • Network: each server needs a static IP address, and the firewall must allow SSH traffic (port 22 by default)
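A quick way to confirm a node meets these minimums, using only standard coreutils (a sketch; adjust the thresholds to your own sizing):

```shell
# Pre-flight resource check (a sketch; thresholds mirror the minimums above)
cpus=$(nproc)
mem_mb=$(awk '/MemTotal/{print int($2/1024)}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "CPUs: ${cpus}, memory: ${mem_mb} MB, free disk on /: ${disk_gb} GB"
[ "$cpus" -ge 2 ]      || echo "WARNING: fewer than 2 CPU cores"
[ "$mem_mb" -ge 1900 ] || echo "WARNING: less than 2 GB of RAM"
```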

Software Requirements

  • Operating system: Ubuntu 22
  • SSH client: SecureCRT
  • Packages:

KubeSphere node                                     Ubuntu 22 / IP
master-10                                           192.168.174.10
master-20                                           192.168.174.20
master-30                                           192.168.174.30
storage (single-node) + harbor (private registry)   192.168.174.50
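Optionally, give every node name resolution for the others so they can be reached by hostname (a sketch; the hostnames follow section 1.3, using the doc's `LW` heredoc convention):

```shell
# Append the cluster hosts to /etc/hosts on every node (run as root)
cat >>/etc/hosts<<LW
192.168.174.10 master-10
192.168.174.20 master-20
192.168.174.30 master-30
192.168.174.50 sh-50
LW
```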

1. Environment Preparation

1.1 Update the System

# Switch to the root user
sudo -i

# Refresh the package list
apt update

# Upgrade installed packages
apt upgrade -y

# Install the ufw firewall and lrzsz
apt install ufw lrzsz -y

1.2 Clone the VMs and Change Their IPs

# Change the cloned host's IP (.20)
sudo -i 

sed -i 's|\(^[[:space:]]*addresses:[[:space:]]*\)\[192.168.174.10/24\]|\1[192.168.174.20/24]|' /etc/netplan/00-installer-config.yaml

netplan apply

# Change the cloned host's IP (.30)
sudo -i 

sed -i 's|\(^[[:space:]]*addresses:[[:space:]]*\)\[192.168.174.10/24\]|\1[192.168.174.30/24]|' /etc/netplan/00-installer-config.yaml

netplan apply

# Change the cloned host's IP (.50)
sudo -i 

sed -i 's|\(^[[:space:]]*addresses:[[:space:]]*\)\[192.168.174.10/24\]|\1[192.168.174.50/24]|' /etc/netplan/00-installer-config.yaml

netplan apply

1.3 Set Hostnames

# on 192.168.174.10
hostnamectl set-hostname master-10
bash

# on 192.168.174.20
hostnamectl set-hostname master-20
bash

# on 192.168.174.30
hostnamectl set-hostname master-30
bash

# on 192.168.174.50
hostnamectl set-hostname sh-50
bash
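After renaming, a quick check on each node confirms the hostname and static address took effect (optional):

```shell
hostname        # should print master-10, master-20, master-30, or sh-50
hostname -I     # should list the node's static IPv4 address
```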

1.4 Disable the Firewall

# Disable the firewall
ufw disable 

# Check its status
ufw status

1.5 Install Dependencies (run on all hosts)

apt install socat conntrack ebtables ipset -y
apt install lrzsz -y
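kk's pre-check verifies these binaries later anyway, but you can confirm they are on the PATH right away (a small optional check):

```shell
# Report any missing dependency
for bin in socat conntrack ebtables ipset; do
  command -v "$bin" >/dev/null && echo "$bin OK" || echo "$bin MISSING"
done
```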

2. Create the Kubernetes Cluster (with external persistent storage)

2.1 Download kubekey v3.1.9

# Create a working directory
mkdir /mysvc
cd /mysvc

# Download the release tarball
wget https://github.com/kubesphere/kubekey/releases/download/v3.1.9/kubekey-v3.1.9-linux-amd64.tar.gz

# Unpack it
tar zxf kubekey-v3.1.9-linux-amd64.tar.gz 

2.2 Generate the Config File for Installing Kubernetes

# List the Kubernetes versions kk can install
./kk version --show-supported-k8s

# Generate a config file (for v1.32.2)
./kk create config -f k8econfig.yml --with-kubernetes v1.32.2
Generate KubeKey config file successfully

2.3 Edit the Config File

cat>/mysvc/k8econfig.yml<<LW
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: laowang
spec:
  hosts:
  - {name: master-10, address: 192.168.174.10, internalAddress: 192.168.174.10, user: laowang,     password: "1"} # set each host's [hostname], [IP address], [user], and [password]
  - {name: master-20, address: 192.168.174.20, internalAddress: 192.168.174.20, user: laowang,     password: "1"}
  - {name: master-30, address: 192.168.174.30, internalAddress: 192.168.174.30, user: laowang,     password: "1"}
  roleGroups:
    etcd:
    - master-10 # hostnames of the three nodes
    - master-20
    - master-30
    control-plane: 
    - master-10 # hostnames of the three nodes
    - master-20
    - master-30
    worker:
    - master-10 # hostnames of the three nodes
    - master-20
    - master-30
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.32.2
    clusterName: laowang.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  storage:
    openebs:
      basePath: /data/openebs/local
  registry:
    privateRegistry: "registry.cn-beijing.aliyuncs.com" # Alibaba Cloud Container Registry (ACR) mirror domain for the Beijing region
    namespaceOverride: "k8eio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
LW
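kk connects to each node over SSH with the credentials in the file above, so it is worth confirming reachability before running the installer (a hedged sketch; the user `laowang` and the IPs come from the config):

```shell
# Verify SSH reachability for every host in k8econfig.yml
for ip in 192.168.174.10 192.168.174.20 192.168.174.30; do
  ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no laowang@"$ip" hostname \
    && echo "$ip reachable" || echo "$ip FAILED"
done
```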

2.4 Install Kubernetes (kk 3.1.9)

# Install yamllint
apt install yamllint -y

# Validate the YAML syntax (empty output means the file is valid)
yamllint /mysvc/k8econfig.yml

# Install Kubernetes
export KKZONE=cn
./kk create cluster -f k8econfig.yml 
# Output (beginning):


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

01:00:49 UTC [GreetingsModule] Greetings
01:00:49 UTC message: [master-30]
Greetings, KubeKey!
01:00:49 UTC message: [master-10]
Greetings, KubeKey!
01:00:49 UTC message: [master-20]
Greetings, KubeKey!
01:00:49 UTC success: [master-30]
01:00:49 UTC success: [master-10]
01:00:49 UTC success: [master-20]
01:00:49 UTC [NodePreCheckModule] A pre-check on nodes
01:00:49 UTC success: [master-20]
01:00:49 UTC success: [master-30]
01:00:49 UTC success: [master-10]
01:00:49 UTC [ConfirmModule] Display confirmation form
+-----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name      | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+-----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master-10 | y    | y    | y       | y        | y     | y     |         | y         |        |        | v1.7.13    |            |             |                  | UTC 01:00:49 |
| master-20 | y    | y    | y       | y        | y     | y     |         | y         |        |        | v1.7.13    |            |             |                  | UTC 01:00:49 |
| master-30 | y    | y    | y       | y        | y     | y     |         | y         |        |        | v1.7.13    |            |             |                  | UTC 01:00:49 |
+-----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

# Output (end):
02:11:43 UTC skipped: [master-30]
02:11:43 UTC skipped: [master-20]
02:11:43 UTC success: [master-10]
02:11:43 UTC [ConfigureKubernetesModule] Configure kubernetes
02:11:43 UTC success: [master-10]
02:11:43 UTC skipped: [master-20]
02:11:43 UTC skipped: [master-30]
02:11:43 UTC [ChownModule] Chown user $HOME/.kube dir
02:11:43 UTC success: [master-20]
02:11:43 UTC success: [master-30]
02:11:43 UTC success: [master-10]
02:11:43 UTC [AutoRenewCertsModule] Generate k8s certs renew script
02:11:43 UTC success: [master-20]
02:11:43 UTC success: [master-30]
02:11:43 UTC success: [master-10]
02:11:43 UTC [AutoRenewCertsModule] Generate k8s certs renew service
02:11:44 UTC success: [master-10]
02:11:44 UTC success: [master-20]
02:11:44 UTC success: [master-30]
02:11:44 UTC [AutoRenewCertsModule] Generate k8s certs renew timer
02:11:44 UTC success: [master-10]
02:11:44 UTC success: [master-20]
02:11:44 UTC success: [master-30]
02:11:44 UTC [AutoRenewCertsModule] Enable k8s certs renew service
02:11:45 UTC success: [master-20]
02:11:45 UTC success: [master-10]
02:11:45 UTC success: [master-30]
02:11:45 UTC [SaveKubeConfigModule] Save kube config as a configmap
02:11:45 UTC success: [LocalHost]
02:11:45 UTC [AddonsModule] Install addons
02:11:45 UTC message: [LocalHost]
[0/0] enabled addons
02:11:45 UTC success: [LocalHost]
02:11:45 UTC Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl get pod -A

root@master-10:/mysvc# 

2.5 Check Node Status

kubectl get nodes -owide
# Output:
NAME        STATUS   ROLES                  AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
master-10   Ready    control-plane,worker   116s   v1.32.2   192.168.174.10   <none>        Ubuntu 22.04.5 LTS   5.15.0-141-generic   containerd://1.7.13
master-20   Ready    control-plane,worker   100s   v1.32.2   192.168.174.20   <none>        Ubuntu 22.04.5 LTS   5.15.0-141-generic   containerd://1.7.13
master-30   Ready    control-plane,worker   100s   v1.32.2   192.168.174.30   <none>        Ubuntu 22.04.5 LTS   5.15.0-141-generic   containerd://1.7.13

2.6 On the Storage Node

# Enable kubectl auto-completion (174.50)
cat>>~/.bashrc<<LW
source <(kubectl completion bash)
LW

# Install the NFS server packages (174.50)
apt install nfs-common nfs-kernel-server -y

# Install the NFS client on all nodes
apt install nfs-common -y

# Check disk usage
df -Th

# List all logical volumes
lvdisplay 
# Output:
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/ubuntu-lv
  LV Name                ubuntu-lv
  VG Name                ubuntu-vg
  LV UUID                PIAIdL-MJYE-uXb1-ewO3-GSC5-KQtT-0qC90F
  LV Write Access        read/write
  LV Creation host, time ubuntu-server, 2025-06-05 03:06:55 +0000
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

# Extend the logical volume with all remaining free space
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 10.00 GiB (2560 extents) to 18.22 GiB (4665 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.

# Check the LV size again
lvdisplay
# Output:
  LV Size                18.22 GiB

# Grow the filesystem into the newly added space
resize2fs /dev/ubuntu-vg/ubuntu-lv
# Output:
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 3
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 4776960 (4k) blocks long.

# Repeat the steps above on 174.10 / 174.20 / 174.30
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

resize2fs /dev/ubuntu-vg/ubuntu-lv

# Create the shared directory (174.50)
mkdir /k8s/dynfsclass -p

# Add a share rule to the NFS server's exports file
# shared directory path    client options (comma-separated)
cat>>/etc/exports<<LW
/k8s/dynfsclass   *(rw,sync,no_root_squash)
LW

# Enable the NFS and RPC services at boot (and start them now)
systemctl enable --now nfs-server
systemctl enable --now rpcbind
reboot

# List the directories exported by the NFS server
showmount -e 192.168.174.50
# Output:
/k8s/dynfsclass *
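Before wiring the share into Kubernetes, you can mount it manually from any master to confirm read/write access (optional; `/mnt/nfstest` is a hypothetical mount point):

```shell
# Temporary client-side mount test (run on a master, as root)
mkdir -p /mnt/nfstest
mount -t nfs 192.168.174.50:/k8s/dynfsclass /mnt/nfstest
touch /mnt/nfstest/.write-test && rm /mnt/nfstest/.write-test && echo "share is writable"
umount /mnt/nfstest
```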

2.7 On the Control-Plane Node

# Download the Kubernetes NFS dynamic-provisioning plugin
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/releases/download/nfs-subdir-external-provisioner-4.0.18/nfs-subdir-external-provisioner-4.0.18.tgz

# Unpack it
tar zxf nfs-subdir-external-provisioner-4.0.18.tgz

# Overwrite the deployment manifest with our NFS settings
cat >/mysvc/nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/deploy/deployment.yaml<<LW
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.174.50 ##### storage + harbor server
            - name: NFS_PATH
              value: /k8s/dynfsclass ##### shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.174.50 ##### storage + harbor server
            path: /k8s/dynfsclass ##### shared directory
LW
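The manifest above deploys into the `default` namespace. If you pick another namespace, the upstream README suggests keeping rbac.yaml and deployment.yaml in sync, e.g.:

```shell
# Keep the namespace consistent across both manifests (NS is your target namespace)
NS=default
sed -i "s/namespace:.*/namespace: $NS/g" deploy/rbac.yaml deploy/deployment.yaml
```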

2.8 Pull the NFS Provisioner Image

# Pull the NFS provisioner image from the Huawei Cloud SWR mirror with containerd's ctr
# Mirror reference: https://docker.aityp.com/image/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

# Run on all three masters
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Output:
swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2: resolved       |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423:                            done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:932b0bface75b80e713245d7c2ce8c44b7e127c075bd2d27281a16677c8efef3:                              done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:528677575c0b965326da0c29e21feb548e5d4c2eba8c48a611e9a50af6cf3cdc:                               done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:60775238382ed8f096b163a652f5457589739d65f1395241aba12847e7bdc2a1:                               done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 3.2 s                                                                                               total:  16.6 M (5.2 MiB/s)                                       
unpacking linux/amd64 sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423...
done: 939.146543ms

# Retag the image to the registry.k8s.io name the deployment expects
ctr -n k8s.io images tag  swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2  registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Output:
registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

# Remove the original mirror tag
ctr -n k8s.io images remove swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Output:
swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

2.9 Create the Kubernetes Resources

# Create the RBAC resources defined in rbac.yaml (run from the deploy/ directory)
cd /mysvc/nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/deploy
kubectl create -f rbac.yaml 
# Output:
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

# Create the provisioner Deployment (manages the ReplicaSet and Pod)
kubectl create -f deployment.yaml
# Output:
deployment.apps/nfs-client-provisioner created

# List the Deployments in the current namespace
kubectl get deployment.apps
# Output (created successfully):
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           8m38s

# A StorageClass defines a storage type for the cluster and enables dynamic PV provisioning
kubectl create -f class.yaml 
# Output:
storageclass.storage.k8s.io/nfs-client created

# List the Pods in the current namespace
kubectl get pod
# Output:
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7bcc898c94-mjskl   1/1     Running   0          13m

# List all StorageClass resources in the cluster
kubectl get storageclasses.storage.k8s.io 
# Output:
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  4m1s
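KubeSphere provisions its PVCs from the cluster's default StorageClass, so it is common to mark nfs-client as the default and verify dynamic provisioning with a throwaway PVC before installing (a hedged sketch; `nfs-test-pvc` is a hypothetical name):

```shell
# Mark nfs-client as the default StorageClass
kubectl patch storageclass nfs-client -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

# Create a throwaway PVC and confirm it binds
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  storageClassName: nfs-client
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc nfs-test-pvc    # STATUS should reach Bound
kubectl delete pvc nfs-test-pvc
```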

2.10 Install KubeSphere

# Install ks-core with Helm, overriding the default image registry (Huawei Cloud SWR mirror)
helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.com.cn/main/ks-core-1.1.4.tgz --debug --wait --set global.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks --set extension.imageRegistry=swr.cn-southwest-2.myhuaweicloud.com/ks --set hostClusterName=laowang

# Output (installation succeeded):
NOTES:
Thank you for choosing KubeSphere Helm Chart.

Please be patient and wait for several seconds for the KubeSphere deployment to complete.

1. Wait for Deployment Completion

    Confirm that all KubeSphere components are running by executing the following command:

    kubectl get pods -n kubesphere-system
2. Access the KubeSphere Console

    Once the deployment is complete, you can access the KubeSphere console using the following URL:  

    http://192.168.174.10:30880

3. Login to KubeSphere Console

    Use the following credentials to log in:

    Account: admin
    Password: P@88w0rd

NOTE: It is highly recommended to change the default password immediately after the first login.
For additional information and details, please visit https://kubesphere.io.

Analysis:
KubeSphere Core is the foundation edition of the KubeSphere container platform. It focuses on the core feature set and lets teams stand up a lightweight container-management platform quickly. Compared with the full edition, Core is leaner, making it a better fit for resource-constrained environments or for users who only need the basics.

