Kubernetes Cluster Setup (Part 1): From Environment Preparation to Calico Network Plugin Deployment (v1.16)

Published: 2025-04-08

(一)Virtual machine preparation

Name    IP                 Role
m1      192.168.101.131    master
n1      192.168.101.132    worker
n2      192.168.101.133    worker

(二)Common configuration for all nodes

  • 2.1 Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
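A quick way to confirm both changes took effect (simple checks, not strictly required):
systemctl is-active firewalld   # expect "inactive"
getenforce                      # expect "Permissive" now, "Disabled" after a reboot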
  • 2.2 Disable the swap partition
swapoff -a

sed -i '/\bswap\b/s/^/#/' /etc/fstab
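To double-check that swap is fully off:
free -m | grep -i swap   # the Swap line should show 0 total
swapon -s                # should print nothing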
  • 2.3 Update the hosts file
cat >> /etc/hosts << EOF
192.168.101.131 m1
192.168.101.132 n1
192.168.101.133 n2
EOF

  • 2.4 Configure kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_recycle = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
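Note: the two net.bridge.* keys only exist once the br_netfilter kernel module is loaded. If sysctl --system reports them as unknown keys, load the module, persist it across reboots, and re-run sysctl:
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system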
  • 2.5 Install Docker / kubeadm / kubelet
  • Install Docker; version 18.09 is used here
yum install -y docker
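Note that yum install -y docker installs the distribution-packaged Docker (1.13.x on CentOS 7). If you specifically want 18.09 as mentioned above, one common approach (a sketch assuming CentOS 7 and the Aliyun docker-ce mirror) is:
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io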
  • Set Docker's cgroup driver to systemd, and configure the data directory, registry mirror, and start-on-boot behavior
cat > /etc/docker/daemon.json << EOF
{
  "graph": "/data/docker",
  "registry-mirrors": ["https://01xxgaft.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

systemctl enable docker
systemctl restart docker
docker info | grep -i cgroup
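If daemon.json was picked up correctly, the grep output should contain roughly:
Cgroup Driver: systemd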
  • Add the Kubernetes yum repository
 cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the Kubernetes components

yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2

systemctl start kubelet

systemctl enable kubelet
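At this point kubelet will keep restarting because no cluster configuration exists yet; that is expected and resolves itself after kubeadm init. You can confirm the installed versions with:
kubeadm version -o short
kubelet --version
kubectl version --client --short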

(三)Hostname configuration

  • Set the hostname. The hostname is one of the key identifiers a node uses to communicate with the cluster, especially in logs, monitoring, and scheduling. Run each of the following commands on its corresponding node:
hostnamectl set-hostname m1

hostnamectl set-hostname n1

hostnamectl set-hostname n2
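A quick sanity check that the hostnames and the /etc/hosts entries from step 2.3 line up (run from any node):
for h in m1 n1 n2; do ping -c 1 $h; done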

(四)Initialize the Kubernetes cluster

  • 4.1 Verify that each node's hostname matches its entry in /etc/hosts
  • 4.2 Reset any previous kubeadm state (optional): kubeadm reset
  • 4.3 Manually clean up leftover files (optional)
# /var/lib/etcd/ is etcd's data directory; it stores the cluster's key metadata and configuration
rm -rf /etc/kubernetes/manifests/*
rm -rf /var/lib/etcd/*  
rm -rf $HOME/.kube/config
  • 4.4 Create the kubeadm-config.yaml file
  • Make sure serviceSubnet and podSubnet do not conflict with any existing network ranges.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.101.131"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2  # match the installed kubeadm/kubelet packages
controlPlaneEndpoint: ""  # for an HA control plane, set this to the load balancer address
apiServer:
  certSANs:
  - "192.168.101.131"  # 添加额外的 SAN(Subject Alternative Name)
networking:
  serviceSubnet: "192.140.0.0/16"
  podSubnet: "192.240.0.0/16"
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"  # 可以根据需要选择 "ipvs" 或 "iptables"
  • Initialize the cluster
    kubeadm init --config=kubeadm-config.yaml

  • Copy the commands printed at the end of kubeadm init: run the kubectl setup below on the master, and run the kubeadm join command (with your own token and hash) on each worker node

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 192.168.101.131:6443 --token 34fhhl.cn95mp7q7pr82xyv --discovery-token-ca-cert-hash sha256:7eecc5cbb3eba936cb1cf4083315c9f56c13aa9023f50cfa1a76e45916e1f093 
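The token and hash above are specific to this particular init run. Bootstrap tokens expire after 24 hours by default; if yours has expired, generate a fresh join command on the master with:
kubeadm token create --print-join-command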
  • 4.5 Install a CNI plugin; Calico is used here
    The official documentation archive at https://docs.tigera.io/archive shows that Calico 3.12 is the version matching Kubernetes 1.16
    Download the official YAML manifest
    wget https://docs.projectcalico.org/archive/v3.12/manifests/calico.yaml
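    One more thing to check before applying: the v3.12 manifest sets CALICO_IPV4POOL_CIDR to 192.168.0.0/16 by default, while this guide uses podSubnet 192.240.0.0/16, so edit that env entry in calico.yaml to match, roughly:
            - name: CALICO_IPV4POOL_CIDR
              value: "192.240.0.0/16"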

  • Note that applying this YAML directly will fail, because the Calico Docker images cannot be pulled from within mainland China. There are two workarounds.
    Option 1: manually pull the required images through a mirror and re-tag them, for example:
    docker pull docker.m.daocloud.io/calico/cni:v3.12.3
    docker tag docker.m.daocloud.io/calico/cni:v3.12.3 calico/cni:v3.12.3
    Pull and re-tag the remaining images referenced in the YAML in the same way...
    Option 2: edit the YAML file and switch the image references to a domestic mirror, as shown in the snippet below

  • If a pod reports errors, use kubectl describe pod <name> -n kube-system to inspect the details and fix accordingly

...
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          # image: calico/cni:v3.12.3
          # switched to a domestic mirror; change every image reference in the file the same way
          image: docker.m.daocloud.io/calico/cni:v3.12.3
...
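Instead of editing every image line by hand, a single substitution can switch all calico/ image references to the mirror (assuming, as above, that docker.m.daocloud.io mirrors each of them):
sed -i 's#image: calico/#image: docker.m.daocloud.io/calico/#g' calico.yaml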
kubectl apply -f calico.yaml

kubectl get pods -A

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-59bb4f77b8-pcjtn   1/1     Running   0          100s
kube-system   calico-node-vx2q7                          1/1     Running   0          100s
kube-system   coredns-58cc8c89f4-jmgbc                   1/1     Running   0          42m
kube-system   coredns-58cc8c89f4-mm5pr                   1/1     Running   0          42m
kube-system   etcd-m1                                    1/1     Running   0          41m
kube-system   kube-apiserver-m1                          1/1     Running   0          41m
kube-system   kube-controller-manager-m1                 1/1     Running   0          41m
kube-system   kube-proxy-k9k4v                           1/1     Running   0          42m
kube-system   kube-scheduler-m1                          1/1     Running   0          41m

All pods showing Running means the cluster is healthy.
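Once the worker nodes have run their kubeadm join command, also confirm that they show up and report Ready:
kubectl get nodes -o wide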

