K8s Networking - Basic Environment Setup (Day01)

Published: 2025-03-06

1. Environment Preparation

1.1 Resources

Prepare three Ubuntu 20.04 machines; I used 2 vCPUs and 4 GB of RAM each.

  • 192.168.2.2 bpf1
  • 192.168.2.3 bpf2
  • 192.168.2.4 bpf3

1.2 Environment Initialization (run on all nodes)

1.2.1 Install required tools

apt update
apt install -y net-tools tcpdump chrony bridge-utils tree wget iftop ethtool curl

1.2.2 Configure the hosts file

~# cat /etc/hosts
127.0.0.1       localhost
192.168.2.2     bpf1
192.168.2.3     bpf2
192.168.2.4     bpf3
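
Optionally, a quick sanity check that the names resolve and the nodes can reach each other:

for h in bpf1 bpf2 bpf3; do ping -c 1 -W 1 $h; done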

1.2.3 Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
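
To confirm swap is really off (swapon prints nothing when no swap is active):

swapon --show
free -m | grep -i swap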

1.2.4 Tune kernel parameters

# vi /etc/sysctl.conf
…… (some content omitted)
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

root@bpf1:~# modprobe br_netfilter # load the module now (takes effect immediately)
root@bpf1:~# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf # load it at boot (persistent)

root@bpf1:~# lsmod | grep br_netfilter
br_netfilter           28672  0
bridge                176128  1 br_netfilter

root@bpf1:~# sysctl -p
  • net.ipv4.ip_forward = 1
    Enables IP forwarding. When set to 1, the Linux kernel forwards packets it receives whose destination is not the local host; this is required for routers and gateways.
  • net.bridge.bridge-nf-call-ip6tables = 1
    Allows ip6tables rules to be applied to IPv6 packets crossing a Linux bridge. By default bridged traffic is not passed to ip6tables; setting this to 1 enables it.
  • net.bridge.bridge-nf-call-iptables = 1
    Same as the IPv6 parameter above, but for IPv4: packets crossing a bridge are passed to iptables.
  • net.bridge.bridge-nf-call-arptables = 1
    Allows arptables rules to be applied to ARP packets crossing a Linux bridge.
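
To confirm the values actually took effect (the bridge-nf keys only exist once br_netfilter is loaded), read them back:

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-arptables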

1.2.5 Add the Aliyun Kubernetes apt repo

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
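
To verify the repo is usable and see which package versions it offers (the exact list depends on the mirror):

apt-cache madison kubeadm | head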

1.2.6 Install Docker

# Run this a few times if necessary, until it returns OK.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt install docker-ce docker-ce-cli containerd.io

1.2.7 Start Docker

systemctl daemon-reload
systemctl restart docker
systemctl enable docker

2. Installing a k8s Cluster with kubeadm

2.1 Install and start kubelet, kubectl, and kubeadm

kubectl does not need to be installed on the worker nodes.

root@bpf1:~# apt-get install -y kubelet=1.23.4-00 kubeadm=1.23.4-00 kubectl=1.23.4-00 --allow-unauthenticated
root@bpf1:~#  systemctl enable kubelet && systemctl restart kubelet
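
Optionally, pin these packages so a later apt upgrade does not move them off 1.23.4:

apt-mark hold kubelet kubeadm kubectl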

2.2 Configure the Docker cgroup driver

root@bpf1:~# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://cu2yw19m.mirror.aliyuncs.com"]
}

2.3 Adjust the kubelet configuration

root@bpf1:~# cat > /var/lib/kubelet/config.yaml <<EOF      
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl enable kubelet && systemctl restart kubelet

root@bpf1:~# docker info |grep "Cgroup Driver"
 Cgroup Driver: systemd

2.4 Initialize the Cluster (run on the master node)

2.4.1 Pull the images

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
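
If you want to preview which images will be downloaded, kubeadm can list them first (the version flag here assumes the v1.23.5 target used below):

kubeadm config images list --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.5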

2.4.2 Initialize the master node

root@bpf1:~# kubeadm init --kubernetes-version=v1.23.5 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
………… (some output omitted)
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.2:6443 --token 7vjk7y.2ztfqonr9miybhbv \
        --discovery-token-ca-cert-hash sha256:42ad7eb8fa479f9b9ee9e93c735863ee64c3c7d3f7ccd403a21dc301987a3d74

Configure kubectl access as instructed in the output:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

root@bpf1:~# kubectl get no -owide
NAME   STATUS     ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
bpf1   NotReady   control-plane,master   3m24s   v1.23.4   192.168.2.2   <none>        Ubuntu 20.04.6 LTS   5.4.0-202-generic   docker://27.3.1


2.4.3 Deploy the network plugin (flannel)

The node shows NotReady above because no CNI plugin is installed yet; deploying flannel fixes that.

root@bpf1:~# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

root@bpf1:~# kubectl get po -A
NAMESPACE      NAME                           READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-m86vx          1/1     Running   0             17s
kube-system    coredns-6d8c4cb4d-5ll9c        1/1     Running   0             57m
kube-system    coredns-6d8c4cb4d-hh2rk        1/1     Running   0             57m
kube-system    etcd-bpf1                      1/1     Running   3 (19m ago)   57m
kube-system    kube-apiserver-bpf1            1/1     Running   3 (18m ago)   57m
kube-system    kube-controller-manager-bpf1   1/1     Running   3 (19m ago)   57m
kube-system    kube-proxy-pwsv5               1/1     Running   3 (19m ago)   57m
kube-system    kube-scheduler-bpf1            1/1     Running   3 (19m ago)   57m

root@bpf1:~# kubectl get no
NAME   STATUS   ROLES                  AGE   VERSION
bpf1   Ready    control-plane,master   58m   v1.23.4

root@bpf1:~# kubectl taint nodes --all node-role.kubernetes.io/master-
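
This removes the node-role.kubernetes.io/master NoSchedule taint so that regular pods (like the test DaemonSet below) can also be scheduled on bpf1. To confirm the taint is gone:

kubectl describe node bpf1 | grep -i taint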

2.5 Add the worker nodes (run on bpf2 and bpf3)

kubeadm join 192.168.2.2:6443 --token 7vjk7y.2ztfqonr9miybhbv \
        --discovery-token-ca-cert-hash sha256:42ad7eb8fa479f9b9ee9e93c735863ee64c3c7d3f7ccd403a21dc301987a3d74
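
If the original token has expired (bootstrap tokens are valid for 24 hours by default), generate a fresh join command on the master:

kubeadm token create --print-join-command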


root@bpf1:~# kubectl get no
NAME   STATUS   ROLES                  AGE   VERSION
bpf1   Ready    control-plane,master   78m   v1.23.4
bpf2   Ready    <none>                 70s   v1.23.4
bpf3   Ready    <none>                 22s   v1.23.4

2.6 Create the flannel_cni test workload

root@bpf1:~# cat flannel_cni.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel-udp
  name: flannel-udp
spec:
  #replicas: 2
  selector:
    matchLabels:
      app: flannel-udp
  template:
    metadata:
      labels:
        app: flannel-udp
    spec:
      containers:
      - image: registry.cn-beijing.aliyuncs.com/sanhua-k8s/nettool:latest
        name: nettoolbox
        env:
          - name: NETTOOL_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
        securityContext:
          privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: flannel-udp
spec:
  type: NodePort
  selector:
    app: flannel-udp
  ports:
  - name: flannel-udp
    port: 80
    targetPort: 80
    nodePort: 32000

root@bpf1:~# kubectl apply -f flannel_cni.yaml
daemonset.apps/flannel-udp created
service/flannel-udp created

root@bpf1:~# kubectl get po -A |grep udp
default        flannel-udp-47tbl              1/1     Running   0             108s
default        flannel-udp-686k4              1/1     Running   0             108s
default        flannel-udp-ffrnz              1/1     Running   0             108s


root@bpf1:~# kubectl get svc -A
NAMESPACE     NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       flannel-udp   NodePort    10.100.186.33   <none>        80:32000/TCP             16h
default       kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP                  18h
kube-system   kube-dns      ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   18h

root@bpf1:~# curl 192.168.2.4:32000
PodName: flannel-udp-rxkbp | PodIP: eth0 10.244.2.6/24
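
To see which pod answered and how the pod IPs map to nodes, list the DaemonSet pods with their IPs:

kubectl get po -l app=flannel-udp -o wide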

3. Installing a k8s Cluster with Kind

A single separate VM is enough for this, because kind simulates installing multiple k8s clusters on one node.

Most of the later experiments use kind to switch between environments.

3.1 About Kind

Official site: kind (https://kind.sigs.k8s.io/)

kind is a tool for running local Kubernetes clusters using Docker containers as "nodes" (all on a single host). It is mainly intended for testing and local development.
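
For illustration, a multi-node cluster on one host is described by a small config file (the file name, cluster name, and node roles below are just an example, not part of this lab):

# kind-multinode.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

kind create cluster --name demo --config kind-multinode.yaml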

3.2 Install Docker (same as section 1.2.6)

3.3 Install kind from the binary release

root@localhost:~# wget https://github.com/kubernetes-sigs/kind/releases/download/v0.26.0/kind-linux-amd64

root@localhost:~# mv kind-linux-amd64 kind
root@localhost:~# chmod +x kind

root@localhost:~# mv kind /usr/local/bin/
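
A quick check that the binary is on the PATH and runs:

kind version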

3.4 Install the k8s cluster

3.4.1 Clone the required files

root@localhost:~# git clone https://gitee.com/rowan-wcni/wcni-kind.git
root@localhost:~/wcni-kind/LabasCode/flannel/5-flannel-ipip# ll
total 32
drwxr-xr-x  3 root root 4096 Jan 30 10:27 ./
drwxr-xr-x 14 root root 4096 Jan 29 06:31 ../
-rwxr-xr-x  1 root root 6028 Jan 29 06:34 1-setup-env.sh*
drwxr-xr-x  2 root root 4096 Jan 29 06:31 2-datapath/
-rw-r--r--  1 root root  690 Jan 29 06:31 cni.yaml
-rw-r--r--  1 root root 4726 Jan 29 06:31 flannel.yaml

3.4.2 Adjust the YAML files

root@localhost:~/wcni-kind/LabasCode/flannel/5-flannel-ipip# cat 1-setup-env.sh
…… (some content omitted)
cat <<EOF | KIND_EXPERIMENTAL_DOCKER_NETWORK=kind kind create cluster --name=flannel-ipip --image=registry.cn-beijing.aliyuncs.com/sanhua-k8s/kindest:v1.27.3 # change this image to your own registry's copy
…… (some content omitted)

root@localhost:~/wcni-kind/LabasCode/flannel/5-flannel-ipip# grep 'image:' cni.yaml # change this one too
      - image: registry.cn-beijing.aliyuncs.com/sanhua-k8s/nettool


root@localhost:~/wcni-kind/LabasCode/flannel/5-flannel-ipip# grep 'image:' flannel.yaml # change this one too
        image: registry.cn-beijing.aliyuncs.com/sanhua-k8s/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        image: registry.cn-beijing.aliyuncs.com/sanhua-k8s/mirrored-flannelcni-flannel:v0.19.2
        image: registry.cn-beijing.aliyuncs.com/sanhua-k8s/mirrored-flannelcni-flannel:v0.19.2

3.4.3 Reload kernel parameters

root@localhost:~/wcni-kind/LabasCode/flannel/5-flannel-ipip# modprobe br_netfilter
root@localhost:~/wcni-kind/LabasCode/flannel/5-flannel-ipip# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

# make the br_netfilter module load at boot (persistent)
root@localhost:~/wcni-kind/LabasCode/flannel/5-flannel-ipip# echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf
br_netfilter

3.4.4 Install the cluster

Before running the script, adjust the part that downloads kubectl; otherwise the download fails and every subsequent kubectl step will error out.

root@localhost:~/wcni-kind/LabasCode/flannel/5-flannel-ipip# sh 1-setup-env.sh

3.4.5 Check the cluster status
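
A typical way to confirm the kind cluster came up (kind points the current kubectl context at the new cluster when it is created):

kind get clusters
kubectl get nodes -o wide
kubectl get po -A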

4. Using Containerlab together with KinD

4.1 About Containerlab

Official site: containerlab (https://containerlab.dev/)

Containerlab is an open-source tool for building network lab environments, focused on rapid deployment and testing of containerized network devices.

In essence, it uses container technology to emulate all kinds of network devices (routers, switches, and so on).
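
As a minimal sketch of what a containerlab topology file looks like (the file name, node names, and image below are placeholders, not from this lab):

# demo.clab.yml
name: demo
topology:
  nodes:
    r1:
      kind: linux
      image: registry.cn-beijing.aliyuncs.com/sanhua-k8s/nettool:latest
    r2:
      kind: linux
      image: registry.cn-beijing.aliyuncs.com/sanhua-k8s/nettool:latest
  links:
    - endpoints: ["r1:eth1", "r2:eth1"]

Once containerlab is installed (section 4.2), such a topology is deployed with: containerlab deploy -t demo.clab.yml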

4.2 Install containerlab from the binary release

Be sure to install this; it will be needed later.

wget -O /tmp/clab.tar.gz https://github.com/srl-labs/containerlab/releases/download/v0.64.0/containerlab_0.64.0_linux_amd64.tar.gz

mkdir -p /etc/containerlab

tar -zxvf /tmp/clab.tar.gz -C /etc/containerlab

mv /etc/containerlab/containerlab /usr/bin && chmod a+x /usr/bin/containerlab
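
Confirm the install:

containerlab version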

