Last edited: 2024/3/26
Single-node setup
Check whether kubectl, kubelet, and kubeadm are already installed by simply typing each command; if the shell reports that the command is not found, you are good to proceed:
kubectl kubelet kubeadm
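If you prefer a one-shot check instead of typing each command by hand, a small loop like the following works (a sketch; command -v only reports binaries that are on PATH):

for bin in kubectl kubelet kubeadm; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin is installed at $(command -v "$bin")"
  else
    echo "$bin not found (good, nothing to remove)"
  fi
done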
If they are already installed, remove them with apt remove and snap remove:
sudo apt remove kubectl kubelet kubeadm
sudo snap remove kubectl kubelet kubeadm
Disable the firewall
Check the firewall status; "inactive" means it is not enabled:
sudo ufw status
Disable the firewall so it stays off across reboots:
sudo ufw disable
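For reference, when ufw is off the status command prints something like this (the exact wording can vary with the ufw version):

$ sudo ufw status
Status: inactive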
Make sure Docker is installed and that its cgroup driver is configured correctly, as follows.
Configure Docker
sudo mkdir -p /etc/docker
sudo vi /etc/docker/daemon.json
#{ # "registry-mirrors": ["https://2m9jza5s.mirror.aliyuncs.com"], # "insecure-registries": ["localhost:32000"], # "exec-opts": [ "native.cgroupdriver=systemd" ], # "data-root": "/data/wzh/docker/image", # "default-runtime": "nvidia", # "runtimes": { # "nvidia": { # "path": "/usr/bin/nvidia-container-runtime", # "runtimeArgs": [] # } # } #} { "registry-mirrors": ["https://2m9jza5s.mirror.aliyuncs.com"], # 必要 "insecure-registries": ["localhost:32000"], "exec-opts": [ "native.cgroupdriver=systemd" ], # 必要 "data-root": "/data/wzh/docker/image", # 配置镜像目录 }
"https://???.mirror.aliyuncs.com"配成自己的,见链接。
systemctl restart docker
systemctl restart kubelet
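To confirm that the systemd cgroup driver actually took effect, Docker's runtime info can be checked directly (an extra verification step, not part of the original instructions):

docker info | grep -i "cgroup driver"
# expected output: Cgroup Driver: systemd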
Install kubectl, kubelet, kubeadm
# also check kubernetes-cni
sudo apt install -y kubelet=1.22.7-00 kubectl=1.22.7-00 kubeadm=1.22.7-00
# apt list kubernetes-cni -a     # list the available versions
# sudo journalctl -u kubelet     # inspect the kubelet logs
# systemctl status kubelet       # check the kubelet status
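The pinned 1.22.7-00 packages are only resolvable if a Kubernetes apt repository is configured. The snippet below is a sketch that assumes the Aliyun mirror (any other Kubernetes apt source works the same way); it also holds the packages so that a later apt upgrade cannot silently move them to a newer version:

# assumption: using the Aliyun Kubernetes apt mirror; swap in your own repository if different
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
# optional: pin the versions so apt upgrade does not touch them
sudo apt-mark hold kubelet kubeadm kubectl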
Disable swap
vim /etc/default/kubelet
# add the following line
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
systemctl daemon-reload && systemctl restart kubelet
vi /etc/fstab
Comment out the line containing /swap.img.
The kubeadm version should not go above 1.25.0, because later Kubernetes releases use containerd (the built-in Docker shim is gone). For the same reason, do not run apt upgrade afterwards; otherwise you will have to redo all the steps above.
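Swap can also be turned off for the running system without waiting for a reboot; a minimal sketch (the /swap.img path is just the example from the fstab entry above):

sudo swapoff -a   # disable all swap for the current boot
free -h           # the Swap line should now read 0B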
If something goes wrong, reset first:
sudo kubeadm reset
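kubeadm reset does not remove everything; when retrying from scratch, the following extra cleanup is commonly needed (a sketch, paths may differ on your machine):

sudo rm -rf /etc/cni/net.d      # leftover CNI configuration
rm -rf $HOME/.kube/config       # stale kubeconfig from the previous init
# iptables rules from the old cluster can also linger; flush them if needed:
# sudo iptables -F && sudo iptables -t nat -F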
init
sudo kubeadm init --kubernetes-version=v1.22.7 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.24.0.0/16 --ignore-preflight-errors=Swap --apiserver-advertise-address=0.0.0.0
After init succeeds you will see output like the following; if it hangs, go back and check that the previous steps were all done:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.181.8.94:6443 --token 0desqq.a4oq0rwqyursqah9 \
        --discovery-token-ca-cert-hash sha256:7e181cd0f0a435adf7746b17b09b10dba5c9d83936e92fffdc1e67cbf4a9cc06
Configure kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
After init succeeds, check kubectl:
$ kubectl get pod -A
At this point two of the pods are still not up
- the pod network needs to be configured (see the check below)
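To list only the pods that have not come up yet (usually the two coredns pods, which stay Pending until a CNI plugin is installed), a quick filter is:

kubectl get pod -A --field-selector=status.phase!=Running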
Create a file flannel.yaml with the following content. (Note that the "Network" value in net-conf.json below, 10.244.0.0/16, should match the --pod-network-cidr passed to kubeadm init above, which was 10.24.0.0/16; make the two agree.)
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
Once the file is created, run kubectl apply -f flannel.yaml. The apply itself returns quickly, but it takes a little while for the pods to start; after a moment you should see:
wzh@chen:~$ kubectl get pod -A
NAMESPACE      NAME                           READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-xqpqb          1/1     Running   0          11h
kube-system    coredns-7f6cbbb7b8-w5lp8       1/1     Running   0          12h
kube-system    coredns-7f6cbbb7b8-xmps6       1/1     Running   0          12h
kube-system    etcd-chen                      1/1     Running   0          12h
kube-system    kube-apiserver-chen            1/1     Running   0          12h
kube-system    kube-controller-manager-chen   1/1     Running   0          12h
kube-system    kube-proxy-c5tks               1/1     Running   0          12h
kube-system    kube-scheduler-chen            1/1     Running   0          12h
wzh@chen:~$ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
chen   Ready    control-plane,master   13h   v1.22.7
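If the flannel pod does not reach Running, its logs and events usually explain why (a generic troubleshooting sketch; the pod name is the example one from the output above):

kubectl -n kube-flannel get pods -o wide
kubectl -n kube-flannel logs -l app=flannel                   # flannel daemon logs
kubectl -n kube-flannel describe pod kube-flannel-ds-xqpqb    # events for a specific pod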
Now remove all taints from the master so it can run workloads (a taint is removed by replacing the trailing ":<effect>" with "-"). The commands below remove the taints; use kubectl describe to check whether a node still has taints:
$ kubectl taint nodes --all node-role.kubernetes.io/master-
$ kubectl taint nodes --all foo-
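To check whether any taints remain (or to see what is there before removing them), either of the following works; the node name chen is the example from the output above:

kubectl describe node chen | grep -i taints
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'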