k8s Advanced Scheduling 02

Published: 2025-07-18

I. Theory

Taints:
  When a node has a taint, pods without a matching toleration will not be scheduled onto it.
  Taints can be set on any node.

The three options for a taint's effect:
 NoSchedule: new pods are not scheduled onto the node; existing pods are not evicted.

 PreferNoSchedule: the scheduler tries not to place pods on the node; existing pods are not evicted.

 NoExecute: new pods are not scheduled onto the node, and existing pods are evicted.

Tolerations:
  To let a pod run on a tainted node, give the pod a toleration matching the taint.
  Tolerations can only be set on pods.
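Besides matching a taint's exact value with `operator: Equal` (as in the practice section below), a toleration can use `operator: Exists` to match any taint with a given key regardless of its value. A minimal sketch; the pod name is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerate-any-check        # hypothetical name, for illustration only
spec:
  containers:
  - name: app
    image: nginx:1.7.9
  tolerations:
  - key: check            # matches any taint whose key is "check"
    operator: Exists      # no value field: every value of this key is tolerated
    effect: NoExecute
```

Omitting the `key` entirely with `operator: Exists` would tolerate all taints.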


Cordon:
  Marks a node as unschedulable; newly created pods will not run on it.

Drain (eviction):
  Before upgrading a node, drain moves the pods on it to other nodes to avoid disrupting them, and also cordons the node.

Affinity:
  Node affinity
    Hard node affinity: the pod must be scheduled onto a matching node.
    Soft node affinity: the scheduler prefers a matching node, decided by the weight of each term.
  Pod affinity
    Hard pod affinity: the pod must be scheduled onto the node where the target pod runs.
    Soft pod affinity: the scheduler prefers the node of the target pod, decided by weight.

The three affinity policies:
nodeAffinity: node affinity; controls which nodes a pod can or cannot be scheduled onto. This defines matching rules between a pod and nodes.

podAffinity: pod affinity; this defines matching rules between pods.

podAntiAffinity: pod anti-affinity; keeps a pod off the nodes where the matching pods are running.


Anti-affinity:
  Pod anti-affinity: keeps a pod away from the nodes where the specified pods run.
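The node-side equivalent of "keep this pod off certain nodes" can be expressed with a `NotIn` operator in a hard node-affinity rule. A sketch; the pod name is illustrative, and the `type` labels match the ones used in the practice section below:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: avoid-node01              # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: type, operator: NotIn, values: ["node01"]}  # any node except type=node01
  containers:
  - name: app
    image: nginx:1.7.9
```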

II. Practice

Import the images and resource manifests.

-- Taint and toleration test --
1. Create the taints
[root@k8s-master ~]# ku taint nodes k8s-node01 check=mycheck:NoExecute
node/k8s-node01 tainted
[root@k8s-master ~]# ku taint nodes k8s-node02 check=mycheck:NoExecute
node/k8s-node02 tainted

2. Create a pod
[root@k8s-master ~]# vim pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: myapp01
  labels:
    app: myapp01
spec:
  containers:
  - name: with-node-affinity
    image: nginx:1.7.9

[root@k8s-master ~]# ku create -f pod1.yaml 
pod/myapp01 created

3. Verify
[root@k8s-master ~]# ku get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
myapp01   0/1     Pending   0          27s   <none>   <none>   <none>           <none>

The pod stays Pending: both worker nodes carry the taint and the pod has no toleration, so it cannot be scheduled anywhere.

4. Update the pod manifest and recreate the pod
[root@k8s-master ~]# vim pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp01
  labels:
    app: myapp01
spec:
  containers:
  - name: with-node-affinity
    image: nginx:1.7.9
  tolerations:		# toleration
  - key: check		# key of the node's taint
    operator: Equal	# the key, value, and effect here must match the taint exactly
    value: mycheck	# value of the node's taint
    effect: NoExecute	# effect of the node's taint
    tolerationSeconds: 60	# how many seconds to tolerate the taint

[root@k8s-master ~]# ku create -f pod1.yaml 
pod/myapp01 created


5. Verify
[root@k8s-master ~]# ku get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
myapp01   1/1     Running   0          11s   10.244.85.196   k8s-node01   <none>           <none>

With the toleration added, the pod can run on a tainted node.

[root@k8s-master ~]# ku get pod -o wide
NAME      READY   STATUS        RESTARTS   AGE   IP       NODE         NOMINATED NODE   READINESS GATES
myapp01   0/1     Terminating   0          61s   <none>   k8s-node02   <none>           <none>
[root@k8s-master ~]# ku get pod -o wide
No resources found in default namespace.

After 60 seconds (tolerationSeconds) the pod is evicted automatically. Note that tolerationSeconds only applies to the NoExecute effect.

PS: after this experiment, remove the taints from the nodes.
[root@k8s-master ~]# ku taint nodes k8s-node02 check=mycheck:NoExecute-
node/k8s-node02 untainted
[root@k8s-master ~]# ku taint nodes k8s-node01 check=mycheck:NoExecute-
node/k8s-node01 untainted


-- Cordon and drain --
1. Cordon the node
[root@k8s-master ~]# ku cordon k8s-node01
node/k8s-node01 cordoned

[root@k8s-master ~]# ku get nodes
NAME         STATUS                     ROLES                  AGE   VERSION
k8s-master   Ready                      control-plane,master   17d   v1.23.0
k8s-node01   Ready,SchedulingDisabled   <none>                 17d   v1.23.0
k8s-node02   Ready                      <none>                 17d   v1.23.0

2. Uncordon the node
[root@k8s-master ~]# ku uncordon k8s-node01
node/k8s-node01 uncordoned
[root@k8s-master ~]# ku get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   17d   v1.23.0
k8s-node01   Ready    <none>                 17d   v1.23.0
k8s-node02   Ready    <none>                 17d   v1.23.0

3. Drain the node
[root@k8s-master ~]# ku drain k8s-node01 --ignore-daemonsets --delete-local-data --force 
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/k8s-node01 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-dlmxv, kube-system/kube-proxy-v9cck
node/k8s-node01 drained

[root@k8s-master ~]# ku get nodes
NAME         STATUS                     ROLES                  AGE   VERSION
k8s-master   Ready                      control-plane,master   17d   v1.23.0
k8s-node01   Ready,SchedulingDisabled   <none>                 17d   v1.23.0
k8s-node02   Ready                      <none>                 17d   v1.23.0

After a successful drain, the node is automatically left in the cordoned (SchedulingDisabled) state.

PS: uncordon the node after the experiment.



-- Node affinity (hard rule) --
1. Label the nodes
[root@k8s-master ~]# ku label nodes k8s-node01 type=node01
node/k8s-node01 labeled
[root@k8s-master ~]# ku label nodes k8s-node02 type=node02
node/k8s-node02 labeled

Check the node labels
[root@k8s-master ~]# ku get node --show-labels
NAME         STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master   Ready    control-plane,master   17d   v1.23.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=

k8s-node01   Ready    <none>                 17d   v1.23.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux,type=node01

k8s-node02   Ready    <none>                 17d   v1.23.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux,type=node02

2. Create the pods
[root@k8s-master ~]# vim test01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod01
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard affinity
        nodeSelectorTerms:
        - matchExpressions:
          - {key: type, operator: In, values: ["node01"]}  # type=node01
  containers:
    - name: pod01
      image: nginx:1.7.9

[root@k8s-master ~]# ku create -f test01.yaml 
pod/pod01 created

[root@k8s-master ~]# vim test02.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod02
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard affinity
        nodeSelectorTerms:
        - matchExpressions:
          - {key: type, operator: In, values: ["node02"]}  # type=node02
  containers:
    - name: pod02
      image: nginx:1.7.9

[root@k8s-master ~]# ku create -f test02.yaml 
pod/pod02 created


3. Verify
[root@k8s-master ~]# ku get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
pod01   1/1     Running   0          2m45s   10.244.85.197   k8s-node01   <none>           <none>
pod02   1/1     Running   0          5s      10.244.58.196   k8s-node02   <none>           <none>


As the affinity rules dictate, pod01 was scheduled onto node01 (labeled type=node01) and pod02 onto node02 (labeled type=node02).


-- Node affinity (soft rule) --
1. Label the nodes
[root@k8s-master ~]# ku label nodes k8s-node01 type=node01
node/k8s-node01 labeled
[root@k8s-master ~]# ku label nodes k8s-node02 type=node02
node/k8s-node02 labeled


2. Create the pod
[root@k8s-master ~]# vim test03.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod03
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 60
          preference:
            matchExpressions:
            - {key: type, operator: In, values: ["node02"]}
        - weight: 90
          preference:
            matchExpressions:
            - {key: tpye, operator: In, values: ["node01"]}
  containers:
    - name: pod03
      image: nginx:1.7.9

[root@k8s-master ~]# ku create -f test03.yaml 
pod/pod03 created

3. Verify
[root@k8s-master ~]# ku get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
pod03   1/1     Running   0          29s     10.244.58.197   k8s-node02   <none>           <none>

pod03 was scheduled onto node02. Note that the weight-90 term in the manifest mistypes its key as `tpye` instead of `type`, so it matches no node; only the weight-60 term for node02 scores, which is why pod03 landed there. Soft node affinity is decided by weight, not by the order of the entries in the yaml file.
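With the key typo fixed, the weight-90 term matches node01 and should outscore node02's weight-60 term, so the pod would be expected to land on node01. A corrected sketch (the pod name is changed to avoid clashing with pod03; not re-run against the cluster above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod03-fixed               # hypothetical name
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 60
          preference:
            matchExpressions:
            - {key: type, operator: In, values: ["node02"]}
        - weight: 90
          preference:
            matchExpressions:
            - {key: type, operator: In, values: ["node01"]}  # typo fixed: type, not tpye
  containers:
    - name: pod03-fixed
      image: nginx:1.7.9
```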


-- Pod affinity (hard rule) --
1. Create the base pods
[root@k8s-master ~]# vim test04.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod04
  labels:
    app: pod04
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: type, operator: In, values: ["node01"]}
  containers:
    - name: pod04
      image: nginx:1.7.9

[root@k8s-master ~]# vim test05.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod05
  labels:
    app: pod05
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: type, operator: In, values: ["node02"]}
  containers:
    - name: pod05
      image: nginx:1.7.9

[root@k8s-master ~]# ku create -f test04.yaml 
pod/pod04 created
[root@k8s-master ~]# ku create -f test05.yaml 
pod/pod05 created

[root@k8s-master ~]# ku get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod04   1/1     Running   0          13s   10.244.85.198   k8s-node01   <none>           <none>
pod05   1/1     Running   0          10s   10.244.58.198   k8s-node02   <none>           <none>

2. Create the hard-affinity pod
[root@k8s-master ~]# vim test06.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: pod06
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard affinity
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["pod05"]}	# app=pod05
        topologyKey: kubernetes.io/hostname
  containers:
  - name: pod06
    image: nginx:1.7.9

PS: pod affinity matches on the labels of the pod being targeted.

[root@k8s-master ~]# ku create -f test06.yaml 
pod/pod06 created

3. Verify
[root@k8s-master ~]# ku get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
pod05   1/1     Running   0          2m35s   10.244.58.198   k8s-node02   <none>           <none>
pod06   1/1     Running   0          4s      10.244.58.199   k8s-node02   <none>           <none>

pod06 is affine to pod05, so it is scheduled onto pod05's node (node02).
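The `topologyKey` decides what counts as "the same place". With `kubernetes.io/hostname` it means the same node; with a zone label, co-location is only required within the same zone. A hedged sketch (it assumes nodes carry the standard `topology.kubernetes.io/zone` label, which the two-node lab cluster above may not have):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod06-zone                # hypothetical name
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["pod05"]}
        topologyKey: topology.kubernetes.io/zone   # "same place" = same zone, not same node
  containers:
  - name: pod06-zone
    image: nginx:1.7.9
```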


-- Pod affinity (soft rule) --
1. Create the soft-affinity pod
[root@k8s-master ~]# vim test07.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod07
spec:
  affinity:
    podAffinity:		# pod affinity
      preferredDuringSchedulingIgnoredDuringExecution:  # soft rule
      - weight: 20	# weight
        podAffinityTerm:	# affinity term
          labelSelector:	# label selector
            matchExpressions:  # expression match
              - {key: app, operator: In, values: ["pod05"]}  # app=pod05
          topologyKey: kubernetes.io/hostname
      - weight: 80
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - {key: app, operator: In, values: ["pod04"]}
          topologyKey: kubernetes.io/hostname
  containers:
    - name: pod07
      image: nginx:1.7.9

[root@k8s-master ~]# ku create -f test07.yaml 
pod/pod07 created

2. Verify
[root@k8s-master ~]# ku get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
pod04   1/1     Running   0          7m39s   10.244.85.198   k8s-node01   <none>           <none>
pod05   1/1     Running   0          7m36s   10.244.58.198   k8s-node02   <none>           <none>
pod06   1/1     Running   0          5m5s    10.244.58.199   k8s-node02   <none>           <none>
pod07   1/1     Running   0          11s     10.244.85.199   k8s-node01   <none>           <none>

pod07 is affine to both pod05 (weight 20) and pod04 (weight 80), so it is scheduled onto node01, the node where pod04 runs.
Note: soft node affinity is also weight-based, not order-based; the node02 result in the earlier soft node-affinity test came from the mistyped key (tpye) in that manifest, not from yaml ordering.

-- Pod anti-affinity --
1. Create the anti-affinity pod
[root@k8s-master ~]# vim test09.yaml
apiVersion: v1
kind: Pod
metadata: 
  name: pod09
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["pod05"]}
        topologyKey: kubernetes.io/hostname
  containers:
  - name: test09
    image: nginx:1.7.9

[root@k8s-master ~]# ku create -f test09.yaml 
pod/pod09 created

2. Verify
[root@k8s-master ~]# ku get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
pod04   1/1     Running   0          11m     10.244.85.198   k8s-node01   <none>           <none>
pod05   1/1     Running   0          11m     10.244.58.198   k8s-node02   <none>           <none>
pod09   1/1     Running   0          10s     10.244.85.200   k8s-node01   <none>           <none>

pod09 is anti-affine to pod05, so it avoids node02 and is scheduled onto node01, the node where pod04 runs.
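A common real-world use of podAntiAffinity is spreading the replicas of one Deployment across nodes so that no two land on the same host. A sketch under assumed names (the Deployment and its `app: web` label are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["web"]}   # repel pods of this same Deployment
            topologyKey: kubernetes.io/hostname             # at most one replica per node
      containers:
      - name: web
        image: nginx:1.7.9
```

With a hard rule, replicas beyond the number of schedulable nodes stay Pending; a `preferred` rule would allow doubling up instead.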

