1. K8s Security Framework (Authentication, Authorization, Admission Control)
The K8s security framework controls access through the three stages below. Each stage supports a plugin mechanism, and plugins are enabled via the API Server configuration.
- Authentication
- Authorization
- Admission Control
1) Authentication
The K8s API Server supports three ways of authenticating clients:
- HTTPS certificate authentication: digital certificates signed by the cluster CA (kubeconfig)
- HTTP Token authentication: a bearer token identifies the user (ServiceAccount)
- HTTP Basic authentication: username + password (removed in v1.19)
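As a rough illustration of token authentication, a request to the API Server can carry a ServiceAccount token as a Bearer header. This is only a sketch: the token path assumes the commands run inside a Pod, and the API server address reuses the one that appears later in these notes.
# inside a Pod: read the mounted ServiceAccount token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# call the API Server, trusting the mounted cluster CA
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://192.168.1.71:6443/api/v1/namespaces/default/pods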
2) Authorization
RBAC (Role-Based Access Control) is responsible for authorization. Based on the attributes of an API request, RBAC decides whether to allow or deny it. Common authorization dimensions:
- user: the username
- group: the user's group
- resource, e.g. pod, deployment
- resource verbs: get, list, create, update, patch, watch, delete
- namespace
- API group
3) Admission Control
Admission Control is in fact a list of admission controller plugins. Every request sent to the API Server is checked by each plugin in this list; if any check fails, the request is rejected.
① Enable an admission controller
- Command: kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...
② Disable an admission controller:
- Command: kube-apiserver --disable-admission-plugins=PodNodeSelector,AlwaysDeny ...
③ View the plugins enabled by default:
- Command: kubectl exec kube-apiserver-k8s-master -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
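On a kubeadm cluster, the same flags can also be read straight from the kube-apiserver static Pod manifest; the path below assumes the default kubeadm layout.
# assumes a kubeadm-installed cluster; adjust the path if yours differs
grep admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml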
1.1 Role-Based Access Control: RBAC
RBAC (Role-Based Access Control) is the default authorization mode in K8s, and its policy is dynamic: changes take effect immediately.
Subjects
- User: a user
- Group: a user group
- ServiceAccount: a service account
Roles
- Role: grants access within a specific namespace
- ClusterRole: grants access across all namespaces
Role bindings
- RoleBinding: binds a role to a subject
- ClusterRoleBinding: binds a cluster role to a subject
Note: a RoleBinding grants permissions within a specific namespace, while a ClusterRoleBinding grants them cluster-wide.
K8s ships with four built-in cluster roles for users; view them with kubectl get clusterrole. Roles whose names start with system: are for internal use by the system.
| Built-in ClusterRole | Description |
| --- | --- |
| cluster-admin | Superuser; full permissions on the entire cluster |
| admin | Full read/write access within a namespace (intended for namespace administrators) |
| edit | Read/write access to most objects in a namespace, but cannot view or modify Roles and RoleBindings |
| view | Read-only access to most objects in a namespace, but cannot view Roles, RoleBindings, or Secrets |
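To see exactly which rules a built-in role grants, describe it; for example, the read-only view role:
kubectl get clusterrole
kubectl describe clusterrole view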
2. RBAC Authentication and Authorization Examples
2.1 Granting Users / Groups Access to K8s (TLS Certificates)
Requirement 1: grant a user access to K8s
Example: grant a user access to a specific namespace. Say a new colleague joins and should first get familiar with the K8s cluster; for safety you don't want to hand out broad permissions yet, so start by granting read-only access to Pods in the default namespace.
Rough implementation steps:
1. Use the K8s CA root certificate to sign a client certificate
2. Generate a kubeconfig file
3. Create the RBAC policy
4. Test the permissions with the kubeconfig file: kubectl get pods --kubeconfig=./aliang.kubeconfig
TLS certificate authentication flow:
Step 1: use the cfssl tool to sign a client certificate with the K8s CA root certificate
# Inspect the k8s CA root certificate directory
[root@k8s-master-1-71 ~]# ls /etc/kubernetes/pki/
apiserver.crt apiserver-etcd-client.key apiserver-kubelet-client.crt ca.crt etcd front-proxy-ca.key front-proxy-client.key sa.pub
apiserver-etcd-client.crt apiserver.key apiserver-kubelet-client.key ca.key front-proxy-ca.crt front-proxy-client.crt sa.key
## ca.crt is the root certificate, ca.key is the root certificate key
[root@k8s-master-1-71 ~]# mkdir rbac-ssl ; mv rbac.zip rbac-ssl/
[root@k8s-master-1-71 ~]# unzip rbac.zip -d rbac-ssl/
[root@k8s-master-1-71 rbac]# bash cert.sh //run the client-certificate signing script
cat > ca-config.json <<EOF ## CA signing configuration
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
cat > aliang-csr.json <<EOF ## client certificate signing request (CSR) configuration
{
"CN": "aliang",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key -config=ca-config.json -profile=kubernetes aliang-csr.json | cfssljson -bare aliang
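The issued client certificate can then be inspected to confirm the subject fields (the username is taken from CN, the group from O) and the validity period; this assumes openssl is available on the master.
# check the subject and validity of the freshly signed client certificate
openssl x509 -in aliang.pem -noout -subject -dates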
Step 2: generate the kubeconfig file
Note: the kubectl config commands are used to generate or modify kubeconfig files.
[root@k8s-master-1-71 rbac]# bash kubeconfig.sh //run the script
# Set the cluster
kubectl config set-cluster kubernetes \ ## the set-cluster name can be anything
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \ ## embed the certificate in the kubeconfig; false would reference it by path
--server=https://192.168.1.71:6443 \ ## API server address
--kubeconfig=aliang.kubeconfig ## output kubeconfig file; the name is up to you
# Set the client credentials
kubectl config set-credentials aliang \ ## the set-credentials name can be anything
--client-key=aliang-key.pem \ ## client certificate key
--client-certificate=aliang.pem \ ## client certificate
--embed-certs=true \
--kubeconfig=aliang.kubeconfig
# Set the default context
kubectl config set-context kubernetes \ ## the set-context name can be anything
--cluster=kubernetes \ ## reference the cluster defined above
--user=aliang \ ## reference the user defined above
--kubeconfig=aliang.kubeconfig
# Select the current context, pointing at the context defined above
[root@k8s-master-1-71 rbac]# kubectl config use-context kubernetes --kubeconfig=aliang.kubeconfig
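Before handing the file out, it can be sanity-checked; kubectl config view redacts the embedded certificate data by default.
kubectl config view --kubeconfig=aliang.kubeconfig
kubectl config current-context --kubeconfig=aliang.kubeconfig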
Step 3: create a role and bind the user to it
Method 1: command line
Bind the user aliang to the built-in view ClusterRole in the default namespace:
- Command: kubectl create rolebinding <binding-name> --clusterrole=view --user=<username> --dry-run=client -o yaml > rbac-command.yaml
[root@k8s-master-1-71 rbac]# kubectl create rolebinding aliang --clusterrole=view --user=aliang --dry-run=client -o yaml > rbac-command.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: aliang
roleRef: # binds the built-in ClusterRole view (read-only)
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: aliang
# Test 1: read access works
[root@k8s-master-1-71 rbac]# kubectl get pods,deploy,svc --kubeconfig=./aliang.kubeconfig
NAME READY STATUS RESTARTS AGE
pod/httpd-web-fd84784fb-9cz5p 1/1 Running 1 (23h ago) 2d18h
pod/httpd-web-fd84784fb-jnc4l 1/1 Running 1 (23h ago) 2d18h
pod/httpd-web-fd84784fb-p9gjq 1/1 Running 1 (23h ago) 2d18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/httpd-web 3/3 3 3 2d18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/httpd-web ClusterIP 10.103.151.134 <none> 80/TCP 2d19h
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 99d
# Test 2: deletion is denied, because the view role is read-only
[root@k8s-master-1-71 rbac]# kubectl delete pods httpd-web-fd84784fb-9cz5p --kubeconfig=./aliang.kubeconfig
Error from server (Forbidden): pods "httpd-web-fd84784fb-9cz5p" is forbidden: User "aliang" cannot delete resource "pods" in API group "" in the namespace "default"
Bind the user aliang to the built-in admin ClusterRole in the default namespace:
[root@k8s-master-1-71 rbac]# kubectl create rolebinding aliang --clusterrole=admin --user=aliang
# Test: deletion now works
[root@k8s-master-1-71 rbac]# kubectl delete pods httpd-web-fd84784fb-9cz5p --kubeconfig=./aliang.kubeconfig
pod "httpd-web-fd84784fb-9cz5p" deleted
Method 2: custom YAML (Role, RoleBinding)
# Role: the permissions to grant
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""] # api组,例如apps组,空值表示是核心API组,像 namespace、pod、service、pv、pvc都在里面
resources: ["pods"] #资源名称(复数),例如 pods、deployments、services
verbs: ["get","watch","list"] # 资源操作方法
---
# RoleBinding: bind the subject to the role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: default
subjects:
- kind: User # the subject
name: aliang # subject name
apiGroup: rbac.authorization.k8s.io
roleRef: # the role being bound
kind: Role
name: pod-reader # role name
apiGroup: rbac.authorization.k8s.io
For example, to view Deployments you need to add the apps group to apiGroups; to also allow deleting Pods, add another entry under rules (a quick impersonation check follows the snippet):
rules:
- apiGroups: ["","apps"] # add the API groups for the resources you want to access
resources: ["pods","services","deployments"]
verbs: ["get","watch","list"]
- apiGroups: [""] # a second rule, e.g. to allow deleting Pods
resources: ["pods"]
verbs: ["delete"]
Requirement 2: grant a user group access to K8s
User group: the benefit of a group is that you don't have to grant permissions to each user individually; you authorize the group name once, and every user in the group accesses resources with the group's permissions.
Example: grant permissions to the dev group as a whole
Rough implementation steps:
1. In cert.sh, change the O field of the CSR (aliang-csr.json) to dev, then regenerate the certificate and kubeconfig file
2. Bind the dev group to the Role (pod-reader)
3. Test: as long as the O field is dev, every user's kubeconfig file carries the same permissions
Step 1: use the cfssl tool to sign a client certificate with the K8s CA root certificate
# Remove the client certificates created earlier (aliang-key.pem, aliang.pem)
[root@k8s-master-1-71 rbac]# rm -rf *.pem
# Run the client-certificate signing script
[root@k8s-master-1-71 rbac]# bash cert.sh
cat > ca-config.json <<EOF ## CA signing configuration
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
EOF
cat > dev-csr.json <<EOF ## client CSR configuration (note the new file name)
{
"CN": "k8s", # change the CN field as you like (the username is taken from CN)
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "dev", # 修改O字段(组检查O字段)
"OU": "System"
}
]
}
EOF
# Run cfssl to generate the client certificate and key
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt \
-ca-key=/etc/kubernetes/pki/ca.key \
-config=ca-config.json \
-profile=kubernetes dev-csr.json | cfssljson -bare dev
Step 2: generate the kubeconfig file
Note: the kubectl config commands are used to generate or modify kubeconfig files.
[root@k8s-master-1-71 rbac]# bash kubeconfig.sh //run the script
# Set the cluster
kubectl config set-cluster kubernetes \ ## the set-cluster name can be anything
--certificate-authority=/etc/kubernetes/pki/ca.crt \
--embed-certs=true \ ## embed the certificate in the kubeconfig; false would reference it by path
--server=https://192.168.1.71:6443 \ ## API server address
--kubeconfig=dev.kubeconfig ## output kubeconfig file; the name is up to you
# Set the client credentials
kubectl config set-credentials dev \ ## the set-credentials name can be anything
--client-key=dev-key.pem \ ## client certificate key
--client-certificate=dev.pem \ ## client certificate
--embed-certs=true \
--kubeconfig=dev.kubeconfig
# Set the default context
kubectl config set-context kubernetes \ ## the set-context name can be anything
--cluster=kubernetes \ ## reference the cluster defined above
--user=dev \ ## reference the user defined above
--kubeconfig=dev.kubeconfig
# Select the current context, pointing at the context defined above
[root@k8s-master-1-71 rbac]# kubectl config use-context kubernetes --kubeconfig=dev.kubeconfig
Step 3: create a role binding for the user group
- Command: kubectl create rolebinding <binding-name> --clusterrole=view --group=<groupname> --dry-run=client -o yaml > rbac-command.yaml
[root@k8s-master-1-71 rbac]# kubectl create rolebinding dev --clusterrole=view --group=dev
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
creationTimestamp: null
name: dev
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group # the subject kind is Group
name: dev
# Test
[root@k8s-master-1-71 rbac]# kubectl get pods --kubeconfig=./dev.kubeconfig
NAME READY STATUS RESTARTS AGE
httpd-web-fd84784fb-jnc4l 1/1 Running 1 (24h ago) 2d19h
httpd-web-fd84784fb-p9gjq 1/1 Running 1 (24h ago) 2d19h
httpd-web-fd84784fb-qdthv 1/1 Running 0 56m
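To see at a glance everything the dev kubeconfig may do in the default namespace, kubectl auth can-i --list can be used; the output format varies slightly between versions.
kubectl auth can-i --list --kubeconfig=./dev.kubeconfig -n default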
2.2 Granting Applications Access to K8s (ServiceAccount)
A ServiceAccount (SA) is a service account that lets programs access the K8s API.
- When a namespace is created, a SA named default is created automatically; it is not bound to any permissions.
- When the default SA is created, a default-token-xxx Secret is created automatically and associated with it (in K8s v1.24+ this long-lived token Secret is no longer auto-created; tokens are projected into Pods instead).
- When a Pod is created without specifying a SA, the default SA is mounted into the Pod as a volume at /var/run/secrets/kubernetes.io/serviceaccount.
- Verify the default SA's permissions: kubectl --as=system:serviceaccount:default:default get pods
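The injected credentials are visible from inside any Pod; <pod-name> below is a placeholder for one of your own Pods.
# the projected ServiceAccount volume usually contains ca.crt, namespace and token
kubectl exec <pod-name> -- ls /var/run/secrets/kubernetes.io/serviceaccount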
Requirement: in the test namespace, grant a Python program running in a container access to the K8s API
Rough implementation steps:
1. Create the ServiceAccount, Role and RoleBinding
2. Bind the ServiceAccount to the Role
3. Specify the custom SA in the Pod
4. Run the Python program inside the container to test access to the K8s API
ServiceAccount authentication flow:
Command-line version: authorize the SA to view only the workload controllers in the test namespace
# Create the namespace
- Command: kubectl create namespace test
# Create the ServiceAccount
- Command: kubectl create serviceaccount py-k8s -n test
# Create the Role
- Command: kubectl create role py-role --verb=get,list --resource=deployments,daemonsets,statefulsets -n test
# Bind the ServiceAccount to the Role
- Command: kubectl create rolebinding py-rolebinding --serviceaccount=test:py-k8s --role=py-role -n test
Equivalent YAML:
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: py-k8s
namespace: test
# Role
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: py-role
namespace: test
rules:
- apiGroups: ["apps"]
resources: ["deployments","daemonsets","statefulsets"]
verbs: ["get"
,
"list"]
# Bind the ServiceAccount to the Role
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: py-rolebinding
namespace: test
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: py-role
subjects:
- kind: ServiceAccount
name: py-k8s
namespace: test
Test the SA:
# View the SA, Role and RoleBinding
[root@k8s-master-1-71 ~]# kubectl get sa -n test
NAME SECRETS AGE
default 0 60m
py-k8s 0 59m
[root@k8s-master-1-71 ~]# kubectl get role -n test
NAME CREATED AT
py-role 2023-05-27T10:51:04Z
[root@k8s-master-1-71 ~]# kubectl get rolebinding -n test
NAME ROLE AGE
py-rolebinding Role/py-role 7m4s
# Check permissions
kubectl --as=system:serviceaccount:test:py-k8s get deployment -n test
Specify the custom SA in a Pod:
apiVersion: v1
kind: Pod
metadata:
name: py-k8s
namespace: test
spec:
serviceAccountName: py-k8s # use this ServiceAccount; its token is injected into the Pod
containers:
- image: python:3
name: python
command:
- sleep
- 24h
Python program in the container accessing the K8s API:
from kubernetes import client, config

with open('/var/run/secrets/kubernetes.io/serviceaccount/token') as f:
    token = f.read()

configuration = client.Configuration()
configuration.host = "https://kubernetes.default"  # API server address (in-cluster Service name)
configuration.ssl_ca_cert = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"  # CA certificate
configuration.verify_ssl = True  # enable certificate verification
configuration.api_key = {"authorization": "Bearer " + token}  # pass the token string
client.Configuration.set_default(configuration)

apps_api = client.AppsV1Api()
core_api = client.CoreV1Api()

try:
    print("###### Deployment list ######")
    # list the names of all Deployments in the test namespace (allowed by py-role)
    for dp in apps_api.list_namespaced_deployment("test").items:
        print(dp.metadata.name)
except Exception:
    print("No permission to access Deployment resources!")

try:
    # list the names of all Pods in the test namespace (not allowed by py-role)
    print("###### Pod list ######")
    for po in core_api.list_namespaced_pod("test").items:
        print(po.metadata.name)
except Exception:
    print("No permission to access Pod resources!")
[root@k8s-master-1-71 ~]# kubectl cp k8s-api-test.py py-k8s:/ -n test
[root@k8s-master-1-71 ~]# kubectl exec -it py-k8s -n test -- bash
root@py-k8s:/# ls
bin dev home lib media opt root sbin sys usr
boot etc k8s-api-test.py lib64 mnt proc run srv tmp var
# Install the Python kubernetes client in the container
root@py-k8s:/# pip install kubernetes -i https://mirrors.aliyun.com/pypi/simple
# Run the test program
root@py-k8s:/# python k8s-api-test.py
###### Deployment list ######
test
###### Pod list ######
No permission to access Pod resources!
Note: the flow is to create a ServiceAccount, reference it when deploying the Deployment or Pod, and grant permissions against that SA; the program inside the container then uses the SA's injected token to connect to the K8s API and make calls.
3. Resource Quotas: ResourceQuota
When multiple teams or users share a K8s cluster, resource usage can become uneven; by default it is first come, first served. ResourceQuota limits the total resource usage of a namespace and solves this problem.
Workflow: the K8s administrator creates one or more ResourceQuota objects per namespace to define the total resource usage. K8s tracks the namespace's resource usage and rejects requests that would exceed the defined quota. ResourceQuota is an admission controller plugin and is enabled by default.
View the plugins enabled by default:
- Command: kubectl exec kube-apiserver-k8s-master -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
View quotas:
- Command: kubectl get quota -n <namespace>
Note: ResourceQuota is namespace-scoped; the quota applies to whichever namespace the object is created in.
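Besides kubectl get quota, kubectl describe quota shows a per-resource used/hard breakdown, which helps when a request is being rejected:
kubectl describe quota -n <namespace>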
3.1 Compute Resource Quotas
Example: in the test namespace, limit the total CPU and memory that Pods may request (requests) and the total upper limits (limits) they may set.
[root@k8s-master-1-71 ~]# kubectl create ns test
[root@k8s-master-1-71 ~]# kubectl apply -f test-RQuota1.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
namespace: test
spec:
hard:
requests.cpu: "1" # pod可请求的CPU资源总量不超过1核
requests.memory: 1Gi # pod可请求的MEM资源总量不超过1Gi
limits.cpu: "2" # pod上限的CPU资源总量不超过2核
limits.memory: 2Gi # pod上限的MEM资源总量不超过2Gi
# View the quota:
[root@k8s-master-1-71 ~]# kubectl get quota -n test
NAME AGE REQUEST LIMIT
compute-resources 13s requests.cpu: 0/1, requests.memory: 0/1Gi limits.cpu: 0/2, limits.memory: 0/2Gi
Test 1:
[root@k8s-master-1-71 ~]# kubectl apply -f test-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web
name: web
namespace: test
spec:
replicas: 3
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- image: nginx
name: nginx
resources:
requests:
cpu: 0.2
memory: 200Mi
limits:
cpu: 0.5
memory: 500Mi
[root@k8s-master-1-71 ~]# kubectl get quota -n test
NAME AGE REQUEST LIMIT
compute-resources 2m47s requests.cpu: 600m/1, requests.memory: 600Mi/1Gi limits.cpu: 1500m/2, limits.memory: 1500Mi/2Gi
## With 3 Pods created by the Deployment, the quota usage becomes:
- requests.cpu: 0.2 × 3 = 600m (of 1)
- requests.memory: 200Mi × 3 = 600Mi (of 1Gi)
- limits.cpu: 0.5 × 3 = 1500m (of 2)
- limits.memory: 500Mi × 3 = 1500Mi (of 2Gi)
Test 2: add 3 more Pods; only 1 of them can be created, and the remaining 2 are rejected because they would exceed the quota.
[root@k8s-master-1-71 ~]# kubectl apply -f test2-pod.yaml
...
resources:
requests:
cpu: 0.3
memory: 300Mi
limits:
cpu: 0.5
memory: 500Mi
[root@k8s-master-1-71 ~]# kubectl get deploy,rs -n test
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 3/3 3 3 6m12s
deployment.apps/web2 1/3 1 1 5m33s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-8d7db6f9c 3 3 3 6m12s
replicaset.apps/web2-5b9fd67484 3 1 1 5m33s
[root@k8s-master-1-71 ~]# kubectl describe rs web2-5b9fd67484 -n test
Warning FailedCreate 2m6s (x8 over 4m33s) replicaset-controller (combined from similar events): Error creating: pods "web2-5b9fd67484-zplxx" is forbidden: exceeded quota: compute-resources, requested: limits.cpu=500m,limits.memory=500Mi,requests.cpu=300m,requests.memory=300Mi, used: limits.cpu=2,limits.memory=2000Mi,requests.cpu=900m,requests.memory=900Mi, limited: limits.cpu=2,limits.memory=2Gi,requests.cpu=1,requests.memory=1Gi
[root@k8s-master-1-71 ~]# kubectl get quota -n test
NAME AGE REQUEST LIMIT
compute-resources 18m requests.cpu: 900m/1, requests.memory: 900Mi/1Gi limits.cpu: 2/2, limits.memory: 2000Mi/2Gi
3.2 Storage Resource Quotas
Example: in the test namespace, limit the total storage (requests.storage) that PVCs may request.
[root@k8s-master-1-71 ~]# kubectl apply -f test-RQuota2.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: storage-resources
namespace: test
spec:
hard:
requests.storage: "10G"
# managed-nfs-storage.storageclass.storage.k8s.io/requests.storage: "5G"
# View the quota:
[root@k8s-master-1-71 ~]# kubectl get quota -n test
NAME AGE REQUEST LIMIT
storage-resources 39s requests.storage: 0/10G
Test 1:
[root@k8s-master-1-71 ~]# kubectl apply -f test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myclaim
namespace: test
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
[root@k8s-master-1-71 ~]# kubectl get quota -n test
NAME AGE REQUEST LIMIT
storage-resources 118m requests.storage: 8Gi/10G
Test 2: request another 8Gi PVC; the quota cannot accommodate it, so the PVC cannot be created.
[root@k8s-master-1-71 ~]# kubectl apply -f test-pvc2.yaml
Error from server (Forbidden): error when creating "test-pvc2.yaml": persistentvolumeclaims "myclaim2" is forbidden: exceeded quota: storage-resources, requested: requests.storage=8Gi, used: requests.storage=8Gi, limited: requests.storage=10G
## requests.storage is limited to 10G; 8Gi is already requested, so another 8Gi request exceeds what is available
3.3 Object Count Quotas
Example: in the test namespace, limit how many objects (Pods, Deployments, Services) can be created.
[root@k8s-master-1-71 ~]# kubectl apply -f test-RQuota3.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: object-counts
namespace: test
spec:
hard:
pods: "3"
count/deployments.apps: "3"
count/services: "3"
# View the quota:
[root@k8s-master-1-71 ~]# kubectl get quota -n test
NAME AGE REQUEST LIMIT
object-counts 51s count/deployments.apps: 0/3, count/services: 0/3, pods: 0/3
Test: the Pod count is limited to 3; a fourth Pod exceeds the quota and cannot be created.
# Test: create Pods
[root@k8s-master-1-71 ~]# kubectl run bs1 --image=busybox -n test -- sleep 24h
[root@k8s-master-1-71 ~]# kubectl run bs2 --image=busybox -n test -- sleep 24h
[root@k8s-master-1-71 ~]# kubectl run bs3 --image=busybox -n test -- sleep 24h
[root@k8s-master-1-71 ~]# kubectl run bs4 --image=busybox -n test -- sleep 24h
Error from server (Forbidden): pods "bs4" is forbidden: exceeded quota: object-counts, requested: pods=1, used: pods=3, limited: pods=3
4. Resource Limits: LimitRange
By default, containers in a K8s cluster have no compute resource limits, so a single container may consume so much that it affects other containers. LimitRange defines default CPU and memory requests and limits for containers, as well as minimum and maximum bounds.
LimitRange can constrain:
- the minimum and maximum of a container's requests.cpu/memory and limits.cpu/memory
- the default values of a container's requests.cpu/memory and limits.cpu/memory
- the minimum and maximum of a PVC's requests.storage
View limits:
- Command: kubectl get limits -n <namespace>
- Command: kubectl describe limits -n <namespace>
4.1 Compute Resource Min/Max Limits
Example: in the test namespace, set the minimum and maximum CPU/memory a container may request or set as a limit.
[root@k8s-master-1-71 ~]# kubectl apply -f test-LRange1.yaml
apiVersion: v1
kind: LimitRange
metadata:
name: cpu-memory-min-max
namespace: test
spec:
limits:
- max: # the largest limits a container may set
cpu: 1
memory: 1Gi
min: # the smallest requests a container may set
cpu: 200m
memory: 200Mi
type: Container
[root@k8s-master-1-71 ~]# kubectl get limits -n test
NAME CREATED AT
cpu-memory-min-max 2023-05-28T01:07:49Z
[root@k8s-master-1-71 ~]# kubectl describe limits -n test
Name: cpu-memory-min-max
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container memory 200Mi 1Gi 1Gi 1Gi -
Container cpu 200m 1 1 1 -
Test 1:
[root@k8s-master-1-71 ~]# kubectl apply -f test-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: web
name: web
namespace: test
spec:
replicas: 3
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- image: nginx
name: nginx
resources:
requests:
cpu: 0.5
memory: 200Mi
limits:
cpu: 0.5
memory: 500Mi
[root@k8s-master-1-71 ~]# kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
web-679d596c75-cltck 1/1 Running 0 5s
web-679d596c75-tprtv 1/1 Running 0 10s
web-679d596c75-zd7x2 1/1 Running 0 15s
Note: the container's requested CPU and memory both fall within the allowed range (minimum request to maximum limit):
- cpu: 200m – 1
- memory: 200Mi – 1Gi
Test 2: set requests.cpu and requests.memory below the allowed minimum; the Pods cannot be created.
[root@k8s-master-1-71 ~]# kubectl apply -f test2-pod.yaml
...
requests:
cpu: 0.1
memory: 100Mi
limits:
cpu: 0.5
memory: 500Mi
[root@k8s-master-1-71 ~]# kubectl get deploy,rs -n test
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 0/3 0 0 90s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-858889f599 3 0 0 90s
[root@k8s-master-1-71 ~]# kubectl describe rs web-858889f599 -n test
Warning FailedCreate 4s (x4 over 22s) replicaset-controller (combined from similar events): Error creating: pods "web-858889f599-fj8jd" is forbidden: [minimum cpu usage per Container is 200m, but request is 100m, minimum memory usage per Container is 200Mi, but request is 100Mi]
4.2 Default Compute Resource Values
Example: set default values so that Pods created without resource settings still get sensible requests and limits.
[root@k8s-master-1-71 ~]# kubectl apply -f test-LRange2.yaml
apiVersion: v1
kind: LimitRange
metadata:
name: cpu-memory-min-max
namespace: test
spec:
limits:
- default:
cpu: 500m
memory: 500Mi
defaultRequest:
cpu: 300m
memory: 300Mi
type: Container
[root@k8s-master-1-71 ~]# kubectl get limits -n test
NAME CREATED AT
cpu-memory-min-max 2023-05-28T01:07:49Z
[root@k8s-master-1-71 ~]# kubectl describe limits -n test
Name: cpu-memory-min-max
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 300m 500m -
Container memory - - 300Mi 500Mi -
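To confirm the defaults are actually applied, create a Pod without any resources section and read back what the LimitRange injected; the Pod name below is arbitrary.
# create a Pod with no resources specified
kubectl run web-default --image=nginx -n test
# the defaultRequest/default values from the LimitRange should now appear in the Pod spec
kubectl get pod web-default -n test -o jsonpath='{.spec.containers[0].resources}'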
4.3 Storage Resource Min/Max Limits
[root@k8s-master-1-71 ~]# kubectl apply -f test-LRange3.yaml
apiVersion: v1
kind: LimitRange
metadata:
name: storage-min-max
namespace: test
spec:
limits:
- type: PersistentVolumeClaim
max:
storage: 10Gi
min:
storage: 1Gi
[root@k8s-master-1-71 ~]# kubectl get limits -n test
NAME CREATED AT
cpu-memory-min-max 2023-05-28T01:07:49Z
storage-min-max 2023-05-28T02:27:10Z
[root@k8s-master-1-71 ~]# kubectl describe limits -n test
Name: cpu-memory-min-max
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu - - 300m 500m -
Container memory - - 300Mi 500Mi -
Name: storage-min-max
Namespace: test
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
PersistentVolumeClaim storage 1Gi 10Gi - - -
Test 1:
[root@k8s-master-1-71 ~]# kubectl apply -f test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myclaim3
namespace: test
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
[root@k8s-master-1-71 ~]# kubectl get pvc -n test //the claim is accepted
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myclaim3 Pending 16s
Test 2: request 11Gi, which exceeds the 10Gi maximum allowed by the LimitRange. (Because the existing myclaim3 PVC is re-applied here, the error shown is about PVC spec immutability; a brand-new PVC over 10Gi would be rejected by the LimitRange itself.)
[root@k8s-master-1-71 ~]# kubectl apply -f test-pvc.yaml
...
resources:
requests:
storage: 11Gi
The PersistentVolumeClaim "myclaim3" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
Summary: the two work at different scopes
- ResourceQuota: defines the total resource usage allowed in a namespace; K8s tracks usage and rejects requests that would exceed the quota.
- LimitRange: defines per-container (or per-PVC) default values and minimum/maximum bounds for CPU, memory, and storage.
Homework
1. Create a ServiceAccount named backend-sa that is only allowed to view Pods in the default namespace, then create a Deployment that uses this ServiceAccount.
2. Set default request values (resources.requests) for containers created in the default namespace.
Wrap-up
This post is my study note for [Kubernetes CKS Certification Day 2]. I hope it gives you a first look at RBAC authentication and authorization, ResourceQuota, and LimitRange. There are follow-up exercises as well, so feel free to practice along with these notes!
Tip: two heads are better than one. If you don't understand something in this chapter or need the related notes or videos, you can message Xiao'an; don't be shy about asking others for help, and take the time you need until you truly understand.