Common Volume Types and Comparison

| Type | Lifecycle | Use cases | Multi-Pod sharing | Examples |
|---|---|---|---|---|
| emptyDir | Same as the Pod | Temporary data shared between containers in a Pod | ❌ same Pod only | Log processing, caches |
| hostPath | Same as the node | Access the node's filesystem (use with caution) | ❌ node-bound | Monitoring tools, node log collection |
| configMap / secret | Managed independently | Inject config files or sensitive data | ✅ | App config, database passwords |
| persistentVolumeClaim | Independent of the Pod | Persistent data (databases, user data) | ✅ | MySQL data, user-uploaded files |
| nfs / glusterfs | Independent of the Pod | Cross-node shared storage | ✅ | Shared filesystems, media libraries |
1. emptyDir:
Shares temporary data between containers in the same Pod. The volume is bound to the Pod's lifecycle: when the Pod is deleted, the data is destroyed (no persistence).
# logging-sidecar.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-with-logger
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx
        volumeMounts:
        - name: log-dir
          mountPath: /var/log/nginx   # main container writes logs here (container 1)
      - name: log-collector
        image: busybox
        command: ["sh", "-c", "tail -f /logs/access.log"]
        volumeMounts:
        - name: log-dir
          mountPath: /logs            # sidecar container reads the logs (container 2)
      volumes:
      - name: log-dir
        emptyDir: {}
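emptyDir also accepts an optional backing medium and size cap. As a hedged sketch, the `volumes` section of the Deployment above could be swapped for a tmpfs-backed variant; the 64Mi limit is an illustrative value, not from the original:

```yaml
# Variant sketch (illustrative values): RAM-backed emptyDir with a size cap.
volumes:
- name: log-dir
  emptyDir:
    medium: Memory     # back the volume with tmpfs instead of node disk;
                       # contents count against the containers' memory usage
    sizeLimit: 64Mi    # if usage exceeds this limit, the Pod is evicted
```

A tmpfs-backed emptyDir is faster but loses data on node reboot as well as Pod deletion.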
2. hostPath: mounts a path from the node's filesystem into the Pod
apiVersion: v1                 # Kubernetes API version
kind: Pod                      # resource type: Pod
metadata:
  name: hostpath-example       # Pod name
spec:
  containers:
  - name: mycontainer          # container name
    image: nginx               # use the nginx image
    volumeMounts:              # mount points inside the container
    - name: hostpath-volume    # references the volume defined below
      mountPath: /usr/share/nginx/html  # mount path inside the container
                                        # (nginx's default web root)
  volumes:                     # volumes used by the Pod
  - name: hostpath-volume      # volume name (must match volumeMounts.name above)
    hostPath:                  # volume type: hostPath
      path: /data              # actual path on the node
                               # this directory must exist on the node running the Pod
      type: Directory          # the path must be an existing directory
                               # other options: File, DirectoryOrCreate, FileOrCreate, etc.
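If `/data` may not exist on every node, `type: DirectoryOrCreate` has the kubelet create it as an empty directory instead of failing the mount. A minimal sketch of the same Pod with that variant (the Pod name is changed here only to avoid a clash):

```yaml
# hostpath-create.yaml — same Pod as above, but the kubelet creates /data
# on the node if it is missing instead of refusing to schedule the mount.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-create-example
spec:
  containers:
  - name: mycontainer
    image: nginx
    volumeMounts:
    - name: hostpath-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: hostpath-volume
    hostPath:
      path: /data
      type: DirectoryOrCreate   # create the directory on the node if absent
```

The caution from the table still applies: the data lives on one node, so Pods rescheduled elsewhere see a different (possibly empty) directory.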
3. configMap / secret: mount configuration files as volumes — configMap for plain configuration, secret for sensitive data (base64-encoded rather than truly encrypted); usage is identical.
(1) Prepare the Nginx configuration file
Create a local file nginx.conf:
cat <<EOF > nginx.conf
server {
    listen 80;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
    # custom configuration example
    location /api {
        proxy_pass http://backend:8080;
    }
}
EOF
(2) Create the ConfigMap
Import the configuration file as a ConfigMap:
kubectl create configmap nginx-config --from-file=nginx.conf
kubectl get configmap nginx-config -o yaml
apiVersion: v1
data:
  nginx.conf: |
    server {
        listen 80;
        server_name localhost;
        ...
    }
kind: ConfigMap
metadata:
  name: nginx-config
(3) Create a Pod and mount the ConfigMap
# configmap-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-config
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:                       # use the volume inside the container
    - name: nginx-config-volume         # must match the volume name below
      mountPath: /etc/nginx/nginx.conf  # target path inside the container
      subPath: nginx.conf               # mount only this single file (do not shadow the whole directory)
  volumes:                              # declare the volume
  - name: nginx-config-volume           # volume name
    configMap:
      name: nginx-config                # reference the ConfigMap
      items:
      - key: nginx.conf                 # key in the ConfigMap (under data)
        path: nginx.conf                # filename after mounting (may differ from the key)
      # defaultMode: 0644               # optional: file permissions (numeric format)
(4) subPath: mounts a single file into an existing directory instead of replacing the directory's entire contents with the volume.
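Since secret volumes are used the same way, here is a hedged sketch of the equivalent Secret mount; the Secret name `db-credentials` and its `password` key are made up for illustration and would come from `kubectl create secret generic`:

```yaml
# Hypothetical example: mounting a Secret works exactly like a ConfigMap.
# The name db-credentials and key password are illustrative only, e.g. created with:
#   kubectl create secret generic db-credentials --from-literal=password=changeme
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret
spec:
  containers:
  - name: app
    image: nginx:alpine
    volumeMounts:
    - name: db-secret-volume
      mountPath: /etc/secrets     # each key becomes a file in this directory
      readOnly: true              # secrets are typically mounted read-only
  volumes:
  - name: db-secret-volume
    secret:
      secretName: db-credentials  # reference the Secret (cf. configMap.name above)
```

The container then reads /etc/secrets/password as an ordinary file; subPath and items work here just as in the ConfigMap example.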
4. PV and PVC
# PersistentVolume definition
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi                          # storage capacity
  volumeMode: Filesystem                   # volume mode (Filesystem or Block)
  accessModes:
  - ReadWriteMany                          # access mode (ReadWriteOnce, ReadOnlyMany, ReadWriteMany)
  persistentVolumeReclaimPolicy: Retain    # reclaim policy (Retain, Recycle, Delete)
  storageClassName: nfs                    # storage class name
  nfs:
    path: /exports/data                    # NFS server path
    server: 10.0.0.5                       # NFS server address
---
# PersistentVolumeClaim definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs                    # must match the PV
  accessModes:
  - ReadWriteMany                          # must match the PV
  resources:
    requests:
      storage: 5Gi                         # requested size (must not exceed the PV's capacity)
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod-example
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data-storage
      mountPath: "/usr/share/nginx/html"
  volumes:
  - name: data-storage
    persistentVolumeClaim:
      claimName: nfs-pvc                   # use the PVC defined above
5. nfs: Network File System (remote storage)
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod-example
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nfs-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-volume
    nfs:
      server: 10.0.0.5      # NFS server address
      path: /exports/data   # NFS server export path
      readOnly: false       # defaults to false (read-write)
6. storageClass: automatic PV creation — when a user creates a PVC, the system provisions a matching PV on demand.
Flow: workload ---> PVC ---> StorageClass ---> provisioner ---> PV
(1) The workload uses the volume
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: shared-data
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: shared-data
        persistentVolumeClaim:
          claimName: nfs-web-data   # all Pods use the same PVC
(2) Create the PVC
# nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-web-data
spec:
  accessModes:
  - ReadWriteMany                 # NFS supports multi-node read-write
  resources:
    requests:
      storage: 5Gi                # requested storage size
  storageClassName: nfs-client
(3) Create the StorageClass
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.com/nfs    # must match PROVISIONER_NAME in the provisioner Deployment
parameters:
  archiveOnDelete: "false"      # whether to archive data on delete ("false" deletes it outright)
mountOptions:
- hard
- nfsvers=4.1
(4) Deploy the provisioner, which watches for PVC creation and automatically creates the corresponding PVs
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
      - name: nfs-provisioner
        image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs      # provisioner identifier
        - name: NFS_SERVER
          value: 192.168.1.100        # NFS server
        - name: NFS_PATH
          value: /data/nfs_share      # export path
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.100
          path: /data/nfs_share
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io