Introduction to Storage Volumes

Pods have a life cycle; when a pod's life cycle ends, the data inside it (configuration files, business data, etc.) disappears. Solution: separate the data from the pod by placing it on a dedicated storage volume.

Pods can be scheduled onto any node in the k8s cluster. If a pod dies and is rescheduled to another node, the link between the pod and its data is broken. Solution: we need a storage system that is independent of the cluster nodes in order to achieve data persistence.

Local storage volume: emptyDir

  • Use case

    Share data between the containers of a pod

  • Characteristics

    The volume is deleted along with the pod

1. Create the YAML file

cat volume-emptydir.yml
apiVersion: v1
kind: Pod
metadata:
  name: volume-emptydir
spec:
  containers:
  - name: write
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/library/centos:7
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","echo haha > /data/1.txt ; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data

  - name: read
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/library/centos:7
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","cat /data/1.txt; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data

  volumes:
  - name: data
    emptyDir: {}
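Besides the empty `{}` form, emptyDir accepts two optional fields: `medium` and `sizeLimit`. As an illustrative variant (not used in this walkthrough), a RAM-backed, size-capped version of the `data` volume above would look like:

```yaml
# Illustrative variant of the "data" volume above (not applied in this example):
  volumes:
  - name: data
    emptyDir:
      medium: Memory     # back the volume with tmpfs (RAM) instead of node disk
      sizeLimit: 64Mi    # the pod is evicted if usage exceeds this limit
```

Note that a tmpfs-backed volume counts against the container's memory usage.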

2. Create the pod from the YAML file

kubectl apply -f volume-emptydir.yml
pod/volume-emptydir created

3. Verify

kubectl logs volume-emptydir -c read
haha

Local storage volume: hostPath

  • Use case

    Map a directory on the cluster node into the pod (a container sometimes needs to access data on the node; for example, a monitoring agent can only report the node's state if it can read the host's files)

  • Drawback

    If the node goes down and the controller restarts the container on another node, the data the pod sees now belongs to that other node (no data sharing across nodes)

1. Create the YAML file

cat volume-hostpath.yml
apiVersion: v1
kind: Pod
metadata:
  name: volume-hostpath
spec:
  containers:
  - name: busybox
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/google-containers/busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","echo haha > /data/1.txt ; sleep 600"]
    volumeMounts:
    - name: data
      mountPath: /data

  volumes:
  - name: data
    hostPath:
      path: /opt
      type: Directory
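The `type` field controls how the kubelet validates the host path. Besides `Directory` (the path must already exist as a directory), Kubernetes also supports `DirectoryOrCreate`, `File`, `FileOrCreate`, `Socket`, `CharDevice`, and `BlockDevice`. A variant using a hypothetical path that is created on demand:

```yaml
# Illustrative variant (the path /opt/app-data is made up for this example):
  volumes:
  - name: data
    hostPath:
      path: /opt/app-data        # hypothetical node directory
      type: DirectoryOrCreate    # the kubelet creates the directory on the node if it is missing
```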

2. Create the pod from the YAML file

kubectl apply -f volume-hostpath.yml
pod/volume-hostpath created

3. Check the pod status

kubectl get pods -o wide |grep volume-hostpath
volume-hostpath                     1/1     Running   0              55s     10.244.140.70    node02   <none>           <none>
The pod is running on node02.

4. Verify the mounted file on the node where the pod is running

cat /opt/1.txt
haha

Network storage volume: NFS

Install and configure NFS

1. Install the NFS client packages on all nodes

yum install nfs-utils -y

2. Configure the NFS server

mkdir -p /data/nfs
vim /etc/exports
/data/nfs       *(rw,no_root_squash,sync)
systemctl restart nfs-server
systemctl enable nfs-server

3. Verify the NFS export

showmount -e 100.100.137.202
Export list for 100.100.137.202:
/data/nfs *

Using the NFS storage

cat volume-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: volume-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nginx:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: documentroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: documentroot
        nfs:
          server: 100.100.137.202
          path: /data/nfs
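The nfs volume source also supports a `readOnly` flag. If the pods should only serve the shared content and never write back to the export, the volume above could be declared like this (illustrative variant, not used here):

```yaml
# Illustrative variant of the "documentroot" volume above:
      volumes:
      - name: documentroot
        nfs:
          server: 100.100.137.202
          path: /data/nfs
          readOnly: true   # mount the share read-only inside the containers
```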


kubectl apply -f volume-nfs.yml
deployment.apps/volume-nfs created


Create a test file in the shared directory on the NFS server:
echo "volume-nfs" > /data/nfs/index.html
kubectl get pod -o wide
volume-nfs-7696cc9d7-8c4g8          1/1     Running   0              108s    10.244.140.71    node02   <none>           <none>
volume-nfs-7696cc9d7-swm4b          1/1     Running   0              108s    10.244.196.135   node01   <none>           <none>

curl 10.244.140.71:80
volume-nfs   # the content matches the file created on the NFS server

PV (PersistentVolume) and PVC (PersistentVolumeClaim)

Understanding PV and PVC

A PersistentVolume (PV) is a piece of pre-configured storage (it can be any type of storage volume).

  • In other words, a network share is exposed and defined as a PV.

A PersistentVolumeClaim (PVC) is a request by a user's pod to use a PV.

  • Users do not need to care about the underlying volume implementation, only about their usage requirements.

The relationship between PV and PVC

  • A PV provides the storage resource (producer)

  • A PVC consumes the storage resource (consumer)

  • A PVC is bound to a PV

Implementing NFS-backed PV and PVC

1. Write the YAML file for the PV

cat pv-nfs.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /data/nfs
    server: 100.100.137.202

There are 3 access modes:
ReadWriteOnce  read-write, mountable by a single node
ReadOnlyMany   read-only, mountable by many nodes
ReadWriteMany  read-write, mountable by many nodes
We want multiple nginx pods on different nodes to share data, so we choose ReadWriteMany.

2. Create the PV and verify

kubectl apply -f pv-nfs.yml
persistentvolume/pv-nfs created
kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs   1Gi        RWX            Retain           Available                                   104s

RWX is short for ReadWriteMany.
Retain is the reclaim policy: when the PV is no longer needed, it must be reclaimed manually.
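The reclaim policy can also be set explicitly in the PV spec via `persistentVolumeReclaimPolicy`. Manually created PVs default to `Retain`; `Delete` removes the backing storage when the claim is released. An explicit version of the PV above:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # explicit; this is the default for manually created PVs
  nfs:
    path: /data/nfs
    server: 100.100.137.202
```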

3. Write the YAML file for the PVC

cat pvc-nfs.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
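By default the PVC is matched to any available PV with sufficient capacity and a compatible access mode. To pin the claim to one specific PV instead of relying on matching, the PVC spec also accepts a `volumeName` field (illustrative variant, not used below):

```yaml
# Illustrative variant: bind the claim to a named PV rather than matching by size/mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  volumeName: pv-nfs      # bind to this exact PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```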

4. Create the PVC and verify

kubectl apply -f pvc-nfs.yml
persistentvolumeclaim/pvc-nfs created
kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pv-nfs   1Gi        RWX                           23s
Note: STATUS must be Bound (Bound means the PVC is successfully bound to the PV).

5. Write the YAML for the deployment

cat deploy-nginx-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs

6. Apply the YAML to create the deployment

kubectl apply -f deploy-nginx-nfs.yml
deployment.apps/deploy-nginx-nfs created

7. Verify the pods

kubectl get pod |grep deploy-nginx-nfs
deploy-nginx-nfs-5cc45b865f-c9lkl   1/1     Running   0              34s
deploy-nginx-nfs-5cc45b865f-sdpmg   1/1     Running   0              34s

8. Verify the volume data inside the pods

kubectl exec -it deploy-nginx-nfs-5cc45b865f-c9lkl -- bash
root@deploy-nginx-nfs-5cc45b865f-c9lkl:/# cat /usr/share/nginx/html/index.html
volume-nfs

kubectl exec -it deploy-nginx-nfs-5cc45b865f-sdpmg -- bash
root@deploy-nginx-nfs-5cc45b865f-sdpmg:/# cat /usr/share/nginx/html/index.html
volume-nfs

Using subPath

subPath mounts only a single subdirectory of a volume to a path inside the container. The following example demonstrates this:

cat 01_create_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: c1
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/google-containers/busybox:latest
    command: ["/bin/sleep","100000"]
    volumeMounts:
      - name: data
        mountPath: /opt/data1
        subPath: data1
      - name: data
        mountPath: /opt/data2
        subPath: data2
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-nfs
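A related field is `subPathExpr`, which expands environment variables, so that, for example, each pod writes into a subdirectory named after itself. A sketch of the container section (illustrative, not applied in this example):

```yaml
# Illustrative variant: per-pod subdirectories via subPathExpr.
  containers:
  - name: c1
    image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/gcr.io/google-containers/busybox:latest
    command: ["/bin/sleep","100000"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: data
      mountPath: /opt/data
      subPathExpr: $(POD_NAME)   # each pod gets its own subdirectory in the volume
```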


kubectl apply -f 01_create_pod.yaml
pod/pod1 created


On the NFS server, check that the pod's directories were automatically created under /data/nfs:
ls /data/nfs/
data1  data2  index.html

Dynamic storage provisioning

What is dynamic provisioning?

Creating a PV and then a PVC every time you need storage is tedious! Instead, we can use Kubernetes' dynamic provisioning feature.

  • With static provisioning, the capacity and access mode requested by a PVC must exactly match a pre-created PV; dynamic provisioning has no such restriction.
  • Administrators no longer need to pre-create large numbers of PVs as storage resources.

Kubernetes 1.4 introduced a new resource object, StorageClass, which defines storage resources as classes with distinct characteristics rather than as concrete PVs. Through a PVC, a user requests storage from the desired class; the request is either matched against PVs created in advance by the administrator, or satisfied by dynamically creating a PV on demand, which eliminates the need to create PVs first.

Dynamic provisioning backed by an NFS file system

The official in-tree plugins do not support dynamic provisioning for NFS, but a third-party plugin can provide it.

Third-party plugin: https://github.com/kubernetes-retired/external-storage (that repository is now retired; the steps below use its successor, kubernetes-sigs/nfs-subdir-external-provisioner)

1. Download and create the StorageClass

wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/class.yaml
cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client   # class name; PVCs reference this name to use the class
  namespace: kube-system  # custom namespace (note: StorageClass is cluster-scoped, so this field is ignored)
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"   # whether to archive data on delete: false = discard, true = archive

kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created

kubectl get storageclass -n kube-system
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  31s
# RECLAIMPOLICY: whether the PV is deleted or retained after the pod/PVC is deleted.
# VOLUMEBINDINGMODE: Immediate binds the PVC to a PV right away, without waiting for a pod using it to be scheduled or caring which node it lands on; WaitForFirstConsumer delays binding until such a pod has been scheduled.
# ALLOWVOLUMEEXPANSION: whether PVCs of this class can be expanded.

2. Download and create the RBAC rules

Because the provisioner creates PVs automatically through the kube-apiserver, it needs authorization.

wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/rbac.yaml

cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Only the namespace was changed here.

kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

3. Create the provisioner deployment

A dedicated deployment is needed to create PVs automatically in response to PVCs.

cat deploy-nfs-client-provisioner.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 100.100.137.202
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 100.100.137.202
            path: /data/nfs

kubectl apply -f deploy-nfs-client-provisioner.yml
deployment.apps/nfs-client-provisioner created
kubectl get pod -n kube-system |grep nfs
nfs-client-provisioner-66dff9875b-v5k67    1/1     Running   0              35s
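With the provisioner running, a PVC that names the class is enough to get a PV created on demand. A minimal standalone claim as a sketch (the name `pvc-sc-test` is made up for illustration; the StatefulSet in the next step exercises the same mechanism through volumeClaimTemplates):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sc-test        # hypothetical name for illustration
spec:
  storageClassName: nfs-client   # request the dynamic class defined above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
```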

4. Test that dynamic provisioning works

cat nginx-sc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"
      resources:
        requests:
          storage: 1Gi

kubectl apply -f nginx-sc.yaml
kubectl get pods
web-0                               1/1     Running   0              42s
web-1                               1/1     Running   0              38s
On the NFS server, the provisioner has automatically created one directory per PVC (named ${namespace}-${pvcName}-${pvName}):
ls /data/nfs/
default-www-web-0-pvc-7fb38d1d-9b6e-43e4-97c4-ac8eb1a23b06  default-www-web-1-pvc-44f9600b-a7b9-40e2-855e-7bf7bb3506de
