In the previous post we discussed ingress resources on k8s; for a recap, see: https://www.cnblogs.com/qiuhom-1874/p/14167581.html. Today we'll talk about volumes on k8s.
Before discussing volumes in k8s, let's first review volumes in Docker. For a Docker container, the image is built in layers, and every layer is read-only, which means its data cannot be modified. Only when an image runs as a container is a writable layer added on top, and once that container is deleted, the data in the writable layer is deleted with it. To solve the problem of persisting container data, Docker introduced volumes. Docker manages volumes in two ways: in the first, the user manually mounts a directory on the host (which may itself be a directory mounted from some storage system) into a directory in the container; this is called a bind mount. In the second, Docker itself maintains the mapping of a directory into the container; these are Docker-managed volumes. Either way, a volume directly associates the container with a directory or file on the host. Docker volumes solve the problem of keeping data produced during a container's lifetime after the container terminates. k8s has the same concern, except that k8s deals with pods. The pod is the smallest schedulable unit in k8s; once a pod is deleted, the containers running inside it are deleted too. So how can the data produced by a pod's containers be persisted? To answer that, let's first look at how a pod is composed.
Tip: a pod in k8s can run one or more containers. When it runs several, one of them is the main container and the others assist it; those are called sidecars. No matter how many containers a pod runs, at the bottom there is always a pause container, whose main job is to provide the pod's infrastructure. All containers in the same pod share the pause container's network namespace as well as its IPC and UTS namespaces. So to provide a storage volume to the containers in a pod, we first attach the volume to the pause container, and the other containers then mount the volume exposed by pause, as shown in the figure below.
Tip: as shown above, the pause container can be attached to storage A or storage B. Once pause is attached to some storage, the other containers in the same pod can mount the directories or files it exposes. For k8s, storage is not an internal component; it is an external system. This means that for k8s to use an external storage system, the pause container must first have a driver that matches that storage system. Since all containers on the same host share the host's kernel, if the host kernel has the driver for a given storage system, pause can use that driver to talk to the corresponding storage.
Volume types
We know that to use a storage volume on k8s, the node must provide the driver for the corresponding storage system; every pod running on that node can then use that storage system. So the question becomes: how does a pod actually use the storage system, and how does it pass parameters to the driver? In k8s everything is an object, so to use storage volumes we also need to abstract the drivers into k8s resources; at use time we simply instantiate the corresponding resource as an object. To reduce the complexity of using storage volumes, k8s ships with a number of built-in storage interfaces; different storage types use different interfaces and take different parameters. Beyond those, k8s also supports user-defined storage, declared through the CSI interface.
View the volume interfaces supported by k8s
[root@master01 ~]# kubectl explain pod.spec.volumes
KIND:     Pod
VERSION:  v1

RESOURCE: volumes <[]Object>

DESCRIPTION:
     List of volumes that can be mounted by containers belonging to the pod.
     More info: https://kubernetes.io/docs/concepts/storage/volumes

     Volume represents a named volume in a pod that may be accessed by any
     container in the pod.

FIELDS:
   awsElasticBlockStore <Object>
     AWSElasticBlockStore represents an AWS Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore

   azureDisk <Object>
     AzureDisk represents an Azure Data Disk mount on the host and bind mount
     to the pod.

   azureFile <Object>
     AzureFile represents an Azure File Service mount on the host and bind
     mount to the pod.

   cephfs <Object>
     CephFS represents a Ceph FS mount on the host that shares a pod's lifetime

   cinder <Object>
     Cinder represents a cinder volume attached and mounted on kubelets host
     machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md

   configMap <Object>
     ConfigMap represents a configMap that should populate this volume

   csi <Object>
     CSI (Container Storage Interface) represents ephemeral storage that is
     handled by certain external CSI drivers (Beta feature).

   downwardAPI <Object>
     DownwardAPI represents downward API about the pod that should populate
     this volume

   emptyDir <Object>
     EmptyDir represents a temporary directory that shares a pod's lifetime.
     More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir

   ephemeral <Object>
     Ephemeral represents a volume that is handled by a cluster storage driver
     (Alpha feature). The volume's lifecycle is tied to the pod that defines
     it - it will be created before the pod starts, and deleted when the pod
     is removed.

     Use this if: a) the volume is only needed while the pod runs, b) features
     of normal volumes like restoring from snapshot or capacity tracking are
     needed, c) the storage driver is specified through a storage class, and
     d) the storage driver supports dynamic volume provisioning through a
     PersistentVolumeClaim (see EphemeralVolumeSource for more information on
     the connection between this volume type and PersistentVolumeClaim).

     Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes
     that persist for longer than the lifecycle of an individual pod.

     Use CSI for light-weight local ephemeral volumes if the CSI driver is
     meant to be used that way - see the documentation of the driver for more
     information.

     A pod can use both types of ephemeral volumes and persistent volumes at
     the same time.

   fc <Object>
     FC represents a Fibre Channel resource that is attached to a kubelet's
     host machine and then exposed to the pod.

   flexVolume <Object>
     FlexVolume represents a generic volume resource that is
     provisioned/attached using an exec based plugin.

   flocker <Object>
     Flocker represents a Flocker volume attached to a kubelet's host machine.
     This depends on the Flocker control service being running

   gcePersistentDisk <Object>
     GCEPersistentDisk represents a GCE Disk resource that is attached to a
     kubelet's host machine and then exposed to the pod. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk

   gitRepo <Object>
     GitRepo represents a git repository at a particular revision. DEPRECATED:
     GitRepo is deprecated. To provision a container with a git repo, mount an
     EmptyDir into an InitContainer that clones the repo using git, then mount
     the EmptyDir into the Pod's container.

   glusterfs <Object>
     Glusterfs represents a Glusterfs mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md

   hostPath <Object>
     HostPath represents a pre-existing file or directory on the host machine
     that is directly exposed to the container. This is generally used for
     system agents or other privileged things that are allowed to see the host
     machine. Most containers will NOT need this. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#hostpath

   iscsi <Object>
     ISCSI represents an ISCSI Disk resource that is attached to a kubelet's
     host machine and then exposed to the pod. More info:
     https://examples.k8s.io/volumes/iscsi/README.md

   name <string> -required-
     Volume's name. Must be a DNS_LABEL and unique within the pod. More info:
     https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names

   nfs <Object>
     NFS represents an NFS mount on the host that shares a pod's lifetime More
     info: https://kubernetes.io/docs/concepts/storage/volumes#nfs

   persistentVolumeClaim <Object>
     PersistentVolumeClaimVolumeSource represents a reference to a
     PersistentVolumeClaim in the same namespace. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   photonPersistentDisk <Object>
     PhotonPersistentDisk represents a PhotonController persistent disk
     attached and mounted on kubelets host machine

   portworxVolume <Object>
     PortworxVolume represents a portworx volume attached and mounted on
     kubelets host machine

   projected <Object>
     Items for all in one resources secrets, configmaps, and downward API

   quobyte <Object>
     Quobyte represents a Quobyte mount on the host that shares a pod's
     lifetime

   rbd <Object>
     RBD represents a Rados Block Device mount on the host that shares a pod's
     lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md

   scaleIO <Object>
     ScaleIO represents a ScaleIO persistent volume attached and mounted on
     Kubernetes nodes.

   secret <Object>
     Secret represents a secret that should populate this volume. More info:
     https://kubernetes.io/docs/concepts/storage/volumes#secret

   storageos <Object>
     StorageOS represents a StorageOS volume attached and mounted on
     Kubernetes nodes.

   vsphereVolume <Object>
     VsphereVolume represents a vSphere volume attached and mounted on
     kubelets host machine
[root@master01 ~]#
Tip: from the help output above you can see that k8s supports quite a few storage interfaces, and each interface is a type. Roughly, these types can be grouped into cloud storage, distributed storage, network storage, ephemeral storage, node-local storage, special-purpose storage, user-defined storage, and so on. For example, awsElasticBlockStore, azureDisk, azureFile, gcePersistentDisk, vsphereVolume, and cinder count as cloud storage; cephfs, glusterfs, and rbd as distributed storage; nfs, iscsi, and fc as network storage; emptyDir as ephemeral storage; hostPath and local as node-local storage; csi as user-defined storage; configMap, secret, and downwardAPI as special-purpose storage; and persistentVolumeClaim as the persistent volume claim.
Using volumes
Example: create a pod that uses a hostPath volume
[root@master01 ~]# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
[root@master01 ~]#
Tip: the manifest above creates a pod named vol-hostpath-demo running a container named nginx from the nginx:1.14-alpine image, and defines a volume named webhtml of type hostPath. Volumes are defined under the spec field with the volumes field, whose value is a list of objects. name is required; it names the volume and serves as the identifier containers use when mounting it. Next we use the field matching the desired volume type to select the corresponding storage interface; hostPath selects the hostPath interface. This interface takes two parameters: path specifies a directory or file path on the host, and type specifies what to do when the given path does not exist on the host. type accepts seven values: DirectoryOrCreate means path must be a directory, created if it does not yet exist on the host; Directory means path must be an existing directory; FileOrCreate means path must be a file, created if it does not exist; File means path must be an existing file; Socket means path must be an existing socket file; CharDevice means path must be an existing character device; BlockDevice means path must be an existing block device.
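To illustrate one of the other type values, here is a minimal sketch of a FileOrCreate hostPath volume that maps a single host file into the container. The paths /vol/conf/app.conf and /etc/app/app.conf are hypothetical, chosen only for this example:

```yaml
# Sketch only: mount one host file (not a directory) into the container.
# With type: FileOrCreate, the kubelet creates an empty file at the host
# path if it does not already exist.
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-file-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: appconf
      mountPath: /etc/app/app.conf   # mount point inside the container
  volumes:
  - name: appconf
    hostPath:
      path: /vol/conf/app.conf       # hypothetical file on the node
      type: FileOrCreate
```

The difference from the manifest above is only the type value and that path now names a file rather than a directory.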
Apply the manifest
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
vol-hostpath-demo        1/1     Running   0          11s
[root@master01 ~]# kubectl describe pod/vol-hostpath-demo
Name:         vol-hostpath-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Wed, 23 Dec 2020 23:14:35 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.92
IPs:
  IP:  10.244.3.92
Containers:
  nginx:
    Container ID:   docker://eb8666714b8697457ce2a88271a4615f836873b4729b6a0938776e3d527c6536
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 23 Dec 2020 23:14:37 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from webhtml (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhtml:
    Type:          HostPath (bare host directory volume)
    Path:          /vol/html/
    HostPathType:  DirectoryOrCreate
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  43s   default-scheduler  Successfully assigned default/vol-hostpath-demo to node03.k8s.org
  Normal  Pulled     42s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    41s   kubelet            Created container nginx
  Normal  Started    41s   kubelet            Started container nginx
[root@master01 ~]#
Tip: you can see the pod mounts the webhtml volume read-only; the webhtml volume's type is HostPath and its path is /vol/html/.
Check which node the pod is on
[root@master01 ~]# kubectl get pod vol-hostpath-demo -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
vol-hostpath-demo   1/1     Running   0          3m39s   10.244.3.92   node03.k8s.org   <none>           <none>
[root@master01 ~]#
On node03, check whether the directory was created
[root@node03 ~]# ll /
total 16
lrwxrwxrwx.   1 root root    7 Sep 15 20:33 bin -> usr/bin
dr-xr-xr-x.   5 root root 4096 Sep 15 20:39 boot
drwxr-xr-x   20 root root 3180 Dec 23 23:10 dev
drwxr-xr-x.  80 root root 8192 Dec 23 23:10 etc
drwxr-xr-x.   2 root root    6 Nov  5  2016 home
lrwxrwxrwx.   1 root root    7 Sep 15 20:33 lib -> usr/lib
lrwxrwxrwx.   1 root root    9 Sep 15 20:33 lib64 -> usr/lib64
drwxr-xr-x.   2 root root    6 Nov  5  2016 media
drwxr-xr-x.   2 root root    6 Nov  5  2016 mnt
drwxr-xr-x.   4 root root   35 Dec  8 14:25 opt
dr-xr-xr-x  141 root root    0 Dec 23 23:09 proc
dr-xr-x---.   4 root root  213 Dec 21 22:46 root
drwxr-xr-x   26 root root  780 Dec 23 23:13 run
lrwxrwxrwx.   1 root root    8 Sep 15 20:33 sbin -> usr/sbin
drwxr-xr-x.   2 root root    6 Nov  5  2016 srv
dr-xr-xr-x   13 root root    0 Dec 23 23:09 sys
drwxrwxrwt.   9 root root  251 Dec 23 23:11 tmp
drwxr-xr-x.  13 root root  155 Sep 15 20:33 usr
drwxr-xr-x.  19 root root  267 Sep 15 20:38 var
drwxr-xr-x    3 root root   18 Dec 23 23:14 vol
[root@node03 ~]# ll /vol
total 0
drwxr-xr-x 2 root root 6 Dec 23 23:14 html
[root@node03 ~]# ll /vol/html/
total 0
[root@node03 ~]#
Tip: you can see the /vol/html/ directory has been created on the node, and it is empty.
Create a web page file in that directory on the node, then access the pod to see whether the page can be served
[root@node03 ~]# echo "this is test page from node03 /vol/html/test.html" > /vol/html/test.html
[root@node03 ~]# cat /vol/html/test.html
this is test page from node03 /vol/html/test.html
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h     10.244.2.99   node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h     10.244.4.21   node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          7m45s   10.244.3.92   node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.92/test.html
this is test page from node03 /vol/html/test.html
[root@master01 ~]#
Tip: after creating the page file on the node, accessing the pod serves it correctly.
Test: delete the pod and see whether the directory on the node is deleted too
[root@master01 ~]# kubectl delete -f hostPath-demo.yaml
pod "vol-hostpath-demo" deleted
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
[root@master01 ~]# ssh node03
Last login: Wed Dec 23 23:18:51 2020 from master01
[root@node03 ~]# ll /vol/html/
total 4
-rw-r--r-- 1 root root 50 Dec 23 23:22 test.html
[root@node03 ~]# exit
logout
Connection to node03 closed.
[root@master01 ~]#
Tip: after the pod is deleted, the directory on the node is not deleted, and the page file is still intact.
Test: apply the manifest again, access the pod, and see whether the page content is still reachable
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h   10.244.2.99   node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h   10.244.4.21   node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          7s    10.244.3.93   node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.93/test.html
this is test page from node03 /vol/html/test.html
[root@master01 ~]#
Tip: the pod was again scheduled to node03, and accessing it serves the page we created. But if we explicitly pin this pod to node02, will it still be able to serve that page?
Test: bind the pod to node02.k8s.org
[root@master01 ~]# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-demo
  namespace: default
spec:
  nodeName: node02.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
[root@master01 ~]#
Tip: to bind a pod to a particular node, set the nodeName field under spec to that node's hostname.
Delete the old pod and apply the new manifest
[root@master01 ~]# kubectl delete pod/vol-hostpath-demo
pod "vol-hostpath-demo" deleted
[root@master01 ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
myapp-6479b786f5-9d4mh   1/1     Running   1          47h
myapp-6479b786f5-k252c   1/1     Running   1          47h
[root@master01 ~]# kubectl apply -f hostPath-demo.yaml
pod/vol-hostpath-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          47h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          47h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          8s    10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Tip: after applying the new manifest, the pod runs on node02.
Access the pod and see whether test.html can still be served
[root@master01 ~]# curl 10.244.2.100/test.html
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
[root@master01 ~]#
Tip: now the page cannot be reached, and the reason is simple. A hostPath volume maps a directory or file on the node into the pause container, from which the pod's containers mount it. This type of volume cannot cross nodes, so a pod running on node02 cannot see a file that exists only on node03. Therefore, to use hostPath volumes we must either pin the pod to a node, or create identical files or directories on every k8s node.
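As a side note, hardcoding nodeName bypasses the scheduler entirely. A minimal sketch of an alternative is to pin the pod with a nodeSelector; kubernetes.io/hostname is a well-known label the kubelet sets on each node, so this still goes through the scheduler while landing on the same node:

```yaml
# Sketch only: pin the pod via a node label instead of nodeName.
# The scheduler still runs, but only nodes matching the selector qualify.
apiVersion: v1
kind: Pod
metadata:
  name: vol-hostpath-pinned
  namespace: default
spec:
  nodeSelector:
    kubernetes.io/hostname: node03.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    hostPath:
      path: /vol/html/
      type: DirectoryOrCreate
```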
Example: create a pod that uses an emptyDir volume
[root@master01 ~]# cat emptyDir-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-emptydir-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: web-cache-dir
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: alpine
    image: alpine
    volumeMounts:
    - name: web-cache-dir
      mountPath: /nginx/html
    command: ["/bin/sh", "-c"]
    args:
    - while true; do echo $(hostname) $(date) >> /nginx/html/index.html; sleep 10; done
  volumes:
  - name: web-cache-dir
    emptyDir:
      medium: Memory
      sizeLimit: "10Mi"
[root@master01 ~]#
Tip: the manifest above defines a pod named vol-emptydir-demo running two containers, one named nginx and one named alpine, both of which mount a volume named web-cache-dir of type emptyDir, as shown in the figure below. To define an emptyDir volume, under spec.volumes we use name to name the volume and emptyDir to select the emptyDir type. An emptyDir volume has two properties: medium specifies the storage medium, where Memory means a RAM-backed medium and the default value "" means the node's default medium; sizeLimit caps the volume's size and defaults to empty, meaning no limit.
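For contrast, a minimal sketch of an emptyDir backed by the node's default medium (typically disk) with no size cap simply omits both fields:

```yaml
# Sketch only: emptyDir on the node's default medium, no size limit.
volumes:
- name: scratch
  emptyDir: {}
```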
Tip: as the figure shows, the pod has two containers. The alpine container appends the hostname and current time to /nginx/html/index.html every 10 seconds, while the nginx container mounts the same emptyDir volume at its web root. In short, alpine writes data into /nginx/html/index.html, and nginx serves that same file as a web page.
Apply the manifest
[root@master01 ~]# kubectl apply -f emptyDir-demo.yaml
pod/vol-emptydir-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS              RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running             1          2d    10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running             1          2d    10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        0/2     ContainerCreating   0          8s    <none>         node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running             0          72m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d    10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d    10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        2/2     Running   0          16s   10.244.3.94    node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          72m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod vol-emptydir-demo
Name:         vol-emptydir-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Thu, 24 Dec 2020 00:46:56 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.94
IPs:
  IP:  10.244.3.94
Containers:
  nginx:
    Container ID:   docker://58af9ef80800fb22543d1c80be58849f45f3d62f3b44101dbca024e0761cead5
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 24 Dec 2020 00:46:57 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from web-cache-dir (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
  alpine:
    Container ID:  docker://327f110a10e8ef9edb5f86b5cb3dad53e824010b52b1c2a71d5dbecab6f49f05
    Image:         alpine
    Image ID:      docker-pullable://alpine@sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
    Args:
      while true; do echo $(hostname) $(date) >> /nginx/html/index.html; sleep 10; done
    State:          Running
      Started:      Thu, 24 Dec 2020 00:47:07 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /nginx/html from web-cache-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  web-cache-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  10Mi
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  51s   default-scheduler  Successfully assigned default/vol-emptydir-demo to node03.k8s.org
  Normal  Pulled     51s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    51s   kubelet            Created container nginx
  Normal  Started    50s   kubelet            Started container nginx
  Normal  Pulling    50s   kubelet            Pulling image "alpine"
  Normal  Pulled     40s   kubelet            Successfully pulled image "alpine" in 10.163157508s
  Normal  Created    40s   kubelet            Created container alpine
  Normal  Started    40s   kubelet            Started container alpine
[root@master01 ~]#
Tip: the pod is running normally with 2 containers; nginx mounts the web-cache-dir volume read-only, alpine mounts it read-write, and the volume's type is emptyDir.
Access the pod and see whether the content of index.html in the volume can be served
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d      10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d      10.244.4.21    node04.k8s.org   <none>           <none>
vol-emptydir-demo        2/2     Running   0          4m38s   10.244.3.94    node03.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          77m     10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.94
vol-emptydir-demo Wed Dec 23 16:47:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:47:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:48:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:49:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:47 UTC 2020
vol-emptydir-demo Wed Dec 23 16:50:57 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:07 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:17 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:27 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:37 UTC 2020
vol-emptydir-demo Wed Dec 23 16:51:47 UTC 2020
[root@master01 ~]#
Tip: the contents of index.html are served, and that content was generated dynamically by the alpine container. From this example it is easy to see that containers within the same pod can share the same volume.
Example: create a pod that uses an nfs volume
[root@master01 ~]# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs-demo
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
[root@master01 ~]#
Tip: when defining an nfs volume, the path field under spec.volumes.nfs is mandatory and specifies the exported path on the NFS file system; the server field specifies the NFS server's address. Before using NFS as a pod's backing store, we first have to prepare the NFS server and export the corresponding directory.
Prepare the NFS server: install the nfs-utils package on the 192.168.0.99 server
[root@docker_registry ~]# ip a|grep 192.168.0.99
    inet 192.168.0.99/24 brd 192.168.0.255 scope global enp3s0
[root@docker_registry ~]# yum install nfs-utils -y
Loaded plugins: fastestmirror, langpacks
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
base                                            | 3.6 kB  00:00:00
docker-ce-stable                                | 3.5 kB  00:00:00
epel                                            | 4.7 kB  00:00:00
extras                                          | 2.9 kB  00:00:00
kubernetes/signature                            |  844 B  00:00:00
kubernetes/signature                            | 1.4 kB  00:00:00 !!!
mariadb-main                                    | 2.9 kB  00:00:00
mariadb-maxscale                                | 2.4 kB  00:00:00
mariadb-tools                                   | 2.9 kB  00:00:00
mongodb-org                                     | 2.5 kB  00:00:00
proxysql_repo                                   | 2.9 kB  00:00:00
updates                                         | 2.9 kB  00:00:00
(1/6): docker-ce-stable/x86_64/primary_db       |  51 kB  00:00:00
(2/6): kubernetes/primary                       |  83 kB  00:00:01
(3/6): mongodb-org/primary_db                   |  26 kB  00:00:01
(4/6): epel/x86_64/updateinfo                   | 1.0 MB  00:00:02
(5/6): updates/7/x86_64/primary_db              | 4.7 MB  00:00:01
(6/6): epel/x86_64/primary_db                   | 6.9 MB  00:00:02
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
kubernetes                                      612/612
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.3.0-0.66.el7_8 will be updated
---> Package nfs-utils.x86_64 1:1.3.0-0.68.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================
 Package                  Arch                   Version                             Repository            Size
=============================================================================================================================================
Updating:
 nfs-utils                x86_64                 1:1.3.0-0.68.el7                    base                 412 k

Transaction Summary
=============================================================================================================================================
Upgrade  1 Package

Total download size: 412 k
Downloading packages:
No Presto metadata available for base
nfs-utils-1.3.0-0.68.el7.x86_64.rpm             | 412 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : 1:nfs-utils-1.3.0-0.68.el7.x86_64                    1/2
  Cleanup    : 1:nfs-utils-1.3.0-0.66.el7_8.x86_64                  2/2
  Verifying  : 1:nfs-utils-1.3.0-0.68.el7.x86_64                    1/2
  Verifying  : 1:nfs-utils-1.3.0-0.66.el7_8.x86_64                  2/2

Updated:
  nfs-utils.x86_64 1:1.3.0-0.68.el7

Complete!
[root@docker_registry ~]#
Create the /data/html directory
[root@docker_registry ~]# mkdir /data/html -pv
mkdir: created directory ‘/data/html’
[root@docker_registry ~]#
Configure the directory so the k8s cluster nodes can access it
[root@docker_registry ~]# cat /etc/exports
/data/html 192.168.0.0/24 (rw,no_root_squash)
[root@docker_registry ~]#
Tip: the configuration above is intended to share /data/html read-write, without squashing root's privileges, with all hosts on the 192.168.0.0/24 network.
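One caution worth adding (my note, not from the original transcript): in standard exports(5) syntax the option list must be attached directly to the client specification with no space, as host(options). With a space before the parentheses, as in the file shown above, the options apply to all hosts and the listed network falls back to the defaults, which is why showmount later reports the export as (everyone). The intended line would be written:

```
/data/html 192.168.0.0/24(rw,no_root_squash)
```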
Start nfs
[root@docker_registry ~]# systemctl start nfs
[root@docker_registry ~]# ss -tnl
State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN     0      128         127.0.0.1:1514               *:*
LISTEN     0      128                 *:111                *:*
LISTEN     0      128                 *:20048              *:*
LISTEN     0      64                  *:42837              *:*
LISTEN     0      5       192.168.122.1:53                 *:*
LISTEN     0      128                 *:22                 *:*
LISTEN     0      128      192.168.0.99:631                *:*
LISTEN     0      100         127.0.0.1:25                 *:*
LISTEN     0      64                  *:2049               *:*
LISTEN     0      128                 *:59396              *:*
LISTEN     0      128                :::34922             :::*
LISTEN     0      128                :::111               :::*
LISTEN     0      128                :::20048             :::*
LISTEN     0      128                :::80                :::*
LISTEN     0      128                :::22                :::*
LISTEN     0      100               ::1:25                :::*
LISTEN     0      128                :::443               :::*
LISTEN     0      128                :::4443              :::*
LISTEN     0      64                 :::2049              :::*
LISTEN     0      64                 :::36997             :::*
[root@docker_registry ~]#
Tip: NFS listens on TCP port 2049; after starting it, make sure this port is actually in the LISTEN state. The NFS server is now ready.
Install the nfs-utils package on the k8s nodes to provide the driver they need to use NFS
yum install nfs-utils -y
Verify: on node01, check whether the directory exported by the NFS server can be mounted
[root@node01 ~]# showmount -e 192.168.0.99
Export list for 192.168.0.99:
/data/html (everyone)
[root@node01 ~]# mount -t nfs 192.168.0.99:/data/html /mnt
[root@node01 ~]# mount |grep /data/html
192.168.0.99:/data/html on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.44,local_lock=none,addr=192.168.0.99)
[root@node01 ~]# umount /mnt
[root@node01 ~]# mount |grep /data/html
[root@node01 ~]#
Tip: node01 can see the directory exported by the NFS server and can mount and use it normally. Once the other nodes have finished installing nfs-utils, we can apply the manifest on the master.
Apply the manifest
[root@master01 ~]# kubectl apply -f nfs-demo.yaml
pod/vol-nfs-demo created
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d1h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d1h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          141m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          10s    10.244.3.101   node03.k8s.org   <none>           <none>
[root@master01 ~]# kubectl describe pod vol-nfs-demo
Name:         vol-nfs-demo
Namespace:    default
Priority:     0
Node:         node03.k8s.org/192.168.0.46
Start Time:   Thu, 24 Dec 2020 01:55:51 +0800
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.3.101
IPs:
  IP:  10.244.3.101
Containers:
  nginx:
    Container ID:   docker://72227e3a94622a4ea032a1ab0d7d353aef167d5a0e80c3739e774050eaea3914
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 24 Dec 2020 01:55:52 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from webhtml (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xvd4c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhtml:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.99
    Path:      /data/html/
    ReadOnly:  false
  default-token-xvd4c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xvd4c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  28s   default-scheduler  Successfully assigned default/vol-nfs-demo to node03.k8s.org
  Normal  Pulled     27s   kubelet            Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    27s   kubelet            Created container nginx
  Normal  Started    27s   kubelet            Started container nginx
[root@master01 ~]#
Tip: the pod is running normally and its container has mounted the corresponding directory.
Create an index.html file in the exported directory on the NFS server
[root@docker_registry ~]# cd /data/html
[root@docker_registry html]# echo "this is test file from nfs server ip addr is 192.168.0.99" > index.html
[root@docker_registry html]# cat index.html
this is test file from nfs server ip addr is 192.168.0.99
[root@docker_registry html]#
Access the pod and see whether the file's content can be served
[root@master01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          145m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          4m6s   10.244.3.101   node03.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.3.101
this is test file from nfs server ip addr is 192.168.0.99
[root@master01 ~]#
Tip: the file's content is reachable through the pod.
Delete the pod
[root@master01 ~]# kubectl delete -f nfs-demo.yaml
pod "vol-nfs-demo" deleted
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          149m   10.244.2.100   node02.k8s.org   <none>           <none>
[root@master01 ~]#
Bind the pod to node02.k8s.org, recreate it from the manifest, and access it again to see whether the file is still served
[root@master01 ~]# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs-demo
  namespace: default
spec:
  nodeName: node02.k8s.org
  containers:
  - name: nginx
    image: nginx:1.14-alpine
    volumeMounts:
    - name: webhtml
      mountPath: /usr/share/nginx/html
      readOnly: true
  volumes:
  - name: webhtml
    nfs:
      path: /data/html/
      server: 192.168.0.99
[root@master01 ~]# kubectl apply -f nfs-demo.yaml
pod/vol-nfs-demo created
[root@master01 ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
myapp-6479b786f5-9d4mh   1/1     Running   1          2d2h   10.244.2.99    node02.k8s.org   <none>           <none>
myapp-6479b786f5-k252c   1/1     Running   1          2d2h   10.244.4.21    node04.k8s.org   <none>           <none>
vol-hostpath-demo        1/1     Running   0          151m   10.244.2.100   node02.k8s.org   <none>           <none>
vol-nfs-demo             1/1     Running   0          8s     10.244.2.101   node02.k8s.org   <none>           <none>
[root@master01 ~]# curl 10.244.2.101
this is test file from nfs server ip addr is 192.168.0.99
[root@master01 ~]#
Tip: with the pod bound to node02, accessing it still serves the file on the NFS server. From these tests, an nfs volume outlives the pod's lifecycle and can persist the data produced by a pod's containers to the NFS server across nodes. Of course, NFS here is a single point of failure; if the NFS server goes down, the data the pod produces at runtime is all lost. So for external storage we should choose a system that has data redundancy and that k8s supports, such as cephfs or glusterfs.