Using NFS for k8s Shared Storage

Application data inside containers has to be persisted somewhere. local/hostPath volumes will do as a stopgap, but a shared storage backend is still needed.

Let's start with the simplest option: an NFS export/volume.

Installing the NFS server on AWS EC2

# Install the NFS client on all nodes
#yum -y install nfs-utils
#systemctl start nfs && systemctl enable nfs


[ec2-user@k8s ~]$ sudo yum install nfs-utils 
Loaded plugins: langpacks, priorities, update-motd
amzn2-core                                                                                                                     | 3.7 kB  00:00:00     
Package 1:nfs-utils-1.3.0-0.54.amzn2.0.2.x86_64 already installed and latest version
Nothing to do

[ec2-user@k8s ~]$ sudo mkdir /backup
[ec2-user@k8s ~]$ sudo chmod -R 755 /backup
[ec2-user@k8s ~]$ sudo chown nfsnobody:nfsnobody /backup

[ec2-user@k8s ~]$ sudo vi /etc/exports
[ec2-user@k8s ~]$ cat /etc/exports    
/backup 192.168.191.0/24(rw,sync,no_root_squash,no_all_squash)

# /k8s-fs *(rw,sync,no_root_squash,no_all_squash)

[ec2-user@k8s ~]$ sudo service nfs-server restart 
Redirecting to /bin/systemctl restart nfs-server.service
[ec2-user@k8s ~]$ 
[ec2-user@k8s ~]$ sudo exportfs
/backup         192.168.191.0/24
[ec2-user@k8s ~]$ sudo exportfs -arv 
exporting 192.168.191.0/24:/backup

[ec2-user@k8s ~]$ rpcinfo -p localhost
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  56847  status
    100024    1   tcp  60971  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  47545  nlockmgr
    100021    3   udp  47545  nlockmgr
    100021    4   udp  47545  nlockmgr
    100021    1   tcp  40703  nlockmgr
    100021    3   tcp  40703  nlockmgr
    100021    4   tcp  40703  nlockmgr
[ec2-user@k8s ~]$ showmount -e 192.168.191.131
Export list for 192.168.191.131:
/backup 192.168.191.0/24

The NFS server can also be started via Docker:

[ec2-user@k8s ~]$ sudo mkdir -p /data/kubernetes-volumes
[ec2-user@k8s ~]$ docker run --privileged -itd --name nfs -p 2049:2049 -e SHARED_DIRECTORY=/data -v /data/kubernetes-volumes:/data itsthenetwork/nfs-server-alpine:12 
f84b70dcca6bd5abb275fbee50fd161d8befdd709ce6523b3a514f04b7af8677

[ec2-user@k8s ~]$ docker logs f84b70dcca6bd5abb2 
Writing SHARED_DIRECTORY to /etc/exports file
The PERMITTED environment variable is unset or null, defaulting to '*'.
This means any client can mount.
The READ_ONLY environment variable is unset or null, defaulting to 'rw'.
Clients have read/write access.
The SYNC environment variable is unset or null, defaulting to 'async' mode.
Writes will not be immediately written to disk.
Displaying /etc/exports contents:
/data *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)

Starting rpcbind...
Displaying rpcbind status...
   program version netid     address                service    owner
    100000    4    tcp6      ::.0.111               -          superuser
    100000    3    tcp6      ::.0.111               -          superuser
    100000    4    udp6      ::.0.111               -          superuser
    100000    3    udp6      ::.0.111               -          superuser
    100000    4    tcp       0.0.0.0.0.111          -          superuser
    100000    3    tcp       0.0.0.0.0.111          -          superuser
    100000    2    tcp       0.0.0.0.0.111          -          superuser
    100000    4    udp       0.0.0.0.0.111          -          superuser
    100000    3    udp       0.0.0.0.0.111          -          superuser
    100000    2    udp       0.0.0.0.0.111          -          superuser
    100000    4    local     /var/run/rpcbind.sock  -          superuser
    100000    3    local     /var/run/rpcbind.sock  -          superuser
Starting NFS in the background...
rpc.nfsd: knfsd is currently down
rpc.nfsd: Writing version string to kernel: -2 -3 +4 +4.1 +4.2
rpc.nfsd: Created AF_INET TCP socket.
rpc.nfsd: Created AF_INET6 TCP socket.
Exporting File System...
exporting *:/data
/data           <world>
Starting Mountd in the background...
Startup successful.

[ec2-user@k8s ~]$ sudo mount -v -o vers=4,loud 127.0.0.1:/ nfsmnt
mount.nfs: timeout set for Thu Apr 14 08:26:48 2022
mount.nfs: trying text-based options 'vers=4.1,addr=127.0.0.1,clientaddr=127.0.0.1'

[ec2-user@k8s ~]$ df -h | grep nfsmnt
127.0.0.1:/      25G  9.8G   16G  39% /home/ec2-user/nfsmnt

[ec2-user@k8s ~]$ touch nfsmnt/$(hostname).txt
[ec2-user@k8s ~]$ ls -l nfsmnt/
total 0
-rw-rw-r-- 1 ec2-user ec2-user 0 Apr 14 08:25 k8s.txt
[ec2-user@k8s ~]$ ls -l /data/kubernetes-volumes/
total 0
-rw-rw-r-- 1 ec2-user ec2-user 0 Apr 14 08:25 k8s.txt
[ec2-user@k8s ~]$ 

# vi /etc/fstab
# 192.168.0.4:/   /mnt   nfs4    _netdev,auto  0  0

### pod
# kubectl create -f nfs-server.yml
apiVersion: apps/v1       # <- extensions/v1beta1 was removed in k8s 1.16
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1           # <- a single replica only
  selector:             # <- required by apps/v1
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      nodeSelector:     # <- use selector to pin nfs-server to k8s2.zhangqiaoc.com
        kubernetes.io/hostname: k8s2.zhangqiaoc.com
      containers:
      - name: nfs-server
        image: itsthenetwork/nfs-server-alpine:latest
        volumeMounts:
        - name: nfs-storage
          mountPath: /nfsshare
        env:
        - name: SHARED_DIRECTORY
          value: "/nfsshare"
        ports:
        - name: nfs  
          containerPort: 2049   # <- export port
        securityContext:
          privileged: true      # <- privileged mode is mandatory.
      volumes:
      - name: nfs-storage  
        hostPath:               # <- the folder on the host machine.
          path: /root/fileshare
# kubectl expose deployment nfs-server --type=ClusterIP
# kubectl get svc

# yum install -y nfs-utils
# mkdir /root/nfsmnt
# mount -v 10.101.117.226:/ /root/nfsmnt

Client

# Install nfs-utils and rpcbind on all worker nodes
[ec2-user@worker1 ~]$ sudo yum install nfs-utils 
Loaded plugins: langpacks, priorities, update-motd
amzn2-core                                                                                                                                        | 3.7 kB  00:00:00     
Package 1:nfs-utils-1.3.0-0.54.amzn2.0.2.x86_64 already installed and latest version
Nothing to do

[ec2-user@worker1 ~]$ sudo systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[ec2-user@worker1 ~]$ sudo systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-04-13 20:44:34 CST; 1h 51min ago
  Process: 6979 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 7025 (rpcbind)
    Tasks: 1
   Memory: 2.1M
   CGroup: /system.slice/rpcbind.service
           └─7025 /sbin/rpcbind -w

Apr 13 20:44:34 worker1 systemd[1]: Starting RPC bind service...
Apr 13 20:44:34 worker1 systemd[1]: Started RPC bind service.


[ec2-user@worker1 ~]$ sudo mkdir -p /data
[ec2-user@worker1 ~]$ sudo chmod 777 /data 
[ec2-user@worker1 ~]$ sudo mount -t nfs 192.168.191.131:/backup /data
[ec2-user@worker1 ~]$ df -h | grep 192.168.191.131
192.168.191.131:/backup   25G  9.6G   16G  39% /data


# vi /etc/fstab
# 172.17.30.22:/backup /data nfs defaults 0 0

Using NFS in k8s

Mounting NFS directly in the container

[ec2-user@k8s ~]$ cat nginx-1.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        nfs:
          path: /backup
          server: 192.168.191.131
        
[ec2-user@k8s ~]$ kubectl apply -f nginx-1.yml

[ec2-user@k8s ~]$ kubectl get all 
NAME                                   READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-67dcb957c-g2h8x   1/1     Running   0          2m50s
pod/nginx-deployment-67dcb957c-gfv28   1/1     Running   0          2m50s
pod/nginx-deployment-67dcb957c-rqwjs   1/1     Running   0          2m50s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   27d

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           2m50s

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-67dcb957c   3         3         3       2m50s

[ec2-user@k8s ~]$ kubectl exec -ti pod/nginx-deployment-67dcb957c-g2h8x -- bash 
root@nginx-deployment-67dcb957c-g2h8x:/# echo $(hostname) >/usr/share/nginx/html/1.txt

root@nginx-deployment-67dcb957c-g2h8x:/# mount | grep 192
192.168.191.131:/backup on /usr/share/nginx/html type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.191.132,local_lock=none,addr=192.168.191.131)
root@nginx-deployment-67dcb957c-g2h8x:/# 


# View the file contents on the server
[ec2-user@k8s ~]$ cat /backup/1.txt 
nginx-deployment-67dcb957c-g2h8x


[ec2-user@k8s ~]$ kubectl delete -f nginx-1.yml 
deployment.apps "nginx-deployment" deleted

PVC

[ec2-user@k8s ~]$ vi pv-nfs.yaml 
[ec2-user@k8s ~]$ cat pv-nfs.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany 
  nfs:
    path: /backup
    server: 192.168.191.131

[ec2-user@k8s ~]$ kubectl apply -f pv-nfs.yaml 
persistentvolume/pv-nfs created
[ec2-user@k8s ~]$ kubectl get pv 
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs   10Gi       RWX            Retain           Available                                   5s

[ec2-user@k8s ~]$ vi pvc-nfs.yaml 
[ec2-user@k8s ~]$ cat pvc-nfs.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
[ec2-user@k8s ~]$ kubectl apply -f pvc-nfs.yaml 
persistentvolumeclaim/pvc-nfs created
[ec2-user@k8s ~]$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pv-nfs   10Gi       RWX                           7s
[ec2-user@k8s ~]$ kubectl get pv 
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-nfs   10Gi       RWX            Retain           Bound    default/pvc-nfs                           79s

[ec2-user@k8s ~]$ vi dp-pvc.yaml
[ec2-user@k8s ~]$ cat dp-pvc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs

[ec2-user@k8s ~]$ kubectl apply -f dp-pvc.yaml 
deployment.apps/busybox created
[ec2-user@k8s ~]$ 


[ec2-user@k8s ~]$ kubectl get all 
NAME                           READY   STATUS    RESTARTS   AGE
pod/busybox-6b99c495c9-qnvlp   1/1     Running   0          47s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   27d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/busybox   1/1     1            1           47s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/busybox-6b99c495c9   1         1         1       47s

# View the pre-existing data on NFS
[ec2-user@k8s ~]$ kubectl exec -ti busybox-6b99c495c9-qnvlp -- cat /data/1.txt 
nginx-deployment-67dcb957c-g2h8x

Now test subPathExpr:

[ec2-user@k8s ~]$ kubectl delete -f dp-pvc.yaml 
deployment.apps "busybox" deleted
[ec2-user@k8s ~]$ 
[ec2-user@k8s ~]$ vi dp-pvc.yaml 
[ec2-user@k8s ~]$ cat dp-pvc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        volumeMounts:
        - name: data
          mountPath: /data
          subPathExpr: $(POD_NAME)
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs

[ec2-user@k8s ~]$ kubectl apply -f dp-pvc.yaml 
deployment.apps/busybox created

[ec2-user@k8s ~]$ kubectl get all 
NAME                           READY   STATUS    RESTARTS   AGE
pod/busybox-5497486bf5-csr6q   1/1     Running   0          7s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   27d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/busybox   1/1     1            1           7s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/busybox-5497486bf5   1         1         1       7s

[ec2-user@k8s ~]$ kubectl exec -ti pod/busybox-5497486bf5-csr6q -- sh 
/ # ls /data
/ # echo $(hostname) > /data/pvc.txt
/ # exit


# View the data in the directory on the server
[ec2-user@k8s ~]$ ll /backup/
total 4
-rw-r--r-- 1 root root 33 Apr 14 00:37 1.txt
drwxr-xr-x 2 root root 21 Apr 14 00:51 busybox-5497486bf5-csr6q
[ec2-user@k8s ~]$ cat /backup/busybox-5497486bf5-csr6q/pvc.txt   
busybox-5497486bf5-csr6q

Change replicas to 2 and try again:
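
The only edit to dp-pvc.yaml is the replicas field:

spec:
  replicas: 2    # <- was 1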


[ec2-user@k8s ~]$ kubectl apply -f dp-pvc.yaml 

[ec2-user@k8s ~]$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
busybox-5497486bf5-fkzls   1/1     Running   0          3m8s
busybox-5497486bf5-rv7k7   1/1     Running   0          3m8s

[ec2-user@k8s ~]$ kubectl exec busybox-5497486bf5-fkzls -- sh -c 'echo $(hostname) >/data/$(hostname).txt'
[ec2-user@k8s ~]$ kubectl exec busybox-5497486bf5-rv7k7 -- sh -c 'echo $(hostname) >/data/$(hostname).txt'                        


# View the directory structure on the server
[ec2-user@k8s ~]$ ll -R /backup/
/backup/:
total 4
-rw-r--r-- 1 root root 33 Apr 14 00:37 1.txt
drwxr-xr-x 2 root root 21 Apr 14 00:51 busybox-5497486bf5-csr6q
drwxr-xr-x 2 root root 42 Apr 14 01:20 busybox-5497486bf5-fkzls
drwxr-xr-x 2 root root 42 Apr 14 01:20 busybox-5497486bf5-rv7k7

/backup/busybox-5497486bf5-csr6q:
total 4
-rw-r--r-- 1 root root 25 Apr 14 00:51 pvc.txt

/backup/busybox-5497486bf5-fkzls:
total 4
-rw-r--r-- 1 root root 25 Apr 14 01:20 busybox-5497486bf5-fkzls.txt

/backup/busybox-5497486bf5-rv7k7:
total 4
-rw-r--r-- 1 root root 25 Apr 14 01:20 busybox-5497486bf5-rv7k7.txt
[ec2-user@k8s ~]$ 

NFS Subdir External Provisioner

The NFS subdir external provisioner uses an existing NFS server to dynamically provision Kubernetes Persistent Volumes through Persistent Volume Claims. By default each persistent volume is provisioned as a subdirectory named ${namespace}-${pvcName}-${pvName}. To use it, you must already have an NFS server.

K8S external NFS drivers fall into two categories, depending on whether they work as an NFS server or as an NFS client:

1. nfs-client:

This is the kind demonstrated next. It uses K8S's built-in NFS driver to mount the remote NFS server into a local directory, then acts itself as a storage provider associated with a storage class. When a user creates a PVC to request a PV, the provider compares the PVC's requirements against its own properties; once they are satisfied, it creates the PV's subdirectory inside the locally mounted NFS directory, providing pods with dynamic storage.

2. nfs: unlike nfs-client, this driver does not use the k8s NFS driver to mount the remote NFS share locally and hand it out. Instead it maps a local directory straight into the container and serves NFS from inside the container with ganesha.nfsd; every time a PV is created, it creates the corresponding folder under the local NFS root and exports that subdirectory.

Next, let's walk through an nfs-client example to get a concrete feel for it.

A deployment example of the external NFS driver

Here we deploy the nfs-client driver as a Deployment into the K8S cluster, which then provides storage services externally.

1. Deploy nfs-client-provisioner

The PROVISIONER_NAME environment variable, the NFS server address, the path the NFS server exports, and so on need to be set. The key part of the deployment yaml is shown below:
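
A minimal sketch following the upstream deploy/deployment.yaml layout (the provisioner name is the upstream default and the NFS address reuses the server set up earlier; both are assumptions here, not values from the original tutorial):

      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner  # <- referenced by the storage class
            - name: NFS_SERVER
              value: 192.168.191.131
            - name: NFS_PATH
              value: /backup
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.191.131
            path: /backup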

2. Create the Storage Class

When defining the storage class, note that its provisioner attribute must equal the value of the PROVISIONER_NAME environment variable passed to the driver; otherwise the driver won't know how to bind to the storage class.
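
A minimal sketch of such a storage class, reusing the provisioner name assumed above (the class name wise2c-nfs-storage comes from the PVC step below):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wise2c-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # <- must equal PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"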

3. Create the PVC

Here its storage-class is specified by the name wise2c-nfs-storage, as follows:
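
A sketch of the claim (henry-claim is the name referenced in the pod step below; the requested size is a placeholder):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: henry-claim
spec:
  storageClassName: wise2c-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi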

4. Create the pod

Point the pod at the PVC we just created, henry-claim:
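
A hypothetical pod spec for illustration (the pod name and command are made up; only the claimName matters):

kind: Pod
apiVersion: v1
metadata:
  name: henry-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'touch /mnt/SUCCESS && sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /mnt
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: henry-claim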

Once that's done, attach to the pod and perform some file reads and writes to confirm that the pod's /mnt is indeed served by the NFS storage.

The script from the official docs:

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml

Steps:

[ec2-user@k8s ~]$ git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/
[ec2-user@k8s ~]$ cd nfs-subdir-external-provisioner/

[ec2-user@k8s nfs-subdir-external-provisioner]$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
[ec2-user@k8s nfs-subdir-external-provisioner]$ NAMESPACE=${NS:-default}
[ec2-user@k8s nfs-subdir-external-provisioner]$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl create -f deploy/rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created


$ vi deploy/deployment.yaml

            - name: NFS_SERVER
              value: 192.168.191.131
            - name: NFS_PATH
              value: /backup
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.191.131
            path: /backup

$ vi deploy/class.yaml

parameters:
  archiveOnDelete: "false"
#Specifies a template for creating a directory path via PVC metadata's such as labels, annotations, name or namespace. To specify metadata use ${.PVC.<metadata>}. Example: If folder should be named like <pvc-namespace>-<pvc-name>, use ${.PVC.namespace}-${.PVC.name} as pathPattern.
#  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # waits for nfs.io/storage-path annotation, if not specified will accept as empty string.
#  onDelete: delete


# Pull the image first: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl apply -f deploy/deployment.yaml 
deployment.apps/nfs-client-provisioner created

[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl apply -f deploy/class.yaml 
storageclass.storage.k8s.io/nfs-client created
[ec2-user@k8s nfs-subdir-external-provisioner]$

Test it:

# PVC manifest
# https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/deploy/test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml
persistentvolumeclaim/test-claim created
pod/test-pod created

# kubectl delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

The test pod creates a file on the shared filesystem with touch /mnt/SUCCESS:
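
For reference, the upstream deploy/test-pod.yaml looks roughly like this (see the repository linked above for the authoritative version):

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:stable
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim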

[ec2-user@k8s nfs-subdir-external-provisioner]$ ll /backup/default-test-claim-pvc-9857153a-6c2b-42d7-b464-aa5fc2acbf90/
total 0
-rw-r--r-- 1 root root 0 Apr 14 02:14 SUCCESS

NFS Ganesha server and external provisioner

This just installs an NFS server directly inside the k8s cluster. It doesn't feel as convenient to manage as an NFS server installed on the OS, so I'm shelving it for now.

–END
