References
Source code
Local volume
Run
Directory structure
[ec2-user@k8s ~]$ git clone https://github.com/kubernetes/examples
[ec2-user@k8s ~]$ cd examples/staging/volumes/nfs/
[ec2-user@k8s nfs]$ ls -l
total 48
-rw-rw-r-- 1 ec2-user ec2-user 823 Apr 19 20:34 nfs-busybox-deployment.yaml
drwxrwxr-x 2 ec2-user ec2-user 77 Apr 19 20:34 nfs-data
-rw-rw-r-- 1 ec2-user ec2-user 193 Apr 19 20:34 nfs-pvc.yaml
-rw-rw-r-- 1 ec2-user ec2-user 9379 Apr 19 20:34 nfs-pv.png
-rw-rw-r-- 1 ec2-user ec2-user 234 Apr 19 20:34 nfs-pv.yaml
-rw-rw-r-- 1 ec2-user ec2-user 729 Apr 19 20:34 nfs-server-deployment.yaml
-rw-rw-r-- 1 ec2-user ec2-user 212 Apr 19 20:34 nfs-server-service.yaml
-rw-rw-r-- 1 ec2-user ec2-user 673 Apr 19 20:34 nfs-web-deployment.yaml
-rw-rw-r-- 1 ec2-user ec2-user 120 Apr 19 20:34 nfs-web-service.yaml
drwxrwxr-x 2 ec2-user ec2-user 98 Apr 19 20:34 provisioner
-rw-rw-r-- 1 ec2-user ec2-user 6540 Apr 19 20:34 README.md
[ec2-user@k8s nfs]$ ls -l provisioner/
total 12
-rw-rw-r-- 1 ec2-user ec2-user 291 Apr 19 20:34 nfs-server-azure-pv.yaml
-rw-rw-r-- 1 ec2-user ec2-user 324 Apr 19 20:34 nfs-server-cdk-pv.yaml
-rw-rw-r-- 1 ec2-user ec2-user 215 Apr 19 20:34 nfs-server-gce-pv.yaml
nfs-data contains the Dockerfile and related files used to build the nfs-server image.
nfs-server:
- nfs-server-deployment.yaml
- nfs-server-service.yaml
- provisioner/
web:
- nfs-pv.yaml
- nfs-pvc.yaml
- nfs-web-deployment.yaml
- nfs-web-service.yaml
- nfs-busybox-deployment.yaml
Following this structure, configure and run everything step by step.
Set up the local storage volume for the NFS server
[ec2-user@k8s ~]$ sudo mkdir /nfs
[ec2-user@k8s ~]$ sudo chmod 777 /nfs
[ec2-user@k8s nfs]$ cat server-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /nfs
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s
[ec2-user@k8s nfs]$ kubectl apply -f server-pv.yml
persistentvolume/nfs-pv created
Create the PVC for the server:
[ec2-user@k8s nfs]$ kubectl create namespace nfs
namespace/nfs created
[ec2-user@k8s nfs]$ cat provisioner/nfs-server-gce-pv.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage
[ec2-user@k8s nfs]$ kubectl apply -f provisioner/nfs-server-gce-pv.yaml -n nfs
persistentvolumeclaim/nfs-pv-provisioning-demo created
[ec2-user@k8s nfs]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 10Gi RWO Delete Bound nfs/nfs-pv-provisioning-demo local-storage 21s
[ec2-user@k8s nfs]$ kubectl get pvc -n nfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pv-provisioning-demo Bound nfs-pv 10Gi RWO local-storage 18s
Start the nfs-server
# Pulling this image may require a proxy; alternatively, substitute the image or pull it elsewhere and re-tag it.
[ec2-user@k8s nfs]$ cat nfs-server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: k8s.gcr.io/volume-nfs:0.8
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /exports
              name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning-demo
[ec2-user@k8s nfs]$ cat nfs-server-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
[ec2-user@k8s nfs]$ kubectl apply -f nfs-server-deployment.yaml -f nfs-server-service.yaml -n nfs
deployment.apps/nfs-server created
service/nfs-server created
[ec2-user@k8s nfs]$ kubectl get all -n nfs -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nfs-server-64886b598f-57mnx 1/1 Running 0 18s 10.244.0.146 k8s <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/nfs-server ClusterIP 10.111.29.73 <none> 2049/TCP,20048/TCP,111/TCP 18s role=nfs-server
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/nfs-server 1/1 1 1 18s nfs-server k8s.gcr.io/volume-nfs:0.8 role=nfs-server
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/nfs-server-64886b598f 1 1 1 18s nfs-server k8s.gcr.io/volume-nfs:0.8 pod-template-hash=64886b598f,role=nfs-server
Per the PV's nodeAffinity, the pod was scheduled onto the node named k8s.
Start the web application
Create the volume used by the containers:
# For some reason, the containers could not resolve this domain name when mounting the volume later...
# The mount is probably executed on the node itself, so cluster DNS names cannot be resolved there.
# Setting the host's nameserver (in /etc/resolv.conf) to 10.96.0.10 fixes it.
[ec2-user@k8s nfs]$ cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.nfs.svc.cluster.local
    path: "/"
  mountOptions:
    - nfsvers=4.2
[ec2-user@k8s nfs]$ kubectl apply -f nfs-pv.yaml
persistentvolume/nfs created
[ec2-user@k8s nfs]$ kubectl get pv nfs
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs 1Mi RWX Retain Available 12s
[ec2-user@k8s nfs]$ cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
  volumeName: nfs
# kubectl get pv | tail -n+2 | awk '$5 == "Released" {print $1}' | xargs -I{} kubectl patch pv {} --type='merge' -p '{"spec":{"claimRef": null}}'
#
# [ec2-user@k8s nfs]$ kubectl patch pv nfs -p '{"spec":{"claimRef": null}}'
# persistentvolume/nfs patched
[ec2-user@k8s nfs]$ kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/nfs created
[ec2-user@k8s nfs]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs 1Mi RWX Retain Bound default/nfs 56s
nfs-pv 10Gi RWO Delete Bound nfs/nfs-pv-provisioning-demo local-storage 3m8s
[ec2-user@k8s nfs]$ kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs nfs Bound nfs 1Mi RWX 22s
nfs nfs-pv-provisioning-demo Bound nfs-pv 10Gi RWO local-storage 3m
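The commented-out patch in nfs-pvc.yaml above deals with PVs stuck in the Released phase after their claim is deleted: clearing spec.claimRef makes the PV Available again. A minimal sketch of the name-selection step, run against sample `kubectl get pv` output so it can be tried without a live cluster:

```shell
# Sample `kubectl get pv` output (header plus two rows); on a real
# cluster this text would come straight from the command itself.
pv_list='NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM
nfs      1Mi        RWX            Retain           Released   default/nfs
nfs-pv   10Gi       RWO            Delete           Bound      nfs/nfs-pv-provisioning-demo'

# Column 5 is STATUS; keep only the names of Released PVs.
released=$(printf '%s\n' "$pv_list" | tail -n +2 | awk '$5 == "Released" {print $1}')
echo "$released"    # → nfs

# Against a real cluster, each selected name would then be patched:
#   kubectl patch pv "$name" --type=merge -p '{"spec":{"claimRef": null}}'
```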
Deploy the web application containers:
[ec2-user@k8s nfs]$ cat nfs-web-deployment.yaml
# This pod mounts the nfs volume claim into /usr/share/nginx/html and
# serves a simple web page.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    matchLabels:
      role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - name: web
              containerPort: 80
          volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs
[ec2-user@k8s nfs]$ cat nfs-web-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-web
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend
[ec2-user@k8s nfs]$ kubectl apply -f nfs-web-deployment.yaml -f nfs-web-service.yaml
deployment.apps/nfs-web created
service/nfs-web created
[ec2-user@k8s nfs]$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nfs-web-5554f77845-6rq5p 1/1 Running 0 4m56s
pod/nfs-web-5554f77845-8r885 1/1 Running 0 4m56s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33d
service/nfs-web ClusterIP 10.108.129.153 <none> 80/TCP 4m56s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nfs-web 2/2 2 2 4m56s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nfs-web-5554f77845 2 2 2 4m56s
Test: the deployment creates two busybox pods that repeatedly rewrite index.html in the NFS shared directory with the current time and their hostname.
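The busybox manifest itself is not shown above. Based on the upstream examples repo it looks roughly like the following (check nfs-busybox-deployment.yaml in the repo for the exact content): each replica mounts the same nfs PVC and loops, writing the date and hostname into index.html.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
        - name: busybox
          image: busybox
          command:
            - sh
            - -c
            - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
          volumeMounts:
            # shared NFS volume; both replicas write to the same file
            - name: nfs
              mountPath: "/mnt"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs
```

This matches the curl output below: a date line followed by a pod hostname, alternating between the two replicas.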
[ec2-user@k8s nfs]$ kubectl apply -f nfs-busybox-deployment.yaml
deployment.apps/nfs-busybox created
[ec2-user@k8s nfs]$ curl nfs-web.default.svc.cluster.local
Wed Apr 20 00:15:20 UTC 2022
nfs-busybox-55c668489c-dxqcx
[ec2-user@k8s nfs]$ curl nfs-web.default.svc.cluster.local
Wed Apr 20 00:16:51 UTC 2022
nfs-busybox-55c668489c-sl7jz
Cleanup
[ec2-user@k8s nfs]$ kubectl delete -f nfs-busybox-deployment.yaml -f nfs-web-deployment.yaml -f nfs-web-service.yaml -f nfs-pvc.yaml -f nfs-pv.yaml
[ec2-user@k8s nfs]$ kubectl delete ns nfs
[ec2-user@k8s nfs]$ kubectl delete -f server-pv.yml
Issues
The NFS server's domain name cannot be resolved
At first, before the node DNS was changed, name resolution failed:
[ec2-user@k8s nfs]$ kubectl describe pod nfs-web-7bc965b94f-k6mrj -n nfs
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43s default-scheduler Successfully assigned nfs/nfs-web-7bc965b94f-k6mrj to worker1
Warning FailedMount 1s (x7 over 35s) kubelet MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nfsvers=4.2 nfs-server.nfs.svc.cluster.local:/ /var/lib/kubelet/pods/03049217-ded3-4d31-86e8-6a13c0f5b12f/volumes/kubernetes.io~nfs/nfs
Output: mount.nfs: Failed to resolve server nfs-server.nfs.svc.cluster.local: Name or service not known
mount.nfs: Operation already in progress
The name resolves fine from inside a container; the mount is presumably executed on the node itself, which is why resolution fails.
[ec2-user@k8s ~]$ kubectl run -ti test --rm --image busybox -n nfs -- sh
/ # ping nfs-server.nfs.svc.cluster.local
[ec2-user@k8s nfs]$ kubectl run -ti test --rm --image busybox -- sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5
Setting the DNS on all nodes to the kube-dns address 10.96.0.10 fixes it.
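As a sketch of that fix (assuming the kubeadm-default kube-dns Service IP 10.96.0.10), demonstrated on a temp file so it can be dry-run; the real target is each node's /etc/resolv.conf:

```shell
# Work on a copy first; the real change targets /etc/resolv.conf on each node.
resolv=$(mktemp)
printf 'nameserver 192.168.1.1\n' > "$resolv"   # stand-in for the node's existing resolver

# Prepend the cluster DNS server so node-side NFS mounts can resolve
# Service names such as nfs-server.nfs.svc.cluster.local.
sed -i '1i nameserver 10.96.0.10' "$resolv"
head -n 1 "$resolv"   # → nameserver 10.96.0.10

# On the node itself:
#   sudo sed -i '1i nameserver 10.96.0.10' /etc/resolv.conf
```

Note that on some systems /etc/resolv.conf is managed (NetworkManager, systemd-resolved) and a direct edit may be overwritten.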
Cluster problems
[ec2-user@k8s nfs]$ kubectl get pods -A
...
ingress-nginx ingress-nginx-controller-755447bb4d-55hjz 0/1 CrashLoopBackOff 32 (2m19s ago) 17d
kube-system coredns-64897985d-4d5rx 0/1 CrashLoopBackOff 40 (4m30s ago) 33d
kube-system coredns-64897985d-m9p9q 0/1 CrashLoopBackOff 40 (4m15s ago) 33d
...
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-rj8rl 0/1 CrashLoopBackOff 33 (116s ago) 17d
kubernetes-dashboard kubernetes-dashboard-fb8648fd9-22krh 0/1 CrashLoopBackOff 32 (3m35s ago) 17d
Clean up all the containers, then restart the server (or just kubelet):
[ec2-user@k8s nfs]$ docker ps -a | awk '{print $1}' | while read i ; do docker rm -f $i ; done
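An equivalent one-liner sketch: `docker ps -aq` prints only container IDs (avoiding the CONTAINER header word that `awk '{print $1}'` also picks up), and GNU `xargs -r` skips running the command when the input is empty. Demonstrated here against sample IDs, with echo in place of a live Docker daemon:

```shell
# Sample container IDs standing in for `docker ps -aq` output.
ids='a1b2c3
d4e5f6'

# xargs batches all IDs into a single invocation; echo shows the
# command that would run instead of actually calling docker.
printf '%s\n' "$ids" | xargs -r echo docker rm -f
# → docker rm -f a1b2c3 d4e5f6

# Against a live Docker daemon:
#   docker ps -aq | xargs -r docker rm -f
```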
–END