Winse Blog

Every stop along the way is scenery; amid the hustle and bustle, aim for the best; the daily grind is all for tomorrow — nothing to fear.

K8S Official NFS Example

References

Source

Local volume

Running

Directory structure

[ec2-user@k8s ~]$ git clone https://github.com/kubernetes/examples

[ec2-user@k8s ~]$ cd examples/staging/volumes/nfs/
[ec2-user@k8s nfs]$ ls -l
total 48
-rw-rw-r-- 1 ec2-user ec2-user  823 Apr 19 20:34 nfs-busybox-deployment.yaml
drwxrwxr-x 2 ec2-user ec2-user   77 Apr 19 20:34 nfs-data
-rw-rw-r-- 1 ec2-user ec2-user  193 Apr 19 20:34 nfs-pvc.yaml
-rw-rw-r-- 1 ec2-user ec2-user 9379 Apr 19 20:34 nfs-pv.png
-rw-rw-r-- 1 ec2-user ec2-user  234 Apr 19 20:34 nfs-pv.yaml
-rw-rw-r-- 1 ec2-user ec2-user  729 Apr 19 20:34 nfs-server-deployment.yaml
-rw-rw-r-- 1 ec2-user ec2-user  212 Apr 19 20:34 nfs-server-service.yaml
-rw-rw-r-- 1 ec2-user ec2-user  673 Apr 19 20:34 nfs-web-deployment.yaml
-rw-rw-r-- 1 ec2-user ec2-user  120 Apr 19 20:34 nfs-web-service.yaml
drwxrwxr-x 2 ec2-user ec2-user   98 Apr 19 20:34 provisioner
-rw-rw-r-- 1 ec2-user ec2-user 6540 Apr 19 20:34 README.md
[ec2-user@k8s nfs]$ ls -l provisioner/
total 12
-rw-rw-r-- 1 ec2-user ec2-user 291 Apr 19 20:34 nfs-server-azure-pv.yaml
-rw-rw-r-- 1 ec2-user ec2-user 324 Apr 19 20:34 nfs-server-cdk-pv.yaml
-rw-rw-r-- 1 ec2-user ec2-user 215 Apr 19 20:34 nfs-server-gce-pv.yaml
  • nfs-data contains the Dockerfile and related files used to build the nfs-server image.
  • nfs-server:
    • nfs-server-deployment.yaml
    • nfs-server-service.yaml
    • provisioner
  • web
    • nfs-pv.yaml
    • nfs-pvc.yaml
    • nfs-web-deployment.yaml
    • nfs-web-service.yaml
    • nfs-busybox-deployment.yaml

Following this structure, configure and run everything step by step.

Set up the local storage volume for the NFS server

[ec2-user@k8s ~]$ sudo mkdir /nfs
[ec2-user@k8s ~]$ sudo chmod 777 /nfs

[ec2-user@k8s nfs]$ cat server-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
# https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /nfs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s

[ec2-user@k8s nfs]$ kubectl apply -f server-pv.yml
persistentvolume/nfs-pv created

Create the server PVC:

[ec2-user@k8s nfs]$ kubectl create namespace nfs
namespace/nfs created

[ec2-user@k8s nfs]$ cat provisioner/nfs-server-gce-pv.yaml   
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-storage

[ec2-user@k8s nfs]$ kubectl apply -f provisioner/nfs-server-gce-pv.yaml -n nfs
persistentvolumeclaim/nfs-pv-provisioning-demo created

[ec2-user@k8s nfs]$ kubectl get pv 
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS    REASON   AGE
nfs-pv   10Gi       RWO            Delete           Bound    nfs/nfs-pv-provisioning-demo   local-storage            21s
[ec2-user@k8s nfs]$ kubectl get pvc -n nfs 
NAME                       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
nfs-pv-provisioning-demo   Bound    nfs-pv   10Gi       RWO            local-storage   18s

Start nfs-server

# Pulling the image needs a proxy; you can either replace the image or pull it elsewhere and retag it

[ec2-user@k8s nfs]$ cat nfs-server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
          - name: nfs
            containerPort: 2049
          - name: mountd
            containerPort: 20048
          - name: rpcbind
            containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
          - mountPath: /exports
            name: mypvc
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: nfs-pv-provisioning-demo
[ec2-user@k8s nfs]$ cat nfs-server-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server

[ec2-user@k8s nfs]$ kubectl apply -f nfs-server-deployment.yaml -f nfs-server-service.yaml -n nfs
deployment.apps/nfs-server created
service/nfs-server created

[ec2-user@k8s nfs]$ kubectl get all -n nfs -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE   NOMINATED NODE   READINESS GATES
pod/nfs-server-64886b598f-57mnx   1/1     Running   0          18s   10.244.0.146   k8s    <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
service/nfs-server   ClusterIP   10.111.29.73   <none>        2049/TCP,20048/TCP,111/TCP   18s   role=nfs-server

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                      SELECTOR
deployment.apps/nfs-server   1/1     1            1           18s   nfs-server   k8s.gcr.io/volume-nfs:0.8   role=nfs-server

NAME                                    DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                      SELECTOR
replicaset.apps/nfs-server-64886b598f   1         1         1       18s   nfs-server   k8s.gcr.io/volume-nfs:0.8   pod-template-hash=64886b598f,role=nfs-server

Because of the nodeAffinity, the container is scheduled onto the k8s node.

Start the web application

Create the volume used by the containers:

# For some reason, the domain name could not be resolved later when the container mounted the volume...
# The mount is probably executed on the node itself, so the service domain name cannot be resolved there.
# Setting the host's nameserver (/etc/resolv.conf) to "nameserver 10.96.0.10" fixes it.
[ec2-user@k8s nfs]$ cat nfs-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.nfs.svc.cluster.local
    path: "/"
  mountOptions:
    - nfsvers=4.2

[ec2-user@k8s nfs]$ kubectl apply -f nfs-pv.yaml 
persistentvolume/nfs created
[ec2-user@k8s nfs]$ kubectl get pv nfs
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs    1Mi        RWX            Retain           Available                                   12s


[ec2-user@k8s nfs]$ cat nfs-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Mi
  volumeName: nfs
  
# kubectl get pv | tail -n+2 | awk '$5 == "Released" {print $1}' | xargs -I{} kubectl patch pv {} --type='merge' -p '{"spec":{"claimRef": null}}'
# 
# [ec2-user@k8s nfs]$ kubectl patch pv nfs -p '{"spec":{"claimRef": null}}'                
# persistentvolume/nfs patched
[ec2-user@k8s nfs]$ kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/nfs created


[ec2-user@k8s nfs]$ kubectl get pv 
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS    REASON   AGE
nfs      1Mi        RWX            Retain           Bound    default/nfs                                                 56s
nfs-pv   10Gi       RWO            Delete           Bound    nfs/nfs-pv-provisioning-demo   local-storage            3m8s
[ec2-user@k8s nfs]$ kubectl get pvc -A 
NAMESPACE   NAME                       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
nfs         nfs                        Bound    nfs      1Mi        RWX                            22s
nfs         nfs-pv-provisioning-demo   Bound    nfs-pv   10Gi       RWO            local-storage   3m

Deploy the web application containers:

[ec2-user@k8s nfs]$ cat nfs-web-deployment.yaml 
# This pod mounts the nfs volume claim into /usr/share/nginx/html and
# serves a simple web page.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-web
spec:
  replicas: 2
  selector:
    matchLabels:
      role: web-frontend
  template:
    metadata:
      labels:
        role: web-frontend
    spec:
      containers:
      - name: web
        image: nginx
        ports:
          - name: web
            containerPort: 80
        volumeMounts:
            # name must match the volume name below
            - name: nfs
              mountPath: "/usr/share/nginx/html"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

[ec2-user@k8s nfs]$ cat nfs-web-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-web
spec:
  ports:
    - port: 80
  selector:
    role: web-frontend

[ec2-user@k8s nfs]$ kubectl apply -f nfs-web-deployment.yaml -f nfs-web-service.yaml
deployment.apps/nfs-web created
service/nfs-web created

[ec2-user@k8s nfs]$ kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/nfs-web-5554f77845-6rq5p   1/1     Running   0          4m56s
pod/nfs-web-5554f77845-8r885   1/1     Running   0          4m56s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   33d
service/nfs-web      ClusterIP   10.108.129.153   <none>        80/TCP    4m56s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-web   2/2     2            2           4m56s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-web-5554f77845   2         2         2       4m56s

Test: the deployment creates two busybox pods that keep creating/updating index.html in the shared NFS directory, with the current time and the pod's hostname as its content.
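For reference, a minimal sketch of what nfs-busybox-deployment.yaml does (the exact file is in the examples repo; names and the sleep interval here are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        # keep rewriting index.html with the current time and hostname
        command: ['sh', '-c', 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep 10; done']
        volumeMounts:
        - name: nfs
          mountPath: /mnt
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs    # the RWX NFS-backed claim created above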

[ec2-user@k8s nfs]$ kubectl apply -f nfs-busybox-deployment.yaml 
deployment.apps/nfs-busybox created

[ec2-user@k8s nfs]$ curl nfs-web.default.svc.cluster.local 
Wed Apr 20 00:15:20 UTC 2022
nfs-busybox-55c668489c-dxqcx
[ec2-user@k8s nfs]$ curl nfs-web.default.svc.cluster.local
Wed Apr 20 00:16:51 UTC 2022
nfs-busybox-55c668489c-sl7jz

Cleanup

[ec2-user@k8s nfs]$ kubectl delete -f nfs-busybox-deployment.yaml -f nfs-web-deployment.yaml -f nfs-web-service.yaml -f nfs-pvc.yaml -f nfs-pv.yaml 
[ec2-user@k8s nfs]$ kubectl delete ns nfs 
[ec2-user@k8s nfs]$ kubectl delete -f server-pv.yml 

Issues

The NFS server's domain name cannot be resolved

At first, without changing the nodes' DNS, name resolution fails:

[ec2-user@k8s nfs]$ kubectl describe pod nfs-web-7bc965b94f-k6mrj -n nfs 
...
Events:
  Type     Reason       Age               From               Message
  ----     ------       ----              ----               -------
  Normal   Scheduled    43s               default-scheduler  Successfully assigned nfs/nfs-web-7bc965b94f-k6mrj to worker1
  Warning  FailedMount  1s (x7 over 35s)  kubelet            MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nfsvers=4.2 nfs-server.nfs.svc.cluster.local:/ /var/lib/kubelet/pods/03049217-ded3-4d31-86e8-6a13c0f5b12f/volumes/kubernetes.io~nfs/nfs
Output: mount.nfs: Failed to resolve server nfs-server.nfs.svc.cluster.local: Name or service not known
mount.nfs: Operation already in progress

Inside containers the domain name resolves fine; the mount is probably executed on the node itself, which is why the name cannot be resolved there.

[ec2-user@k8s ~]$ kubectl run -ti test --rm --image busybox -n nfs -- sh  
/ # ping nfs-server.nfs.svc.cluster.local
[ec2-user@k8s nfs]$ kubectl run -ti test --rm --image busybox -- sh

/ # cat /etc/resolv.conf 
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5

Setting the DNS of all nodes to the kube-dns address 10.96.0.10 fixes it.
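A minimal sketch of that change on a node (assuming 10.96.0.10 is your kube-dns Service ClusterIP; note that DHCP/NetworkManager may rewrite /etc/resolv.conf, so make the change persistent in your distro's usual way):

# on every node, put the cluster DNS into the resolver config
[ec2-user@worker1 ~]$ sudo vi /etc/resolv.conf
nameserver 10.96.0.10

# verify that the node itself can now resolve the service name
[ec2-user@worker1 ~]$ nslookup nfs-server.nfs.svc.cluster.local 10.96.0.10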

Cluster problems

[ec2-user@k8s nfs]$ kubectl get pods -A 
...
ingress-nginx          ingress-nginx-controller-755447bb4d-55hjz    0/1     CrashLoopBackOff       32 (2m19s ago)   17d
kube-system            coredns-64897985d-4d5rx                      0/1     CrashLoopBackOff       40 (4m30s ago)   33d
kube-system            coredns-64897985d-m9p9q                      0/1     CrashLoopBackOff       40 (4m15s ago)   33d
...
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-rj8rl   0/1     CrashLoopBackOff       33 (116s ago)    17d
kubernetes-dashboard   kubernetes-dashboard-fb8648fd9-22krh         0/1     CrashLoopBackOff       32 (3m35s ago)   17d

Clean up all the containers, then reboot the server (or restart the kubelet):

[ec2-user@k8s nfs]$ docker ps -a |  awk '{print $1}' | while read i ; do docker rm -f $i ; done 
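If a full reboot is not necessary, restarting the kubelet alone is usually enough (assuming a systemd-managed kubelet):

[ec2-user@k8s nfs]$ sudo systemctl restart kubelet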

–END

Using NFS for k8s Shared Storage

Application data in containers needs to be persisted. local/hostPath volumes work for quick tests, but a shared storage backend is still needed.

Start with the simplest option: an NFS export/volume.

Install an NFS server on AWS EC2

# Install the NFS client on all nodes
#yum -y install nfs-utils
#systemctl start nfs && systemctl enable nfs


[ec2-user@k8s ~]$ sudo yum install nfs-utils 
Loaded plugins: langpacks, priorities, update-motd
amzn2-core                                                                                                                     | 3.7 kB  00:00:00     
Package 1:nfs-utils-1.3.0-0.54.amzn2.0.2.x86_64 already installed and latest version
Nothing to do

[ec2-user@k8s ~]$ sudo mkdir /backup
[ec2-user@k8s ~]$ sudo chmod -R 755 /backup
[ec2-user@k8s ~]$ sudo chown nfsnobody:nfsnobody /backup

[ec2-user@k8s ~]$ sudo vi /etc/exports
[ec2-user@k8s ~]$ cat /etc/exports    
/backup 192.168.191.0/24(rw,sync,no_root_squash,no_all_squash)

# /k8s-fs *(rw,sync,no_root_squash,no_all_squash)

[ec2-user@k8s ~]$ sudo service nfs-server restart 
Redirecting to /bin/systemctl restart nfs-server.service
[ec2-user@k8s ~]$ 
[ec2-user@k8s ~]$ sudo exportfs
/backup         192.168.191.0/24
[ec2-user@k8s ~]$ sudo exportfs -arv 
exporting 192.168.191.0/24:/backup

[ec2-user@k8s ~]$ rpcinfo -p localhost
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  56847  status
    100024    1   tcp  60971  status
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  47545  nlockmgr
    100021    3   udp  47545  nlockmgr
    100021    4   udp  47545  nlockmgr
    100021    1   tcp  40703  nlockmgr
    100021    3   tcp  40703  nlockmgr
    100021    4   tcp  40703  nlockmgr
[ec2-user@k8s ~]$ showmount -e 192.168.191.131
Export list for 192.168.191.131:
/backup 192.168.191.0/24

You can also start an NFS server via Docker:

[ec2-user@k8s ~]$ sudo mkdir -p /data/kubernetes-volumes
[ec2-user@k8s ~]$ docker run --privileged -itd --name nfs -p 2049:2049 -e SHARED_DIRECTORY=/data -v /data/kubernetes-volumes:/data itsthenetwork/nfs-server-alpine:12 
f84b70dcca6bd5abb275fbee50fd161d8befdd709ce6523b3a514f04b7af8677

[ec2-user@k8s ~]$ docker logs f84b70dcca6bd5abb2 
Writing SHARED_DIRECTORY to /etc/exports file
The PERMITTED environment variable is unset or null, defaulting to '*'.
This means any client can mount.
The READ_ONLY environment variable is unset or null, defaulting to 'rw'.
Clients have read/write access.
The SYNC environment variable is unset or null, defaulting to 'async' mode.
Writes will not be immediately written to disk.
Displaying /etc/exports contents:
/data *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)

Starting rpcbind...
Displaying rpcbind status...
   program version netid     address                service    owner
    100000    4    tcp6      ::.0.111               -          superuser
    100000    3    tcp6      ::.0.111               -          superuser
    100000    4    udp6      ::.0.111               -          superuser
    100000    3    udp6      ::.0.111               -          superuser
    100000    4    tcp       0.0.0.0.0.111          -          superuser
    100000    3    tcp       0.0.0.0.0.111          -          superuser
    100000    2    tcp       0.0.0.0.0.111          -          superuser
    100000    4    udp       0.0.0.0.0.111          -          superuser
    100000    3    udp       0.0.0.0.0.111          -          superuser
    100000    2    udp       0.0.0.0.0.111          -          superuser
    100000    4    local     /var/run/rpcbind.sock  -          superuser
    100000    3    local     /var/run/rpcbind.sock  -          superuser
Starting NFS in the background...
rpc.nfsd: knfsd is currently down
rpc.nfsd: Writing version string to kernel: -2 -3 +4 +4.1 +4.2
rpc.nfsd: Created AF_INET TCP socket.
rpc.nfsd: Created AF_INET6 TCP socket.
Exporting File System...
exporting *:/data
/data           <world>
Starting Mountd in the background...These
Startup successful.

[ec2-user@k8s ~]$ sudo mount -v -o vers=4,loud 127.0.0.1:/ nfsmnt
mount.nfs: timeout set for Thu Apr 14 08:26:48 2022
mount.nfs: trying text-based options 'vers=4.1,addr=127.0.0.1,clientaddr=127.0.0.1'

[ec2-user@k8s ~]$ df -h | grep nfsmnt
127.0.0.1:/      25G  9.8G   16G  39% /home/ec2-user/nfsmnt

[ec2-user@k8s ~]$ touch nfsmnt/$(hostname).txt
[ec2-user@k8s ~]$ ls -l nfsmnt/
total 0
-rw-rw-r-- 1 ec2-user ec2-user 0 Apr 14 08:25 k8s.txt
[ec2-user@k8s ~]$ ls -l /data/kubernetes-volumes/
total 0
-rw-rw-r-- 1 ec2-user ec2-user 0 Apr 14 08:25 k8s.txt
[ec2-user@k8s ~]$ 

# vi /etc/fstab
# 192.168.0.4:/   /mnt   nfs4    _netdev,auto  0  0

### pod
# kubectl create -f nfs-server.yml
apiVersion: apps/v1       # <- extensions/v1beta1 is no longer served by current clusters
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1           # <- no more replicas
  selector:             # <- apps/v1 requires an explicit selector
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      nodeSelector:     # <- use selector to fix nfs-server on k8s2.zhangqiaoc.com
        kubernetes.io/hostname: k8s2.zhangqiaoc.com
      containers:
      - name: nfs-server
        image: itsthenetwork/nfs-server-alpine:latest
        volumeMounts:
        - name: nfs-storage
          mountPath: /nfsshare
        env:
        - name: SHARED_DIRECTORY
          value: "/nfsshare"
        ports:
        - name: nfs  
          containerPort: 2049   # <- export port
        securityContext:
          privileged: true      # <- privileged mode is mandatory.
      volumes:
      - name: nfs-storage  
        hostPath:               # <- the folder on the host machine.
          path: /root/fileshare
# kubectl expose deployment nfs-server --type=ClusterIP
# kubectl get svc

# yum install -y nfs-utils
# mkdir /root/nfsmnt
# mount -v 10.101.117.226:/ /root/nfsmnt

client

# Install nfs-utils and rpcbind on all worker nodes
[ec2-user@worker1 ~]$ sudo yum install nfs-utils 
Loaded plugins: langpacks, priorities, update-motd
amzn2-core                                                                                                                                        | 3.7 kB  00:00:00     
Package 1:nfs-utils-1.3.0-0.54.amzn2.0.2.x86_64 already installed and latest version
Nothing to do

[ec2-user@worker1 ~]$ sudo systemctl status nfs
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[ec2-user@worker1 ~]$ sudo systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-04-13 20:44:34 CST; 1h 51min ago
  Process: 6979 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 7025 (rpcbind)
    Tasks: 1
   Memory: 2.1M
   CGroup: /system.slice/rpcbind.service
           └─7025 /sbin/rpcbind -w

Apr 13 20:44:34 worker1 systemd[1]: Starting RPC bind service...
Apr 13 20:44:34 worker1 systemd[1]: Started RPC bind service.


[ec2-user@worker1 ~]$ sudo mkdir -p /data
[ec2-user@worker1 ~]$ sudo chmod 777 /data 
[ec2-user@worker1 ~]$ sudo mount -t nfs 192.168.191.131:/backup /data
[ec2-user@worker1 ~]$ df -h | grep 192.168.191.131
192.168.191.131:/backup   25G  9.6G   16G  39% /data


# vi /etc/fstab
# 172.17.30.22:/backup /data nfs defaults 0 0

Using NFS in k8s

Mount NFS directly in the container

[ec2-user@k8s ~]$ cat nginx-1.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        nfs:
          path: /backup
          server: 192.168.191.131
        
[ec2-user@k8s ~]$ kubectl apply -f nginx-1.yml

[ec2-user@k8s ~]$ kubectl get all 
NAME                                   READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-67dcb957c-g2h8x   1/1     Running   0          2m50s
pod/nginx-deployment-67dcb957c-gfv28   1/1     Running   0          2m50s
pod/nginx-deployment-67dcb957c-rqwjs   1/1     Running   0          2m50s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   27d

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           2m50s

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-67dcb957c   3         3         3       2m50s

[ec2-user@k8s ~]$ kubectl exec -ti pod/nginx-deployment-67dcb957c-g2h8x -- bash 
root@nginx-deployment-67dcb957c-g2h8x:/# echo $(hostname) >/usr/share/nginx/html/1.txt

root@nginx-deployment-67dcb957c-g2h8x:/# mount | grep 192
192.168.191.131:/backup on /usr/share/nginx/html type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.191.132,local_lock=none,addr=192.168.191.131)
root@nginx-deployment-67dcb957c-g2h8x:/# 


# Check the file content on the server side
[ec2-user@k8s ~]$ cat /backup/1.txt 
nginx-deployment-67dcb957c-g2h8x


[ec2-user@k8s ~]$ kubectl delete -f nginx-1.yml 
deployment.apps "nginx-deployment" deleted

pvc

[ec2-user@k8s ~]$ vi pv-nfs.yaml 
[ec2-user@k8s ~]$ cat pv-nfs.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany 
  nfs:
    path: /backup
    server: 192.168.191.131

[ec2-user@k8s ~]$ kubectl apply -f pv-nfs.yaml 
persistentvolume/pv-nfs created
[ec2-user@k8s ~]$ kubectl get pv 
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-nfs   10Gi       RWX            Retain           Available                                   5s

[ec2-user@k8s ~]$ vi pvc-nfs.yaml 
[ec2-user@k8s ~]$ cat pvc-nfs.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
[ec2-user@k8s ~]$ kubectl apply -f pvc-nfs.yaml 
persistentvolumeclaim/pvc-nfs created
[ec2-user@k8s ~]$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    pv-nfs   10Gi       RWX                           7s
[ec2-user@k8s ~]$ kubectl get pv 
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pv-nfs   10Gi       RWX            Retain           Bound    default/pvc-nfs                           79s

[ec2-user@k8s ~]$ vi dp-pvc.yaml
[ec2-user@k8s ~]$ cat dp-pvc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs

[ec2-user@k8s ~]$ kubectl apply -f dp-pvc.yaml 
deployment.apps/busybox created
[ec2-user@k8s ~]$ 


[ec2-user@k8s ~]$ kubectl get all 
NAME                           READY   STATUS    RESTARTS   AGE
pod/busybox-6b99c495c9-qnvlp   1/1     Running   0          47s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   27d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/busybox   1/1     1            1           47s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/busybox-6b99c495c9   1         1         1       47s

# View the data already on the NFS share
[ec2-user@k8s ~]$ kubectl exec -ti busybox-6b99c495c9-qnvlp -- cat /data/1.txt 
nginx-deployment-67dcb957c-g2h8x

Test subPathExpr:

[ec2-user@k8s ~]$ kubectl delete -f dp-pvc.yaml 
deployment.apps "busybox" deleted
[ec2-user@k8s ~]$ 
[ec2-user@k8s ~]$ vi dp-pvc.yaml 
[ec2-user@k8s ~]$ cat dp-pvc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        volumeMounts:
        - name: data
          mountPath: /data
          subPathExpr: $(POD_NAME)
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: pvc-nfs

[ec2-user@k8s ~]$ kubectl apply -f dp-pvc.yaml 
deployment.apps/busybox created

[ec2-user@k8s ~]$ kubectl get all 
NAME                           READY   STATUS    RESTARTS   AGE
pod/busybox-5497486bf5-csr6q   1/1     Running   0          7s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   27d

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/busybox   1/1     1            1           7s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/busybox-5497486bf5   1         1         1       7s

[ec2-user@k8s ~]$ kubectl exec -ti pod/busybox-5497486bf5-csr6q -- sh 
/ # ls /data
/ # echo $(hostname) > /data/pvc.txt
/ # exit


# Check the data in the server-side directory
[ec2-user@k8s ~]$ ll /backup/
total 4
-rw-r--r-- 1 root root 33 Apr 14 00:37 1.txt
drwxr-xr-x 2 root root 21 Apr 14 00:51 busybox-5497486bf5-csr6q
[ec2-user@k8s ~]$ cat /backup/busybox-5497486bf5-csr6q/pvc.txt   
busybox-5497486bf5-csr6q

Change replicas to 2 and try again:
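The only change in dp-pvc.yaml is the replica count, e.g.:

spec:
  replicas: 2    # was 1; each pod still gets its own subPathExpr subdirectory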


[ec2-user@k8s ~]$ kubectl apply -f dp-pvc.yaml 

[ec2-user@k8s ~]$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
busybox-5497486bf5-fkzls   1/1     Running   0          3m8s
busybox-5497486bf5-rv7k7   1/1     Running   0          3m8s

[ec2-user@k8s ~]$ kubectl exec busybox-5497486bf5-fkzls -- sh -c 'echo $(hostname) >/data/$(hostname).txt'
[ec2-user@k8s ~]$ kubectl exec busybox-5497486bf5-rv7k7 -- sh -c 'echo $(hostname) >/data/$(hostname).txt'                        


# Check the directory structure on the server
[ec2-user@k8s ~]$ ll -R /backup/
/backup/:
total 4
-rw-r--r-- 1 root root 33 Apr 14 00:37 1.txt
drwxr-xr-x 2 root root 21 Apr 14 00:51 busybox-5497486bf5-csr6q
drwxr-xr-x 2 root root 42 Apr 14 01:20 busybox-5497486bf5-fkzls
drwxr-xr-x 2 root root 42 Apr 14 01:20 busybox-5497486bf5-rv7k7

/backup/busybox-5497486bf5-csr6q:
total 4
-rw-r--r-- 1 root root 25 Apr 14 00:51 pvc.txt

/backup/busybox-5497486bf5-fkzls:
total 4
-rw-r--r-- 1 root root 25 Apr 14 01:20 busybox-5497486bf5-fkzls.txt

/backup/busybox-5497486bf5-rv7k7:
total 4
-rw-r--r-- 1 root root 25 Apr 14 01:20 busybox-5497486bf5-rv7k7.txt
[ec2-user@k8s ~]$ 

NFS Subdir External Provisioner

The NFS subdir external provisioner uses an existing NFS server to dynamically provision Kubernetes Persistent Volumes via Persistent Volume Claims. By default each persistent volume is provisioned as ${namespace}-${pvcName}-${pvName}. To use it you must already have an NFS server.

K8S external NFS provisioners fall into two categories, depending on how they work (acting as an NFS server or as an NFS client):

1. nfs-client:

This is the type demonstrated next. It uses K8S's built-in NFS driver to mount the remote NFS server into a local directory, then registers itself as a storage provider associated with a storage class. When a user creates a PVC to request a PV, the provisioner compares the PVC's requirements with its own properties and, once they match, creates a subdirectory for the PV inside the locally mounted NFS directory, providing dynamic storage to Pods.

2. nfs: Unlike nfs-client, this driver does not use k8s's NFS support to mount a remote NFS share locally and carve it up. Instead, it maps local files directly into the container and uses ganesha.nfsd inside the container to serve NFS. Each time a PV is created, it creates the corresponding folder directly under the local NFS root directory and exports that subdirectory.

Next, let's walk through an nfs-client example to get a hands-on feel for it.

Deploying the external NFS provisioner: an example

Here we deploy the nfs-client provisioner as a Deployment in the K8S cluster, which then provides storage services.

1. Deploy nfs-client-provisioner

The PROVISIONER_NAME environment variable, the NFS server address, and the exported NFS path all need to be set; the key parts of the deployment YAML look roughly like this:
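A sketch of those key parts (the provisioner name example.com/nfs is illustrative; the NFS server/path values match the /backup export set up earlier; the real deploy/deployment.yaml from the repo is edited in the steps further below):

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs          # illustrative; must match the StorageClass provisioner
        - name: NFS_SERVER
          value: 192.168.191.131
        - name: NFS_PATH
          value: /backup
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.191.131
          path: /backup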

2. Create the Storage Class

In the storage class definition, note that the provisioner field must equal the value of the PROVISIONER_NAME environment variable passed to the driver; otherwise the driver does not know how to bind to the storage class.
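A sketch of such a StorageClass (wise2c-nfs-storage is the name used in this example; the provisioner value must match the environment variable above):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wise2c-nfs-storage
provisioner: example.com/nfs       # same illustrative value as PROVISIONER_NAME above
parameters:
  archiveOnDelete: "false"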

3. Create the PVC

Here the corresponding storage-class name is specified as wise2c-nfs-storage, as below:
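A sketch of the claim (the requested size is illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: henry-claim
spec:
  storageClassName: wise2c-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi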

4. Create the pod

Have the pod use the PVC we just created, henry-claim:
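A sketch of such a pod (the pod name and the write command are illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: henry-pod
spec:
  containers:
  - name: busybox
    image: busybox
    # write a file onto the dynamically provisioned NFS volume
    command: ['sh', '-c', 'echo hello > /mnt/hello.txt && sleep 3600']
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: henry-claim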

Once that's done, attach to the pod and do some file reads/writes to confirm that the pod's /mnt is indeed backed by NFS.

The script from the official docs:

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml

Steps:

[ec2-user@k8s ~]$ git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/
[ec2-user@k8s ~]$ cd nfs-subdir-external-provisioner/

[ec2-user@k8s nfs-subdir-external-provisioner]$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
[ec2-user@k8s nfs-subdir-external-provisioner]$ NAMESPACE=${NS:-default}
[ec2-user@k8s nfs-subdir-external-provisioner]$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl create -f deploy/rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created


$ vi deploy/deployment.yaml

            - name: NFS_SERVER
              value: 192.168.191.131
            - name: NFS_PATH
              value: /backup
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.191.131
            path: /backup

$ vi deploy/class.yaml

parameters:
  archiveOnDelete: "false"
#Specifies a template for creating a directory path via PVC metadata's such as labels, annotations, name or namespace. To specify metadata use ${.PVC.<metadata>}. Example: If folder should be named like <pvc-namespace>-<pvc-name>, use ${.PVC.namespace}-${.PVC.name} as pathPattern.
#  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # waits for nfs.io/storage-path annotation, if not specified will accept as empty string.
#  onDelete: delete


# Pull the image first: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl apply -f deploy/deployment.yaml 
deployment.apps/nfs-client-provisioner created

[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl apply -f deploy/class.yaml 
storageclass.storage.k8s.io/nfs-client created
[ec2-user@k8s nfs-subdir-external-provisioner]$

Test:

# PVC content
# https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/deploy/test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml
persistentvolumeclaim/test-claim created
pod/test-pod created

# kubectl delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

The test pod writes a file on the shared filesystem via touch /mnt/SUCCESS:
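deploy/test-pod.yaml is roughly the following (a sketch; the image used in the repo may differ). The directory it creates on the server is shown right after:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  restartPolicy: "Never"
  containers:
  - name: test-pod
    image: busybox
    # touch SUCCESS on the provisioned volume, then exit
    command: ['sh', '-c', 'touch /mnt/SUCCESS && exit 0 || exit 1']
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim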

[ec2-user@k8s nfs-subdir-external-provisioner]$ ll /backup/default-test-claim-pvc-9857153a-6c2b-42d7-b464-aa5fc2acbf90/
total 0
-rw-r--r-- 1 root root 0 Apr 14 02:14 SUCCESS

NFS Ganesha server and external provisioner

This installs an NFS server directly inside the k8s cluster. It doesn't feel as convenient to manage as installing NFS directly on the OS, so I've shelved it for now.

–END

Xiaomi R4a Install Padavan

I recently switched to a Xiaomi router and wanted to log in over SSH to run some small cron jobs on it. Unfortunately the device binding (required for SSH) wouldn't go through, so I flashed the firmware instead.

At first I followed articles on CSDN to flash OpenWrt, but none of that worked, so in the end I switched to Padavan (the "Laomaozi" build).

References

Steps

Get root access

Use WSL Ubuntu for access; python3 is already installed.

winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ python3 --version
Python 3.8.5

# https://github.com/acecilia/OpenWRTInvasion
winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ ls
Dockerfile  readme                                     requirements.txt  set_english.py
extras      README.md                                  script.sh         speedtest_urls_template.xml
firmwares   remote_command_execution_vulnerability.py  script_tools      tcp_file_server.py

winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ sudo apt install python3-pip

winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ pip3 install -r requirements.txt
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from -r requirements.txt (line 1)) (2.22.0)

# Getting the stok: after logging in to the web UI, the stok parameter is in the browser's address bar.
# See the referenced articles for details
winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ python3 remote_command_execution_vulnerability.py
Router IP address [press enter for using the default 'miwifi.com']:
Enter router admin password: __xxx__
There two options to provide the files needed for invasion:
   1. Use a local TCP file server runing on random port to provide files in local directory `script_tools`.
   2. Download needed files from remote github repository. (choose this option only if github is accessable inside router device.)
Which option do you prefer? (default: 1)
****************
router_ip_address: miwifi.com
stok: __xxx__
file provider: local file server
****************
start uploading config file...
start exec command...
local file server is runing on 0.0.0.0:1135. root='script_tools'
local file server is getting 'busybox-mipsel' for 192.168.31.1.
local file server is getting 'dropbearStaticMipsel.tar.bz2' for 192.168.31.1.
done! Now you can connect to the router using several options: (user: root, password: root)
* telnet miwifi.com
* ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -c 3des-cbc -o UserKnownHostsFile=/dev/null root@miwifi.com
* ftp: using a program like cyberduck

winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -c 3des-cbc -o UserKnownHostsFile=/dev/null root@miwifi.com
The authenticity of host 'miwifi.com (192.168.31.1)' can't be established.
RSA key fingerprint is SHA256:AT91yqVuqPnmOO5wmke6V0Hl67GKXdkb48W/FU3WfEM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'miwifi.com,192.168.31.1' (RSA) to the list of known hosts.
root@miwifi.com's password:

BusyBox v1.19.4 (2021-09-30 03:16:53 UTC) built-in shell (ash)
Enter 'help' for a list of built-in commands.

 -----------------------------------------------------
       Welcome to XiaoQiang!
 -----------------------------------------------------
  $$$$$$\  $$$$$$$\  $$$$$$$$\      $$\      $$\        $$$$$$\  $$\   $$\
 $$  __$$\ $$  __$$\ $$  _____|     $$ |     $$ |      $$  __$$\ $$ | $$  |
 $$ /  $$ |$$ |  $$ |$$ |           $$ |     $$ |      $$ /  $$ |$$ |$$  /
 $$$$$$$$ |$$$$$$$  |$$$$$\         $$ |     $$ |      $$ |  $$ |$$$$$  /
 $$  __$$ |$$  __$$< $$  __|        $$ |     $$ |      $$ |  $$ |$$  $$<
 $$ |  $$ |$$ |  $$ |$$ |           $$ |     $$ |      $$ |  $$ |$$ |\$$\
 $$ |  $$ |$$ |  $$ |$$$$$$$$\       $$$$$$$$$  |       $$$$$$  |$$ | \$$\
 \__|  \__|\__|  \__|\________|      \_________/        \______/ \__|  \__|


root@XiaoQiang:~# 

Install breed

Log in to the router with WinSCP using the FTP protocol, address miwifi.com, account root. Upload the breed-mt7621-pbr-m1.bin file into the tmp folder, then run the following commands:

root@XiaoQiang:~# cd /tmp
root@XiaoQiang:/tmp# ls breed-mt7621-pbr-*
breed-mt7621-pbr-m1.bin

root@XiaoQiang:/tmp# mtd -r write breed-mt7621-pbr-m1.bin Bootloader
Unlocking Bootloader ...

Writing from breed-mt7621-pbr-m1.bin to Bootloader ...
Rebooting ...

Connect the network cable, wait a while (one or two minutes), then visit http://192.168.1.1/

Flash Padavan

Download the R4A build of Padavan: https://opt.cn2qq.com/padavan/

MI-R4A_3.4.3.9-099.trx

Visit 192.168.1.1, back up the EEPROM and firmware (important), then select the firmware option, upload the Xiaomi 4A trx firmware file, and complete the firmware update process.

Do not cut the router's power during the update! When the update finishes the page does not refresh automatically; try opening the router configuration page yourself.

After flashing, allow some extra time here and be patient.

Access

http://192.168.123.1/
admin/admin

Enable SSH public-key login in the [Administration - Services] tab.

[MI-R4A /home/root]# uname -a
Linux MI-R4A 3.4.113 #3 SMP Sun Apr 3 14:26:03 CST 2022 mips GNU/Linux
[MI-R4A /home/root]# uname -r
3.4.113
[MI-R4A /home/root]# uname -m
mips

#CPU/Memory information
$ cat /proc/cpuinfo

#Version
$ cat /proc/version

#SCSI/Sata devices
$ cat /proc/scsi/scsi

#Partitions
$ cat /proc/partitions

# Install opkg: after entering the shell, install it from the root directory by running:
[MI-R4A /home/root]# opkg.sh

# Update:
[MI-R4A /home/root]# opkg update
Downloading http://bin.entware.net/mipselsf-k3.4/Packages.gz
Updated list of available packages in /opt/var/opkg-lists/entware

[MI-R4A /home/root]# opkg install jq
Installing jq (1.6-2) to root...
Downloading http://bin.entware.net/mipselsf-k3.4/jq_1.6-2_mipsel-3.4.ipk
Configuring jq.

# opkg list

–END

Minikube Guide

As the name suggests, minikube quickly sets up a k8s cluster, which is handy for newcomers and for application developers who need something to debug against.

Note: if you have enough resources, setting up a kubeadm cluster is the better option.

Official documentation

Download

# Download from outside the firewall
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

# Upload
[ec2-user@amazonlinux ~]$ rz
rz waiting to receive.
Starting zmodem transfer.  Press Ctrl+C to cancel.
Transferring minikube-linux-amd64...
  100%   70948 KB    35474 KB/sec    00:00:02       0 Errors  

[ec2-user@amazonlinux ~]$ sudo install minikube-linux-amd64 /usr/local/bin/minikube

Install Docker

[ec2-user@amazonlinux ~]$ sudo amazon-linux-extras install docker
Installing docker
Loaded plugins: langpacks, priorities, update-motd
Cleaning repos: amzn2-core amzn2extra-docker
12 metadata files removed
4 sqlite files removed
0 metadata files removed
Loaded plugins: langpacks, priorities, update-motd
amzn2-core                                                                                                                                        | 3.7 kB  00:00:00     
amzn2extra-docker                                                                                                                                 | 3.0 kB  00:00:00     
(1/5): amzn2-core/2/x86_64/group_gz                                                                                                               | 2.5 kB  00:00:01     
(2/5): amzn2-core/2/x86_64/updateinfo                                                                                                             | 452 kB  00:00:01     
(3/5): amzn2extra-docker/2/x86_64/updateinfo                                                                                                      | 5.9 kB  00:00:00     
(4/5): amzn2extra-docker/2/x86_64/primary_db                                                                                                      |  86 kB  00:00:01     
(5/5): amzn2-core/2/x86_64/primary_db                                                                                                             |  60 MB  00:00:04     
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 0:20.10.7-5.amzn2 will be installed
--> Processing Dependency: runc >= 1.0.0 for package: docker-20.10.7-5.amzn2.x86_64
--> Processing Dependency: libcgroup >= 0.40.rc1-5.15 for package: docker-20.10.7-5.amzn2.x86_64
--> Processing Dependency: containerd >= 1.3.2 for package: docker-20.10.7-5.amzn2.x86_64
--> Processing Dependency: pigz for package: docker-20.10.7-5.amzn2.x86_64
--> Running transaction check
---> Package containerd.x86_64 0:1.4.6-8.amzn2 will be installed
---> Package libcgroup.x86_64 0:0.41-21.amzn2 will be installed
---> Package pigz.x86_64 0:2.3.4-1.amzn2.0.1 will be installed
---> Package runc.x86_64 0:1.0.0-2.amzn2 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================================================================================
 Package                               Arch                              Version                                      Repository                                    Size
=========================================================================================================================================================================
Installing:
 docker                                x86_64                            20.10.7-5.amzn2                              amzn2extra-docker                             42 M
Installing for dependencies:
 containerd                            x86_64                            1.4.6-8.amzn2                                amzn2extra-docker                             24 M
 libcgroup                             x86_64                            0.41-21.amzn2                                amzn2-core                                    66 k
 pigz                                  x86_64                            2.3.4-1.amzn2.0.1                            amzn2-core                                    81 k
 runc                                  x86_64                            1.0.0-2.amzn2                                amzn2extra-docker                            3.3 M

Transaction Summary
=========================================================================================================================================================================
Install  1 Package (+4 Dependent packages)

Total download size: 69 M
Installed size: 285 M
Is this ok [y/d/N]: t
Is this ok [y/d/N]: y
Downloading packages:
(1/5): libcgroup-0.41-21.amzn2.x86_64.rpm                                                                                                         |  66 kB  00:00:01     
(2/5): pigz-2.3.4-1.amzn2.0.1.x86_64.rpm                                                                                                          |  81 kB  00:00:01     
(3/5): docker-20.10.7-5.amzn2.x86_64.rpm                                                                                                          |  42 MB  00:00:07     
(4/5): runc-1.0.0-2.amzn2.x86_64.rpm                                                                                                              | 3.3 MB  00:00:00     
(5/5): containerd-1.4.6-8.amzn2.x86_64.rpm                                                                                                        |  24 MB  00:00:12     
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                    5.5 MB/s |  69 MB  00:00:12     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : runc-1.0.0-2.amzn2.x86_64                                                                                                                             1/5 
  Installing : containerd-1.4.6-8.amzn2.x86_64                                                                                                                       2/5 
  Installing : libcgroup-0.41-21.amzn2.x86_64                                                                                                                        3/5 
  Installing : pigz-2.3.4-1.amzn2.0.1.x86_64                                                                                                                         4/5 
  Installing : docker-20.10.7-5.amzn2.x86_64                                                                                                                         5/5 
  Verifying  : docker-20.10.7-5.amzn2.x86_64                                                                                                                         1/5 
  Verifying  : containerd-1.4.6-8.amzn2.x86_64                                                                                                                       2/5 
  Verifying  : runc-1.0.0-2.amzn2.x86_64                                                                                                                             3/5 
  Verifying  : pigz-2.3.4-1.amzn2.0.1.x86_64                                                                                                                         4/5 
  Verifying  : libcgroup-0.41-21.amzn2.x86_64                                                                                                                        5/5 

Installed:
  docker.x86_64 0:20.10.7-5.amzn2                                                                                                                                        

Dependency Installed:
  containerd.x86_64 0:1.4.6-8.amzn2           libcgroup.x86_64 0:0.41-21.amzn2           pigz.x86_64 0:2.3.4-1.amzn2.0.1           runc.x86_64 0:1.0.0-2.amzn2          

Complete!
  0  ansible2                 available    \
        [ =2.4.2  =2.4.6  =2.8  =stable ]
  2  httpd_modules            available    [ =1.0  =stable ]
  3  memcached1.5             available    \
        [ =1.5.1  =1.5.16  =1.5.17 ]
  5  postgresql9.6            available    \
        [ =9.6.6  =9.6.8  =stable ]
  6  postgresql10             available    [ =10  =stable ]
  9  R3.4                     available    [ =3.4.3  =stable ]
 10  rust1                    available    \
        [ =1.22.1  =1.26.0  =1.26.1  =1.27.2  =1.31.0  =1.38.0
          =stable ]
 11  vim                      available    [ =8.0  =stable ]
 18  libreoffice              available    \
        [ =5.0.6.2_15  =5.3.6.1  =stable ]
 19  gimp                     available    [ =2.8.22 ]
 20  docker=latest            enabled      \
        [ =17.12.1  =18.03.1  =18.06.1  =18.09.9  =stable ]
 21  mate-desktop1.x          available    \
        [ =1.19.0  =1.20.0  =stable ]
 22  GraphicsMagick1.3        available    \
        [ =1.3.29  =1.3.32  =1.3.34  =stable ]
 23  tomcat8.5                available    \
        [ =8.5.31  =8.5.32  =8.5.38  =8.5.40  =8.5.42  =8.5.50
          =stable ]
 24  epel                     available    [ =7.11  =stable ]
 25  testing                  available    [ =1.0  =stable ]
 26  ecs                      available    [ =stable ]
 27  corretto8                available    \
        [ =1.8.0_192  =1.8.0_202  =1.8.0_212  =1.8.0_222  =1.8.0_232
          =1.8.0_242  =stable ]
 28  firecracker              available    [ =0.11  =stable ]
 29  golang1.11               available    \
        [ =1.11.3  =1.11.11  =1.11.13  =stable ]
 30  squid4                   available    [ =4  =stable ]
 32  lustre2.10               available    \
        [ =2.10.5  =2.10.8  =stable ]
 33  java-openjdk11           available    [ =11  =stable ]
 34  lynis                    available    [ =stable ]
 35  kernel-ng                available    [ =stable ]
 36  BCC                      available    [ =0.x  =stable ]
 37  mono                     available    [ =5.x  =stable ]
 38  nginx1                   available    [ =stable ]
 39  ruby2.6                  available    [ =2.6  =stable ]
 40  mock                     available    [ =stable ]
 41  postgresql11             available    [ =11  =stable ]
 42  php7.4                   available    [ =stable ]
 43  livepatch                available    [ =stable ]
 44  python3.8                available    [ =stable ]
 45  haproxy2                 available    [ =stable ]
 46  collectd                 available    [ =stable ]
 47  aws-nitro-enclaves-cli   available    [ =stable ]
 48  R4                       available    [ =stable ]
 49  kernel-5.4               available    [ =stable ]
 50  selinux-ng               available    [ =stable ]
 51  php8.0                   available    [ =stable ]
 52  tomcat9                  available    [ =stable ]
 53  unbound1.13              available    [ =stable ]
 54  mariadb10.5              available    [ =stable ]
 55  kernel-5.10              available    [ =stable ]
 56  redis6                   available    [ =stable ]
 57  ruby3.0                  available    [ =stable ]
 58  postgresql12             available    [ =stable ]
 59  postgresql13             available    [ =stable ]
 60  mock2                    available    [ =stable ]
 61  dnsmasq2.85              available    [ =stable ]

[ec2-user@amazonlinux ~]$ sudo service docker start
Redirecting to /bin/systemctl start docker.service
[ec2-user@amazonlinux ~]$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[ec2-user@amazonlinux ~]$ sudo usermod -a -G docker ec2-user

[ec2-user@amazonlinux ~]$ exit
logout

[ec2-user@amazonlinux ~]$ docker info 
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.7
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc version: 84113eef6fc27af1b01b3181f31bbaf708715301
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.14.268-205.500.amzn2.x86_64
 Operating System: Amazon Linux 2
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.828GiB
 Name: amazonlinux.onprem
 ID: MXVZ:LQK7:BVKI:WECH:XNBN:QJUK:IXYU:FADA:4EYI:JOHA:VS3R:LNLX
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false


Start minikube

[ec2-user@amazonlinux ~]$ minikube start
* minikube v1.25.2 on Amazon 2
* Automatically selected the docker driver. Other choices: none, ssh
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Downloading Kubernetes v1.23.3 preload ...
    > preloaded-images-k8s-v17-v1...: 505.68 MiB / 505.68 MiB  100.00% 14.20 Mi
    > index.docker.io/kicbase/sta...: 379.06 MiB / 379.06 MiB  100.00% 2.11 MiB
! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.30, but successfully downloaded docker.io/kicbase/stable:v0.0.30 as a fallback image
* Creating docker container (CPUs=2, Memory=2200MB) ...
! This container is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
* Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
  - kubelet.housekeeping-interval=5m
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Interact with your cluster

[ec2-user@amazonlinux ~]$ minikube kubectl -- get pods -A 
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 44.43 MiB / 44.43 MiB [-------------] 100.00% 17.41 MiB p/s 2.8s
NAMESPACE     NAME                               READY   STATUS    RESTARTS       AGE
kube-system   coredns-64897985d-cm6h7            1/1     Running   0              2m3s
kube-system   etcd-minikube                      1/1     Running   0              2m15s
kube-system   kube-apiserver-minikube            1/1     Running   0              2m15s
kube-system   kube-controller-manager-minikube   1/1     Running   0              2m15s
kube-system   kube-proxy-s2sf4                   1/1     Running   0              2m3s
kube-system   kube-scheduler-minikube            1/1     Running   0              2m15s
kube-system   storage-provisioner                1/1     Running   1 (101s ago)   2m14s

[ec2-user@amazonlinux ~]$ alias kubectl="minikube kubectl --"
[ec2-user@amazonlinux ~]$ kubectl get pods -A 
NAMESPACE     NAME                               READY   STATUS    RESTARTS        AGE
kube-system   coredns-64897985d-cm6h7            1/1     Running   0               5m5s
kube-system   etcd-minikube                      1/1     Running   0               5m17s
kube-system   kube-apiserver-minikube            1/1     Running   0               5m17s
kube-system   kube-controller-manager-minikube   1/1     Running   0               5m17s
kube-system   kube-proxy-s2sf4                   1/1     Running   0               5m5s
kube-system   kube-scheduler-minikube            1/1     Running   0               5m17s
kube-system   storage-provisioner                1/1     Running   1 (4m43s ago)   5m16s

# view the kubeconfig
[ec2-user@amazonlinux ~]$ cat .kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/ec2-user/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 03 Apr 2022 16:43:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.25.2
      name: cluster_info
    server: https://192.168.49.2:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Sun, 03 Apr 2022 16:43:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.25.2
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/ec2-user/.minikube/profiles/minikube/client.crt
    client-key: /home/ec2-user/.minikube/profiles/minikube/client.key
[ec2-user@amazonlinux ~]$ 

Or create a kubectl symlink:

# https://minikube.sigs.k8s.io/docs/handbook/kubectl/
# https://kubernetes.io/docs/tasks/tools/included/optional-kubectl-configs-bash-linux/

ln -s $(which minikube) /usr/local/bin/kubectl
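
To keep the shortcut across sessions and get tab completion (per the kubectl bash-completion doc linked above), a minimal sketch — it assumes bash and that the symlinked kubectl answers the completion subcommand like a regular kubectl:

# the symlink usually needs root
sudo ln -s $(which minikube) /usr/local/bin/kubectl
# enable bash completion for it in new shells
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc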

What did this do to the system?

[ec2-user@amazonlinux ~]$ docker images 
REPOSITORY       TAG       IMAGE ID       CREATED       SIZE
kicbase/stable   v0.0.30   1312ccd2422d   7 weeks ago   1.14GB
[ec2-user@amazonlinux ~]$ docker ps 
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS         PORTS                                                                                                                                  NAMES
19887e6799fb   kicbase/stable:v0.0.30   "/usr/local/bin/entr…"   7 minutes ago   Up 7 minutes   127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp   minikube

# exec into the container to take a look
[ec2-user@amazonlinux ~]$ docker exec -ti 19887e6799fb bash 

[ec2-user@amazonlinux ~]$ minikube ssh
docker@minikube:~$ sudo su -
root@minikube:/# 
root@minikube:/# docker ps -a 
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS                     PORTS     NAMES
317a4dc12504   6e38f40d628d           "/storage-provisioner"   7 minutes ago   Up 7 minutes                         k8s_storage-provisioner_storage-provisioner_kube-system_c4ea2d31-2c3e-4672-9479-719b0082ac5c_1
e0876840ef56   a4ca41631cc7           "/coredns -conf /etc…"   7 minutes ago   Up 7 minutes                         k8s_coredns_coredns-64897985d-cm6h7_kube-system_0bbdb9ae-d64a-4d74-8b38-ba5bd66af499_0
485899a5fe0c   9b7cc9982109           "/usr/local/bin/kube…"   7 minutes ago   Up 7 minutes                         k8s_kube-proxy_kube-proxy-s2sf4_kube-system_2e24cec7-d9b3-430e-8869-b9af785588de_0
0faab30374f5   k8s.gcr.io/pause:3.6   "/pause"                 7 minutes ago   Up 7 minutes                         k8s_POD_coredns-64897985d-cm6h7_kube-system_0bbdb9ae-d64a-4d74-8b38-ba5bd66af499_0
ae99f0b5b873   k8s.gcr.io/pause:3.6   "/pause"                 7 minutes ago   Up 7 minutes                         k8s_POD_kube-proxy-s2sf4_kube-system_2e24cec7-d9b3-430e-8869-b9af785588de_0
fdeeb06fda78   6e38f40d628d           "/storage-provisioner"   7 minutes ago   Exited (1) 7 minutes ago             k8s_storage-provisioner_storage-provisioner_kube-system_c4ea2d31-2c3e-4672-9479-719b0082ac5c_0
f83e36d2d77e   k8s.gcr.io/pause:3.6   "/pause"                 7 minutes ago   Up 7 minutes                         k8s_POD_storage-provisioner_kube-system_c4ea2d31-2c3e-4672-9479-719b0082ac5c_0
df73834cbaf8   b07520cd7ab7           "kube-controller-man…"   8 minutes ago   Up 8 minutes                         k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_b965983ec05322d0973594a01d5e8245_0
49ad661cee86   25f8c7f3da61           "etcd --advertise-cl…"   8 minutes ago   Up 8 minutes                         k8s_etcd_etcd-minikube_kube-system_9d3d310935e5fabe942511eec3e2cd0c_0
2c28a97b3875   99a3486be4f2           "kube-scheduler --au…"   8 minutes ago   Up 8 minutes                         k8s_kube-scheduler_kube-scheduler-minikube_kube-system_be132fe5c6572cb34d93f5e05ce2a540_0
72aca4e710b6   f40be0088a83           "kube-apiserver --ad…"   8 minutes ago   Up 8 minutes                         k8s_kube-apiserver_kube-apiserver-minikube_kube-system_cd6e47233d36a9715b0ab9632f871843_0
97c6ecde381c   k8s.gcr.io/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes                         k8s_POD_kube-scheduler-minikube_kube-system_be132fe5c6572cb34d93f5e05ce2a540_0
9140211bc570   k8s.gcr.io/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes                         k8s_POD_kube-controller-manager-minikube_kube-system_b965983ec05322d0973594a01d5e8245_0
b27ec09ec789   k8s.gcr.io/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes                         k8s_POD_kube-apiserver-minikube_kube-system_cd6e47233d36a9715b0ab9632f871843_0
0aef74ead92e   k8s.gcr.io/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes                         k8s_POD_etcd-minikube_kube-system_9d3d310935e5fabe942511eec3e2cd0c_0

root@minikube:/# docker images 
REPOSITORY                                TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver                 v1.23.3   f40be0088a83   2 months ago    135MB
k8s.gcr.io/kube-proxy                     v1.23.3   9b7cc9982109   2 months ago    112MB
k8s.gcr.io/kube-scheduler                 v1.23.3   99a3486be4f2   2 months ago    53.5MB
k8s.gcr.io/kube-controller-manager        v1.23.3   b07520cd7ab7   2 months ago    125MB
k8s.gcr.io/etcd                           3.5.1-0   25f8c7f3da61   5 months ago    293MB
k8s.gcr.io/coredns/coredns                v1.8.6    a4ca41631cc7   5 months ago    46.8MB
k8s.gcr.io/pause                          3.6       6270bb605e12   7 months ago    683kB
kubernetesui/dashboard                    v2.3.1    e1482a24335a   9 months ago    220MB
kubernetesui/metrics-scraper              v1.0.7    7801cfc6d5c0   9 months ago    34.4MB
gcr.io/k8s-minikube/storage-provisioner   v5        6e38f40d628d   12 months ago   31.5MB

A more convenient way to work with minikube's Docker daemon:

# https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env
[ec2-user@amazonlinux ~]$ eval $(minikube docker-env)
# Under the hood this just sets environment variables so the docker CLI talks to the remote daemon: export DOCKER_HOST="tcp://192.168.49.2:2376"

[ec2-user@amazonlinux ~]$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS     NAMES
344dc0e8c1e6   7801cfc6d5c0           "/metrics-sidecar"       13 minutes ago   Up 13 minutes             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-58549894f-mzhg2_kubernetes-dashboard_4d6591df-2ee1-46b1-b44c-46ef748a3cd8_0
276a2b6b591f   e1482a24335a           "/dashboard --insecu…"   13 minutes ago   Up 13 minutes             k8s_kubernetes-dashboard_kubernetes-dashboard-ccd587f44-w22lx_kubernetes-dashboard_eb06974d-a4e6-459a-b135-b2426696a75f_0
739f9fdc02bf   k8s.gcr.io/pause:3.6   "/pause"                 13 minutes ago   Up 13 minutes             k8s_POD_dashboard-metrics-scraper-58549894f-mzhg2_kubernetes-dashboard_4d6591df-2ee1-46b1-b44c-46ef748a3cd8_0
9b9d38bdb6d7   k8s.gcr.io/pause:3.6   "/pause"                 13 minutes ago   Up 13 minutes             k8s_POD_kubernetes-dashboard-ccd587f44-w22lx_kubernetes-dashboard_eb06974d-a4e6-459a-b135-b2426696a75f_0
8fd387311710   k8s.gcr.io/pause:3.6   "/pause"                 8 hours ago      Up 8 hours                k8s_POD_hello-minikube-74c6b47596-gfxrl_default_689891ce-a761-46a6-aa16-fb33dd037c45_0
317a4dc12504   6e38f40d628d           "/storage-provisioner"   8 hours ago      Up 8 hours                k8s_storage-provisioner_storage-provisioner_kube-system_c4ea2d31-2c3e-4672-9479-719b0082ac5c_1
e0876840ef56   a4ca41631cc7           "/coredns -conf /etc…"   8 hours ago      Up 8 hours                k8s_coredns_coredns-64897985d-cm6h7_kube-system_0bbdb9ae-d64a-4d74-8b38-ba5bd66af499_0
485899a5fe0c   9b7cc9982109           "/usr/local/bin/kube…"   8 hours ago      Up 8 hours                k8s_kube-proxy_kube-proxy-s2sf4_kube-system_2e24cec7-d9b3-430e-8869-b9af785588de_0
0faab30374f5   k8s.gcr.io/pause:3.6   "/pause"                 8 hours ago      Up 8 hours                k8s_POD_coredns-64897985d-cm6h7_kube-system_0bbdb9ae-d64a-4d74-8b38-ba5bd66af499_0
ae99f0b5b873   k8s.gcr.io/pause:3.6   "/pause"                 8 hours ago      Up 8 hours                k8s_POD_kube-proxy-s2sf4_kube-system_2e24cec7-d9b3-430e-8869-b9af785588de_0
f83e36d2d77e   k8s.gcr.io/pause:3.6   "/pause"                 8 hours ago      Up 8 hours                k8s_POD_storage-provisioner_kube-system_c4ea2d31-2c3e-4672-9479-719b0082ac5c_0
df73834cbaf8   b07520cd7ab7           "kube-controller-man…"   8 hours ago      Up 8 hours                k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_b965983ec05322d0973594a01d5e8245_0
49ad661cee86   25f8c7f3da61           "etcd --advertise-cl…"   8 hours ago      Up 8 hours                k8s_etcd_etcd-minikube_kube-system_9d3d310935e5fabe942511eec3e2cd0c_0
2c28a97b3875   99a3486be4f2           "kube-scheduler --au…"   8 hours ago      Up 8 hours                k8s_kube-scheduler_kube-scheduler-minikube_kube-system_be132fe5c6572cb34d93f5e05ce2a540_0
72aca4e710b6   f40be0088a83           "kube-apiserver --ad…"   8 hours ago      Up 8 hours                k8s_kube-apiserver_kube-apiserver-minikube_kube-system_cd6e47233d36a9715b0ab9632f871843_0
97c6ecde381c   k8s.gcr.io/pause:3.6   "/pause"                 8 hours ago      Up 8 hours                k8s_POD_kube-scheduler-minikube_kube-system_be132fe5c6572cb34d93f5e05ce2a540_0
9140211bc570   k8s.gcr.io/pause:3.6   "/pause"                 8 hours ago      Up 8 hours                k8s_POD_kube-controller-manager-minikube_kube-system_b965983ec05322d0973594a01d5e8245_0
b27ec09ec789   k8s.gcr.io/pause:3.6   "/pause"                 8 hours ago      Up 8 hours                k8s_POD_kube-apiserver-minikube_kube-system_cd6e47233d36a9715b0ab9632f871843_0
0aef74ead92e   k8s.gcr.io/pause:3.6   "/pause"                 8 hours ago      Up 8 hours                k8s_POD_etcd-minikube_kube-system_9d3d310935e5fabe942511eec3e2cd0c_0
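
What the eval actually exports is a handful of variables (a sketch of typical minikube docker-env output for this cluster; the cert path is an assumption), and it can be undone to point the docker CLI back at the host daemon:

# roughly what `minikube docker-env` emits
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
export DOCKER_CERT_PATH="/home/ec2-user/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

# undo it and talk to the local daemon again
eval $(minikube docker-env -u)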

Managing cached images:

minikube cache add alpine:latest
minikube cache reload
minikube cache list
minikube cache delete <image name>
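
Newer minikube releases also ship a minikube image subcommand that covers the same ground (a sketch; check `minikube image --help` for the exact flags on your version):

minikube image load alpine:latest    # copy a local image into the cluster
minikube image ls                    # list images inside the cluster
minikube image rm alpine:latest      # remove it again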

dashboard

[ec2-user@amazonlinux ~]$ minikube dashboard
* Enabling dashboard ...
  - Using image kubernetesui/metrics-scraper:v1.0.7
  - Using image kubernetesui/dashboard:v2.3.1
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:37163/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
  http://127.0.0.1:37163/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

[ec2-user@amazonlinux ~]$ sudo netstat -anp | grep 37163 | grep LISTEN
tcp        0      0 127.0.0.1:37163         0.0.0.0:*               LISTEN      71016/kubectl       

[ec2-user@amazonlinux ~]$ ps aux|grep 71016
ec2-user   71016  0.0  0.9 750808 38932 pts/2    Sl+  00:59   0:00 /home/ec2-user/.minikube/cache/linux/amd64/v1.23.3/kubectl --cluster minikube --context minikube proxy --port 0


# get the access URL again
[ec2-user@amazonlinux ~]$ minikube dashboard --url
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
http://127.0.0.1:43247/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
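
The dashboard proxy only listens on 127.0.0.1, so to open it from another machine one option is an SSH local port forward (a sketch; substitute your own user/host and the port that `minikube dashboard --url` printed):

# run on your desktop, then browse http://localhost:43247/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
ssh -L 43247:127.0.0.1:43247 ec2-user@192.168.191.133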

hello world

[ec2-user@amazonlinux ~]$ eval $(minikube docker-env)
[ec2-user@amazonlinux ~]$ docker load -i echoserver.tar.gz 
6cc9890d69b6: Loading layer [==================================================>]  61.68MB/61.68MB
5f70bf18a086: Loading layer [==================================================>]  1.024kB/1.024kB
e105cd217163: Loading layer [==================================================>]  8.704kB/8.704kB
9f9b8efa9a34: Loading layer [==================================================>]  83.34MB/83.34MB
4cc84b7b3aba: Loading layer [==================================================>]  3.072kB/3.072kB
e2615e4925e2: Loading layer [==================================================>]  3.072kB/3.072kB
1787713d6d5d: Loading layer [==================================================>]   5.12kB/5.12kB
67639a8a7916: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: k8s.gcr.io/echoserver:1.4

[ec2-user@amazonlinux ~]$ docker images 
REPOSITORY                                TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver                 v1.23.3   f40be0088a83   2 months ago    135MB
k8s.gcr.io/kube-scheduler                 v1.23.3   99a3486be4f2   2 months ago    53.5MB
k8s.gcr.io/kube-proxy                     v1.23.3   9b7cc9982109   2 months ago    112MB
k8s.gcr.io/kube-controller-manager        v1.23.3   b07520cd7ab7   2 months ago    125MB
k8s.gcr.io/etcd                           3.5.1-0   25f8c7f3da61   5 months ago    293MB
k8s.gcr.io/coredns/coredns                v1.8.6    a4ca41631cc7   6 months ago    46.8MB
k8s.gcr.io/pause                          3.6       6270bb605e12   7 months ago    683kB
kubernetesui/dashboard                    v2.3.1    e1482a24335a   10 months ago   220MB
kubernetesui/metrics-scraper              v1.0.7    7801cfc6d5c0   10 months ago   34.4MB
gcr.io/k8s-minikube/storage-provisioner   v5        6e38f40d628d   12 months ago   31.5MB
k8s.gcr.io/echoserver                     1.4       a90209bb39e3   5 years ago     140MB


# kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
# kubectl expose deployment hello-minikube --type=NodePort --port=8080

# kubectl --help

[ec2-user@amazonlinux ~]$ kubectl get svc 
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort    10.98.195.3   <none>        8080:32754/TCP   7h
kubernetes       ClusterIP   10.96.0.1     <none>        443/TCP          12d

[ec2-user@amazonlinux ~]$ kubectl get services hello-minikube
NAME             TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort   10.98.195.3   <none>        8080:32754/TCP   7h59m


# kubectl port-forward service/hello-minikube 7080:8080

[ec2-user@amazonlinux ~]$ kubectl port-forward --address 0.0.0.0 service/hello-minikube 7080:8080
Forwarding from 0.0.0.0:7080 -> 8080
Handling connection for 7080
Handling connection for 7080

Then open http://192.168.191.133:7080/ in a browser.
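
The same endpoint can also be checked from the VM itself; the echoserver answers with the CLIENT VALUES / SERVER VALUES block shown in the load balancer example below:

curl http://localhost:7080/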

load balancer

[ec2-user@amazonlinux ~]$ kubectl create deployment balanced --image=k8s.gcr.io/echoserver:1.4  
deployment.apps/balanced created
[ec2-user@amazonlinux ~]$ kubectl expose deployment balanced --type=LoadBalancer --port=8080
service/balanced exposed
[ec2-user@amazonlinux ~]$ minikube tunnel


[ec2-user@amazonlinux ~]$ kubectl get services balanced
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
balanced   LoadBalancer   10.100.193.49   10.100.193.49   8080:30406/TCP   13s
[ec2-user@amazonlinux ~]$ curl 10.100.193.49:8080
CLIENT VALUES:
client_address=172.17.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.100.193.49:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=10.100.193.49:8080
user-agent=curl/7.79.1
BODY:
[ec2-user@amazonlinux ~]$ 

Cleanup

[ec2-user@amazonlinux ~]$ minikube pause
* Pausing node minikube ... 
* Paused 18 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator
[ec2-user@amazonlinux ~]$ minikube unpause
* Unpausing node minikube ... 
* Unpaused 18 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator

###!!!!
[ec2-user@amazonlinux ~]$ minikube config set memory 16384
! These changes will take effect upon a minikube delete and then a minikube start

[ec2-user@amazonlinux ~]$ minikube config unset memory


[ec2-user@amazonlinux ~]$ docker ps 
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS     NAMES
01cf3a66eb7c   a90209bb39e3           "nginx -g 'daemon of…"   6 minutes ago    Up 6 minutes              k8s_echoserver_balanced-5b98d98bf8-t464f_default_c6b70ff1-d799-4633-bf13-e481052a5da1_0
6a23b4b82131   k8s.gcr.io/pause:3.6   "/pause"                 6 minutes ago    Up 6 minutes              k8s_POD_balanced-5b98d98bf8-t464f_default_c6b70ff1-d799-4633-bf13-e481052a5da1_0
b5817aab4b25   a90209bb39e3           "nginx -g 'daemon of…"   17 minutes ago   Up 17 minutes             k8s_echoserver_hello-minikube-7bc9d7884c-m4r4s_default_cef90979-1c63-4497-acb0-d13847c5a60b_0
9d5677d088c6   k8s.gcr.io/pause:3.6   "/pause"                 17 minutes ago   Up 17 minutes             k8s_POD_hello-minikube-7bc9d7884c-m4r4s_default_cef90979-1c63-4497-acb0-d13847c5a60b_0
c13da70206de   e1482a24335a           "/dashboard --insecu…"   50 minutes ago   Up 50 minutes             k8s_kubernetes-dashboard_kubernetes-dashboard-ccd587f44-w22lx_kubernetes-dashboard_eb06974d-a4e6-459a-b135-b2426696a75f_4
0cead3ead613   6e38f40d628d           "/storage-provisioner"   50 minutes ago   Up 50 minutes             k8s_storage-provisioner_storage-provisioner_kube-system_c4ea2d31-2c3e-4672-9479-719b0082ac5c_5
3a1c19a7a51f   7801cfc6d5c0           "/metrics-sidecar"       51 minutes ago   Up 51 minutes             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-58549894f-mzhg2_kubernetes-dashboard_4d6591df-2ee1-46b1-b44c-46ef748a3cd8_2
0c48d4f778de   a4ca41631cc7           "/coredns -conf /etc…"   51 minutes ago   Up 51 minutes             k8s_coredns_coredns-64897985d-cm6h7_kube-system_0bbdb9ae-d64a-4d74-8b38-ba5bd66af499_2
ac2ee5d973c6   9b7cc9982109           "/usr/local/bin/kube…"   51 minutes ago   Up 51 minutes             k8s_kube-proxy_kube-proxy-s2sf4_kube-system_2e24cec7-d9b3-430e-8869-b9af785588de_2
dc17216220bb   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_kubernetes-dashboard-ccd587f44-w22lx_kubernetes-dashboard_eb06974d-a4e6-459a-b135-b2426696a75f_2
9d7f883afb88   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_storage-provisioner_kube-system_c4ea2d31-2c3e-4672-9479-719b0082ac5c_2
d28d9a0a91b1   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_coredns-64897985d-cm6h7_kube-system_0bbdb9ae-d64a-4d74-8b38-ba5bd66af499_2
7e7a16e16d03   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_kube-proxy-s2sf4_kube-system_2e24cec7-d9b3-430e-8869-b9af785588de_2
ace45942773c   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_dashboard-metrics-scraper-58549894f-mzhg2_kubernetes-dashboard_4d6591df-2ee1-46b1-b44c-46ef748a3cd8_2
8f00f611d713   f40be0088a83           "kube-apiserver --ad…"   51 minutes ago   Up 51 minutes             k8s_kube-apiserver_kube-apiserver-minikube_kube-system_cd6e47233d36a9715b0ab9632f871843_2
79f528bbd4d1   99a3486be4f2           "kube-scheduler --au…"   51 minutes ago   Up 51 minutes             k8s_kube-scheduler_kube-scheduler-minikube_kube-system_be132fe5c6572cb34d93f5e05ce2a540_2
75ce75e37ceb   b07520cd7ab7           "kube-controller-man…"   51 minutes ago   Up 51 minutes             k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_b965983ec05322d0973594a01d5e8245_2
50c302912f4a   25f8c7f3da61           "etcd --advertise-cl…"   51 minutes ago   Up 51 minutes             k8s_etcd_etcd-minikube_kube-system_9d3d310935e5fabe942511eec3e2cd0c_2
0b1c094b5824   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_kube-controller-manager-minikube_kube-system_b965983ec05322d0973594a01d5e8245_2
7b6d52353935   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_kube-scheduler-minikube_kube-system_be132fe5c6572cb34d93f5e05ce2a540_2
64ba4b4c1a84   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_etcd-minikube_kube-system_9d3d310935e5fabe942511eec3e2cd0c_2
92f05324a447   k8s.gcr.io/pause:3.6   "/pause"                 51 minutes ago   Up 51 minutes             k8s_POD_kube-apiserver-minikube_kube-system_cd6e47233d36a9715b0ab9632f871843_2


[ec2-user@amazonlinux ~]$ minikube addons list
|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | third-party (ambassador)       |
| auto-pause                  | minikube | disabled     | google                         |
| csi-hostpath-driver         | minikube | disabled     | kubernetes                     |
| dashboard                   | minikube | enabled ✅   | kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | kubernetes                     |
| efk                         | minikube | disabled     | third-party (elastic)          |
| freshpod                    | minikube | disabled     | google                         |
| gcp-auth                    | minikube | disabled     | google                         |
| gvisor                      | minikube | disabled     | google                         |
| helm-tiller                 | minikube | disabled     | third-party (helm)             |
| ingress                     | minikube | disabled     | unknown (third-party)          |
| ingress-dns                 | minikube | disabled     | google                         |
| istio                       | minikube | disabled     | third-party (istio)            |
| istio-provisioner           | minikube | disabled     | third-party (istio)            |
| kong                        | minikube | disabled     | third-party (Kong HQ)          |
| kubevirt                    | minikube | disabled     | third-party (kubevirt)         |
| logviewer                   | minikube | disabled     | unknown (third-party)          |
| metallb                     | minikube | disabled     | third-party (metallb)          |
| metrics-server              | minikube | disabled     | kubernetes                     |
| nvidia-driver-installer     | minikube | disabled     | google                         |
| nvidia-gpu-device-plugin    | minikube | disabled     | third-party (nvidia)           |
| olm                         | minikube | disabled     | third-party (operator          |
|                             |          |              | framework)                     |
| pod-security-policy         | minikube | disabled     | unknown (third-party)          |
| portainer                   | minikube | disabled     | portainer.io                   |
| registry                    | minikube | disabled     | google                         |
| registry-aliases            | minikube | disabled     | unknown (third-party)          |
| registry-creds              | minikube | disabled     | third-party (upmc enterprises) |
| storage-provisioner         | minikube | enabled ✅   | google                         |
| storage-provisioner-gluster | minikube | disabled     | unknown (third-party)          |
| volumesnapshots             | minikube | disabled     | kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|

[ec2-user@amazonlinux ~]$ minikube delete --all
* Deleting "minikube" in docker ...
* Removing /home/ec2-user/.minikube/machines/minikube ...
* Removed all traces of the "minikube" cluster.
* Successfully deleted all profiles


#Create a second cluster running an older Kubernetes release:
# minikube start -p aged --kubernetes-version=v1.16.1

k3d

  • https://k3d.io/v5.4.1/ k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker.
[ec2-user@amazonlinux ~]$ wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
Preparing to install k3d into /usr/local/bin
k3d installed into /usr/local/bin/k3d
Run 'k3d --help' to see what you can do with it.

[ec2-user@amazonlinux ~]$ k3d cluster create mycluster
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-mycluster'              
INFO[0000] Created image volume k3d-mycluster-images    
INFO[0000] Starting new tools node...                   
INFO[0001] Creating node 'k3d-mycluster-server-0'       
INFO[0003] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.1' 
INFO[0007] Pulling image 'docker.io/rancher/k3s:v1.22.7-k3s1' 
INFO[0009] Starting Node 'k3d-mycluster-tools'          
INFO[0084] Creating LoadBalancer 'k3d-mycluster-serverlb' 
INFO[0087] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.1' 
INFO[0098] Using the k3d-tools node to gather environment information 
INFO[0098] HostIP: using network gateway 172.18.0.1 address 
INFO[0098] Starting cluster 'mycluster'                 
INFO[0098] Starting servers...                          
INFO[0098] Starting Node 'k3d-mycluster-server-0'       
INFO[0104] All agents already running.                  
INFO[0104] Starting helpers...                          
INFO[0104] Starting Node 'k3d-mycluster-serverlb'       
INFO[0111] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... 
INFO[0114] Cluster 'mycluster' created successfully!    
INFO[0114] You can now use it like this:                
kubectl cluster-info

[ec2-user@amazonlinux ~]$ docker ps 
CONTAINER ID   IMAGE                            COMMAND                  CREATED         STATUS         PORTS                             NAMES
5d57c1328e66   ghcr.io/k3d-io/k3d-proxy:5.4.1   "/bin/sh -c nginx-pr…"   6 minutes ago   Up 6 minutes   80/tcp, 0.0.0.0:41489->6443/tcp   k3d-mycluster-serverlb
add7ac133348   rancher/k3s:v1.22.7-k3s1         "/bin/k3s server --t…"   6 minutes ago   Up 6 minutes                                     k3d-mycluster-server-0

# Note: the kubectl used above belongs to minikube; a standalone kubectl has to be downloaded
[ec2-user@amazonlinux ~]$ curl -LO https://dl.k8s.io/release/v1.23.0/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   154  100   154    0     0    230      0 --:--:-- --:--:-- --:--:--   230
100 44.4M  100 44.4M    0     0  1989k      0  0:00:22  0:00:22 --:--:-- 2256k
[ec2-user@amazonlinux ~]$ ll
[ec2-user@amazonlinux ~]$ sudo cp kubectl /usr/local/bin/
[ec2-user@amazonlinux ~]$ which kubectl
/usr/local/bin/kubectl

[ec2-user@amazonlinux ~]$ kubectl get nodes
NAME                     STATUS   ROLES                  AGE   VERSION
k3d-mycluster-server-0   Ready    control-plane,master   47m   v1.22.7+k3s1

[ec2-user@amazonlinux ~]$ kubectl get pods -A 
NAMESPACE     NAME                                      READY   STATUS      RESTARTS      AGE
kube-system   local-path-provisioner-84bb864455-s8t7f   1/1     Running     0             45m
kube-system   coredns-96cc4f57d-vvmrx                   1/1     Running     0             45m
kube-system   helm-install-traefik-crd--1-ghv9f         0/1     Completed   0             45m
kube-system   helm-install-traefik--1-s4ktb             0/1     Completed   1             45m
kube-system   svclb-traefik-9jt5d                       2/2     Running     1 (44m ago)   44m
kube-system   metrics-server-ff9dbcb6c-72mwc            1/1     Running     0             45m
kube-system   traefik-56c4b88c4b-f454f                  1/1     Running     0             44m
[ec2-user@amazonlinux ~]$ 

[ec2-user@amazonlinux ~]$ k3d cluster list 
NAME        SERVERS   AGENTS   LOADBALANCER
mycluster   1/1       0/0      true
[ec2-user@amazonlinux ~]$ k3d node create worker1 --cluster mycluster
INFO[0000] Adding 1 node(s) to the runtime local cluster 'mycluster'... 
INFO[0000] Using the k3d-tools node to gather environment information 
INFO[0000] Starting new tools node...                   
INFO[0000] Starting Node 'k3d-mycluster-tools'          
INFO[0001] HostIP: using network gateway 172.18.0.1 address 
INFO[0001] Starting Node 'k3d-worker1-0'                
INFO[0009] Successfully created 1 node(s)!              
[ec2-user@amazonlinux ~]$ kubectl get nodes
NAME                     STATUS     ROLES                  AGE   VERSION
k3d-mycluster-server-0   Ready      control-plane,master   50m   v1.22.7+k3s1
k3d-worker1-0            NotReady   <none>                 7s    v1.22.7+k3s1
[ec2-user@amazonlinux ~]$ 

[ec2-user@amazonlinux ~]$ kubectl get nodes                          
NAME                     STATUS   ROLES                  AGE   VERSION
k3d-mycluster-server-0   Ready    control-plane,master   50m   v1.22.7+k3s1
k3d-worker1-0            Ready    <none>                 24s   v1.22.7+k3s1
[ec2-user@amazonlinux ~]$ 
[ec2-user@amazonlinux ~]$ cat .vimrc 
set paste

[ec2-user@amazonlinux ~]$ cat thatfile.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

[ec2-user@amazonlinux ~]$ export KUBECONFIG="$(k3d kubeconfig write mycluster)"  
[ec2-user@amazonlinux ~]$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[ec2-user@amazonlinux ~]$ kubectl create service clusterip nginx --tcp=80:80
service/nginx created

[ec2-user@amazonlinux ~]$ kubectl apply -f thatfile.yaml
ingress.networking.k8s.io/nginx created

Check the results:

[ec2-user@amazonlinux ~]$ kubectl get all -A -o wide
NAMESPACE     NAME                                          READY   STATUS      RESTARTS      AGE     IP          NODE                     NOMINATED NODE   READINESS GATES
kube-system   pod/local-path-provisioner-84bb864455-s8t7f   1/1     Running     0             64m     10.42.0.5   k3d-mycluster-server-0   <none>           <none>
kube-system   pod/coredns-96cc4f57d-vvmrx                   1/1     Running     0             64m     10.42.0.6   k3d-mycluster-server-0   <none>           <none>
kube-system   pod/helm-install-traefik-crd--1-ghv9f         0/1     Completed   0             64m     10.42.0.4   k3d-mycluster-server-0   <none>           <none>
kube-system   pod/helm-install-traefik--1-s4ktb             0/1     Completed   1             64m     10.42.0.3   k3d-mycluster-server-0   <none>           <none>
kube-system   pod/svclb-traefik-9jt5d                       2/2     Running     1 (63m ago)   63m     10.42.0.7   k3d-mycluster-server-0   <none>           <none>
kube-system   pod/metrics-server-ff9dbcb6c-72mwc            1/1     Running     0             64m     10.42.0.2   k3d-mycluster-server-0   <none>           <none>
kube-system   pod/traefik-56c4b88c4b-f454f                  1/1     Running     0             63m     10.42.0.8   k3d-mycluster-server-0   <none>           <none>
kube-system   pod/svclb-traefik-dvvrp                       2/2     Running     1 (13m ago)   14m     10.42.1.2   k3d-worker1-0            <none>           <none>
default       pod/nginx-6799fc88d8-vgc9x                    1/1     Running     0             4m34s   10.42.1.3   k3d-worker1-0            <none>           <none>

NAMESPACE     NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP             PORT(S)                      AGE     SELECTOR
default       service/kubernetes       ClusterIP      10.43.0.1       <none>                  443/TCP                      64m     <none>
kube-system   service/kube-dns         ClusterIP      10.43.0.10      <none>                  53/UDP,53/TCP,9153/TCP       64m     k8s-app=kube-dns
kube-system   service/metrics-server   ClusterIP      10.43.88.129    <none>                  443/TCP                      64m     k8s-app=metrics-server
kube-system   service/traefik          LoadBalancer   10.43.4.2       172.18.0.2,172.18.0.4   80:32037/TCP,443:30484/TCP   63m     app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
default       service/nginx            ClusterIP      10.43.206.223   <none>                  80/TCP                       4m30s   app=nginx

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS               IMAGES                                                SELECTOR
kube-system   daemonset.apps/svclb-traefik   2         2         2       2            2           <none>          63m   lb-port-80,lb-port-443   rancher/klipper-lb:v0.3.4,rancher/klipper-lb:v0.3.4   app=svclb-traefik

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS               IMAGES                                   SELECTOR
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           64m     local-path-provisioner   rancher/local-path-provisioner:v0.0.21   app=local-path-provisioner
kube-system   deployment.apps/coredns                  1/1     1            1           64m     coredns                  rancher/mirrored-coredns-coredns:1.8.6   k8s-app=kube-dns
kube-system   deployment.apps/metrics-server           1/1     1            1           64m     metrics-server           rancher/mirrored-metrics-server:v0.5.2   k8s-app=metrics-server
kube-system   deployment.apps/traefik                  1/1     1            1           63m     traefik                  rancher/mirrored-library-traefik:2.6.1   app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
default       deployment.apps/nginx                    1/1     1            1           4m34s   nginx                    nginx                                    app=nginx

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE     CONTAINERS               IMAGES                                   SELECTOR
kube-system   replicaset.apps/local-path-provisioner-84bb864455   1         1         1       64m     local-path-provisioner   rancher/local-path-provisioner:v0.0.21   app=local-path-provisioner,pod-template-hash=84bb864455
kube-system   replicaset.apps/coredns-96cc4f57d                   1         1         1       64m     coredns                  rancher/mirrored-coredns-coredns:1.8.6   k8s-app=kube-dns,pod-template-hash=96cc4f57d
kube-system   replicaset.apps/metrics-server-ff9dbcb6c            1         1         1       64m     metrics-server           rancher/mirrored-metrics-server:v0.5.2   k8s-app=metrics-server,pod-template-hash=ff9dbcb6c
kube-system   replicaset.apps/traefik-56c4b88c4b                  1         1         1       63m     traefik                  rancher/mirrored-library-traefik:2.6.1   app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik,pod-template-hash=56c4b88c4b
default       replicaset.apps/nginx-6799fc88d8                    1         1         1       4m34s   nginx                    nginx                                    app=nginx,pod-template-hash=6799fc88d8

NAMESPACE     NAME                                 COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES                                      SELECTOR
kube-system   job.batch/helm-install-traefik-crd   1/1           52s        64m   helm         rancher/klipper-helm:v0.6.6-build20211022   controller-uid=1d4ac20d-0436-4d54-9d8b-e37fe7c46e6e
kube-system   job.batch/helm-install-traefik       1/1           53s        64m   helm         rancher/klipper-helm:v0.6.6-build20211022   controller-uid=b51475a5-3fac-4b96-b218-7dd7a094512b

[ec2-user@amazonlinux ~]$ kubectl get ingress 
NAME    CLASS    HOSTS   ADDRESS                 PORTS   AGE
nginx   <none>   *       172.18.0.2,172.18.0.4   80      3m18s

[ec2-user@amazonlinux ~]$ curl 172.18.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[ec2-user@amazonlinux ~]$ 
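
Note that curl against 172.18.0.2 only works from the Docker host itself. Had the cluster been created with a port mapping on k3d's built-in load balancer (a sketch following the k3d exposing-services guide; host port 8081 is arbitrary), the same Ingress would also be reachable through the host's own address:

k3d cluster create mycluster -p "8081:80@loadbalancer"
# after applying the same deployment/service/ingress:
curl http://localhost:8081/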

–END

K8s Ingress

Ingress is a centralized application gateway for the cluster, an automated reverse-proxy component for k8s (functionally comparable to nginx).

Ingress involves a few related concepts: the LoadBalancer, the Ingress Controller and the Ingress config. The controller publishes the service to the outside via a LoadBalancer/NodePort, and at the same time watches Ingress configs and applies changes on the fly (updating the nginx.conf file in real time and reloading it).

This section sticks to a hands-on hello-world walkthrough.

Getting started

References

Version compatibility: https://github.com/kubernetes/ingress-nginx/#support-versions-table

Download the images

The images are hosted on gcr.io, so pull them on a remote machine first:

# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml 
[ec2-user@k8s ~]$ vi ingress-nginx-controller-v1.1.2.yaml

[ec2-user@k8s ~]$ grep image: ingress-nginx-controller-v1.1.2.yaml | sed 's/image: //' | sort -u | xargs echo 
k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660

[root@izt4nhcmmx33bjwcsdmf8oz ~]# docker pull k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c 
k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c: Pulling from ingress-nginx/controller
a0d0a0d46f8b: Pull complete 
3aae86482564: Pull complete 
c0d03781abb3: Pull complete 
0297e2ef8f7f: Pull complete 
866a68ce3c13: Pull complete 
1c2a7ca65b54: Pull complete 
41fd2de30e46: Pull complete 
637f10464e4d: Pull complete 
998064a16da4: Pull complete 
e63d23220e8c: Pull complete 
8128610547fb: Pull complete 
ae07a1a7f038: Pull complete 
ceb23c4cb607: Pull complete 
Digest: sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
Status: Downloaded newer image for k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c

[root@izt4nhcmmx33bjwcsdmf8oz ~]# docker pull k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 
k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660: Pulling from ingress-nginx/kube-webhook-certgen
ec52731e9273: Pull complete 
b90aa28117d4: Pull complete 
Digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
Status: Downloaded newer image for k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660

[root@izt4nhcmmx33bjwcsdmf8oz ~]# docker save k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 -o ingress-nginx-v1.1.2.tar
[root@izt4nhcmmx33bjwcsdmf8oz ~]# gzip ingress-nginx-v1.1.2.tar 

[root@izt4nhcmmx33bjwcsdmf8oz ~]# ll -h ingress-nginx-v1.1.2.tar.gz 
-rw------- 1 root root 116M Mar 24 23:49 ingress-nginx-v1.1.2.tar.gz

Load the images on the local Linux servers


[ec2-user@k8s ~]$ docker load -i k8s.gcr.io-ingress-nginx-v1.1.2.tar.gz 
c0d270ab7e0d: Loading layer [==================================================>]  3.697MB/3.697MB
ce7a3c1169b6: Loading layer [==================================================>]  45.38MB/45.38MB
e2eb06d8af82: Loading layer [==================================================>]  5.865MB/5.865MB
ab1476f3fdd9: Loading layer [==================================================>]  120.9MB/120.9MB
ad20729656ef: Loading layer [==================================================>]  4.096kB/4.096kB
0d5022138006: Loading layer [==================================================>]  38.09MB/38.09MB
8f757e3fe5e4: Loading layer [==================================================>]  21.42MB/21.42MB
d2bc6b915bc9: Loading layer [==================================================>]  4.019MB/4.019MB
bbeb6784ed45: Loading layer [==================================================>]  313.9kB/313.9kB
0c411e83ee78: Loading layer [==================================================>]  6.141MB/6.141MB
9c2d86dc137f: Loading layer [==================================================>]  38.45MB/38.45MB
7797e5b3a760: Loading layer [==================================================>]  2.754MB/2.754MB
98ef19df5514: Loading layer [==================================================>]  4.096kB/4.096kB
4cde87c7ecaf: Loading layer [==================================================>]  51.75MB/51.75MB
11536690d74a: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image ID: sha256:c41e9fcadf5a291120de706b7dfa1af598b9f2ed5138b6dcb9f79a68aad0ef4c
Loaded image ID: sha256:7e5c1cecb086f36c6ef4b319a60853020820997f3600c3687e8ba6139e83674d

[ec2-user@k8s ~]$ cat k8s.gcr.io-ingress-nginx-v1.1.2.tar.gz | ssh worker1 docker load 

[ec2-user@k8s ~]$ 
docker tag 7e5c1cecb086 k8s.gcr.io/ingress-nginx/controller:v1.1.2
docker tag c41e9fcadf5a k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1


Note: the manifest references the images by @sha256 digest; the tags of the loaded images differ, so delete the trailing @sha256:xxx from the image fields
vi ingress-nginx-controller-v1.1.2.yaml
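
A sed one-liner does the stripping in place (a sketch; assumes GNU sed and that every @sha256:... suffix in the manifest should be removed):

sed -i 's/@sha256:[0-9a-f]\{64\}//g' ingress-nginx-controller-v1.1.2.yaml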

Create the services

[ec2-user@k8s ~]$ kubectl apply -f ingress-nginx-controller-v1.1.2.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

[ec2-user@k8s ~]$ kubectl wait --namespace ingress-nginx \
   --for=condition=ready pod \
   --selector=app.kubernetes.io/component=controller \
   --timeout=120s
pod/ingress-nginx-controller-755447bb4d-rnxvl condition met


# status reference:
# https://kubernetes.io/zh/docs/tasks/access-application-cluster/ingress-minikube/
[ec2-user@k8s ~]$ kubectl get pods --namespace=ingress-nginx

[ec2-user@k8s ~]$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-hbt9d        0/1     Completed   0          2m51s
pod/ingress-nginx-admission-patch-j8qfh         0/1     Completed   1          2m51s
pod/ingress-nginx-controller-755447bb4d-rnxvl   1/1     Running     0          2m51s

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.104.8.155    <pending>     80:31031/TCP,443:31845/TCP   2m51s
service/ingress-nginx-controller-admission   ClusterIP      10.108.67.255   <none>        443/TCP                      2m51s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           2m51s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-755447bb4d   1         1         1       2m51s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           3s         2m51s
job.batch/ingress-nginx-admission-patch    1/1           3s         2m51s

The ingress-nginx-controller service shows EXTERNAL-IP as <pending>: this locally built cluster has no load balancer attached, so there is nothing to hand out an external IP.

Local test

[ec2-user@k8s ~]$ kubectl create deployment demo --image=httpd --port=80
deployment.apps/demo created
[ec2-user@k8s ~]$ kubectl expose deployment demo
service/demo exposed

[ec2-user@k8s ~]$ kubectl create ingress demo-localhost --class=nginx \
   --rule=demo.localdev.me/*=demo:80
ingress.networking.k8s.io/demo-localhost created

[ec2-user@k8s ~]$ kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
Handling connection for 8080

[ec2-user@k8s ~]$ curl http://demo.localdev.me:8080/
<html><body><h1>It works!</h1></body></html>
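
The imperative kubectl create ingress call above is shorthand for a small manifest, roughly the following (a sketch of what gets generated; field order and defaults may differ):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-localhost
spec:
  ingressClassName: nginx
  rules:
  - host: demo.localdev.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80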

clean

kubectl delete deployment demo 
kubectl delete service demo 
kubectl delete ingress demo-localhost

Integration (publishing)

cloud

Configure the cloud provider's load balancer.

baremetal: nodeport

This applies to Kubernetes clusters deployed on bare-metal servers, and to "raw" VMs where Kubernetes was installed by hand on a generic Linux distribution.

For a quick test you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the 30000-32767 range.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/baremetal/deploy.yaml

Note: you could also edit the manifest to use ports 80/443 directly, but that is not recommended.
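
With the NodePort manifest, the controller is reached through any node IP on the assigned node port (a sketch; the node IP and port below are illustrative, look up the real values first):

kubectl -n ingress-nginx get svc ingress-nginx-controller
# then, for an Ingress whose host is www.demo.io:
curl -H "Host: www.demo.io" http://192.168.191.131:31031/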

baremetal: hostNetwork

Here the ingress-nginx controller pods use the host network directly. This is a bit more flexible than a NodePort Service, since you can choose and manage the ports yourself.
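
A sketch of the relevant change to the controller Deployment for this mode (following the ingress-nginx bare-metal notes; only the fields that matter are shown):

spec:
  template:
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet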

A pure software solution: MetalLB

It has two features that work together to provide this service: address allocation, and external announcement.

After MetalLB has assigned an external IP address to a service, it needs to make the network beyond the cluster aware that the IP “lives” in the cluster. MetalLB uses standard routing protocols to achieve this: ARP, NDP, or BGP.

Install

kube-proxy needs to be configured with strictARP: true. MetalLB has to announce addresses on the local network via broadcast, so the ARP functionality (a standard routing protocol) must be enabled.

kubectl edit configmap -n kube-system kube-proxy

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

Or do it in one shot as a batch:

# see what changes would be made, returns nonzero returncode if different
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system

# actually apply the changes, returns nonzero returncode on errors only
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system

Install

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

Pulling the images takes a while, so be patient. Then check the status:

[ec2-user@k8s ~]$ kubectl get all -n metallb-system
NAME                             READY   STATUS    RESTARTS   AGE
pod/controller-57fd9c5bb-kc5zt   1/1     Running   0          5m55s
pod/speaker-8pg4v                1/1     Running   0          5m55s
pod/speaker-95bs8                1/1     Running   0          5m55s

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   2         2         2       2            2           kubernetes.io/os=linux   5m55s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           5m55s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-57fd9c5bb   1         1         1       5m55s

Configure the address pool

# check the node IPs and pick an address range that avoids them
[ec2-user@k8s ~]$ kubectl get node -o wide
NAME      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
k8s       Ready    control-plane,master   10d   v1.23.5   192.168.191.131   <none>        Amazon Linux 2   4.14.268-205.500.amzn2.x86_64   docker://20.10.7
worker1   Ready    <none>                 10d   v1.23.5   192.168.191.132   <none>        Amazon Linux 2   4.14.268-205.500.amzn2.x86_64   docker://20.10.7

[ec2-user@k8s ~]$ cat metallb-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.191.200-192.168.191.220
[ec2-user@k8s ~]$ kubectl apply -f metallb-config.yml 
configmap/config created

Then go back and install nginx-ingress again:

[ec2-user@k8s ~]$ kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-b9hkn        0/1     Completed   0          17s
ingress-nginx-admission-patch-xmnbr         0/1     Completed   1          17s
ingress-nginx-controller-755447bb4d-lfrwk   0/1     Running     0          17s
[ec2-user@k8s ~]$ kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-b9hkn        0/1     Completed   0          25s
ingress-nginx-admission-patch-xmnbr         0/1     Completed   1          25s
ingress-nginx-controller-755447bb4d-lfrwk   1/1     Running     0          25s

[ec2-user@k8s ~]$ kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.107.221.243   192.168.191.200   80:31443/TCP,443:30099/TCP   57s
ingress-nginx-controller-admission   ClusterIP      10.105.12.185    <none>            443/TCP                      57s

This time EXTERNAL-IP has a value: one of the addresses from the pool configured above.

Online / integration test

[ec2-user@k8s ~]$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.107.221.243   192.168.191.200   80:31443/TCP,443:30099/TCP   4m


[ec2-user@k8s ~]$ kubectl create deployment demo --image=httpd --port=80
deployment.apps/demo created
[ec2-user@k8s ~]$ kubectl expose deployment demo
service/demo exposed

[ec2-user@k8s ~]$ kubectl create ingress demo --class=nginx \
   --rule="www.demo.io/*=demo:80"
ingress.networking.k8s.io/demo created


[ec2-user@k8s ~]$ kubectl get ingress 
NAME   CLASS   HOSTS         ADDRESS   PORTS   AGE
demo   nginx   www.demo.io             80      42s

[ec2-user@k8s ~]$ kubectl get ingress 
NAME   CLASS   HOSTS         ADDRESS           PORTS   AGE
demo   nginx   www.demo.io   192.168.191.200   80      27m

On the local Windows machine, add 192.168.191.200 www.demo.io to C:/Windows/System32/drivers/etc/hosts, then open http://www.demo.io/ in a browser. If all goes well you will see:

It works!
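
If you would rather not touch the hosts file, curl can fake the DNS resolution from any machine that can reach 192.168.191.200:

curl --resolve www.demo.io:80:192.168.191.200 http://www.demo.io/
# or simply send the Host header
curl -H "Host: www.demo.io" http://192.168.191.200/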

Afterword

Tracing the network calls shows it is really just nginx at work:

[ec2-user@k8s ~]$ kubectl logs --tail=2 pod/demo-764c97f6fd-q5xts
10.244.2.79 - - [28/Mar/2022:09:52:36 +0000] "GET / HTTP/1.1" 200 45
10.244.2.79 - - [28/Mar/2022:09:52:36 +0000] "GET /favicon.ico HTTP/1.1" 404 196

[ec2-user@k8s ~]$ kubectl get pods -n ingress-nginx -o wide 
NAME                                        READY   STATUS      RESTARTS   AGE    IP            NODE      NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-b9hkn        0/1     Completed   0          157m   10.244.2.78   worker1   <none>           <none>
ingress-nginx-admission-patch-xmnbr         0/1     Completed   1          157m   10.244.2.77   worker1   <none>           <none>
ingress-nginx-controller-755447bb4d-lfrwk   1/1     Running     0          157m   10.244.2.79   worker1   <none>           <none>

# 192.168.191.1 is the address of the VMware virtual NIC
[ec2-user@k8s ~]$ kubectl logs --tail=2 pod/ingress-nginx-controller-755447bb4d-lfrwk -n ingress-nginx 
192.168.191.1 - - [28/Mar/2022:09:52:36 +0000] "GET / HTTP/1.1" 200 45 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.82 Safari/537.36" 566 0.000 [default-demo-80] [] 10.244.2.80:80 45 0.000 200 6d52ef8349eb3101c31c3cc6377b982b
192.168.191.1 - - [28/Mar/2022:09:52:36 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "http://www.demo.io/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.82 Safari/537.36" 506 0.001 [default-demo-80] [] 10.244.2.80:80 196 0.000 404 fefe172a57273977cdcd1455bcf322ac


## web
Request URL: http://www.demo.io/
Request Method: GET
Status Code: 200 OK
Remote Address: 192.168.191.200:80
Referrer Policy: strict-origin-when-cross-origin

Look at the MetalLB logs to see how the IP was assigned:

# https://metallb.universe.tf/concepts/layer2/

[ec2-user@k8s ~]$ kubectl get pods -n metallb-system -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
controller-57fd9c5bb-kc5zt   1/1     Running   0          3h34m   10.244.2.76       worker1   <none>           <none>
speaker-8pg4v                1/1     Running   0          3h34m   192.168.191.132   worker1   <none>           <none>
speaker-95bs8                1/1     Running   0          3h34m   192.168.191.131   k8s       <none>           <none>

[ec2-user@k8s ~]$ kubectl logs pod/controller-57fd9c5bb-kc5zt -n metallb-system
{"caller":"level.go:63","event":"ipAllocated","ip":["192.168.191.200"],"level":"info","msg":"IP address assigned by controller","service":"ingress-nginx/ingress-nginx-controller","ts":"2022-03-28T07:16:22.675599527Z"}
{"caller":"level.go:63","event":"serviceUpdated","level":"info","msg":"updated service object","service":"ingress-nginx/ingress-nginx-controller","ts":"2022-03-28T07:1

[ec2-user@k8s ~]$ kubectl logs speaker-8pg4v -n metallb-system 
{"caller":"level.go:63","event":"serviceAnnounced","ips":["192.168.191.200"],"level":"info","msg":"service has IP, announcing","pool":"default","protocol":"layer2","service":"ingress-nginx/ingress-nginx-controller","ts":"2022-03-28T07:16:42.775467559Z"}

[ec2-user@k8s ~]$ ping 192.168.191.200
PING 192.168.191.200 (192.168.191.200) 56(84) bytes of data.
^C
--- 192.168.191.200 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2055ms

[ec2-user@k8s ~]$ arp 
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.191.200          ether   00:0c:29:d5:4f:0f   C                     eth0

# this MAC belongs to node worker1
[ec2-user@worker1 ~]$ ip a | grep -i -C 10 '00:0c:29:d5:4f:0f' 
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:d5:4f:0f brd ff:ff:ff:ff:ff:ff
    inet 192.168.191.132/24 brd 192.168.191.255 scope global dynamic eth0
       valid_lft 1714sec preferred_lft 1714sec

# In layer 2 mode, all traffic for a service IP goes to one node. From there, kube-proxy spreads the traffic to all the service’s pods.

[ec2-user@k8s ~]$ kubectl get pods -A -o wide | grep 192.168.191.132
kube-system            kube-flannel-ds-q4qkt                        1/1     Running     8 (2d10h ago)   11d     192.168.191.132   worker1   <none>           <none>
kube-system            kube-proxy-pd77m                             1/1     Running     6 (3d2h ago)    11d     192.168.191.132   worker1   <none>           <none>
metallb-system         speaker-8pg4v                                1/1     Running     0               5h31m   192.168.191.132   worker1   <none>           <none>

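To see kube-proxy doing that spreading on worker1, you can inspect the NAT rules it programs for the service IP (a quick check, assuming kube-proxy is running in its default iptables mode):

[ec2-user@worker1 ~]$ sudo iptables -t nat -S | grep 192.168.191.200
# expect KUBE-* chains that match the external IP and DNAT the traffic
# to the ingress-nginx-controller endpoint(s)
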
Examples:

Verification

The docs say a LoadBalancer service is layered on top of a NodePort (a NodePort gets created automatically), which was a bit surprising. A quick verification shows that it is indeed the case!

[ec2-user@k8s ~]$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.107.221.243   192.168.191.200   80:31443/TCP,443:30099/TCP   34m

[ec2-user@k8s ~]$ kubectl describe service ingress-nginx-controller --namespace=ingress-nginx
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.1.2
                          helm.sh/chart=ingress-nginx-4.0.18
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.221.243
IPs:                      10.107.221.243
LoadBalancer Ingress:     192.168.191.200
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31443/TCP
Endpoints:                10.244.2.79:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30099/TCP
Endpoints:                10.244.2.79:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31942
Events:
  Type    Reason        Age   From                Message
  ----    ------        ----  ----                -------
  Normal  IPAllocated   38m   metallb-controller  Assigned IP ["192.168.191.200"]
  Normal  nodeAssigned  38m   metallb-speaker     announcing from node "worker1"


[ec2-user@k8s ~]$ netstat -anp | grep 31443 
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp        0      0 0.0.0.0:31443           0.0.0.0:*               LISTEN      -       

[ec2-user@worker1 ~]$ netstat -anp | grep 31443
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp        0      0 0.0.0.0:31443           0.0.0.0:*               LISTEN      -                   

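A quick way to confirm the NodePort really serves the same ingress is to curl it directly with the Host header set (IP and port are taken from the outputs above). Note that since External Traffic Policy is Local, only the node running the controller pod (worker1) will actually forward traffic on this port, even though kube-proxy opens the listener on every node:

[ec2-user@k8s ~]$ curl -H 'Host: www.demo.io' http://192.168.191.132:31443/
# expect the same It works! page from the demo httpd pod
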
Other references

# create a self-signed certificate
openssl genrsa 2048 > k8s-dashboard-private.key
openssl req -new -x509 -nodes -sha1 -days 3650 -extensions v3_ca -key k8s-dashboard-private.key > k8s-dashboard.crt
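
The generated key and certificate would then typically be loaded into the cluster as a TLS secret, for example for the dashboard (the secret name and namespace below are placeholders; use whatever your dashboard deployment expects):

kubectl create secret tls kubernetes-dashboard-certs \
  --cert=k8s-dashboard.crt --key=k8s-dashboard-private.key \
  -n kubernetes-dashboard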

–END