Winse Blog

Every stop along the way is scenery; all the bustle strives for the best; all the toil is for tomorrow. What is there to fear.

K8s Ingress

Ingress is a centralized application gateway for the cluster: an automated reverse-proxy component for Kubernetes (functionally comparable to nginx).

Ingress involves several related concepts: the LoadBalancer, the Ingress controller, and Ingress configuration objects. The controller publishes the current services to the outside through a LoadBalancer or NodePort, and at the same time watches Ingress objects and applies their changes to its own configuration in real time (it regenerates the nginx.conf file and reloads nginx).
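As a concrete sketch of what one of those Ingress configuration objects looks like (the hostname `demo.example.com` and the backend service `demo` are placeholders, not from the setup below):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx        # handled by the ingress-nginx controller
  rules:
  - host: demo.example.com       # requests carrying this Host header...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo           # ...get proxied to this Service
            port:
              number: 80
```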

This post only walks through a hands-on hello-world introduction.

Getting started

References

Version compatibility:

* https://github.com/kubernetes/ingress-nginx/#support-versions-table

Download the images

The images are hosted on gcr.io, so pull them on a remote machine first and bring them back:

# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/cloud/deploy.yaml 
[ec2-user@k8s ~]$ vi ingress-nginx-controller-v1.1.2.yaml

[ec2-user@k8s ~]$ grep image: ingress-nginx-controller-v1.1.2.yaml | sed 's/image: //' | sort -u | xargs echo 
k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660

[root@izt4nhcmmx33bjwcsdmf8oz ~]# docker pull k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c 
k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c: Pulling from ingress-nginx/controller
a0d0a0d46f8b: Pull complete 
3aae86482564: Pull complete 
c0d03781abb3: Pull complete 
0297e2ef8f7f: Pull complete 
866a68ce3c13: Pull complete 
1c2a7ca65b54: Pull complete 
41fd2de30e46: Pull complete 
637f10464e4d: Pull complete 
998064a16da4: Pull complete 
e63d23220e8c: Pull complete 
8128610547fb: Pull complete 
ae07a1a7f038: Pull complete 
ceb23c4cb607: Pull complete 
Digest: sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
Status: Downloaded newer image for k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c

[root@izt4nhcmmx33bjwcsdmf8oz ~]# docker pull k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 
k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660: Pulling from ingress-nginx/kube-webhook-certgen
ec52731e9273: Pull complete 
b90aa28117d4: Pull complete 
Digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
Status: Downloaded newer image for k8s.gcr.io/ingress-nginx/kube-webhook-certgen@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660

[root@izt4nhcmmx33bjwcsdmf8oz ~]# docker save k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660 -o ingress-nginx-v1.1.2.tar
[root@izt4nhcmmx33bjwcsdmf8oz ~]# gzip ingress-nginx-v1.1.2.tar 

[root@izt4nhcmmx33bjwcsdmf8oz ~]# ll -h ingress-nginx-v1.1.2.tar.gz 
-rw------- 1 root root 116M Mar 24 23:49 ingress-nginx-v1.1.2.tar.gz

Load the images on the local Linux servers


[ec2-user@k8s ~]$ docker load -i k8s.gcr.io-ingress-nginx-v1.1.2.tar.gz 
c0d270ab7e0d: Loading layer [==================================================>]  3.697MB/3.697MB
ce7a3c1169b6: Loading layer [==================================================>]  45.38MB/45.38MB
e2eb06d8af82: Loading layer [==================================================>]  5.865MB/5.865MB
ab1476f3fdd9: Loading layer [==================================================>]  120.9MB/120.9MB
ad20729656ef: Loading layer [==================================================>]  4.096kB/4.096kB
0d5022138006: Loading layer [==================================================>]  38.09MB/38.09MB
8f757e3fe5e4: Loading layer [==================================================>]  21.42MB/21.42MB
d2bc6b915bc9: Loading layer [==================================================>]  4.019MB/4.019MB
bbeb6784ed45: Loading layer [==================================================>]  313.9kB/313.9kB
0c411e83ee78: Loading layer [==================================================>]  6.141MB/6.141MB
9c2d86dc137f: Loading layer [==================================================>]  38.45MB/38.45MB
7797e5b3a760: Loading layer [==================================================>]  2.754MB/2.754MB
98ef19df5514: Loading layer [==================================================>]  4.096kB/4.096kB
4cde87c7ecaf: Loading layer [==================================================>]  51.75MB/51.75MB
11536690d74a: Loading layer [==================================================>]  3.584kB/3.584kB
Loaded image ID: sha256:c41e9fcadf5a291120de706b7dfa1af598b9f2ed5138b6dcb9f79a68aad0ef4c
Loaded image ID: sha256:7e5c1cecb086f36c6ef4b319a60853020820997f3600c3687e8ba6139e83674d

[ec2-user@k8s ~]$ cat k8s.gcr.io-ingress-nginx-v1.1.2.tar.gz | ssh worker1 docker load 

[ec2-user@k8s ~]$ 
docker tag 7e5c1cecb086 k8s.gcr.io/ingress-nginx/controller:v1.1.2
docker tag c41e9fcadf5a k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1


Note: the manifest references the images by digest, but after loading and re-tagging the local tags no longer carry those digests, so edit the yaml and delete the trailing @sha256:xxx from each image reference:
vi ingress-nginx-controller-v1.1.2.yaml
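Instead of hand-editing, the digest suffixes can be stripped with one sed pass; a sketch, shown here against a sample line (the in-place variant uses the same filename as above):

```shell
# drop the trailing @sha256:<64 hex chars> from every image reference;
# for the real manifest:  sed -i 's/@sha256:[0-9a-f]\{64\}//g' ingress-nginx-controller-v1.1.2.yaml
echo 'image: k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c' \
  | sed 's/@sha256:[0-9a-f]\{64\}//g'
# → image: k8s.gcr.io/ingress-nginx/controller:v1.1.2
```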

Create the services

[ec2-user@k8s ~]$ kubectl apply -f ingress-nginx-controller-v1.1.2.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

[ec2-user@k8s ~]$ kubectl wait --namespace ingress-nginx \
   --for=condition=ready pod \
   --selector=app.kubernetes.io/component=controller \
   --timeout=120s
pod/ingress-nginx-controller-755447bb4d-rnxvl condition met


# 状态参考:
# https://kubernetes.io/zh/docs/tasks/access-application-cluster/ingress-minikube/
[ec2-user@k8s ~]$ kubectl get pods --namespace=ingress-nginx

[ec2-user@k8s ~]$ kubectl get all -n ingress-nginx
NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-hbt9d        0/1     Completed   0          2m51s
pod/ingress-nginx-admission-patch-j8qfh         0/1     Completed   1          2m51s
pod/ingress-nginx-controller-755447bb4d-rnxvl   1/1     Running     0          2m51s

NAME                                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             LoadBalancer   10.104.8.155    <pending>     80:31031/TCP,443:31845/TCP   2m51s
service/ingress-nginx-controller-admission   ClusterIP      10.108.67.255   <none>        443/TCP                      2m51s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           2m51s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-755447bb4d   1         1         1       2m51s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           3s         2m51s
job.batch/ingress-nginx-admission-patch    1/1           3s         2m51s

Note that the ingress-nginx-controller service's EXTERNAL-IP stays <pending>: this locally built cluster has no load balancer provisioned, so there is no mechanism for it to obtain an external IP.

Local testing

[ec2-user@k8s ~]$ kubectl create deployment demo --image=httpd --port=80
deployment.apps/demo created
[ec2-user@k8s ~]$ kubectl expose deployment demo
service/demo exposed

[ec2-user@k8s ~]$ kubectl create ingress demo-localhost --class=nginx \
   --rule=demo.localdev.me/*=demo:80
ingress.networking.k8s.io/demo-localhost created

[ec2-user@k8s ~]$ kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
Handling connection for 8080

[ec2-user@k8s ~]$ curl http://demo.localdev.me:8080/
<html><body><h1>It works!</h1></body></html>

Clean up

kubectl delete deployment demo 
kubectl delete service demo 
kubectl delete ingress demo-localhost

Integration (publishing)

cloud

Configure your cloud provider's load balancer.

baremetal: nodeport

This applies to Kubernetes clusters deployed on bare-metal servers, as well as "raw" virtual machines where Kubernetes was installed manually on a generic Linux distribution.

For a quick test you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the 30000-32767 range.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.2/deploy/static/provider/baremetal/deploy.yaml

Note: you can also use ports such as 80 and 443 by changing the configuration, but this is not recommended.
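If you do go that route, the relevant fields are the service's nodePort values; an illustrative fragment only, assuming the apiserver's --service-node-port-range was extended to cover these ports (selector and other fields omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
    nodePort: 80      # needs --service-node-port-range to include 80
  - name: https
    port: 443
    targetPort: https
    nodePort: 443     # same for 443
```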

baremetal: hostNetwork

With hostNetwork, the ingress-nginx controller pods use the host's network directly. This is slightly more flexible than a NodePort service, since you can choose and manage the ports yourself.
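A sketch of the deployment fields involved (the names follow the manifest deployed above; treat this as an illustrative patch, not the full spec):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      hostNetwork: true                     # pod binds 80/443 directly on the node
      dnsPolicy: ClusterFirstWithHostNet    # keep cluster DNS resolution working with hostNetwork
```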

A pure software solution: MetalLB

It has two features that work together to provide this service: address allocation, and external announcement.

After MetalLB has assigned an external IP address to a service, it needs to make the network beyond the cluster aware that the IP “lives” in the cluster. MetalLB uses standard routing protocols to achieve this: ARP, NDP, or BGP.

Installation

kube-proxy must be configured with strictARP set to true. MetalLB has to announce addresses on the local network, so ARP (a standard routing protocol) needs to be enabled.

kubectl edit configmap -n kube-system kube-proxy

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

Or apply the change in one batched step:

# see what changes would be made, returns nonzero returncode if different
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system

# actually apply the changes, returns nonzero returncode on errors only
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system

Install

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

Pulling the images is a bit slow, so wait a little longer, then check the status:

[ec2-user@k8s ~]$ kubectl get all -n metallb-system
NAME                             READY   STATUS    RESTARTS   AGE
pod/controller-57fd9c5bb-kc5zt   1/1     Running   0          5m55s
pod/speaker-8pg4v                1/1     Running   0          5m55s
pod/speaker-95bs8                1/1     Running   0          5m55s

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
daemonset.apps/speaker   2         2         2       2            2           kubernetes.io/os=linux   5m55s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   1/1     1            1           5m55s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-57fd9c5bb   1         1         1       5m55s

Configure the address pool

# check the node IPs first and choose a range that avoids them
[ec2-user@k8s ~]$ kubectl get node -o wide
NAME      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
k8s       Ready    control-plane,master   10d   v1.23.5   192.168.191.131   <none>        Amazon Linux 2   4.14.268-205.500.amzn2.x86_64   docker://20.10.7
worker1   Ready    <none>                 10d   v1.23.5   192.168.191.132   <none>        Amazon Linux 2   4.14.268-205.500.amzn2.x86_64   docker://20.10.7

[ec2-user@k8s ~]$ cat metallb-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.191.200-192.168.191.220
[ec2-user@k8s ~]$ kubectl apply -f metallb-config.yml 
configmap/config created

Then go back and reinstall nginx-ingress from the beginning:

[ec2-user@k8s ~]$ kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-b9hkn        0/1     Completed   0          17s
ingress-nginx-admission-patch-xmnbr         0/1     Completed   1          17s
ingress-nginx-controller-755447bb4d-lfrwk   0/1     Running     0          17s
[ec2-user@k8s ~]$ kubectl get pods --namespace=ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-b9hkn        0/1     Completed   0          25s
ingress-nginx-admission-patch-xmnbr         0/1     Completed   1          25s
ingress-nginx-controller-755447bb4d-lfrwk   1/1     Running     0          25s

[ec2-user@k8s ~]$ kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.107.221.243   192.168.191.200   80:31443/TCP,443:30099/TCP   57s
ingress-nginx-controller-admission   ClusterIP      10.105.12.185    <none>            443/TCP                      57s

This time EXTERNAL-IP has a value: one of the IPs from the pool configured above.

Online/integration test

[ec2-user@k8s ~]$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.107.221.243   192.168.191.200   80:31443/TCP,443:30099/TCP   4m


[ec2-user@k8s ~]$ kubectl create deployment demo --image=httpd --port=80
deployment.apps/demo created
[ec2-user@k8s ~]$ kubectl expose deployment demo
service/demo exposed

[ec2-user@k8s ~]$ kubectl create ingress demo --class=nginx \
   --rule="www.demo.io/*=demo:80"
ingress.networking.k8s.io/demo created


[ec2-user@k8s ~]$ kubectl get ingress 
NAME   CLASS   HOSTS         ADDRESS   PORTS   AGE
demo   nginx   www.demo.io             80      42s

[ec2-user@k8s ~]$ kubectl get ingress 
NAME   CLASS   HOSTS         ADDRESS           PORTS   AGE
demo   nginx   www.demo.io   192.168.191.200   80      27m

On the local Windows machine, add 192.168.191.200 www.demo.io to C:/Windows/System32/drivers/etc/hosts, then visit http://www.demo.io/ in a browser. If all goes well you will see:

It works!

Postscript

Let's trace the network calls; it is really just the nginx pattern:

[ec2-user@k8s ~]$ kubectl logs --tail=2 pod/demo-764c97f6fd-q5xts
10.244.2.79 - - [28/Mar/2022:09:52:36 +0000] "GET / HTTP/1.1" 200 45
10.244.2.79 - - [28/Mar/2022:09:52:36 +0000] "GET /favicon.ico HTTP/1.1" 404 196

[ec2-user@k8s ~]$ kubectl get pods -n ingress-nginx -o wide 
NAME                                        READY   STATUS      RESTARTS   AGE    IP            NODE      NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-b9hkn        0/1     Completed   0          157m   10.244.2.78   worker1   <none>           <none>
ingress-nginx-admission-patch-xmnbr         0/1     Completed   1          157m   10.244.2.77   worker1   <none>           <none>
ingress-nginx-controller-755447bb4d-lfrwk   1/1     Running     0          157m   10.244.2.79   worker1   <none>           <none>

# 192.168.191.1 is the address of the VMware virtual NIC
[ec2-user@k8s ~]$ kubectl logs --tail=2 pod/ingress-nginx-controller-755447bb4d-lfrwk -n ingress-nginx 
192.168.191.1 - - [28/Mar/2022:09:52:36 +0000] "GET / HTTP/1.1" 200 45 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.82 Safari/537.36" 566 0.000 [default-demo-80] [] 10.244.2.80:80 45 0.000 200 6d52ef8349eb3101c31c3cc6377b982b
192.168.191.1 - - [28/Mar/2022:09:52:36 +0000] "GET /favicon.ico HTTP/1.1" 404 196 "http://www.demo.io/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.82 Safari/537.36" 506 0.001 [default-demo-80] [] 10.244.2.80:80 196 0.000 404 fefe172a57273977cdcd1455bcf322ac


## web
Request URL: http://www.demo.io/
Request Method: GET
Status Code: 200 OK
Remote Address: 192.168.191.200:80
Referrer Policy: strict-origin-when-cross-origin

Now look at MetalLB's logs to see how the IP was allocated:

# https://metallb.universe.tf/concepts/layer2/

[ec2-user@k8s ~]$ kubectl get pods -n metallb-system -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
controller-57fd9c5bb-kc5zt   1/1     Running   0          3h34m   10.244.2.76       worker1   <none>           <none>
speaker-8pg4v                1/1     Running   0          3h34m   192.168.191.132   worker1   <none>           <none>
speaker-95bs8                1/1     Running   0          3h34m   192.168.191.131   k8s       <none>           <none>

[ec2-user@k8s ~]$ kubectl logs pod/controller-57fd9c5bb-kc5zt -n metallb-system
{"caller":"level.go:63","event":"ipAllocated","ip":["192.168.191.200"],"level":"info","msg":"IP address assigned by controller","service":"ingress-nginx/ingress-nginx-controller","ts":"2022-03-28T07:16:22.675599527Z"}
{"caller":"level.go:63","event":"serviceUpdated","level":"info","msg":"updated service object","service":"ingress-nginx/ingress-nginx-controller","ts":"2022-03-28T07:1

[ec2-user@k8s ~]$ kubectl logs speaker-8pg4v -n metallb-system 
{"caller":"level.go:63","event":"serviceAnnounced","ips":["192.168.191.200"],"level":"info","msg":"service has IP, announcing","pool":"default","protocol":"layer2","service":"ingress-nginx/ingress-nginx-controller","ts":"2022-03-28T07:16:42.775467559Z"}

[ec2-user@k8s ~]$ ping 192.168.191.200
PING 192.168.191.200 (192.168.191.200) 56(84) bytes of data.
^C
--- 192.168.191.200 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2055ms

[ec2-user@k8s ~]$ arp 
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.191.200          ether   00:0c:29:d5:4f:0f   C                     eth0

# the worker1 node's MAC address
[ec2-user@worker1 ~]$ ip a | grep -i -C 10 '00:0c:29:d5:4f:0f' 
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:d5:4f:0f brd ff:ff:ff:ff:ff:ff
    inet 192.168.191.132/24 brd 192.168.191.255 scope global dynamic eth0
       valid_lft 1714sec preferred_lft 1714sec

# In layer 2 mode, all traffic for a service IP goes to one node. From there, kube-proxy spreads the traffic to all the service’s pods.

[ec2-user@k8s ~]$ kubectl get pods -A -o wide | grep 192.168.191.132
kube-system            kube-flannel-ds-q4qkt                        1/1     Running     8 (2d10h ago)   11d     192.168.191.132   worker1   <none>           <none>
kube-system            kube-proxy-pd77m                             1/1     Running     6 (3d2h ago)    11d     192.168.191.132   worker1   <none>           <none>
metallb-system         speaker-8pg4v                                1/1     Running     0               5h31m   192.168.191.132   worker1   <none>           <none>


Verification

The docs say a LoadBalancer service goes through a NodePort (a NodePort gets created for it), which was a bit surprising. Verified below: it is indeed so!

[ec2-user@k8s ~]$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.107.221.243   192.168.191.200   80:31443/TCP,443:30099/TCP   34m

[ec2-user@k8s ~]$ kubectl describe service ingress-nginx-controller --namespace=ingress-nginx
Name:                     ingress-nginx-controller
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.1.2
                          helm.sh/chart=ingress-nginx-4.0.18
Annotations:              <none>
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.221.243
IPs:                      10.107.221.243
LoadBalancer Ingress:     192.168.191.200
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31443/TCP
Endpoints:                10.244.2.79:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  30099/TCP
Endpoints:                10.244.2.79:443
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31942
Events:
  Type    Reason        Age   From                Message
  ----    ------        ----  ----                -------
  Normal  IPAllocated   38m   metallb-controller  Assigned IP ["192.168.191.200"]
  Normal  nodeAssigned  38m   metallb-speaker     announcing from node "worker1"


[ec2-user@k8s ~]$ netstat -anp | grep 31443 
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp        0      0 0.0.0.0:31443           0.0.0.0:*               LISTEN      -       

[ec2-user@worker1 ~]$ netstat -anp | grep 31443
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp        0      0 0.0.0.0:31443           0.0.0.0:*               LISTEN      -                   

Other references

# create a certificate
openssl genrsa 2048 > k8s-dashboard-private.key
openssl req -new -x509 -nodes -sha1 -days 3650 -extensions v3_ca -key k8s-dashboard-private.key > k8s-dashboard.crt
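The generated key/cert pair could then be stored in the cluster as a TLS secret for an Ingress to reference; a sketch under stated assumptions (the secret name k8s-dashboard-tls and the CN dashboard.demo.io are made up for illustration):

```shell
# self-signed certificate for illustration; the CN is a placeholder hostname
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 3650 \
  -keyout k8s-dashboard-private.key -out k8s-dashboard.crt \
  -subj "/CN=dashboard.demo.io"

# hypothetical: load it as a TLS secret that an Ingress's tls section can use
# kubectl create secret tls k8s-dashboard-tls \
#   --key k8s-dashboard-private.key --cert k8s-dashboard.crt

# sanity-check the subject of the generated certificate
openssl x509 -in k8s-dashboard.crt -noout -subject
```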

–END
