Note [2022-03-25]: if you plan to use AWS later, running Amazon's own operating system is the convenient choice; the aws CLI and related tooling come preinstalled.
This article installs and deploys everything on Amazon Linux 2, which behaves much like CentOS 7.3.
The k8s dependencies that require access to Google were already downloaded in the previous article and are used directly here. The dependency packages can be downloaded from Baidu Netdisk:
链接:https://pan.baidu.com/s/1P3ABqKGt1JhNkg-9yB22yQ
提取码:k7af
Installing the amazon-linux-2 operating system
Download the VMware image amzn2-vmware_esx-2.0.20220218.3-x86_64.xfs.gpt.ova and the initialization configuration Seed.iso from there.
In short: the OVA is usable as-is, and much of what the documentation covers is for customizing system initialization. user-data creates users and modifies file contents; meta-data configures the hostname and network IP settings. For local development and testing, the bundled Seed.iso is enough; log in as ec2-user:amazon.
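For reference, a rough sketch of how a custom seed ISO could be built, following the pattern in the AWS on-premises documentation; the hostname and password values below are placeholders, and the bundled Seed.iso already covers this walkthrough:
$ mkdir seedconfig && cd seedconfig
$ cat > meta-data <<'EOF'
local-hostname: amazonlinux.onprem
EOF
$ cat > user-data <<'EOF'
#cloud-config
# set a password for the default user (placeholder value)
chpasswd:
  list: |
    ec2-user:amazon
EOF
# the volume id must be "cidata" for cloud-init to find the ISO
$ genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data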
Double-click the .ova file to import it and create a virtual machine.
Change the network adapter to NAT mode;
add a CD/DVD device and select the Seed.iso image file;
after booting and logging in, enable password login for sshd.
$ sudo ifup eth0
$ ip a
$ sudo vi /etc/ssh/sshd_config
# change the default "#PasswordAuthentication no" to "PasswordAuthentication yes"
$ sudo service sshd reload
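If you prefer a non-interactive edit, a sed one-liner does the same, assuming the stock config still carries the commented default:
$ sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
$ sudo service sshd reload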
Installing Docker
k8s needs a container runtime, so Docker is installed first.
Amazon Linux 2 ships its own Docker repository, and following the Docker website's CentOS instructions leaves some dependencies unresolvable; install it the way the official AWS documentation describes instead.
Pitfall
I first tried the Docker website's CentOS procedure, but the yum repo variables did not resolve correctly; after substituting $releasever, the required dependency versions still could not be found.
## https://docs.docker.com/engine/install/centos/
[ec2-user@amazonlinux ~]$ cat /etc/issue
\S
Kernel \r on an \m
[ec2-user@amazonlinux ~]$ yum-debug-dump
Loaded plugins: langpacks, priorities, update-motd
Output written to: /home/ec2-user/yum_debug_dump-amazonlinux.onprem-2022-03-17_02:16:37.txt.gz
[ec2-user@amazonlinux ~]$ less /home/ec2-user/yum_debug_dump-amazonlinux.onprem-2022-03-17_02:16:37.txt.gz
[ec2-user@amazonlinux ~]$
## $releasever is the distribution release version; on CentOS it can be checked with rpm -qi centos-release
## $basearch is the hardware architecture (CPU instruction set), shown by the arch command
[ec2-user@amazonlinux ~]$ sudo sed -i 's/$releasever/7/g' /etc/yum.repos.d/docker-ce.repo
## still missing dependencies
Installing Docker properly
## https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html
[ec2-user@amazonlinux ~]$ sudo yum update -y
[ec2-user@amazonlinux ~]$ sudo amazon-linux-extras install docker
Installing docker
Loaded plugins: langpacks, priorities, update-motd
Cleaning repos: amzn2-core amzn2extra-docker
12 metadata files removed
4 sqlite files removed
0 metadata files removed
Loaded plugins: langpacks, priorities, update-motd
amzn2-core | 3.7 kB 00:00:00
amzn2extra-docker | 3.0 kB 00:00:00
(1/5): amzn2-core/2/x86_64/group_gz | 2.5 kB 00:00:00
(2/5): amzn2extra-docker/2/x86_64/updateinfo | 5.9 kB 00:00:00
(3/5): amzn2-core/2/x86_64/updateinfo | 452 kB 00:00:01
(4/5): amzn2extra-docker/2/x86_64/primary_db | 86 kB 00:00:00
(5/5): amzn2-core/2/x86_64/primary_db | 60 MB 00:01:42
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 0:20.10.7-5.amzn2 will be installed
--> Processing Dependency: runc >= 1.0.0 for package: docker-20.10.7-5.amzn2.x86_64
--> Processing Dependency: libcgroup >= 0.40.rc1-5.15 for package: docker-20.10.7-5.amzn2.x86_64
--> Processing Dependency: containerd >= 1.3.2 for package: docker-20.10.7-5.amzn2.x86_64
--> Processing Dependency: pigz for package: docker-20.10.7-5.amzn2.x86_64
--> Running transaction check
---> Package containerd.x86_64 0:1.4.6-8.amzn2 will be installed
---> Package libcgroup.x86_64 0:0.41-21.amzn2 will be installed
---> Package pigz.x86_64 0:2.3.4-1.amzn2.0.1 will be installed
---> Package runc.x86_64 0:1.0.0-2.amzn2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================================================================================================================
Package Arch Version Repository Size
=========================================================================================================================================================================
Installing:
docker x86_64 20.10.7-5.amzn2 amzn2extra-docker 42 M
Installing for dependencies:
containerd x86_64 1.4.6-8.amzn2 amzn2extra-docker 24 M
libcgroup x86_64 0.41-21.amzn2 amzn2-core 66 k
pigz x86_64 2.3.4-1.amzn2.0.1 amzn2-core 81 k
runc x86_64 1.0.0-2.amzn2 amzn2extra-docker 3.3 M
Transaction Summary
=========================================================================================================================================================================
Install 1 Package (+4 Dependent packages)
Total download size: 69 M
Installed size: 285 M
Is this ok [y/d/N]: y
Downloading packages:
(1/5): pigz-2.3.4-1.amzn2.0.1.x86_64.rpm | 81 kB 00:00:00
(2/5): libcgroup-0.41-21.amzn2.x86_64.rpm | 66 kB 00:00:00
(3/5): containerd-1.4.6-8.amzn2.x86_64.rpm | 24 MB 00:01:14
(4/5): runc-1.0.0-2.amzn2.x86_64.rpm | 3.3 MB 00:00:10
(5/5): docker-20.10.7-5.amzn2.x86_64.rpm | 42 MB 00:01:50
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 641 kB/s | 69 MB 00:01:50
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : runc-1.0.0-2.amzn2.x86_64 1/5
Installing : containerd-1.4.6-8.amzn2.x86_64 2/5
Installing : libcgroup-0.41-21.amzn2.x86_64 3/5
Installing : pigz-2.3.4-1.amzn2.0.1.x86_64 4/5
Installing : docker-20.10.7-5.amzn2.x86_64 5/5
Verifying : docker-20.10.7-5.amzn2.x86_64 1/5
Verifying : containerd-1.4.6-8.amzn2.x86_64 2/5
Verifying : runc-1.0.0-2.amzn2.x86_64 3/5
Verifying : pigz-2.3.4-1.amzn2.0.1.x86_64 4/5
Verifying : libcgroup-0.41-21.amzn2.x86_64 5/5
Installed:
docker.x86_64 0:20.10.7-5.amzn2
Dependency Installed:
containerd.x86_64 0:1.4.6-8.amzn2 libcgroup.x86_64 0:0.41-21.amzn2 pigz.x86_64 0:2.3.4-1.amzn2.0.1 runc.x86_64 0:1.0.0-2.amzn2
Complete!
0 ansible2 available \
[ =2.4.2 =2.4.6 =2.8 =stable ]
2 httpd_modules available [ =1.0 =stable ]
3 memcached1.5 available \
[ =1.5.1 =1.5.16 =1.5.17 ]
5 postgresql9.6 available \
[ =9.6.6 =9.6.8 =stable ]
6 postgresql10 available [ =10 =stable ]
9 R3.4 available [ =3.4.3 =stable ]
10 rust1 available \
[ =1.22.1 =1.26.0 =1.26.1 =1.27.2 =1.31.0 =1.38.0
=stable ]
11 vim available [ =8.0 =stable ]
18 libreoffice available \
[ =5.0.6.2_15 =5.3.6.1 =stable ]
19 gimp available [ =2.8.22 ]
20 docker=latest enabled \
[ =17.12.1 =18.03.1 =18.06.1 =18.09.9 =stable ]
21 mate-desktop1.x available \
[ =1.19.0 =1.20.0 =stable ]
22 GraphicsMagick1.3 available \
[ =1.3.29 =1.3.32 =1.3.34 =stable ]
23 tomcat8.5 available \
[ =8.5.31 =8.5.32 =8.5.38 =8.5.40 =8.5.42 =8.5.50
=stable ]
24 epel available [ =7.11 =stable ]
25 testing available [ =1.0 =stable ]
26 ecs available [ =stable ]
27 corretto8 available \
[ =1.8.0_192 =1.8.0_202 =1.8.0_212 =1.8.0_222 =1.8.0_232
=1.8.0_242 =stable ]
28 firecracker available [ =0.11 =stable ]
29 golang1.11 available \
[ =1.11.3 =1.11.11 =1.11.13 =stable ]
30 squid4 available [ =4 =stable ]
32 lustre2.10 available \
[ =2.10.5 =2.10.8 =stable ]
33 java-openjdk11 available [ =11 =stable ]
34 lynis available [ =stable ]
35 kernel-ng available [ =stable ]
36 BCC available [ =0.x =stable ]
37 mono available [ =5.x =stable ]
38 nginx1 available [ =stable ]
39 ruby2.6 available [ =2.6 =stable ]
40 mock available [ =stable ]
41 postgresql11 available [ =11 =stable ]
42 php7.4 available [ =stable ]
43 livepatch available [ =stable ]
44 python3.8 available [ =stable ]
45 haproxy2 available [ =stable ]
46 collectd available [ =stable ]
47 aws-nitro-enclaves-cli available [ =stable ]
48 R4 available [ =stable ]
49 kernel-5.4 available [ =stable ]
50 selinux-ng available [ =stable ]
51 php8.0 available [ =stable ]
52 tomcat9 available [ =stable ]
53 unbound1.13 available [ =stable ]
54 mariadb10.5 available [ =stable ]
55 kernel-5.10 available [ =stable ]
56 redis6 available [ =stable ]
57 ruby3.0 available [ =stable ]
58 postgresql12 available [ =stable ]
59 postgresql13 available [ =stable ]
60 mock2 available [ =stable ]
61 dnsmasq2.85 available [ =stable ]
[ec2-user@amazonlinux ~]$
[ec2-user@amazonlinux ~]$ sudo service docker start
Redirecting to /bin/systemctl start docker.service
[ec2-user@amazonlinux ~]$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[ec2-user@amazonlinux ~]$ sudo usermod -a -G docker ec2-user
[ec2-user@amazonlinux ~]$ docker info
Client:
Context: default
Debug Mode: false
Server:
ERROR: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info": dial unix /var/run/docker.sock: connect: permission denied
errors pretty printing info
[ec2-user@amazonlinux ~]$ exit
The permission error appears because the new docker group membership only takes effect on a fresh login session. Log out and reconnect:
[ec2-user@amazonlinux ~]$ docker info
Client:
Context: default
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.7
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
runc version: 84113eef6fc27af1b01b3181f31bbaf708715301
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.268-205.500.amzn2.x86_64
Operating System: Amazon Linux 2
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.828GiB
Name: amazonlinux.onprem
ID: GENW:47BV:UJR2:247P:CPFE:PHSO:RA6Z:H4RK:HYEE:LXN3:XDIZ:SI6Q
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
[ec2-user@amazonlinux ~]$ sudo yum install telnet -y
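As a small convenience, instead of a full re-login you can also start a subshell with the new group already applied:
[ec2-user@amazonlinux ~]$ newgrp docker
[ec2-user@amazonlinux ~]$ docker info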
Preparation
Some basic concepts are worth reviewing first:
* Kubernetes in Action notes: deploying the first application
## Ensure MAC addresses and product_uuid are unique on every node
[ec2-user@amazonlinux ~]$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:a4:a6:fc brd ff:ff:ff:ff:ff:ff
[ec2-user@amazonlinux ~]$ ifconfig -a
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.191.131 netmask 255.255.255.0 broadcast 192.168.191.255
inet6 fe80::20c:29ff:fea4:a6fc prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:a4:a6:fc txqueuelen 1000 (Ethernet)
RX packets 1737 bytes 288390 (281.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1704 bytes 139101 (135.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 625 bytes 57768 (56.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 625 bytes 57768 (56.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[ec2-user@amazonlinux ~]$ sudo cat /sys/class/dmi/id/product_uuid
564DD81E-DEBE-B06D-CF35-D7E3DDA4A6FC
## Check network adapters
# only one NIC here, skip
## Let iptables see bridged traffic
[ec2-user@amazonlinux ~]$ lsmod | grep br_netfilter
[ec2-user@amazonlinux ~]$
[ec2-user@amazonlinux ~]$ sudo modprobe br_netfilter
[ec2-user@amazonlinux ~]$ lsmod | grep br_netfilter
br_netfilter 24576 0
bridge 172032 1 br_netfilter
[ec2-user@amazonlinux ~]$
[ec2-user@amazonlinux ~]$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
[ec2-user@amazonlinux ~]$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[ec2-user@amazonlinux ~]$ sudo sysctl --system
* Applying /etc/sysctl.d/00-defaults.conf ...
kernel.printk = 8 4 1 7
kernel.panic = 30
net.ipv4.neigh.default.gc_thresh1 = 0
net.ipv6.neigh.default.gc_thresh1 = 0
net.ipv4.neigh.default.gc_thresh2 = 15360
net.ipv6.neigh.default.gc_thresh2 = 15360
net.ipv4.neigh.default.gc_thresh3 = 16384
net.ipv6.neigh.default.gc_thresh3 = 16384
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-amazon.conf ...
kernel.sched_autogroup_enabled = 0
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
## SELINUX
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sudo setenforce 0
## Docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
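To confirm the cgroup driver switch took effect after the restart:
[ec2-user@amazonlinux ~]$ docker info 2>/dev/null | grep -i 'cgroup driver'
 Cgroup Driver: systemd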
Setting the time zone
sudo rm -rf /etc/localtime
sudo ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
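On systemd-based systems the same can be done in one step with timedatectl, if you prefer:
sudo timedatectl set-timezone Asia/Shanghai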
Installing kubeadm on the master node and loading the Docker images
## Change the hostname
[ec2-user@amazonlinux ~]$ sudo hostnamectl --static set-hostname k8s
[ec2-user@amazonlinux ~]$ sudo hostname k8s
[ec2-user@k8s ~]$ rz
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
Transferring ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm...
100% 9253 KB 9253 KB/sec 00:00:01 0 Errors
Transferring d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm...
100% 21041 KB 21041 KB/sec 00:00:01 0 Errors
Transferring 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm...
100% 7228 KB 7228 KB/sec 00:00:01 0 Errors
Transferring db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm...
100% 19030 KB 19030 KB/sec 00:00:01 0 Errors
Transferring 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm...
100% 9689 KB 9689 KB/sec 00:00:01 0 Errors
[ec2-user@k8s ~]$ sudo yum install -y *.rpm
Loaded plugins: langpacks, priorities, update-motd
Examining 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm: cri-tools-1.23.0-0.x86_64
Marking 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm to be installed
Examining 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm: kubectl-1.23.5-0.x86_64
Marking 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm to be installed
Examining ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm: kubeadm-1.23.5-0.x86_64
Marking ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm to be installed
Examining d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm: kubelet-1.23.5-0.x86_64
Marking d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm to be installed
Examining db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm: kubernetes-cni-0.8.7-0.x86_64
Marking db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package cri-tools.x86_64 0:1.23.0-0 will be installed
---> Package kubeadm.x86_64 0:1.23.5-0 will be installed
---> Package kubectl.x86_64 0:1.23.5-0 will be installed
---> Package kubelet.x86_64 0:1.23.5-0 will be installed
--> Processing Dependency: conntrack for package: kubelet-1.23.5-0.x86_64
--> Processing Dependency: ebtables for package: kubelet-1.23.5-0.x86_64
--> Processing Dependency: socat for package: kubelet-1.23.5-0.x86_64
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
---> Package ebtables.x86_64 0:2.0.10-16.amzn2.0.1 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.amzn2.0.1 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
======================================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================================
Installing:
cri-tools x86_64 1.23.0-0 /4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64 34 M
kubeadm x86_64 1.23.5-0 /ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64 43 M
kubectl x86_64 1.23.5-0 /96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64 44 M
kubelet x86_64 1.23.5-0 /d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64 119 M
kubernetes-cni x86_64 0.8.7-0 /db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64
55 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-5.amzn2.2 amzn2-core 186 k
ebtables x86_64 2.0.10-16.amzn2.0.1 amzn2-core 122 k
libnetfilter_cthelper x86_64 1.0.0-10.amzn2.1 amzn2-core 18 k
libnetfilter_cttimeout x86_64 1.0.0-6.amzn2.1 amzn2-core 18 k
libnetfilter_queue x86_64 1.0.2-2.amzn2.0.2 amzn2-core 24 k
socat x86_64 1.7.3.2-2.amzn2.0.1 amzn2-core 291 k
Transaction Summary
======================================================================================================================================================
Install 5 Packages (+6 Dependent packages)
Total size: 296 M
Total download size: 658 k
Installed size: 298 M
Downloading packages:
(1/6): ebtables-2.0.10-16.amzn2.0.1.x86_64.rpm | 122 kB 00:00:10
(2/6): libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64.rpm | 18 kB 00:00:00
(3/6): conntrack-tools-1.4.4-5.amzn2.2.x86_64.rpm | 186 kB 00:00:10
(4/6): libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64.rpm | 18 kB 00:00:00
(5/6): libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64.rpm | 24 kB 00:00:00
(6/6): socat-1.7.3.2-2.amzn2.0.1.x86_64.rpm | 291 kB 00:00:00
------------------------------------------------------------------------------------------------------------------------------------------------------
Total 45 kB/s | 658 kB 00:00:14
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 1/11
Installing : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 2/11
Installing : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 3/11
Installing : conntrack-tools-1.4.4-5.amzn2.2.x86_64 4/11
Installing : ebtables-2.0.10-16.amzn2.0.1.x86_64 5/11
Installing : cri-tools-1.23.0-0.x86_64 6/11
Installing : socat-1.7.3.2-2.amzn2.0.1.x86_64 7/11
Installing : kubelet-1.23.5-0.x86_64 8/11
Installing : kubernetes-cni-0.8.7-0.x86_64 9/11
Installing : kubectl-1.23.5-0.x86_64 10/11
Installing : kubeadm-1.23.5-0.x86_64 11/11
Verifying : kubernetes-cni-0.8.7-0.x86_64 1/11
Verifying : kubectl-1.23.5-0.x86_64 2/11
Verifying : socat-1.7.3.2-2.amzn2.0.1.x86_64 3/11
Verifying : cri-tools-1.23.0-0.x86_64 4/11
Verifying : ebtables-2.0.10-16.amzn2.0.1.x86_64 5/11
Verifying : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 6/11
Verifying : conntrack-tools-1.4.4-5.amzn2.2.x86_64 7/11
Verifying : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 8/11
Verifying : kubeadm-1.23.5-0.x86_64 9/11
Verifying : kubelet-1.23.5-0.x86_64 10/11
Verifying : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 11/11
Installed:
cri-tools.x86_64 0:1.23.0-0 kubeadm.x86_64 0:1.23.5-0 kubectl.x86_64 0:1.23.5-0 kubelet.x86_64 0:1.23.5-0 kubernetes-cni.x86_64 0:0.8.7-0
Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 ebtables.x86_64 0:2.0.10-16.amzn2.0.1 libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1
libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 socat.x86_64 0:1.7.3.2-2.amzn2.0.1
Complete!
[ec2-user@k8s ~]$
[ec2-user@k8s ~]$ sudo yum install ebtables ethtool
Load the Docker images:
[ec2-user@k8s ~]$ docker load -i k8s.tar.gz
194a408e97d8: Loading layer [==================================================>] 68.57MB/68.57MB
2b8347a02bc5: Loading layer [==================================================>] 1.509MB/1.509MB
618b3e11ccba: Loading layer [==================================================>] 44.17MB/44.17MB
Loaded image: k8s.gcr.io/kube-proxy:v1.23.5
5b1fa8e3e100: Loading layer [==================================================>] 3.697MB/3.697MB
83e216f0eb98: Loading layer [==================================================>] 1.509MB/1.509MB
a70573edad24: Loading layer [==================================================>] 121.1MB/121.1MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.23.5
46576c5a6a97: Loading layer [==================================================>] 49.63MB/49.63MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.23.5
6d75f23be3dd: Loading layer [==================================================>] 3.697MB/3.697MB
b6e8c573c18d: Loading layer [==================================================>] 2.257MB/2.257MB
d80003ff5706: Loading layer [==================================================>] 267MB/267MB
664dd6f2834b: Loading layer [==================================================>] 2.137MB/2.137MB
62ae031121b1: Loading layer [==================================================>] 18.86MB/18.86MB
Loaded image: k8s.gcr.io/etcd:3.5.1-0
256bc5c338a6: Loading layer [==================================================>] 336.4kB/336.4kB
80e4a2390030: Loading layer [==================================================>] 46.62MB/46.62MB
Loaded image: k8s.gcr.io/coredns/coredns:v1.8.6
1021ef88c797: Loading layer [==================================================>] 684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.6
50098fdfecae: Loading layer [==================================================>] 131.3MB/131.3MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.23.5
[ec2-user@k8s ~]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.23.5 3fc1d62d6587 15 hours ago 135MB
k8s.gcr.io/kube-proxy v1.23.5 3c53fa8541f9 15 hours ago 112MB
k8s.gcr.io/kube-controller-manager v1.23.5 b0c9e5e4dbb1 15 hours ago 125MB
k8s.gcr.io/kube-scheduler v1.23.5 884d49d6d8c9 15 hours ago 53.5MB
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 4 months ago 293MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 5 months ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 6 months ago 683kB
[ec2-user@k8s ~]$
Starting services on the master node (control-plane node)
[ec2-user@k8s ~]$ sudo su -
[root@k8s ~]# kubeadm init
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Hostname]: hostname "k8s" could not be reached
[WARNING Hostname]: hostname "k8s": lookup k8s on 192.168.191.2:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.191.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s localhost] and IPs [192.168.191.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s localhost] and IPs [192.168.191.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 87.001525 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: sj6fff.bpak7gkd3hnyzcm5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.191.131:6443 --token sj6fff.bpak7gkd3hnyzcm5 \
--discovery-token-ca-cert-hash sha256:8e15649afc0771e80cce7f1dfdbb0933f4fdbd45ea1f9e03be1f3b78449a6d3c
[root@k8s ~]#
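The join token printed above expires after 24 hours by default; to add a node later, generate a fresh join command on the control-plane node:
[root@k8s ~]# kubeadm token create --print-join-command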
Configure kubectl for the regular user:
[ec2-user@k8s ~]$ mkdir -p $HOME/.kube
[ec2-user@k8s ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[ec2-user@k8s ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[ec2-user@k8s ~]$ which kubectl
/usr/bin/kubectl
[ec2-user@k8s ~]$
[ec2-user@k8s ~]$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.191.131:6443
CoreDNS is running at https://192.168.191.131:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[ec2-user@k8s ~]$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s NotReady control-plane,master 14m v1.23.5 192.168.191.131 <none> Amazon Linux 2 4.14.268-205.500.amzn2.x86_64 docker://20.10.7
[ec2-user@k8s ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-64897985d-pcxpd 0/1 Pending 0 14m <none> <none> <none> <none>
kube-system coredns-64897985d-pfsj6 0/1 Pending 0 14m <none> <none> <none> <none>
kube-system etcd-k8s 1/1 Running 0 14m 192.168.191.131 k8s <none> <none>
kube-system kube-apiserver-k8s 1/1 Running 0 14m 192.168.191.131 k8s <none> <none>
kube-system kube-controller-manager-k8s 1/1 Running 0 14m 192.168.191.131 k8s <none> <none>
kube-system kube-proxy-qj6lw 1/1 Running 0 14m 192.168.191.131 k8s <none> <none>
kube-system kube-scheduler-k8s 1/1 Running 0 14m 192.168.191.131 k8s <none> <none>
[ec2-user@k8s ~]$
If you want Pods to be scheduled on the control-plane node as well, for example in a single-machine development Kubernetes cluster, run:
[root@k8s ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s untainted
[root@k8s ~]#
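On newer Kubernetes releases the master taint was renamed; if the command above reports nothing untainted, the control-plane variant is worth trying:
[root@k8s ~]# kubectl taint nodes --all node-role.kubernetes.io/control-plane-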
Joining worker nodes
Install Docker and apply the base system configuration first, following the steps above; then install kubeadm and load the gcr Docker images.
[ec2-user@amazonlinux ~]$ ll
total 285480
-rw-r--r-- 1 ec2-user ec2-user 7401938 Mar 17 15:22 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm
-rw-r--r-- 1 ec2-user ec2-user 9921646 Mar 17 15:22 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm
-rw-r--r-- 1 ec2-user ec2-user 9475514 Mar 17 15:22 ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm
-rw-r--r-- 1 ec2-user ec2-user 21546750 Mar 17 15:22 d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm
-rw-r--r-- 1 ec2-user ec2-user 19487362 Mar 17 15:22 db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
-rw-r--r-- 1 ec2-user ec2-user 224482960 Mar 17 15:22 k8s.tar.gz
[ec2-user@amazonlinux ~]$ sudo yum install *.rpm
Loaded plugins: langpacks, priorities, update-motd
Examining 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm: cri-tools-1.23.0-0.x86_64
Marking 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm to be installed
Examining 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm: kubectl-1.23.5-0.x86_64
Marking 96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64.rpm to be installed
Examining ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm: kubeadm-1.23.5-0.x86_64
Marking ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64.rpm to be installed
Examining d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm: kubelet-1.23.5-0.x86_64
Marking d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64.rpm to be installed
Examining db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm: kubernetes-cni-0.8.7-0.x86_64
Marking db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package cri-tools.x86_64 0:1.23.0-0 will be installed
---> Package kubeadm.x86_64 0:1.23.5-0 will be installed
---> Package kubectl.x86_64 0:1.23.5-0 will be installed
---> Package kubelet.x86_64 0:1.23.5-0 will be installed
--> Processing Dependency: conntrack for package: kubelet-1.23.5-0.x86_64
amzn2-core | 3.7 kB 00:00:00
--> Processing Dependency: ebtables for package: kubelet-1.23.5-0.x86_64
--> Processing Dependency: socat for package: kubelet-1.23.5-0.x86_64
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64
---> Package ebtables.x86_64 0:2.0.10-16.amzn2.0.1 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.amzn2.0.1 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===================================================================================================================================================================
Package Arch Version Repository Size
===================================================================================================================================================================
Installing:
cri-tools x86_64 1.23.0-0 /4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64 34 M
kubeadm x86_64 1.23.5-0 /ab0e12925be5251baf5dd3b31493663d46e4a7b458c7a5b6b717f4ae87a81bd4-kubeadm-1.23.5-0.x86_64 43 M
kubectl x86_64 1.23.5-0 /96b208380314a19ded917eaf125ed748f5e2b28a3cc8707a10a76a9f5b61c0df-kubectl-1.23.5-0.x86_64 44 M
kubelet x86_64 1.23.5-0 /d39aa6eb38a6a8326b7e88c622107327dfd02ac8aaae32eceb856643a2ad9981-kubelet-1.23.5-0.x86_64 119 M
kubernetes-cni x86_64 0.8.7-0 /db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64 55 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-5.amzn2.2 amzn2-core 186 k
ebtables x86_64 2.0.10-16.amzn2.0.1 amzn2-core 122 k
libnetfilter_cthelper x86_64 1.0.0-10.amzn2.1 amzn2-core 18 k
libnetfilter_cttimeout x86_64 1.0.0-6.amzn2.1 amzn2-core 18 k
libnetfilter_queue x86_64 1.0.2-2.amzn2.0.2 amzn2-core 24 k
socat x86_64 1.7.3.2-2.amzn2.0.1 amzn2-core 291 k
Transaction Summary
===================================================================================================================================================================
Install 5 Packages (+6 Dependent packages)
Total size: 296 M
Total download size: 658 k
Installed size: 298 M
Is this ok [y/d/N]: y
Downloading packages:
(1/6): ebtables-2.0.10-16.amzn2.0.1.x86_64.rpm | 122 kB 00:00:00
(2/6): libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64.rpm | 18 kB 00:00:00
(3/6): libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64.rpm | 18 kB 00:00:00
(4/6): conntrack-tools-1.4.4-5.amzn2.2.x86_64.rpm | 186 kB 00:00:00
(5/6): libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64.rpm | 24 kB 00:00:00
(6/6): socat-1.7.3.2-2.amzn2.0.1.x86_64.rpm | 291 kB 00:00:00
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 1.0 MB/s | 658 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 1/11
Installing : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 2/11
Installing : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 3/11
Installing : conntrack-tools-1.4.4-5.amzn2.2.x86_64 4/11
Installing : ebtables-2.0.10-16.amzn2.0.1.x86_64 5/11
Installing : cri-tools-1.23.0-0.x86_64 6/11
Installing : socat-1.7.3.2-2.amzn2.0.1.x86_64 7/11
Installing : kubelet-1.23.5-0.x86_64 8/11
Installing : kubernetes-cni-0.8.7-0.x86_64 9/11
Installing : kubectl-1.23.5-0.x86_64 10/11
Installing : kubeadm-1.23.5-0.x86_64 11/11
Verifying : kubernetes-cni-0.8.7-0.x86_64 1/11
Verifying : kubectl-1.23.5-0.x86_64 2/11
Verifying : socat-1.7.3.2-2.amzn2.0.1.x86_64 3/11
Verifying : cri-tools-1.23.0-0.x86_64 4/11
Verifying : ebtables-2.0.10-16.amzn2.0.1.x86_64 5/11
Verifying : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 6/11
Verifying : conntrack-tools-1.4.4-5.amzn2.2.x86_64 7/11
Verifying : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 8/11
Verifying : kubeadm-1.23.5-0.x86_64 9/11
Verifying : kubelet-1.23.5-0.x86_64 10/11
Verifying : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 11/11
Installed:
cri-tools.x86_64 0:1.23.0-0 kubeadm.x86_64 0:1.23.5-0 kubectl.x86_64 0:1.23.5-0 kubelet.x86_64 0:1.23.5-0 kubernetes-cni.x86_64 0:0.8.7-0
Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 ebtables.x86_64 0:2.0.10-16.amzn2.0.1 libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1
libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 socat.x86_64 0:1.7.3.2-2.amzn2.0.1
Complete!
[ec2-user@amazonlinux ~]$ sudo yum install ebtables ethtool
Loaded plugins: langpacks, priorities, update-motd
Package ebtables-2.0.10-16.amzn2.0.1.x86_64 already installed and latest version
Package 2:ethtool-4.8-10.amzn2.x86_64 already installed and latest version
Nothing to do
Load the Docker images
[ec2-user@amazonlinux ~]$ docker load -i k8s.tar.gz
194a408e97d8: Loading layer [==================================================>] 68.57MB/68.57MB
2b8347a02bc5: Loading layer [==================================================>] 1.509MB/1.509MB
618b3e11ccba: Loading layer [==================================================>] 44.17MB/44.17MB
Loaded image: k8s.gcr.io/kube-proxy:v1.23.5
5b1fa8e3e100: Loading layer [==================================================>] 3.697MB/3.697MB
83e216f0eb98: Loading layer [==================================================>] 1.509MB/1.509MB
a70573edad24: Loading layer [==================================================>] 121.1MB/121.1MB
Loaded image: k8s.gcr.io/kube-controller-manager:v1.23.5
46576c5a6a97: Loading layer [==================================================>] 49.63MB/49.63MB
Loaded image: k8s.gcr.io/kube-scheduler:v1.23.5
6d75f23be3dd: Loading layer [==================================================>] 3.697MB/3.697MB
b6e8c573c18d: Loading layer [==================================================>] 2.257MB/2.257MB
d80003ff5706: Loading layer [==================================================>] 267MB/267MB
664dd6f2834b: Loading layer [==================================================>] 2.137MB/2.137MB
62ae031121b1: Loading layer [==================================================>] 18.86MB/18.86MB
Loaded image: k8s.gcr.io/etcd:3.5.1-0
256bc5c338a6: Loading layer [==================================================>] 336.4kB/336.4kB
80e4a2390030: Loading layer [==================================================>] 46.62MB/46.62MB
Loaded image: k8s.gcr.io/coredns/coredns:v1.8.6
1021ef88c797: Loading layer [==================================================>] 684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.6
50098fdfecae: Loading layer [==================================================>] 131.3MB/131.3MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.23.5
[ec2-user@amazonlinux ~]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.23.5 3fc1d62d6587 15 hours ago 135MB
k8s.gcr.io/kube-proxy v1.23.5 3c53fa8541f9 15 hours ago 112MB
k8s.gcr.io/kube-controller-manager v1.23.5 b0c9e5e4dbb1 15 hours ago 125MB
k8s.gcr.io/kube-scheduler v1.23.5 884d49d6d8c9 15 hours ago 53.5MB
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 4 months ago 293MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 5 months ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 6 months ago 683kB
[ec2-user@amazonlinux ~]$
A small hiccup along the way: the hostname had not been changed before joining, so the node registered under the default name, which reads poorly; after renaming it, the cluster no longer recognized the node, and the join had to be redone from scratch.
## Join the node
[ec2-user@amazonlinux ~]$ sudo su -
[root@amazonlinux ~]# kubeadm join 192.168.191.131:6443 --token sj6fff.bpak7gkd3hnyzcm5 \
--discovery-token-ca-cert-hash sha256:8e15649afc0771e80cce7f1dfdbb0933f4fdbd45ea1f9e03be1f3b78449a6d3c
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Hostname]: hostname "amazonlinux.onprem" could not be reached
[WARNING Hostname]: hostname "amazonlinux.onprem": lookup amazonlinux.onprem on 192.168.191.2:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@amazonlinux ~]#
## Change the hostname:
[root@amazonlinux ~]# hostnamectl --static set-hostname worker1
[root@amazonlinux ~]# hostname worker1
[root@amazonlinux ~]# exit
## After renaming and rebooting, the node stopped working; rejoin
[ec2-user@worker1 ~]$ sudo su -
Last login: Thu Mar 17 15:24:32 CST 2022 on pts/0
[root@worker1 ~]# kubeadm join 192.168.191.131:6443 --token sj6fff.bpak7gkd3hnyzcm5 \
--discovery-token-ca-cert-hash sha256:8e15649afc0771e80cce7f1dfdbb0933f4fdbd45ea1f9e03be1f3b78449a6d3c
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Hostname]: hostname "worker1" could not be reached
[WARNING Hostname]: hostname "worker1": lookup worker1 on 192.168.191.2:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
## Rejoining directly fails; reset first, then join again
[root@worker1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0317 17:42:03.050519 6887 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@worker1 ~]# kubeadm join 192.168.191.131:6443 --token sj6fff.bpak7gkd3hnyzcm5 \
--discovery-token-ca-cert-hash sha256:8e15649afc0771e80cce7f1dfdbb0933f4fdbd45ea1f9e03be1f3b78449a6d3c
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Hostname]: hostname "worker1" could not be reached
[WARNING Hostname]: hostname "worker1": lookup worker1 on 192.168.191.2:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
After joining, check the status:
[ec2-user@k8s ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s Ready control-plane,master 166m v1.23.5
worker1 NotReady <none> 30s v1.23.5
Installing the network add-on
The resource on GitHub also seems undownloadable from here; open it in a browser and copy the contents into a newly created local file.
# For Kubernetes v1.17+:
# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[ec2-user@k8s ~]$ vi kube-flannel.yml
[ec2-user@k8s ~]$ kubectl apply -f kube-flannel.yml
Because kubeadm init was run without any network parameter, the flannel plugin stays stuck in CrashLoopBackOff, and its log reports that no pod cidr was assigned. Check the logs:
## https://cloud-atlas.readthedocs.io/zh_CN/latest/kubernetes/debug/k8s_crashloopbackoff.html
# check the logs
[ec2-user@k8s ~]$ kubectl describe -n kube-system pod kube-flannel-ds-sbx86
Warning BackOff 2m5s (x69 over 16m) kubelet Back-off restarting failed container
[ec2-user@k8s ~]$ kubectl logs -n kube-system kube-flannel-ds-sbx86
E0317 09:19:27.915383 1 main.go:317] Error registering network: failed to acquire lease: node "k8s" pod cidr not assigned
W0317 09:19:27.915664 1 reflector.go:436] github.com/flannel-io/flannel/subnet/kube/kube.go:379: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
## the flannel logs can also be inspected through docker:
[root@test4 profile]# docker ps -l
[root@test4 profile]# docker logs f7be3ebe77fd
## https://www.talkwithtrend.com/Article/251751
# kube-controller-manager never assigned an IP range to the newly joined node, because no CIDR was specified at init
# append the last two lines below; they must match net-conf.json/Network in kube-flannel.yml:
[ec2-user@k8s ~]$ sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
- command:
- kube-controller-manager
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --use-service-account-credentials=true
- --allocate-node-cidrs=true
- --cluster-cidr=10.244.0.0/16
# restart the service
## https://stackoverflow.com/questions/51375940/kubernetes-master-node-is-down-after-restarting-host-machine
[ec2-user@k8s ~]$ sudo systemctl restart kubelet
# redeploy
# delete the flannel resources, then apply them again
[ec2-user@k8s ~]$ kubectl delete -f kube-flannel.yml
[ec2-user@k8s ~]$ kubectl apply -f kube-flannel.yml
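The root cause could have been avoided at cluster creation time by passing flannel's default network to kubeadm init, for example:
# only at initial cluster setup; 10.244.0.0/16 matches net-conf.json/Network in kube-flannel.yml
kubeadm init --pod-network-cidr=10.244.0.0/16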
Note: alternatively, the node configuration can be edited temporarily to assign the podCIDR by hand, as sketched below. It is not described in detail here; see: http://www.hyhblog.cn/2021/02/21/k8s-flannel-pod-cidr-not-assigned/ .
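A rough sketch of that manual route (the node name and CIDR here are illustrative and must not collide with ranges already allocated; podCIDR can only be set while it is still empty):
[ec2-user@k8s ~]$ kubectl patch node worker1 -p '{"spec":{"podCIDR":"10.244.1.0/24"}}'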
Check the pod status again: with the network add-on in place, the DNS pods are running as well.
[ec2-user@k8s ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-64897985d-4d5rx 1/1 Running 0 2m30s 10.244.0.3 k8s <none> <none>
kube-system coredns-64897985d-m9p9q 1/1 Running 0 2m30s 10.244.0.2 k8s <none> <none>
kube-system etcd-k8s 1/1 Running 0 166m 192.168.191.131 k8s <none> <none>
kube-system kube-apiserver-k8s 1/1 Running 0 166m 192.168.191.131 k8s <none> <none>
kube-system kube-controller-manager-k8s 1/1 Running 0 12m 192.168.191.131 k8s <none> <none>
kube-system kube-flannel-ds-q4qkt 1/1 Running 0 60s 192.168.191.132 worker1 <none> <none>
kube-system kube-flannel-ds-ttcwt 1/1 Running 0 6m1s 192.168.191.131 k8s <none> <none>
kube-system kube-proxy-pd77m 1/1 Running 0 60s 192.168.191.132 worker1 <none> <none>
kube-system kube-proxy-qj6lw 1/1 Running 0 166m 192.168.191.131 k8s <none> <none>
kube-system kube-scheduler-k8s 1/1 Running 0 166m 192.168.191.131 k8s <none> <none>
Installing the dashboard
## https://github.com/kubernetes/dashboard#kubernetes-dashboard
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml
[ec2-user@k8s ~]$ kubectl apply -f dashboard-v2.5.1.yml
[ec2-user@k8s ~]$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-64897985d-4d5rx 1/1 Running 0 36m 10.244.0.3 k8s <none> <none>
kube-system coredns-64897985d-m9p9q 1/1 Running 0 36m 10.244.0.2 k8s <none> <none>
kube-system etcd-k8s 1/1 Running 0 3h20m 192.168.191.131 k8s <none> <none>
kube-system kube-apiserver-k8s 1/1 Running 0 3h19m 192.168.191.131 k8s <none> <none>
kube-system kube-controller-manager-k8s 1/1 Running 0 46m 192.168.191.131 k8s <none> <none>
kube-system kube-flannel-ds-q4qkt 1/1 Running 0 34m 192.168.191.132 worker1 <none> <none>
kube-system kube-flannel-ds-ttcwt 1/1 Running 0 39m 192.168.191.131 k8s <none> <none>
kube-system kube-proxy-pd77m 1/1 Running 0 34m 192.168.191.132 worker1 <none> <none>
kube-system kube-proxy-qj6lw 1/1 Running 0 3h20m 192.168.191.131 k8s <none> <none>
kube-system kube-scheduler-k8s 1/1 Running 0 3h20m 192.168.191.131 k8s <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-q87wv 1/1 Running 0 59s 10.244.2.3 worker1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-fb8648fd9-vprpt 1/1 Running 0 59s 10.244.2.2 worker1 <none> <none>
The installation itself is quick and easy; getting access is the fiddly part. Since everything is deployed inside a VM and kubectl also runs there, accessing the dashboard through kubectl proxy is refused unless the request comes from localhost or over https.
I tried binding the NIC's IP, but without https the dashboard still could not be reached:
[ec2-user@k8s ~]$ kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
Starting to serve on [::]:8001
Dashboard access method 1: kubectl proxy plus an SSH local port forward, sending local port 8001 requests to localhost:8001 on the remote server.
## https://github.com/kubernetes/dashboard#access
[ec2-user@k8s ~]$ kubectl proxy
Starting to serve on 127.0.0.1:8001
In the SecureCRT SSH session, open Session Options, then Connection - Port Forwarding, and add a Local Port Forwarding entry with 8001 in both the Local and Remote port fields.
Reconnect, and visiting http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login brings up the dashboard page.
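With plain OpenSSH instead of SecureCRT, the equivalent local forward would be (host IP as used in this setup):
$ ssh -L 8001:localhost:8001 ec2-user@192.168.191.131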
Method 2: look up the dashboard service's cluster IP and reach that internal address through an SSH SOCKS5 proxy.
[ec2-user@k8s ~]$ kubectl -n kubernetes-dashboard get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.101.193.109 <none> 443/TCP 6h38m
## access via the service IP (SOCKS5 proxying over the SSH session):
https://10.101.193.109/#/login
https://10.101.193.109/#/pod?namespace=kube-system
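The SOCKS5 proxy itself is just SSH dynamic port forwarding; a minimal sketch (local port 1080 is an arbitrary choice):
## Open a SOCKS5 proxy on local port 1080 through the k8s master
ssh -D 1080 ec2-user@192.168.191.131
## Then point the browser's SOCKS5 proxy at localhost:1080 and open the
## service IP URLs above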
Method 3: use kubectl port-forward, as often suggested online:
## https://kubernetes.io/zh/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8080:443 --address='0.0.0.0'
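With --address='0.0.0.0' the forward is reachable from outside the VM; the dashboard still speaks TLS on the forwarded port with a self-signed certificate, hence curl needs -k. A quick sanity check from the host, using the master's IP from above:
curl -k https://192.168.191.131:8080/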
With access working, the next step is getting a token to log in. There are likewise two approaches: the brute-force one is to enable skip-login; the other is to fetch or create a token from the cluster.
Login method 1: enable skip-login.
## https://www.cnblogs.com/tylerzhou/p/11117956.html
# Since 1.10.1 the Skip button is hidden by default. The dashboard install has plenty of pitfalls; if you still can't log in after the settings above but want to try the dashboard anyway, re-enable the hidden Skip button to get into the management UI.
containers:
- args:
- --auto-generate-certificates
- --enable-skip-login # <-- add this line
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
Remember to re-apply the deployment after changing the configuration.
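A minimal sketch of doing that in place, assuming the v2.x layout where the deployment lives in the kubernetes-dashboard namespace (for the v1.10.1 example above it would be kube-system):
## Edit the args in the live deployment, then restart its pods
kubectl -n kubernetes-dashboard edit deployment kubernetes-dashboard
kubectl -n kubernetes-dashboard rollout restart deployment kubernetes-dashboard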
Method 2: fetch a token from the cluster, paste it into the login form, and sign in:
[ec2-user@k8s ~]$ kubectl -n kube-system get secret
# Most of these secrets can be used to access the dashboard; they just belong to accounts with different permissions, and many are too restricted to do anything useful.
[ec2-user@k8s ~]$ kubectl -n kube-system describe secret deployment-controller-token
# Print the token with a single command
[ec2-user@k8s ~]$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/^deployment-controller-token-/{print $1}') | awk '$1=="token:"{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IjQzNllWOFFBYU5qaXdtUmdLelJQSDU5T2FVbGVpREJFZTlMQU12MXFhN1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZXBsb3ltZW50LWNvbnRyb2xsZXItdG9rZW4tejVwbWQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVwbG95bWVudC1jb250cm9sbGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTcwNWJiMzYtMTMyNi00MGY5LWI3ZWUtNzE3ZTAyMTM1NzA2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRlcGxveW1lbnQtY29udHJvbGxlciJ9.Av-RwOQGdEyZn56xmH_siz-7yU07OrhLhfiPqfJRaNJ5DL8wEDIZkxgNMzHrrthTsOJl7Tky3ABo5z3c_4xjgADGSqKqP0rvWtaLSHZFZR16c5S2c08aHdSH7KIAdoCy0muMiKHRw67QRf7zo5bPUyqfCyPY2vcB-pxqYnrTTAw71f34rgIPA-LACc5LIQwv8DT5O-KE1TopYF7lX5hXZIHOGP3sYpmbR7yIzO3MDNRUIfiZutYiQnHwXRQGBwHu1iUVk8Lu69gnqggkjp2cXa4d2ZUpCxrpeLGGdjPv6JPZEFLDhLbiBLF04b7IOdFQO4bH6BbXBNs9e0AGPbvp4Q
[ec2-user@k8s ~]$
Method 3: alternatively, create a service account with full cluster-admin permissions and use its token for the dashboard:
$ kubectl create serviceaccount cluster-admin-dashboard-sa -n kube-system
$ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:cluster-admin-dashboard-sa -n kube-system
# Then use the token of the newly created cluster-admin service account.
$ kubectl get secret | grep cluster-admin-dashboard-sa
cluster-admin-dashboard-sa-token-6xm8l kubernetes.io/service-account-token 3 18m
$ kubectl describe secret cluster-admin-dashboard-sa-token-6xm8l
# Parse the token
$ TOKEN=$(kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | awk '/^cluster-admin-dashboard-sa-token-/{print $1}') | awk '$1=="token:"{print $2}')
$ echo $TOKEN
## -OR-
[ec2-user@k8s ~]$ kubectl describe secret -n kube-system cluster-admin-dashboard-sa
## -OR-
[ec2-user@k8s ~]$ kubectl describe secret -n kube-system | grep deployment -A 12
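Side note: the commands above rely on token secrets that are created automatically for each service account, which evidently holds on this cluster but was dropped in Kubernetes 1.24; on newer clusters the equivalent is to request a token explicitly:
## Kubernetes 1.24+ only; prints a short-lived bearer token for the SA
kubectl -n kube-system create token cluster-admin-dashboard-sa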
When logging in with a token, the session annoyingly times out after a period of inactivity; adjust the token-ttl setting to change this.
##--> Unauthorized (401): You have been logged out because your token has expired.
## https://blog.csdn.net/otoyix/article/details/118758736
# Add one argument line: token-ttl=68400
containers:
- name: kubernetes-dashboard
image: 'kubernetesui/dashboard:v2.0.0-rc5'
args:
- '--auto-generate-certificates'
- '--namespace=kubernetes-dashboard'
- '--token-ttl=68400' # <-- add this line
Install metrics-server
Without metrics-server, the dashboard can't show the CPU/memory usage graphs, and kubectl top has no data to return.
[ec2-user@k8s ~]$ kubectl top nodes
error: Metrics API not available
[ec2-user@k8s ~]$ kubectl top pods -A
error: Metrics API not available
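The errors simply mean the metrics.k8s.io aggregated API isn't registered yet; a quick way to see that directly (a sketch):
## No apiservice entry for metrics.k8s.io until metrics-server is installed
kubectl get apiservices | grep metrics
## Querying the Metrics API raw fails for the same reason
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes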
Installing metrics-server hits two snags: pulling the image and certificate validation:
# Load this image on every host
[ec2-user@k8s ~]$ docker load -i metrics-server-v0.6.1.tar.gz
3dc34f14eb83: Loading layer [==================================================>] 66.43MB/66.43MB
Loaded image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
# kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
[ec2-user@k8s ~]$ vi metrics-server.yml
[ec2-user@k8s ~]$ kubectl apply -f metrics-server.yml
It still won't start: metrics-server has to talk to the kubelets, and their serving certificates don't validate. To get it running for now, skip certificate verification by appending --kubelet-insecure-tls
to the container args, then delete and re-apply the manifest.
## [k8s metrics-server lightweight monitoring](https://www.jianshu.com/p/5fe108d70310)
## https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#cannot-use-the-metrics-server-securely-in-a-kubeadm-cluster
## Certificates: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/#kubelet-serving-certs
## https://github.com/kubernetes-sigs/metrics-server/blob/master/FAQ.md#how-to-run-metrics-server-securely
## https://github.com/kubernetes-sigs/metrics-server/issues/196
## https://cloud.tencent.com/developer/article/1819955
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
- --kubelet-insecure-tls
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
[ec2-user@k8s ~]$ kubectl delete -f metrics-server.yml
[ec2-user@k8s ~]$ kubectl apply -f metrics-server.yml
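Rather than guessing how long to wait, you can watch the rollout (a sketch; metrics-server deploys into kube-system per components.yaml):
kubectl -n kube-system rollout status deployment metrics-server
kubectl -n kube-system get pods -o wide | grep metrics-server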
Once it's ready, check the top commands again.
[ec2-user@k8s ~]$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s 91m 4% 913Mi 23%
worker1 38m 1% 431Mi 11%
[ec2-user@k8s ~]$
[ec2-user@k8s ~]$ kubectl top pods -n kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-64897985d-4d5rx 1m 12Mi
coredns-64897985d-m9p9q 1m 12Mi
etcd-k8s 11m 60Mi
kube-apiserver-k8s 32m 312Mi
kube-controller-manager-k8s 13m 46Mi
kube-flannel-ds-q4qkt 2m 11Mi
kube-flannel-ds-ttcwt 2m 11Mi
kube-proxy-pd77m 7m 16Mi
kube-proxy-qj6lw 2m 16Mi
kube-scheduler-k8s 3m 17Mi
metrics-server-7cf8b65d65-trtcj 33m 11Mi
[ec2-user@k8s ~]$
The dashboard web UI now shows the CPU/memory performance graphs as well.
Hello world
Write a manifest and run an instance to check whether pod networking works across the two machines.
## docker run -it public.ecr.aws/amazonlinux/amazonlinux /bin/bash
[ec2-user@k8s ~]$ cat replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: amazonlinux:2
        command: ["/bin/sh"]
        args: ["-c", "while true; do echo hello; sleep 10;done"]
[ec2-user@k8s ~]$
[ec2-user@k8s ~]$ kubectl apply -f replicaset.yml
replicaset.apps/hello-world created
[ec2-user@k8s ~]$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
hello-world-d2tss 1/1 Running 0 8s 10.244.0.7 k8s <none> <none>
hello-world-h9jxq 1/1 Running 0 8s 10.244.2.12 worker1 <none> <none>
[ec2-user@k8s ~]$
[ec2-user@k8s ~]$ kubectl exec -ti hello-world-d2tss bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.2# cat /etc/hosts
bash-4.2# yum install -y iputils net-tools
bash-4.2# ping hello-world-h9jxq
ping: hello-world-h9jxq: Name or service not known
# Only Services get DNS names; we'll ping a Service's domain name later to test DNS.
bash-4.2# ping 10.244.0.7
PING 10.244.0.7 (10.244.0.7) 56(84) bytes of data.
64 bytes from 10.244.0.7: icmp_seq=1 ttl=255 time=0.012 ms
64 bytes from 10.244.0.7: icmp_seq=2 ttl=255 time=0.021 ms
^C
--- 10.244.0.7 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1007ms
rtt min/avg/max/mdev = 0.012/0.016/0.021/0.006 ms
bash-4.2# ping 10.244.2.12
PING 10.244.2.12 (10.244.2.12) 56(84) bytes of data.
64 bytes from 10.244.2.12: icmp_seq=1 ttl=253 time=0.508 ms
64 bytes from 10.244.2.12: icmp_seq=2 ttl=253 time=0.425 ms
^C
--- 10.244.2.12 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1027ms
rtt min/avg/max/mdev = 0.425/0.466/0.508/0.046 ms
Service domain
Define and start a pod and a Service:
[ec2-user@k8s ~]$ cat pg-db.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: db-op-1
  labels:
    name: postgres
spec:
  hostname: db-op-1
  containers:
  - name: postgres
    image: postgis/postgis:9.6-2.5
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: db-op-1
spec:
  ports:
  - protocol: TCP
    port: 5432
  selector:
    name: postgres
[ec2-user@k8s ~]$ kubectl apply -f pg-db.yml
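Before testing DNS, it's worth confirming the Service's selector actually picked up the pod (a sketch):
## ENDPOINTS should list the pod IP on 5432; empty means the selector
## (name: postgres) didn't match the pod's labels
kubectl get svc,endpoints db-op-1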
Start another postgis container in the default namespace to test access by domain name:
[ec2-user@k8s ~]$ kubectl run busybox --image=postgis/postgis:9.6-2.5 -ti --restart=Never --rm --command -- sh
If you don't see a command prompt, try pressing enter.
# apt update ; apt-get install net-tools iproute2 iputils-ping -y
# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5
# ping db-op-1
PING db-op-1.default.svc.cluster.local (10.107.190.149) 56(84) bytes of data.
# psql -h db-op-1 -U postgres
Password for user postgres:
psql (9.6.24)
Type "help" for help.
postgres=# \q
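The short name resolves thanks to the search domains in resolv.conf above; the fully qualified name works from any namespace. A small check, assuming getent is available in the image (it ships with glibc on Debian-based images):
## Resolve the FQDN explicitly; expect the cluster IP seen in the ping above
# getent hosts db-op-1.default.svc.cluster.local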
–END