Winse Blog


Deploying Kubernetes with kubeadm

The official documentation is mediocre, yet they have no qualms about deleting it. The script-based setup guide (docker-multinode) has already been removed; kubeadm is now the recommended way to install.

This post deploys the cluster by installing on the master through a proxy, caching the RPMs and pulling the Docker images there, then serving a local YUM repository and copying the images to the other worker nodes. The next post will cover how to deploy a fresh k8s cluster when you already have the kubelet/kubeadm RPMs and the k8s Docker images.

Two virtual machines are used for testing:

  • k8s (kube-master): 192.168.191.138
  • worker1: 192.168.191.139

Set the hostname, time zone, and firewall

hostnamectl --static set-hostname k8s 
hostname k8s 

rm -rf /etc/localtime 
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 

systemctl disable firewalld ; service firewalld stop
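
The nodes should also be able to resolve each other by name (the worker later installs RPMs from http://master/k8s). A minimal /etc/hosts sketch for both machines, using the names from this post:

cat >> /etc/hosts <<EOF
192.168.191.138 k8s master
192.168.191.139 worker1
EOF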

Install Docker


tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

yum list docker-engine --showduplicates

yum install docker-engine-1.12.6 docker-engine-selinux-1.12.6 -y
systemctl enable docker ; systemctl start docker

Set up a proxy for the installation

For details, see the earlier post on using Privoxy to turn shadowsocks into an HTTP proxy.

[root@k8s ~]# yum install -y epel-release ; yum install -y python-pip 
[root@k8s ~]# pip install shadowsocks
[root@k8s ~]# vi /etc/shadowsocks.json 
[root@k8s ~]# sslocal -c /etc/shadowsocks.json 
[root@k8s ~]# curl --socks5-hostname 127.0.0.1:1080 www.google.com

[root@k8s ~]# yum install privoxy -y
[root@k8s ~]# vi /etc/privoxy/config 
...
forward-socks5 / 127.0.0.1:1080 .
listen-address 192.168.191.138:8118

[root@k8s ~]# systemctl enable privoxy
[root@k8s ~]# systemctl start privoxy

[root@k8s ~]# curl -x 192.168.191.138:8118 www.google.com

  Once k8s is installed and running, disable the privoxy service again:
  [root@k8s ~]# systemctl disable privoxy.service

Download kubectl (oddly enough, this one can be downloaded directly)

Things move fast; it is already at 1.7.2! https://kubernetes.io/docs/tasks/tools/install-kubectl/

Installing it on the master (the machine you normally operate from) is sufficient.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl

# enable shell autocompletion for kubectl
echo "source <(kubectl completion bash)" >> ~/.bashrc

[root@k8s ~]# kubectl version 
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Install kubelet and kubeadm through the proxy

Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubelet-and-kubeadm

You will install these packages on all of your machines:

  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubeadm: the command to bootstrap the cluster.

These packages must be installed on every machine. We first install both on the master through the proxy, and cache all of the installed RPMs for later use.

  • Configure the Kubernetes YUM repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config 
setenforce 0

yum-config-manager --enable kubernetes

Edit the yum configuration to add the proxy and keep the package cache (so the other machines can install from it):

[root@k8s ~]# vi /etc/yum.conf 
keepcache=1
...
proxy=socks5://127.0.0.1:1080
  • Install and start kubelet:
yum install -y kubelet kubeadm

[root@k8s ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@k8s ~]# 

Initialize the cluster through the proxy

The main task is configuring the proxy so the Docker images can be pulled.

Since Docker itself fetches the images, its configuration has to be changed first.

Reference: https://docs.docker.com/v1.12/engine/admin/systemd/#/http-proxy

  • Configure the proxy and restart docker and kubelet
[root@k8s ~]# systemctl enable docker

[root@k8s ~]# mkdir -p /etc/systemd/system/docker.service.d/
[root@k8s ~]# vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.191.138:8118/" "HTTPS_PROXY=http://192.168.191.138:8118/" "NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,192.168.191.138"
                             
[root@k8s ~]# systemctl daemon-reload
[root@k8s ~]# systemctl restart docker

docker and kubelet use different cgroup drivers, so the configuration needs to be fixed: https://github.com/kubernetes/kubeadm/issues/103

kubelet was started earlier, and it logged the following error:
[root@k8s ~]# journalctl -xeu kubelet
Jul 29 09:11:24 k8s kubelet[48557]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgr

Fix the configuration:
[root@k8s ~]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

[root@k8s ~]# systemctl daemon-reload
[root@k8s ~]# service kubelet restart
Redirecting to /bin/systemctl restart  kubelet.service
  • Initialize with kubeadm

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ (the --kubernetes-version flag can be used to pin the k8s version)
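
For example, pinning the version (a sketch; this also spares kubeadm from looking up the latest stable release online) can be combined with the flannel CIDR used below:

kubeadm init --kubernetes-version=v1.7.2 --pod-network-cidr=10.244.0.0/16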

# Configure the proxy; some of kubeadm's requests presumably need to go through it too
# (a guess based on earlier experience installing multinode on docker with the scripts)

export NO_PROXY="localhost,127.0.0.1,10.0.0.0/8,192.168.191.138"
export https_proxy=http://192.168.191.138:8118/
export http_proxy=http://192.168.191.138:8118/

# Reset first: the proxy settings were changed several times (kubeadm init had failed before),
# and the earlier attempts did not set the pod network CIDR

[root@k8s ~]# kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

# flannel requires the pod network CIDR to be specified (read the docs end to end first to avoid these pitfalls)

[root@k8s ~]# kubeadm init --skip-preflight-checks --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.191.138]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[apiclient] Created API client, waiting for the control plane to become ready  
<-> this step pauses for quite a while: the images have to be downloaded and the containers started
[apiclient] All control plane components are healthy after 293.004469 seconds
[token] Using token: 2af779.b803df0b1effb3d9
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 2af779.b803df0b1effb3d9 192.168.191.138:6443

[root@k8s ~]# 

Commands for monitoring the installation: docker ps, docker images, journalctl -xeu kubelet (or /var/log/messages).

If images are being downloaded and new containers appear, the installation is making progress. Otherwise, check whether your proxy is actually working!
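
A small watch loop (just a sketch) run in a second terminal makes that easy to see: new images or containers appearing means kubeadm is still pulling and starting components.

watch -n 10 'docker images ; echo ; docker ps --format "{{.Image}}  {{.Status}}"'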

After initialization completes, set up the kubeconfig for kubectl. Since this is normally done on the master itself, run the following there:

[root@k8s ~]# mkdir -p $HOME/.kube
[root@k8s ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s ~]# 
[root@k8s ~]# ll ~/.kube/
total 8
drwxr-xr-x. 3 root root   23 Jul 29 21:39 cache
-rw-------. 1 root root 5451 Jul 29 22:57 config

The post “使用Kubeadm安装Kubernetes” describes that author's own installation process and the problems encountered in great detail. I only found it when my installation was nearly done, which felt rather late; finding it earlier would at least have made the process less nerve-racking.

OK, the services are up, but the DNS containers are not running properly yet, because our network add-on has not been installed. The official docs do mention this, but the ordering of these installation steps is baffling.

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Install flannel

Reference: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml

After flannel is running, wait a little longer and kube-dns will come up properly.
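
To watch it settle (a sketch), follow the kube-system pods until kube-dns reports Running:

kubectl get pods --all-namespaces -o wide -w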

Install the dashboard

For now there is only one machine, so the master has to be allowed to run pods as well.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#master-isolation

[root@k8s ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node "k8s" untainted

# https://lukemarsden.github.io/docs/user-guide/ui/
# deploy the dashboard

[root@k8s ~]# kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

[root@k8s ~]# kubectl get pods --all-namespaces    # check how the dashboard pod is doing

[root@k8s ~]# kubectl get services --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.96.0.1       <none>        443/TCP         1h
kube-system   kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   1h
kube-system   kubernetes-dashboard   10.107.103.17   <none>        80/TCP          9m

https://master:6443/ui is not reachable; the dashboard can be accessed directly through its service IP instead: http://10.107.103.17/#!/overview?namespace=kube-system

Or access the UI through a proxy: https://github.com/kubernetes/kubernetes/issues/44275

First run kubectl proxy to start the proxy:

[root@k8s ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001

Then open: http://localhost:8001/ui

All the pods, images, and containers

With the basics all running, it is still quite exciting!! This is the Nth time installing K8s, and every time it is still a roller-coaster ride!

[root@k8s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP                NODE
kube-system   etcd-k8s                                1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kube-apiserver-k8s                      1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kube-controller-manager-k8s             1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kube-dns-2425271678-qwx9f               3/3       Running   0          9h        10.244.0.2        k8s
kube-system   kube-flannel-ds-s5f63                   2/2       Running   0          9h        192.168.191.138   k8s
kube-system   kube-proxy-4pjkg                        1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kube-scheduler-k8s                      1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kubernetes-dashboard-3313488171-xl25m   1/1       Running   0          8h        10.244.0.3        k8s
[root@k8s ~]# docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kubernetes-dashboard-amd64      v1.6.3              691a82db1ecd        35 hours ago        139 MB
gcr.io/google_containers/kube-apiserver-amd64            v1.7.2              4935105a20b1        8 days ago          186.1 MB
gcr.io/google_containers/kube-proxy-amd64                v1.7.2              13a7af96c7e8        8 days ago          114.7 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.7.2              2790e95830f6        8 days ago          138 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.7.2              5db1f9874ae0        8 days ago          77.18 MB
quay.io/coreos/flannel                                   v0.8.0-amd64        9db3bab8c19e        2 weeks ago         50.73 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64           1.14.4              38bac66034a6        4 weeks ago         41.81 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64          1.14.4              a8e00546bcf3        4 weeks ago         49.38 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64     1.14.4              f7f45b9cb733        4 weeks ago         41.41 MB
gcr.io/google_containers/etcd-amd64                      3.0.17              243830dae7dd        5 months ago        168.9 MB
gcr.io/google_containers/pause-amd64                     3.0                 99e59f495ffa        15 months ago       746.9 kB
[root@k8s ~]# docker ps 
CONTAINER ID        IMAGE                                                                                                                            COMMAND                  CREATED             STATUS              PORTS               NAMES
631dc2cab02e        gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:2c4421ed80358a0ee97b44357b6cd6dc09be6ccc27dfe9d50c9bfc39a760e5fe      "/dashboard --insecur"   7 hours ago         Up 7 hours                              k8s_kubernetes-dashboard_kubernetes-dashboard-3313488171-xl25m_kube-system_0e41b8ce-747a-11e7-befb-000c2944b96c_0
8f5e4d044a6e        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 8 hours ago         Up 8 hours                              k8s_POD_kubernetes-dashboard-3313488171-xl25m_kube-system_0e41b8ce-747a-11e7-befb-000c2944b96c_0
65881f9dd2dd        gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:97074c951046e37d3cbb98b82ae85ed15704a290cce66a8314e7f846404edde9           "/sidecar --v=2 --log"   9 hours ago         Up 9 hours                              k8s_sidecar_kube-dns-2425271678-qwx9f_kube-system_ebffa28d-746d-11e7-befb-000c2944b96c_0
994c2ec99663        gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:aeeb994acbc505eabc7415187cd9edb38cbb5364dc1c2fc748154576464b3dc2     "/dnsmasq-nanny -v=2 "   9 hours ago         Up 9 hours                              k8s_dnsmasq_kube-dns-2425271678-qwx9f_kube-system_ebffa28d-746d-11e7-befb-000c2944b96c_0
5b181a0ed809        gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:40790881bbe9ef4ae4ff7fe8b892498eecb7fe6dcc22661402f271e03f7de344          "/kube-dns --domain=c"   9 hours ago         Up 9 hours                              k8s_kubedns_kube-dns-2425271678-qwx9f_kube-system_ebffa28d-746d-11e7-befb-000c2944b96c_0
a0d3f166e992        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-dns-2425271678-qwx9f_kube-system_ebffa28d-746d-11e7-befb-000c2944b96c_0
9cc7d6faf0b0        quay.io/coreos/flannel@sha256:a8116d095a1a2c4e5a47d5fea20ef82bd556bafe15bb2e6aa2c79f8f22f9586f                                   "/bin/sh -c 'set -e -"   9 hours ago         Up 9 hours                              k8s_install-cni_kube-flannel-ds-s5f63_kube-system_7ba88f5a-7470-11e7-befb-000c2944b96c_0
2f41276df8e1        quay.io/coreos/flannel@sha256:a8116d095a1a2c4e5a47d5fea20ef82bd556bafe15bb2e6aa2c79f8f22f9586f                                   "/opt/bin/flanneld --"   9 hours ago         Up 9 hours                              k8s_kube-flannel_kube-flannel-ds-s5f63_kube-system_7ba88f5a-7470-11e7-befb-000c2944b96c_0
bc25b0c70264        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-flannel-ds-s5f63_kube-system_7ba88f5a-7470-11e7-befb-000c2944b96c_0
dc3e5641c273        gcr.io/google_containers/kube-proxy-amd64@sha256:d455480e81d60e0eff3415675278fe3daec6f56c79cd5b33a9b76548d8ab4365                "/usr/local/bin/kube-"   9 hours ago         Up 9 hours                              k8s_kube-proxy_kube-proxy-4pjkg_kube-system_ebee4211-746d-11e7-befb-000c2944b96c_0
6b8b9515f562        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-proxy-4pjkg_kube-system_ebee4211-746d-11e7-befb-000c2944b96c_0
72418ca8e94f        gcr.io/google_containers/kube-apiserver-amd64@sha256:a9ccc205760319696d2ef0641de4478ee90fb0b75fbe6c09b1d64058c8819f97            "kube-apiserver --ser"   9 hours ago         Up 9 hours                              k8s_kube-apiserver_kube-apiserver-k8s_kube-system_b69ae39bcc54d7b75c2e7325359f8f87_0
9c9a3f5d8919        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-apiserver-k8s_kube-system_b69ae39bcc54d7b75c2e7325359f8f87_0
43a1751ff2bb        gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940                      "etcd --listen-client"   9 hours ago         Up 9 hours                              k8s_etcd_etcd-k8s_kube-system_9fb4ea9ba2043e46f75eec93827c4ce3_0
b110fff29f66        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_etcd-k8s_kube-system_9fb4ea9ba2043e46f75eec93827c4ce3_0
66ae85500128        gcr.io/google_containers/kube-scheduler-amd64@sha256:b2e897138449e7a00508dc589b1d4b71e56498a4d949ff30eb07b1e9d665e439            "kube-scheduler --add"   9 hours ago         Up 9 hours                              k8s_kube-scheduler_kube-scheduler-k8s_kube-system_16c371efb8946190c917cd90c2ede8ca_0
d4343be2f2d0        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-scheduler-k8s_kube-system_16c371efb8946190c917cd90c2ede8ca_0
9934cd83f6b3        gcr.io/google_containers/kube-controller-manager-amd64@sha256:2b268ab9017fadb006ee994f48b7222375fe860dc7bd14bf501b98f0ddc2961b   "kube-controller-mana"   9 hours ago         Up 9 hours                              k8s_kube-controller-manager_kube-controller-manager-k8s_kube-system_6b826c4e872a9635472113953c4538f0_0
acc1d7d90180        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-controller-manager-k8s_kube-system_6b826c4e872a9635472113953c4538f0_0
[root@k8s ~]# 

Worker node deployment

Time, hostname, /etc/hosts, firewall, SELinux, passwordless SSH, and installing docker-1.12.6 will not be repeated here; see the condensed sketch below.
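
For completeness, a condensed sketch of that preparation on worker1, simply repeating the commands already shown for the master (assumes the Docker repo from the Install Docker section is configured on the worker too):

hostnamectl --static set-hostname worker1
rm -rf /etc/localtime ; ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
systemctl disable firewalld ; service firewalld stop
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config ; setenforce 0
# add the same /etc/hosts entries as on the master (k8s/master and worker1)
yum install -y docker-engine-1.12.6 docker-engine-selinux-1.12.6
systemctl enable docker ; systemctl start docker

# on the master, for passwordless SSH to the worker:
ssh-keygen ; ssh-copy-id worker1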

Use the yum cache on the master directly, and copy the Docker images over as well:

# the master already has the httpd service installed

[root@k8s html]# ln -s /var/cache/yum/x86_64/7/kubernetes/packages/ k8s 
[root@k8s k8s]# createrepo .          

# copy all of the images to the worker node

[root@k8s ~]# docker save $( echo $( docker images | grep -v REPOSITORY | awk '{print $1}' ) ) | ssh worker1 docker load 

# configure the private yum repository on the worker

[root@worker1 yum.repos.d]# vi k8s.repo
[k8s]
name=Kubernetes
baseurl=http://master/k8s
enabled=1
gpgcheck=0
[root@worker1 yum.repos.d]# yum list | grep k8s 
kubeadm.x86_64                             1.7.2-0                     k8s      
kubectl.x86_64                             1.7.2-0                     k8s      
kubelet.x86_64                             1.7.2-0                     k8s      
kubernetes-cni.x86_64                      0.5.1-0                     k8s      

[root@worker1 yum.repos.d]# yum install -y kubelet kubeadm                          

# change the cgroup driver (the same KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs edit as on the master)

[root@worker1 yum.repos.d]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  
[root@worker1 yum.repos.d]# 
[root@worker1 yum.repos.d]# service docker restart
Redirecting to /bin/systemctl restart  docker.service

[root@worker1 yum.repos.d]# systemctl daemon-reload
[root@worker1 yum.repos.d]# systemctl enable kubelet.service
[root@worker1 yum.repos.d]# service kubelet restart
Redirecting to /bin/systemctl restart  kubelet.service

# join the worker node to the cluster

[root@worker1 yum.repos.d]# kubeadm join --token 2af779.b803df0b1effb3d9 192.168.191.138:6443 --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.191.138:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.191.138:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.191.138:6443"
[discovery] Successfully established connection with API Server "192.168.191.138:6443"
[bootstrap] Detected server version: v1.7.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

[root@k8s ~]# kubectl get nodes
NAME      STATUS    AGE       VERSION
k8s       Ready     10h       v1.7.2
worker1   Ready     57s       v1.7.2

The flannel network component running on the master is a DaemonSet, so a pod is started automatically on every node that joins the cluster; no extra steps are needed.

About reboots:

One benefit of installing from RPMs is that the system manages the services for you:

  • After a worker node reboots, kubelet brings all of its services back up.
  • After the master reboots, it takes a while, because the pods start with a dependency order: kube-dns waits for flannel, and the dashboard waits for kube-dns (a quick check is sketched below).
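
A quick post-reboot check (a sketch):

kubectl get nodes
kubectl get pods --all-namespaces -o wide
journalctl -xeu kubelet | tail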

Pod-to-pod connectivity test

[root@k8s ~]# kubectl run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created
[root@k8s ~]# kubectl get pods
NAME                           READY     STATUS              RESTARTS   AGE
hello-nginx-1507731416-qh3fx   0/1       ContainerCreating   0          8s

# the script starts a separate dockerd configured with a registry mirror, pulls the image there,
# then saves it and loads it into the local docker instance
# https://github.com/winse/docker-hadoop/blob/master/kube-deploy/hadoop/docker-download-mirror.sh

[root@k8s ~]# ./docker-download-mirror.sh nginx 
Using default tag: latest
latest: Pulling from library/nginx

94ed0c431eb5: Pull complete 
9406c100a1c3: Pull complete 
aa74daafd50c: Pull complete 
Digest: sha256:788fa27763db6d69ad3444e8ba72f947df9e7e163bad7c1f5614f8fd27a311c3
Status: Downloaded newer image for nginx:latest
eb78099fbf7f: Loading layer [==================================================>] 58.42 MB/58.42 MB
29f11c413898: Loading layer [==================================================>] 52.74 MB/52.74 MB
af5bd3938f60: Loading layer [==================================================>] 3.584 kB/3.584 kB
Loaded image: nginx:latest

# copy the image to the other worker nodes; running a registry just for a few machines feels too heavyweight

[root@k8s ~]# docker save nginx | ssh worker1 docker load
Loaded image: nginx:latest

# check the result

[root@k8s ~]# kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
hello-nginx-1507731416-qh3fx   1/1       Running   0          1m

# scale out

[root@k8s ~]# kubectl scale --replicas=4 deployment/hello-nginx  
deployment "hello-nginx" scaled
[root@k8s ~]# kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP           NODE
hello-nginx-1507731416-h39f0   1/1       Running   0          34s       10.244.0.6   k8s
hello-nginx-1507731416-mnj3m   1/1       Running   0          34s       10.244.1.3   worker1
hello-nginx-1507731416-nsdr2   1/1       Running   0          34s       10.244.0.7   k8s
hello-nginx-1507731416-qh3fx   1/1       Running   0          5m        10.244.1.2   worker1
[root@k8s ~]# kubectl delete deployment hello-nginx

This container is too bare-bones; it does not even have ping!! Use a more familiar Linux image and run through it again:

kubectl run centos --image=centos:centos6 --command -- vi 
kubectl scale --replicas=4 deployment/centos

[root@k8s ~]# kubectl get pods  -o wide 
NAME                      READY     STATUS    RESTARTS   AGE       IP            NODE
centos-3024873821-4490r   1/1       Running   0          49s       10.244.1.6    worker1
centos-3024873821-k74gn   1/1       Running   0          11s       10.244.0.11   k8s
centos-3024873821-l27xs   1/1       Running   0          11s       10.244.0.10   k8s
centos-3024873821-pbg52   1/1       Running   0          11s       10.244.1.7    worker1

[root@k8s ~]# kubectl exec -ti centos-3024873821-4490r bash
[root@centos-3024873821-4490r /]# yum install -y iputils
[root@centos-3024873821-4490r /]# ping 10.244.0.11 -c 1

All of the IPs above can reach each other, and pinging them from the master node works too.

# commands for inspecting pod status
kubectl -n ${NAMESPACE} describe pod ${POD_NAME}
kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}

Source IP problem

I already ran into this back when deploying Hadoop, so the root cause is known; but this cluster uses CNI (changing dockerd --ip-masq=false only affects docker0).

First, reproduce the source IP problem:

./pod_bash centos-3024873821-t3k3r 

yum install epel-release -y ; yum install nginx -y ;
service nginx start

ifconfig

# after nginx is installed, access it from elsewhere and check the access_log

less /var/log/nginx/access.log 

Adding "ipMasq": false to the cni-conf.json network config in kube-flannel.yml had no effect at first: iptables still showed the MASQUERADE (SNAT) rule for the CNI cbr0 bridge.

Note: after a reboot, everything turned out to be fine. Applying the change to an already-running flannel probably just did not take effect! Changing the property before deploying flannel should be enough. You can skip the rest of this section; the method below is rather clumsy.
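
Roughly what "change it before deploying flannel" looks like (a sketch; the exact ConfigMap layout depends on the revision of kube-flannel.yml you download):

curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# edit the cni-conf.json entry of the flannel ConfigMap and add
#   "ipMasq": false
# to the flannel plugin section, then deploy the edited file:
kubectl apply -f kube-flannel.yml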

A more drastic approach: delete docker0 and replace it with cni0. https://kubernetes.io/docs/getting-started-guides/scratch/#docker

Point Docker at the cni0 bridge (flannel creates the cni0 interface):

# flush the old NAT rules
iptables -t nat -F
ip link set docker0 down
ip link delete docker0

[root@worker1 ~]# cat /usr/lib/systemd/system/docker.service  | grep dockerd
ExecStart=/usr/bin/dockerd --bridge=cni0 --ip-masq=false 

But after a reboot the cni0 device is gone, so docker fails to start! (The "ipMasq": false setting in cni-conf.json does work, but apparently only for a freshly created bridge device.)

> Aug 01 08:36:10 k8s dockerd[943]: time="2017-08-01T08:36:10.017266292+08:00" level=fatal msg="Error starting daemon: Error initializing network controller: Error creating default \"bridge\" network: bridge device with non default name cni0 must be created manually"

ip link add name cni0 type bridge
ip link set dev cni0 mtu 1460
# let flannel assign the bridge's IP address
# ip addr add $NODE_X_BRIDGE_ADDR dev cni0
ip link set dev cni0 up

systemctl restart docker kubelet

Another network deployment option is kubenet + hostroutes: https://jishu.io/kubernetes/deploy-production-ready-kubernetes-cluster-on-aliyun/

DNS

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

kubectl create -f busybox.yaml
kubectl exec -ti busybox -- nslookup kubernetes.default
kubectl exec busybox cat /etc/resolv.conf

DNS problem

When accessing the DNS service from inside a pod on the master node, the reply comes back from the DNS pod's own IP rather than from the Service's cluster IP.

[root@k8s ~]# kubectl describe services kube-dns -n kube-system
Name:                   kube-dns
Namespace:              kube-system
Labels:                 k8s-app=kube-dns
                        kubernetes.io/cluster-service=true
                        kubernetes.io/name=KubeDNS
Annotations:            <none>
Selector:               k8s-app=kube-dns
Type:                   ClusterIP
IP:                     10.96.0.10
Port:                   dns     53/UDP
Endpoints:              10.244.0.30:53
Port:                   dns-tcp 53/TCP
Endpoints:              10.244.0.30:53
Session Affinity:       None
Events:                 <none>

[root@k8s ~]# kubectl exec -ti centos-3024873821-b6d48 -- nslookup kubernetes.default
;; reply from unexpected source: 10.244.0.30#53, expected 10.96.0.10#53
;; reply from unexpected source: 10.244.0.30#53, expected 10.96.0.10#53


Solution:

Adding --masquerade-all to kube-proxy solved it.

How to apply it:

https://kubernetes.io/docs/admin/kubeadm/ kubeadm installs add-on components via the API server. Right now this is the internal DNS server and the kube-proxy DaemonSet.

Making the change takes a little care. As the official docs say, kube-proxy is started as an add-on container; there is no YAML file on disk to edit directly, so there are two ways to change it:

  • Edit the configuration from the Dashboard UI
  • Edit it with the edit command: kubectl edit daemonset kube-proxy -n=kube-system and add - --masquerade-all (see the sketch below)
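
Roughly what the second option looks like (a sketch; the label selector used to recycle the pods is an assumption):

kubectl -n kube-system edit daemonset kube-proxy
# under the kube-proxy container's command section, append:
#   - --masquerade-all
# then delete the kube-proxy pods so they restart with the new flag:
kubectl -n kube-system delete pods -l k8s-app=kube-proxy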

Heapster

Reference

[root@k8s ~]# git clone https://github.com/kubernetes/heapster.git
Cloning into 'heapster'...
remote: Counting objects: 26084, done.
remote: Total 26084 (delta 0), reused 0 (delta 0), pack-reused 26084
Receiving objects: 100% (26084/26084), 36.33 MiB | 2.66 MiB/s, done.
Resolving deltas: 100% (13084/13084), done.
Checking out files: 100% (2531/2531), done.

[root@k8s ~]# cd heapster/
[root@k8s heapster]# kubectl create -f deploy/kube-config/influxdb/
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
[root@k8s heapster]# kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml 
clusterrolebinding "heapster" created


The DNS problem took quite a bit of time. Once DNS works and the heapster Docker images have been downloaded, everything is in place; the last step is just to restart the dashboard:

[root@k8s ~]# kubectl delete -f kubernetes-dashboard.yaml 
[root@k8s ~]# kubectl create -f kubernetes-dashboard.yaml 

After that, the dashboard shows its nice resource-usage graphs.

harbor

Reference: https://github.com/vmware/harbor/blob/master/docs/kubernetes_deployment.md

Things change fast; it is at version 1.1.2 now!! Download https://github.com/vmware/harbor/releases/download/v1.1.2/harbor-offline-installer-v1.1.2.tgz directly (with Thunder/Xunlei).

The procedure is the same as in earlier versions, which means the simplified scripts from before can still be used to install it!

Once it is set up, basic usage is all you really need. The test environment has limited resources, and in practice docker save/load solves the same problem just as well.

livenessProbe: handling an unresponsive Nexus

The GitHub repo contains a startup script for the development Nexus; it was changed from a single pod at the beginning into a replicationcontroller, and I assumed that settled it.

But a new problem appeared: the container is still there, yet port 8081 no longer serves anything. Awkward. The other developers said Nexus was unreachable again, and I thought that could not be right, since it had been changed to an RC the container should not die. Checking the environment, the container was indeed up, but the service really was not responding.

What to do?

A cron job felt a bit crude. What I really wanted was "restart it when the service stops responding", and k8s has already thought of this: the livenessProbe. Writing it following the HTTP example in the official docs is enough; I will see how it behaves over the next few days.
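
Roughly what that HTTP probe could look like, sketched as a standalone replicationcontroller (the image name sonatype/nexus3 and the rest of the spec are placeholders, not the actual manifest from the repo):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: nexus
spec:
  replicas: 1
  selector:
    app: nexus
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
      - name: nexus
        image: sonatype/nexus3        # placeholder image
        ports:
        - containerPort: 8081
        livenessProbe:                # kubelet restarts the container when this fails repeatedly
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 180    # Nexus is slow to start
          periodSeconds: 30
          timeoutSeconds: 10
          failureThreshold: 3
EOF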


—END
