Winse Blog

Pausing and going, it is all scenery; hustle and bustle, all striving for the best; busy day after day, all for tomorrow: what is there to fear?

Deploying Kubernetes with kubeadm

The official docs are poor, yet they have no qualms about deleting them: the script-based installation guide (docker-multinode) has already been removed, and kubeadm is now the recommended way to install.

This post deploys the cluster by installing on the master through a proxy, caching the RPMs and downloading the Docker images there, then building a local YUM repository and copying the images to the other worker nodes. The next post will cover how to deploy a fresh k8s cluster when the kubelet/kubeadm RPMs and the k8s Docker images are already at hand.

Two virtual machines are used for this test:

  • k8s (kube-master): 192.168.191.138
  • worker1: 192.168.191.139

Set the hostname, time and timezone, and firewall

hostnamectl --static set-hostname k8s 
hostname k8s 

rm -rf /etc/localtime 
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 

systemctl disable firewalld ; service firewalld stop

Install Docker

tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

yum list docker-engine --showduplicates

yum install docker-engine-1.12.6 docker-engine-selinux-1.12.6 -y
systemctl enable docker ; systemctl start docker

Proxy setup for the installation

For the detailed steps, see the earlier post 使用Privoxy把shadowsocks转换为Http代理 (turning shadowsocks into an HTTP proxy with Privoxy).

[root@k8s ~]# yum install -y epel-release ; yum install -y python-pip 
[root@k8s ~]# pip install shadowsocks
[root@k8s ~]# vi /etc/shadowsocks.json 
[root@k8s ~]# sslocal -c /etc/shadowsocks.json 
[root@k8s ~]# curl --socks5-hostname 127.0.0.1:1080 www.google.com

[root@k8s ~]# yum install privoxy -y
[root@k8s ~]# vi /etc/privoxy/config 
...
forward-socks5 / 127.0.0.1:1080 .
listen-address 192.168.191.138:8118

[root@k8s ~]# systemctl enable privoxy
[root@k8s ~]# systemctl start privoxy

[root@k8s ~]# curl -x 192.168.191.138:8118 www.google.com

  Once k8s is installed and running, disable the privoxy service again:
  [root@k8s ~]# systemctl disable privoxy.service
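
The /etc/shadowsocks.json edited above is not shown in the post; a minimal client config looks roughly like the following (server, port, password and cipher are placeholders to replace with your own):

{
    "server": "ss.example.com",
    "server_port": 8388,
    "local_address": "127.0.0.1",
    "local_port": 1080,
    "password": "your-password",
    "timeout": 300,
    "method": "aes-256-cfb"
}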

Download kubectl (oddly enough, this one can be downloaded directly)

Things move fast; it is already at 1.7.2! https://kubernetes.io/docs/tasks/tools/install-kubectl/

Installing it on the master (the machine you normally operate from) is enough.

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl

# Enable shell autocompletion
echo "source <(kubectl completion bash)" >> ~/.bashrc

[root@k8s ~]# kubectl version 
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Install kubelet and kubeadm through the VPN/proxy

Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-kubelet-and-kubeadm

You will install these packages on all of your machines:

  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
  • kubeadm: the command to bootstrap the cluster.

Both packages have to be installed on every machine. We first install them on the master node through the proxy and keep all of the downloaded RPMs in the yum cache.

  • Configure the Kubernetes yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config 
setenforce 0

yum-config-manager --enable kubernetes

Edit the yum configuration to add the proxy and to keep the package cache (it will be reused to install the other machines):

[root@k8s ~]# vi /etc/yum.conf 
keepcache=1
...
proxy=socks5://127.0.0.1:1080
  • Install and start kubelet:
yum install -y kubelet kubeadm

[root@k8s ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@k8s ~]# 

Initialize the cluster through the VPN/proxy

The main task is configuring the proxy so that the Docker images can be pulled.

Since Docker itself fetches the images, its configuration has to be changed first.

Reference: https://docs.docker.com/v1.12/engine/admin/systemd/#/http-proxy

  • Configure the proxy and restart docker and kubelet
[root@k8s ~]# systemctl enable docker

[root@k8s ~]# mkdir -p /etc/systemd/system/docker.service.d/
[root@k8s ~]# vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.191.138:8118/" "HTTPS_PROXY=http://192.168.191.138:8118/" "NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,192.168.191.138"
                             
[root@k8s ~]# systemctl daemon-reload
[root@k8s ~]# systemctl restart docker

docker and kubelet use different cgroup drivers, so the configuration needs to be fixed: https://github.com/kubernetes/kubeadm/issues/103

kubelet was started briefly above and logged the following error:
[root@k8s ~]# journalctl -xeu kubelet
Jul 29 09:11:24 k8s kubelet[48557]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgr

Fix the configuration:
[root@k8s ~]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

[root@k8s ~]# systemctl daemon-reload
[root@k8s ~]# service kubelet restart
Redirecting to /bin/systemctl restart  kubelet.service
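
A quick way to confirm that the two sides now agree (an extra check, not part of the original log):

docker info 2>/dev/null | grep -i 'cgroup driver'    # should print: Cgroup Driver: cgroupfs
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
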
  • Initialize with kubeadm

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ (you can pass --kubernetes-version to pin the Kubernetes version)

# Configure the proxy; some of kubeadm's requests presumably need to go through it as well (a guess based on earlier experience installing multinode on docker with the scripts)

export NO_PROXY="localhost,127.0.0.1,10.0.0.0/8,192.168.191.138"
export https_proxy=http://192.168.191.138:8118/
export http_proxy=http://192.168.191.138:8118/

# Reset first: the proxy settings had been changed several times (kubeadm init failed before), and the earlier attempts did not set the pod CIDR

[root@k8s ~]# kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

# flannel requires specifying the pod network CIDR (read the docs end to end first and you will step on fewer mines, sigh)

[root@k8s ~]# kubeadm init --skip-preflight-checks --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.191.138]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[apiclient] Created API client, waiting for the control plane to become ready  
<-> It pauses here for quite a while: the images have to be downloaded and the containers started
[apiclient] All control plane components are healthy after 293.004469 seconds
[token] Using token: 2af779.b803df0b1effb3d9
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 2af779.b803df0b1effb3d9 192.168.191.138:6443

[root@k8s ~]# 

Commands for monitoring the installation: docker ps, docker images, journalctl -xeu kubelet (or /var/log/messages).

If images are being pulled and new containers keep appearing, the installation is making progress. Otherwise, check whether your proxy is actually working!
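
A convenient way to poll that progress (a small helper, not in the original):

watch -n 5 'docker images ; echo ; docker ps'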

Once initialization finishes, configure the kubeconfig for kubectl. This is normally done on the master itself, so run the following there:

[root@k8s ~]# mkdir -p $HOME/.kube
[root@k8s ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s ~]# 
[root@k8s ~]# ll ~/.kube/
total 8
drwxr-xr-x. 3 root root   23 Jul 29 21:39 cache
-rw-------. 1 root root 5451 Jul 29 22:57 config
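
With the kubeconfig in place, kubectl should now be able to reach the API server; a quick sanity check (commands only, not from the original output):

kubectl cluster-info
kubectl get nodes
kubectl get pods --all-namespaces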

The post 使用Kubeadm安装Kubernetes (Installing Kubernetes with kubeadm) describes the author's own installation process and the problems hit along the way in great detail. I only found it when my installation was nearly done, which felt rather late; had I found it earlier, the install would at least have been less nerve-racking.

OK, the services are up, but the DNS container has not started properly yet, because our network add-on is not installed. The official site does mention this, but the ordering of the installation steps is baffling.

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Install flannel

Reference: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml

After flannel starts, wait a while longer and DNS will come up as well.
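
To watch kube-dns switch to Running once flannel is up (an extra convenience, not in the original):

kubectl get pods -n kube-system -w    # Ctrl-C once kube-dns shows 3/3 Running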

Install the dashboard

With only one machine so far, the master has to be allowed to run pods as well.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#master-isolation

[root@k8s ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node "k8s" untainted

# https://lukemarsden.github.io/docs/user-guide/ui/
# Deploy the dashboard

[root@k8s ~]# kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

[root@k8s ~]# kubectl get pods --all-namespaces     # check how the dashboard pod is doing

[root@k8s ~]# kubectl get services --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.96.0.1       <none>        443/TCP         1h
kube-system   kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   1h
kube-system   kubernetes-dashboard   10.107.103.17   <none>        80/TCP          9m

https://master:6443/ui is not reachable; the dashboard can be accessed directly through its service address instead: http://10.107.103.17/#!/overview?namespace=kube-system

Or access the UI through a proxy: https://github.com/kubernetes/kubernetes/issues/44275

First run kubectl proxy to start the proxy:

[root@k8s ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001

Then browse to: http://localhost:8001/ui

All of the pods, images, and containers:

With the basics all running, it is still pretty exciting!! This is the Nth time installing and deploying K8S, and every time it is still a roller-coaster ride!

[root@k8s ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP                NODE
kube-system   etcd-k8s                                1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kube-apiserver-k8s                      1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kube-controller-manager-k8s             1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kube-dns-2425271678-qwx9f               3/3       Running   0          9h        10.244.0.2        k8s
kube-system   kube-flannel-ds-s5f63                   2/2       Running   0          9h        192.168.191.138   k8s
kube-system   kube-proxy-4pjkg                        1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kube-scheduler-k8s                      1/1       Running   0          9h        192.168.191.138   k8s
kube-system   kubernetes-dashboard-3313488171-xl25m   1/1       Running   0          8h        10.244.0.3        k8s
[root@k8s ~]# docker images
REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/kubernetes-dashboard-amd64      v1.6.3              691a82db1ecd        35 hours ago        139 MB
gcr.io/google_containers/kube-apiserver-amd64            v1.7.2              4935105a20b1        8 days ago          186.1 MB
gcr.io/google_containers/kube-proxy-amd64                v1.7.2              13a7af96c7e8        8 days ago          114.7 MB
gcr.io/google_containers/kube-controller-manager-amd64   v1.7.2              2790e95830f6        8 days ago          138 MB
gcr.io/google_containers/kube-scheduler-amd64            v1.7.2              5db1f9874ae0        8 days ago          77.18 MB
quay.io/coreos/flannel                                   v0.8.0-amd64        9db3bab8c19e        2 weeks ago         50.73 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64           1.14.4              38bac66034a6        4 weeks ago         41.81 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64          1.14.4              a8e00546bcf3        4 weeks ago         49.38 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64     1.14.4              f7f45b9cb733        4 weeks ago         41.41 MB
gcr.io/google_containers/etcd-amd64                      3.0.17              243830dae7dd        5 months ago        168.9 MB
gcr.io/google_containers/pause-amd64                     3.0                 99e59f495ffa        15 months ago       746.9 kB
[root@k8s ~]# docker ps 
CONTAINER ID        IMAGE                                                                                                                            COMMAND                  CREATED             STATUS              PORTS               NAMES
631dc2cab02e        gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:2c4421ed80358a0ee97b44357b6cd6dc09be6ccc27dfe9d50c9bfc39a760e5fe      "/dashboard --insecur"   7 hours ago         Up 7 hours                              k8s_kubernetes-dashboard_kubernetes-dashboard-3313488171-xl25m_kube-system_0e41b8ce-747a-11e7-befb-000c2944b96c_0
8f5e4d044a6e        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 8 hours ago         Up 8 hours                              k8s_POD_kubernetes-dashboard-3313488171-xl25m_kube-system_0e41b8ce-747a-11e7-befb-000c2944b96c_0
65881f9dd2dd        gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:97074c951046e37d3cbb98b82ae85ed15704a290cce66a8314e7f846404edde9           "/sidecar --v=2 --log"   9 hours ago         Up 9 hours                              k8s_sidecar_kube-dns-2425271678-qwx9f_kube-system_ebffa28d-746d-11e7-befb-000c2944b96c_0
994c2ec99663        gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:aeeb994acbc505eabc7415187cd9edb38cbb5364dc1c2fc748154576464b3dc2     "/dnsmasq-nanny -v=2 "   9 hours ago         Up 9 hours                              k8s_dnsmasq_kube-dns-2425271678-qwx9f_kube-system_ebffa28d-746d-11e7-befb-000c2944b96c_0
5b181a0ed809        gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:40790881bbe9ef4ae4ff7fe8b892498eecb7fe6dcc22661402f271e03f7de344          "/kube-dns --domain=c"   9 hours ago         Up 9 hours                              k8s_kubedns_kube-dns-2425271678-qwx9f_kube-system_ebffa28d-746d-11e7-befb-000c2944b96c_0
a0d3f166e992        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-dns-2425271678-qwx9f_kube-system_ebffa28d-746d-11e7-befb-000c2944b96c_0
9cc7d6faf0b0        quay.io/coreos/flannel@sha256:a8116d095a1a2c4e5a47d5fea20ef82bd556bafe15bb2e6aa2c79f8f22f9586f                                   "/bin/sh -c 'set -e -"   9 hours ago         Up 9 hours                              k8s_install-cni_kube-flannel-ds-s5f63_kube-system_7ba88f5a-7470-11e7-befb-000c2944b96c_0
2f41276df8e1        quay.io/coreos/flannel@sha256:a8116d095a1a2c4e5a47d5fea20ef82bd556bafe15bb2e6aa2c79f8f22f9586f                                   "/opt/bin/flanneld --"   9 hours ago         Up 9 hours                              k8s_kube-flannel_kube-flannel-ds-s5f63_kube-system_7ba88f5a-7470-11e7-befb-000c2944b96c_0
bc25b0c70264        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-flannel-ds-s5f63_kube-system_7ba88f5a-7470-11e7-befb-000c2944b96c_0
dc3e5641c273        gcr.io/google_containers/kube-proxy-amd64@sha256:d455480e81d60e0eff3415675278fe3daec6f56c79cd5b33a9b76548d8ab4365                "/usr/local/bin/kube-"   9 hours ago         Up 9 hours                              k8s_kube-proxy_kube-proxy-4pjkg_kube-system_ebee4211-746d-11e7-befb-000c2944b96c_0
6b8b9515f562        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-proxy-4pjkg_kube-system_ebee4211-746d-11e7-befb-000c2944b96c_0
72418ca8e94f        gcr.io/google_containers/kube-apiserver-amd64@sha256:a9ccc205760319696d2ef0641de4478ee90fb0b75fbe6c09b1d64058c8819f97            "kube-apiserver --ser"   9 hours ago         Up 9 hours                              k8s_kube-apiserver_kube-apiserver-k8s_kube-system_b69ae39bcc54d7b75c2e7325359f8f87_0
9c9a3f5d8919        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-apiserver-k8s_kube-system_b69ae39bcc54d7b75c2e7325359f8f87_0
43a1751ff2bb        gcr.io/google_containers/etcd-amd64@sha256:d83d3545e06fb035db8512e33bd44afb55dea007a3abd7b17742d3ac6d235940                      "etcd --listen-client"   9 hours ago         Up 9 hours                              k8s_etcd_etcd-k8s_kube-system_9fb4ea9ba2043e46f75eec93827c4ce3_0
b110fff29f66        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_etcd-k8s_kube-system_9fb4ea9ba2043e46f75eec93827c4ce3_0
66ae85500128        gcr.io/google_containers/kube-scheduler-amd64@sha256:b2e897138449e7a00508dc589b1d4b71e56498a4d949ff30eb07b1e9d665e439            "kube-scheduler --add"   9 hours ago         Up 9 hours                              k8s_kube-scheduler_kube-scheduler-k8s_kube-system_16c371efb8946190c917cd90c2ede8ca_0
d4343be2f2d0        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-scheduler-k8s_kube-system_16c371efb8946190c917cd90c2ede8ca_0
9934cd83f6b3        gcr.io/google_containers/kube-controller-manager-amd64@sha256:2b268ab9017fadb006ee994f48b7222375fe860dc7bd14bf501b98f0ddc2961b   "kube-controller-mana"   9 hours ago         Up 9 hours                              k8s_kube-controller-manager_kube-controller-manager-k8s_kube-system_6b826c4e872a9635472113953c4538f0_0
acc1d7d90180        gcr.io/google_containers/pause-amd64:3.0                                                                                         "/pause"                 9 hours ago         Up 9 hours                              k8s_POD_kube-controller-manager-k8s_kube-system_6b826c4e872a9635472113953c4538f0_0
[root@k8s ~]# 

Worker node deployment

Time, hostname, /etc/hosts, firewall, selinux, passwordless SSH, and installing docker-1.12.6 will not be repeated here.

Reuse the master's yum cache directly, and copy the Docker images straight over:

# The master already has the httpd service installed

[root@k8s html]# ln -s /var/cache/yum/x86_64/7/kubernetes/packages/ k8s 
[root@k8s k8s]# createrepo .          

# Copy all of the images to the worker node

[root@k8s ~]# docker save $( echo $( docker images | grep -v REPOSITORY | awk '{print $1}' ) ) | ssh worker1 docker load 

# Configure the private repository source

[root@worker1 yum.repos.d]# vi k8s.repo
[k8s]
name=Kubernetes
baseurl=http://master/k8s
enabled=1
gpgcheck=0
[root@worker1 yum.repos.d]# yum list | grep k8s 
kubeadm.x86_64                             1.7.2-0                     k8s      
kubectl.x86_64                             1.7.2-0                     k8s      
kubelet.x86_64                             1.7.2-0                     k8s      
kubernetes-cni.x86_64                      0.5.1-0                     k8s      

[root@worker1 yum.repos.d]# yum install -y kubelet kubeadm                          

# Change the cgroup-driver

[root@worker1 yum.repos.d]# vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  
[root@worker1 yum.repos.d]# 
[root@worker1 yum.repos.d]# service docker restart
Redirecting to /bin/systemctl restart  docker.service

[root@worker1 yum.repos.d]# systemctl daemon-reload
[root@worker1 yum.repos.d]# systemctl enable kubelet.service
[root@worker1 yum.repos.d]# service kubelet restart
Redirecting to /bin/systemctl restart  kubelet.service

# Join the worker node to the cluster (node initialization)

[root@worker1 yum.repos.d]# kubeadm join --token 2af779.b803df0b1effb3d9 192.168.191.138:6443 --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.191.138:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.191.138:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.191.138:6443"
[discovery] Successfully established connection with API Server "192.168.191.138:6443"
[bootstrap] Detected server version: v1.7.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

[root@k8s ~]# kubectl get nodes
NAME      STATUS    AGE       VERSION
k8s       Ready     10h       v1.7.2
worker1   Ready     57s       v1.7.2

The flannel network component running on the master is a DaemonSet pod, so it is started on every node as soon as the node joins the cluster; no extra steps are needed.
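
This can be confirmed from the master after the worker joins (an extra check, not in the original):

kubectl get daemonset -n kube-system
kubectl get pods -n kube-system -o wide | grep flannel    # one kube-flannel-ds pod per node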

About reboots:

The nice thing about installing from RPMs is that the system manages the services for you:

  • After a worker node reboots, kubelet brings all of the services back up.
  • After the master reboots, you need to wait a while, because the pods start in order with dependencies: DNS waits for flannel, and the dashboard waits for DNS.

Pod-to-pod connectivity test

[root@k8s ~]# kubectl run hello-nginx --image=nginx --port=80
deployment "hello-nginx" created
[root@k8s ~]# kubectl get pods
NAME                           READY     STATUS              RESTARTS   AGE
hello-nginx-1507731416-qh3fx   0/1       ContainerCreating   0          8s

# The script starts a new dockerd configured with a registry mirror, pulls the image, then saves it and loads it into the local docker instance
# https://github.com/winse/docker-hadoop/blob/master/kube-deploy/hadoop/docker-download-mirror.sh

[root@k8s ~]# ./docker-download-mirror.sh nginx 
Using default tag: latest
latest: Pulling from library/nginx

94ed0c431eb5: Pull complete 
9406c100a1c3: Pull complete 
aa74daafd50c: Pull complete 
Digest: sha256:788fa27763db6d69ad3444e8ba72f947df9e7e163bad7c1f5614f8fd27a311c3
Status: Downloaded newer image for nginx:latest
eb78099fbf7f: Loading layer [==================================================>] 58.42 MB/58.42 MB
29f11c413898: Loading layer [==================================================>] 52.74 MB/52.74 MB
af5bd3938f60: Loading layer [==================================================>] 3.584 kB/3.584 kB
Loaded image: nginx:latest

# Copy the image to the other worker nodes; standing up a registry service for just a few machines feels too heavyweight

[root@k8s ~]# docker save nginx | ssh worker1 docker load
Loaded image: nginx:latest

# Check the result

[root@k8s ~]# kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
hello-nginx-1507731416-qh3fx   1/1       Running   0          1m

# Scale out

[root@k8s ~]# kubectl scale --replicas=4 deployment/hello-nginx  
deployment "hello-nginx" scaled
[root@k8s ~]# kubectl get pods -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP           NODE
hello-nginx-1507731416-h39f0   1/1       Running   0          34s       10.244.0.6   k8s
hello-nginx-1507731416-mnj3m   1/1       Running   0          34s       10.244.1.3   worker1
hello-nginx-1507731416-nsdr2   1/1       Running   0          34s       10.244.0.7   k8s
hello-nginx-1507731416-qh3fx   1/1       Running   0          5m        10.244.1.2   worker1
[root@k8s ~]# kubectl delete deployment hello-nginx

This container is too minimal; it does not even have ping!! Spin up a more familiar Linux image and run it again:

kubectl run centos --image=centos:centos6 --command -- vi 
kubectl scale --replicas=4 deployment/centos

[root@k8s ~]# kubectl get pods  -o wide 
NAME                      READY     STATUS    RESTARTS   AGE       IP            NODE
centos-3024873821-4490r   1/1       Running   0          49s       10.244.1.6    worker1
centos-3024873821-k74gn   1/1       Running   0          11s       10.244.0.11   k8s
centos-3024873821-l27xs   1/1       Running   0          11s       10.244.0.10   k8s
centos-3024873821-pbg52   1/1       Running   0          11s       10.244.1.7    worker1

[root@k8s ~]# kubectl exec -ti centos-3024873821-4490r bash
[root@centos-3024873821-4490r /]# yum install -y iputils
[root@centos-3024873821-4490r /]# ping 10.244.0.11 -c 1

All of the IPs above can reach one another, and pinging them from the master node works as well.

# Commands for inspecting pod status
kubectl -n ${NAMESPACE} describe pod ${POD_NAME}
kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME}

Source IP problem

I already ran into this back when deploying hadoop, so the root cause is known, but this time CNI is in use (changing dockerd --ip-masq=false only affects docker0).

First, reproduce the source IP problem:

./pod_bash centos-3024873821-t3k3r 

yum install epel-release -y ; yum install nginx -y ;
service nginx start

ifconfig

# After nginx is installed, access it and check the access_log

less /var/log/nginx/access.log 

Setting the cni-conf.json network config in kube-flannel.yml to "ipMasq": false seemed to have no effect; iptables still showed the MASQUERADE (SNAT) rule for the CNI cbr0 network.

Note: after a reboot everything turned out to work. The change was probably made with kubectl apply and simply had not taken effect! Setting the property before configuring flannel should be enough!! You can skip the rest of this part; the approach below is rather clumsy.
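
For reference, the cni-conf.json section of kube-flannel.yml ends up looking roughly like this after the change (a sketch based on the flannel manifest of that era; the surrounding fields in your copy may differ slightly):

  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "ipMasq": false,
        "isDefaultGateway": true
      }
    }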

A more drastic approach: delete docker0 and replace it with cni0. https://kubernetes.io/docs/getting-started-guides/scratch/#docker

Point Docker at the cni0 bridge (flannel creates the cni0 interface):

# Flush the old rules
iptables -t nat -F
ip link set docker0 down
ip link delete docker0

[root@worker1 ~]# cat /usr/lib/systemd/system/docker.service  | grep dockerd
ExecStart=/usr/bin/dockerd --bridge=cni0 --ip-masq=false 

But after the machine reboots the cni0 device is gone, so Docker fails to start! (The "ipMasq": false in cni-conf.json does take effect, but apparently only on a freshly created bridge device!)

> Aug 01 08:36:10 k8s dockerd[943]: time="2017-08-01T08:36:10.017266292+08:00" level=fatal msg="Error starting daemon: Error initializing network controller: Error creating default \"bridge\" network: bridge device with non default name cni0 must be created manually"

ip link add name cni0 type bridge
ip link set dev cni0 mtu 1460
# Let flannel assign the IP address
# ip addr add $NODE_X_BRIDGE_ADDR dev cni0
ip link set dev cni0 up

systemctl restart docker kubelet

Another way to deploy the network, kubenet + hostroutes: https://jishu.io/kubernetes/deploy-production-ready-kubernetes-cluster-on-aliyun/

DNS

https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

# cat busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

kubectl create -f busybox.yaml
kubectl exec -ti busybox -- nslookup kubernetes.default
kubectl exec busybox cat /etc/resolv.conf

DNS problem

When the DNS service is queried from inside a pod on the master node, the reply comes back from the DNS pod's own IP instead of from the Service IP address.

[root@k8s ~]# kubectl describe services kube-dns -n kube-system
Name:                   kube-dns
Namespace:              kube-system
Labels:                 k8s-app=kube-dns
                        kubernetes.io/cluster-service=true
                        kubernetes.io/name=KubeDNS
Annotations:            <none>
Selector:               k8s-app=kube-dns
Type:                   ClusterIP
IP:                     10.96.0.10
Port:                   dns     53/UDP
Endpoints:              10.244.0.30:53
Port:                   dns-tcp 53/TCP
Endpoints:              10.244.0.30:53
Session Affinity:       None
Events:                 <none>

[root@k8s ~]# kubectl exec -ti centos-3024873821-b6d48 -- nslookup kubernetes.default
;; reply from unexpected source: 10.244.0.30#53, expected 10.96.0.10#53
;; reply from unexpected source: 10.244.0.30#53, expected 10.96.0.10#53

Solution:

Adding --masquerade-all to kube-proxy solved it.

How to apply it:

From https://kubernetes.io/docs/admin/kubeadm/: "kubeadm installs add-on components via the API server. Right now this is the internal DNS server and the kube-proxy DaemonSet."

Making the change takes a little care. As the official docs say, kube-proxy is started as an in-cluster container; there is no yaml file on disk to edit directly, so there are two ways to modify it:

  • Edit the configuration through the Dashboard UI
  • Edit it on the command line: kubectl edit daemonset kube-proxy -n=kube-system and add - --masquerade-all to the command list (see the sketch below)
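
After the edit, the container args of the kube-proxy DaemonSet look roughly like this (a sketch; the existing command lines in your DaemonSet may differ, only the last flag is the addition):

    spec:
      containers:
      - name: kube-proxy
        command:
        - /usr/local/bin/kube-proxy
        - --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf   # existing args stay as they are
        - --masquerade-all                                   # the added flag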

Heapster


[root@k8s ~]# git clone https://github.com/kubernetes/heapster.git
Cloning into 'heapster'...
remote: Counting objects: 26084, done.
remote: Total 26084 (delta 0), reused 0 (delta 0), pack-reused 26084
Receiving objects: 100% (26084/26084), 36.33 MiB | 2.66 MiB/s, done.
Resolving deltas: 100% (13084/13084), done.
Checking out files: 100% (2531/2531), done.

[root@k8s ~]# cd heapster/
[root@k8s heapster]# kubectl create -f deploy/kube-config/influxdb/
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
[root@k8s heapster]# kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml 
clusterrolebinding "heapster" created


The DNS problem ate up quite a bit of time. Once DNS is fixed and the heapster Docker images have downloaded successfully, everything is in place; finally, just restart the dashboard:

[root@k8s ~]# kubectl delete -f kubernetes-dashboard.yaml 
[root@k8s ~]# kubectl create -f kubernetes-dashboard.yaml 

Then you can see the pretty graphs on the dashboard.

Harbor

Reference: https://github.com/vmware/harbor/blob/master/docs/kubernetes_deployment.md

Things change by the day: it is at version 1.1.2 now!! Download https://github.com/vmware/harbor/releases/download/v1.1.2/harbor-offline-installer-v1.1.2.tgz directly (I used Thunder/Xunlei for the download).

The procedure is the same as in earlier versions, which means the old simplified scripts can still be used to install it!

Once it is set up, knowing the basic usage is about enough. The test environment has limited resources, and in practice docker save/load also gets the job done (heh).

livenessProbe - handling an unresponsive Nexus

The github repo has a startup script for the development Nexus; it was changed from a single pod at the start into a replication controller, and I thought that was the end of it.

But now another problem has appeared: the container is still there, yet port 8081 no longer serves requests. Awkward. The other developers said Nexus was unreachable again, and I thought that could not be right: it had been changed to an RC, so the container should not just die. Checking the environment: the container was indeed running, but the service really was not responding.

What to do?

A cron job felt a bit low-tech. Then I thought: if we could detect that the service is unreachable and restart it... and in fact k8s has already thought of this and provides the liveness probe, livenessProbe. Just follow the HTTP example from the official docs. I will see how it works out over the next few days.
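
A minimal sketch of such a probe on the Nexus container, following the official httpGet example (the image name, path and timing values here are assumptions, not the actual manifest):

    containers:
    - name: nexus
      image: sonatype/nexus
      ports:
      - containerPort: 8081
      livenessProbe:
        httpGet:
          path: /                  # adjust to the context path your Nexus serves
          port: 8081
        initialDelaySeconds: 120   # Nexus is slow to start
        periodSeconds: 30
        timeoutSeconds: 10
        failureThreshold: 3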


–END

[Repost] Consistent Hashing

Original post: 一致性哈希

Well illustrated and very well written.

Key points:

  1. It solves the full-remapping problem that plain hash distribution has when nodes are added or removed: apply the same hash function to the hosts so that host A also lands on the ring (in effect it is assigned a range), and any data whose hash falls into that range is stored on host A. Adding or removing a node then only requires remapping part of the data.

  2. This in turn introduces a point to optimize: with nodes placed randomly on the ring, machines differ in hardware and capacity, and data can end up unevenly distributed (hot machines). Virtual nodes exist to solve exactly this problem; each node can be assigned a configurable number of virtual nodes.

–END

togo: a simple RPM packaging tool

Source: https://github.com/genereese/togo

Installation

yum install https://github.com/genereese/togo/releases/download/v2.3r7/togo-2.3-7.noarch.rpm

A practical example

# Create a skeleton similar to rpmbuild's
togo project create my-new-rpm; cd my-new-rpm

# Prepare the content
mkdir -p root/usr/local/bin; touch root/usr/local/bin/exmaple.sh
chmod +x root/usr/local/bin/exmaple.sh

# Exclude directories/files from package ownership
togo file exclude root/usr/local/bin
  Removed '/usr/local/bin' from project ownership.
  Removed '/usr/local' from project ownership.
  Removed '/usr' from project ownership.

# Edit the metadata, e.g. bump the release when repackaging a second time
vi spec/header

# Build the package
togo build package

Result

$ ll rpms/my-new-rpm-1.0-1.noarch.rpm
-rw-r--r-- 1 root root 2236 Jul 14 12:17 rpms/my-new-rpm-1.0-1.noarch.rpm
$ rpm -qpl rpms/my-new-rpm-1.0-1.noarch.rpm
/usr/local/bin/exmaple.sh

What comes out is a perfectly standard RPM, which can then be handled like any other RPM: install it directly, use createrepo to build a local repository, and so on.
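
For example, serving it from a local yum repository works the same way as the k8s repo earlier in this post (a sketch; the directory and repo names are arbitrary):

mkdir -p /var/www/html/myrepo
cp rpms/my-new-rpm-1.0-1.noarch.rpm /var/www/html/myrepo/
createrepo /var/www/html/myrepo

# on a client machine
cat > /etc/yum.repos.d/myrepo.repo <<'EOF'
[myrepo]
name=My local repo
baseurl=http://master/myrepo
enabled=1
gpgcheck=0
EOF
yum install -y my-new-rpm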

For simple file packaging it is quite convenient: the skeleton is set up for you, and it also provides some handy commands for maintaining and tweaking the project.

There is also rpmdevtools, another scaffold for creating build projects, but it is only a helper around the rpmbuild workflow; you still have to maintain the spec carefully yourself.

The docker-rpm-builder mentioned elsewhere requires CentOS 7; its advantages probably only show when you need to build RPMs for N different environments.

–END

Crawling with CasperJS

I have scraped data with jsoup (Java, Scala, Groovy) and with cheerio (Node.js). Every time you have to study the page's HTML structure and the URLs the data comes from, and set various headers to work around anti-scraping measures. It is all tedious, and some data-heavy sites have validation schemes complex enough that their anti-scraping flow is hard to break with simple tricks.

CasperJS is a utility library built on top of PhantomJS that simplifies working with it and lowers the barrier to entry. PhantomJS itself is a browser-like tool (a headless browser); you can think of it as a browser. So with CasperJS you can drive the browser to a URL, wait for the page to finish loading, and then extract the data. That way the anti-scraping risk largely disappears, and getting at the data is comparatively easy and simple.

Start with the official example: Hello World and how to debug

Just download the latest CasperJS (npm install). For PhantomJS, download version 1.9.8; the 2+ releases are not recommended because some features are broken.


R:\test>set PATH=C:\Users\winse\AppData\Roaming\npm\node_modules\casperjs\bin;E:\phantomjs-1.9.8-windows;%PATH%

R:\test>cat hello.js
var casper = require('casper').create();
// debugger

casper.start('http://casperjs.org/', function() {
    this.echo(this.getTitle());
    
    this.echo("Star: " + this.evaluate(function () { 
        return $(".octicon-star").parent().text().trim()
    }) )
});

casper.thenOpen('http://phantomjs.org', function() {
    this.echo(this.getTitle());
    
    this.echo("Intro: " + this.evaluate(function () { 
        return $(".intro h1").innerHTML
        // return document.querySelector(".intro h1").innerHTML
    }) )
});

casper.run();

R:\test>casperjs  hello.js
CasperJS, a navigation scripting and testing utility for PhantomJS and SlimerJS
Star: 6,337 Stargazers
PhantomJS | PhantomJS
Intro: null

Extracting page data with JS like this is just about perfect. Compared with fetching the data directly via URL requests, CasperJS is simply a bit slower (a bit like opening a browser every single time before visiting the page; you could run a service with a long-lived PhantomJS instance to visit pages instead).

The data from the second fetch above is not what we wanted, so let's debug to see exactly what caused it. Add a debugger line before start, then run:

casperjs hello.js --verbose --log-level=debug --remote-debugger-port=9000

Open localhost:9000 in a browser, click the about:blank link, then run __run() in the Console window. After a moment it stops at the debugger line, and from there it is happy debugging.

Add a breakpoint at the evaluate code in the http://phantomjs.org block. Once execution reaches it, reopening localhost:9000 shows an extra link for the page currently being visited; click it and you get the same kind of devtools window you normally see with F12.

Note: use a Chrome version below V54.

The debugging session looks like this:

> $(".intro h1")
null
> $
bound: function () {
        return document.getElementById.apply(document, arguments);
    }
> document.querySelector(".intro h1").innerHTML
"
        Full web stack<br>
        No browser required
      "

So change the script to use querySelector to get the data, and run it again:

R:\test>casperjs  hello.js
CasperJS, a navigation scripting and testing utility for PhantomJS and SlimerJS
Star: 6,337 Stargazers
PhantomJS | PhantomJS
Intro:
        Full web stack<br>
        No browser required

Features

  • Screenshots

There is a ready-made method, but you have to handle the background color yourself; see Tips and Tricks.

> cat capture.js
var casper = require('casper').create({
    waitTimeout: 120000,
    logLevel: "debug",
    verbose: true
});
casper.userAgent('Mozilla/5.0 (Windows NT 10.0; WOW64; rv:54.0) Gecko/20100101 Firefox/54.0')

casper.start('https://xueqiu.com/2054435398/32283614', function () {
    this.waitForSelector("div.status-content a[title*=xueqiu]");
}).then(function () {
    // white background
    this.evaluate(function () {
        var style = document.createElement('style'),
            text = document.createTextNode('body { background: #fff }');
        style.setAttribute('type', 'text/css');
        style.appendChild(text);
        document.head.insertBefore(style, document.head.firstChild);
    });
}).then(function () {
    this.capture('结庐问山.jpg');
});

casper.run()

> casperjs capture.js --load-images=yes --disk-cache=yes --ignore-ssl-errors=true --output-encoding=gbk

It is remarkably good for full-page screenshots. Chrome's built-in capture tools get slow and awkward once the content is long, whereas this approach handles it without breaking a sweat.

  • Crawling hierarchical pages

Typically there is a listing page; you take the detail-page URLs from the listing, then fetch the data from each detail page.

> cat xueqiu.js
debugger

var fs = require('fs');
var casper = require('casper').create({
    waitTimeout: 120000,
    logLevel: "debug",
    verbose: true
});
casper.userAgent('Mozilla/5.0 (Windows NT 10.0; WOW64; rv:54.0) Gecko/20100101 Firefox/54.0')

var links = []
var basedir = '.'
casper.start('https://xueqiu.com/2054435398/32283614', function () {
    this.waitForSelector("div.status-content a[title*=xueqiu]");
}).then(function () {
    var items = this.evaluate(function () {
        return $("div.status-content a[title*=xueqiu]").map(function (i, a) {
            return $(a).attr('href')
        })
    })

    for (var i = 0; i < items.length; i++) {
        links.push(items[i]);
    }
    
    fs.write('all.html', this.getHTML(), 'w');
}).then(function () {
    this.eachThen(links, function (link) {
        var pathname = undefined;
        var url = link.data;

        this.thenOpen(url, function () {
            this.waitForSelector("div.status-content .detail");
        }).then(function () {
            pathname = this.evaluate(function () {
                var style = document.createElement('style'),
                    text = document.createTextNode('body { background: #fff }');
                style.setAttribute('type', 'text/css');
                style.appendChild(text);
                document.head.insertBefore(style, document.head.firstChild);

                return window.location.pathname;
            });
        }).then(function () {
            if (url.indexOf(pathname))
                this.capture(basedir + pathname + ".jpg");
            else
                this.echo(url);
        });

    })

});

casper.run()

> casperjs xueqiu.js --load-images=yes --disk-cache=yes --ignore-ssl-errors=true --output-encoding=gbk --remote-debugger-port=9000

Then piles of images get generated. The limited access speed cuts both ways: being a little slow means no throttling is even needed, and it feels more like a human operating the site. Afterwards just handle the few failures and export those again (the one that failed turned out to be a 404... laugh and cry).

$("div.status-content a[title*=xueqiu]").map(function(i, a){ return $(a).attr('href') }).length
177

$ find . -name '*.jpg' | wc -l
176

Note: in the Windows command window, press Enter a few extra times; it is easy to end up in edit (selection) mode by accident.

Over 100 MB even after compression! CasperJS is powerful enough, with plenty more patterns waiting to be explored. That is all for now.

Postscript

On scraping data, the article 抓取前端渲染的页面 (crawling front-end rendered pages) is quite even-handed; if it fits your case, the author's WebMagic is also a good choice.

–END

Exporting WeChat photos

Opening thought: scripts win again!

The phone is out of space, it cannot take an extra card, so the only option is delete, delete, delete. I wanted to copy the photos off the phone first. The ones taken with the camera live under DCIM, which is fine, but I also wanted to back up the photos inside WeChat. What to do?

Find a WeChat photo on the phone and the directory turns out to be tencent/MicroMsg/ea722ad09b762f27f86b29ac43bf6eb8/image2. Plug the phone into the computer and it is bewildering: 36 (10+26) squared subdirectories. Copying the whole thing directly gets no response, and searching for *.jpg from the OS is not reliable either. Worse, it is not mounted as a system drive, so there is no way to use a script from the PC.

So I thought I would try one of the phone assistants and downloaded PP Assistant and Wandoujia; neither of them even reacted when exporting! How were these programs built? They are supposed to be the old established names!!!

No way around it, and copying folder by folder made me want to die. In the end I fell back on adb shell, and a single command took care of it (wry smile):

shell@hydrogen:/sdcard/tencent/MicroMsg/ea722ad09b762f27f86b29ac43bf6eb8/image2 $ which find
/system/bin/find
shell@hydrogen:/sdcard/tencent/MicroMsg/ea722ad09b762f27f86b29ac43bf6eb8/image2 $
$ find . -name "*.*" -exec cp {} /sdcard/Download/ \; 

Then just copy the Download folder off the phone.

About 600 MB in total. While copying, there was of course a permission problem again and the files did not show up in the Explorer window. Fine, copy them with a command as well:

E:\local\usr\share\adt-bundle-windows-x86-20140702\platform-tools>adb pull -a /sdcard/Download/ R:\image2\
[ 14%] /sdcard/Download/9d01c6e9b722366970f33c948ca4435f.jpg: 76%

It had been a long while; the SDK is still from 2014, but it still works, heh. With that, backing up the WeChat photos was done; one thing at a time.

What, now you say the photos just copied also need to be deleted from the phone, and not one by one? Fine, take my resigned stare:

E:\local\usr\share\adt-bundle-windows-x86-20140702\platform-tools>adb shell
shell@hydrogen:/ $ cd /sdcard/Download/
shell@hydrogen:/sdcard/Download $ rm -rf *.jpg
shell@hydrogen:/sdcard/Download $ rm -rf *.png

After copying everything over, browsing through the photos brought back quite a few memories.

–END