Winse Blog

Every stop along the way is scenery; all the bustle aims for the best; all the toil is for tomorrow, so what is there to fear?

Try K8s

1. Log in and configure the host information:

$ hostnamectl --static set-hostname master-1

$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.251.51 master-1
192.168.251.50 node-1

2. Install Docker

cat | bash <<EOF
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache

## docker version:(Version:           18.09.3)
# https://kubernetes.io/docs/setup/release/notes/#external-dependencies
# https://docs.docker.com/install/linux/docker-ce/centos/

yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine

yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

yum list docker-ce --showduplicates | sort -r

systemctl enable docker
systemctl start docker

systemctl disable firewalld
service firewalld stop

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config 
setenforce 0
EOF

3. Set up a proxy (get past the firewall)

You need a host located outside China!

ssh -NC -D 1080 9.9.9.9 -p 88888

curl --socks5-hostname 127.0.0.1:1080 www.google.com

mkdir /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/socks5-proxy.conf <<EOF
[Service]
Environment="ALL_PROXY=socks5://127.0.0.1:1080" "NO_PROXY=localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16"
EOF

systemctl daemon-reload
systemctl restart docker

# cache rpm
sed -i 's/keepcache=0/keepcache=1/' /etc/yum.conf 

4. Install Kubernetes

https://kubernetes.io/docs/setup/independent/install-kubeadm/

Add the Kubernetes repo and include the proxy setting

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
proxy=socks5://127.0.0.1:1080
EOF


  ## yum.conf allows you to have per-repository settings as well as global ([main]) settings; the proxy can also be defined inside an individual repo's config!
  ##sed '$a\\nproxy=socks5://127.0.0.1:1080' /etc/yum.conf 
  ## proxy=_none_

  
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

5. Configure Kubernetes

5.1 Pull the images first

$ kubeadm config images pull
I0409 00:04:13.693615   18479 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0409 00:04:13.694196   18479 version.go:97] falling back to the local client version: v1.14.0
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.14.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.14.0
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1

5.2 Initialize

$ kubeadm init --pod-network-cidr=10.244.0.0/16

Problem 1 you will run into: https://github.com/kubernetes/kubeadm/issues/610

$ journalctl -xeu kubelet
....
Apr 09 00:35:33 docker81 kubelet[24062]: I0409 00:35:33.996517   24062 server.go:625] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Apr 09 00:35:33 docker81 kubelet[24062]: F0409 00:35:33.996923   24062 server.go:265] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap
Apr 09 00:35:33 docker81 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Apr 09 00:35:34 docker81 systemd[1]: Unit kubelet.service entered failed state.
Apr 09 00:35:34 docker81 systemd[1]: kubelet.service failed.

Fix:

$ swapoff -a
$ sed -i '/swap/s/^/#/' /etc/fstab


  # disable swap
  sudo swapoff -a
  # re-enable swap
  sudo swapon -a
  # remount the root filesystem read-write
  sudo mount -n -o remount,rw /
  

5.3 Initialize again

Clean up first:

$ 
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

$ kubeadm init --pod-network-cidr=10.244.0.0/16

I0409 05:19:35.856967    3656 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0409 05:19:35.857127    3656 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Hostname]: hostname "master-1" could not be reached
        [WARNING Hostname]: hostname "master-1": lookup master-1 on 192.168.253.254:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.251.51]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master-1 localhost] and IPs [192.168.251.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master-1 localhost] and IPs [192.168.251.51 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.506192 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zpf7je.xarawormfaeapib3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.251.51:6443 --token zpf7je.xarawormfaeapib3 \
    --discovery-token-ca-cert-hash sha256:d7ff941542a03645209ad4149e1baa1c40ddad7e9c8296f82fe3bd2a91191f66 

Run the commands to set up the kubeconfig:

$ 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

5.4 Check the status

$ kubectl cluster-info 

Kubernetes master is running at https://192.168.251.51:6443
KubeDNS is running at https://192.168.251.51:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


$ kubectl get pods -n kube-system 
$ kubectl get pods --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-hcrgw            0/1     Pending   0          100s
kube-system   coredns-fb8b8dccf-zct25            0/1     Pending   0          100s
kube-system   etcd-master-1                      1/1     Running   0          57s
kube-system   kube-apiserver-master-1            1/1     Running   0          47s
kube-system   kube-controller-manager-master-1   1/1     Running   0          62s
kube-system   kube-proxy-p962p                   1/1     Running   3          100s
kube-system   kube-scheduler-master-1            1/1     Running   0          45s

5.5 Deploy the pod network; the DNS pods need a network add-on before they can start

$ cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system


$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Check the status again; coredns is now running as well

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-hcrgw            1/1     Running   0          8m7s
kube-system   coredns-fb8b8dccf-zct25            1/1     Running   0          8m7s
kube-system   etcd-master-1                      1/1     Running   0          7m24s
kube-system   kube-apiserver-master-1            1/1     Running   0          7m14s
kube-system   kube-controller-manager-master-1   1/1     Running   0          7m29s
kube-system   kube-flannel-ds-amd64-947zx        1/1     Running   0          2m32s
kube-system   kube-proxy-p962p                   1/1     Running   3          8m7s
kube-system   kube-scheduler-master-1            1/1     Running   0          7m12s

6. Install the Dashboard

First lift the restriction that keeps pods off the master node, then deploy the dashboard:

$ kubectl taint nodes --all node-role.kubernetes.io/master-

node/master-1 untainted

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

Inspect the pod to troubleshoot problems:

kubectl describe pod kubernetes-dashboard-5f7b999d65-lt2df -n kube-system

Check the status:

$ kubectl get pods --all-namespaces

NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-hcrgw                 1/1     Running   0          15m
kube-system   coredns-fb8b8dccf-zct25                 1/1     Running   0          15m
kube-system   etcd-master-1                           1/1     Running   0          14m
kube-system   kube-apiserver-master-1                 1/1     Running   0          14m
kube-system   kube-controller-manager-master-1        1/1     Running   0          15m
kube-system   kube-flannel-ds-amd64-947zx             1/1     Running   0          10m
kube-system   kube-proxy-p962p                        1/1     Running   3          15m
kube-system   kube-scheduler-master-1                 1/1     Running   0          14m
kube-system   kubernetes-dashboard-5f7b999d65-lt2df   1/1     Running   0          6m6s

7. Access the Dashboard

7.1 View it locally

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

7.2 View it from a user's browser

1* A method that fails:

disable-filter=true disables request filtering; without it our requests are rejected with Forbidden (403) Unauthorized.

$ kubectl proxy --address=0.0.0.0 --disable-filter=true

You can reach the login page this way, but you cannot actually log in. Over plain HTTP the Dashboard only allows access from localhost and 127.0.0.1 (in other words, you must access it on the machine where kubectl runs); all other addresses are only allowed to use HTTPS.
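
A minimal sketch of a workaround (not tried here): port-forward the dashboard service and open the HTTPS port to other machines, assuming kubectl is new enough (roughly 1.13+) to support --address:

$ kubectl -n kube-system port-forward svc/kubernetes-dashboard 8443:443 --address 0.0.0.0
# then browse to https://<this-host>:8443/ and accept the self-signed certificate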

2* A method that should work (not tried):

Set the Kubernetes API Server's --anonymous-auth option (it controls whether anonymous requests may reach the secure port) to false, then use --basic-auth-file to enable username/password login.

https://www.okay686.cn/984.html
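
A rough, untested sketch of what that might look like on a kubeadm cluster (my assumptions: this release still accepts --basic-auth-file, and the file format is password,user,uid followed by optional group names):

# create a static password file (path and credentials are made-up examples)
cat > /etc/kubernetes/pki/basic-auth.csv <<EOF
Passw0rd,admin,admin,"system:masters"
EOF

# then add these flags to the kube-apiserver static pod manifest
# /etc/kubernetes/manifests/kube-apiserver.yaml (the kubelet restarts it automatically):
#   - --anonymous-auth=false
#   - --basic-auth-file=/etc/kubernetes/pki/basic-auth.csv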

3* Certificate + token method:

3-1 Certificate

As described in the official documentation:

Method 0:

Request a certificate.

Method 1:

The API Server authenticates clients with certificates, so we first need to create one. Start by locating kubectl's config file: by default it is /etc/kubernetes/admin.conf, which has already been copied to ~/.kube/config. Then use its client-certificate-data and client-key-data to generate a p12 file with the commands below:

grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Finally, import the generated p12 file into your browser and reopen it.
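
A quick sanity check (my own addition) that the extracted client certificate and key really authenticate against the API server, using the PEM files from the step above:

$ curl --cert kubecfg.crt --key kubecfg.key -k https://192.168.251.51:6443/api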

Method 2 (the lazy way):

What’s causing: forbidden: User “system:anonymous” in some Cloud Providers https://github.com/kubernetes-incubator/apiserver-builder-alpha/issues/225

After reading this: https://kubernetes.io/docs/admin/authentication/#anonymous-requests then I tried this:

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

and it solved the problem.

3-2 Permissions

Method 1: create a new user

[root@docker81 ~]# vi dashboard-admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
# ------------ role binding ---------------- #
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

[root@docker81 ~]# kubectl create -f dashboard-admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

[root@docker81 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-28dwk
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: c23340a7-5a70-11e9-b2ca-005056887940

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTI4ZHdrIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjMjMzNDBhNy01YTcwLTExZTktYjJjYS0wMDUwNTY4ODc5NDAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.uaG_faYzLhiadXfz4XuQ_-X9tdl5exKQjbCK7OJqBFMCYve532O-8jH_zg5E2rgFUQycQUhH_siS_GCi0MoE8mqc-WJwIfaGB6QnLYOFRjvWWNhO_16FH56YaEZxGY2p62OPt4d1O9NK4KZLEcoZNbYYuol_9kBfAj9Imf3ii58TNGZ0WiRigXjLOsJK5P2IPyE4c_rqunsrb_sO1z56jgRTL9qnu2zsby8obJxNZefBnsTgakXnu-P8PwXg0PekLBWQNNr-G7TeiKCpfCGCjHM6gmEKdTjiernFbD1GxOG588pmZfWsFtjNNWuNAlfMe1bXpy2m981taQUTQa3kWQ

Access it via the HTTPS URL:

https://192.168.251.51:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Method 2: fix it at the source

Looking at kubernetes-dashboard.yaml, it becomes clear why its role is named kubernetes-dashboard-minimal: in one sentence, that Role does not have enough permissions! So we can change the RoleBinding into a ClusterRoleBinding and change the kind and name in roleRef to use cluster-admin, that almighty ClusterRole (superuser level, holding every permission the kube-apiserver offers). Like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

After this change, re-create kubernetes-dashboard.yaml and the Dashboard will have permission to access the entire cluster API.
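
The re-create step would look roughly like this (a sketch; the manifest URL is the same one used elsewhere in this post):

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
$ vi kubernetes-dashboard.yaml     # swap the RoleBinding for the ClusterRoleBinding shown above
$ kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl apply -f kubernetes-dashboard.yaml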

3-3 Skip the login

kubectl edit deployment/kubernetes-dashboard --namespace=kube-system

      - args:
        - --auto-generate-certificates
        - --enable-skip-login

8. Deploy an application

[root@s1 ~]# kubectl create -f https://k8s.io/docs/tasks/run-application/deployment.yaml
deployment.apps/nginx-deployment created

kubectl describe deployment nginx-deployment
kubectl get pods -l app=nginx

[root@s1 ~]# kubectl describe pod nginx-deployment-76bf4969df-bmslp 

kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml
kubectl apply -f https://k8s.io/docs/tutorials/stateless-application/deployment-update.yaml
kubectl apply -f https://k8s.io/examples/application/deployment-scale.yaml

kubectl describe deployment nginx-deployment
kubectl get pods -l app=nginx
kubectl describe pod <pod-name>

[root@s1 ~]# curl 172.17.0.4

kubectl delete deployment nginx-deployment

https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/

[root@docker81 ~]# curl localhost:8001/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.193.81:6443"
    }
  ]
}

[root@docker81 ~]# curl localhost:8001/api/v1/namespaces/default/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/default/pods",
    "resourceVersion": "25607"
  },
  "items": []
}

9. Assorted commands:

kubectl cluster-info

kubectl get nodes --all-namespaces -o wide

kubectl get pods --namespace=kube-system
kubectl get pod --all-namespaces=true

kubectl describe pods
kubectl describe pod coredns-7748f7f6df-7p58x --namespace=kube-system

kubectl get services kube-dns --namespace=kube-system

kubectl logs -n cattle-system cattle-node-agent-w5rj4

kubectl -n kube-system get secret
kubectl -n kube-system describe secret kubernetes-dashboard-token-zlfj7
kubectl -n kube-system get secret kubernetes-dashboard-token-zlfj7 -o yaml

kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token

kubectl -n kube-system get service kubernetes-dashboard
kubectl -n kube-system get svc kubernetes-dashboard
kubectl -n kube-system get secret admin-token-nwphb -o jsonpath={.data.token}|base64 -d
kubectl get secret $(kubectl get serviceaccount my-admin-user -n kube-system -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" -n kube-system | base64 --decode

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/alternative/kubernetes-dashboard.yaml 

kubectl -n kube-system edit service kubernetes-dashboard

kubectl -n kube-system delete $(kubectl -n kube-system get pod -o name | grep dashboard)

kubectl delete pod NAME --grace-period=0 --force
  • DNS resolution: exec into the container and run the commands
[root@k8s-master app]# kubectl exec -it coredns-78fcdf6894-244mp /bin/sh  -n kube-system                         
/ # nslookup kubernetes.default 127.0.0.1

–END

Building JCEF - Win64

References

A brief extract of the process

Notes:

  1. Dependencies: git / Python 2.7 / Java 8 / CMake 3 / Visual Studio 2017 (this is the environment I built in; check the official docs for the exact version requirements)
  2. The source code has to be fetched with git
  3. Download chromium-clang-format first and put it under the tools directory (see the sketch right after these notes)
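
A sketch of note 3 (the URL and the destination path are taken from the CMake transcript further down; fetching the file by hand lets the Google Storage step skip the download):

curl -L -o tools/buildtools/win/clang-format.exe \
  https://storage.googleapis.com/chromium-clang-format/6ddedd571c56b8c184f30a3c1fc36984e8c10ccd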

Steps:

  1. cmake-3.12.3-win64-x64.zip, add it to the PATH environment variable
  2. Install python-2.7.15.amd64.msi and also add python.exe to PATH
  3. Add PortableGit to PATH: set PATH=D:\PortableGit\bin;%PATH%
  4. Run everything from the Visual Studio developer command prompt
  5. Commands:
F:\wv>git clone https://bitbucket.org/chromiumembedded/java-cef.git src

F:\wv\src>mkdir jcef_build && cd jcef_build

F:\wv\src\jcef_build>cmake -G "Visual Studio 14 Win64" ..
# Build the native code with Visual Studio:
# Open jcef.sln in Visual Studio
# - Select Build > Configuration Manager and change the "Active solution configuration" to "Release"
# - Select Build > Build Solution.

cd ..\tools
compile.bat win64

run.bat win64 Release detailed

F:\wv\src\tools>make_distrib.bat win64

The generated distribution ends up under the binary_distrib directory. Look at binary_distrib/win64/run.bat to understand the run configuration, and refer to the project-setup approach mentioned in the articles above.

java -cp "./bin;./bin/*" -Djava.library.path=./bin/lib/win64 tests.detailed.MainFrame

Transcript of the process:

**********************************************************************
** Visual Studio 2017 Developer Command Prompt v15.7.4
** Copyright (c) 2017 Microsoft Corporation
**********************************************************************
[vcvarsall.bat] Environment initialized for: 'x64'

C:\Program Files (x86)\Microsoft Visual Studio\2017\Community>F:

F:\>cd wv
F:\wv>set PATH=D:\PortableGit\bin;%PATH%
F:\wv>git clone https://bitbucket.org/chromiumembedded/java-cef.git src

F:\wv\src>mkdir jcef_build && cd jcef_build

F:\wv\src\jcef_build>cmake -G "Visual Studio 14 Win64" ..
-- Selecting Windows SDK version  to target Windows 10.0.16299.
-- The C compiler identification is MSVC 19.0.24234.1
-- The CXX compiler identification is MSVC 19.0.24234.1
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Downloading F:/wv/src/third_party/cef/cef_binary_3.3497.1831.g461fa1f_windows64.tar.bz2.sha1...
-- Downloading F:/wv/src/third_party/cef/cef_binary_3.3497.1831.g461fa1f_windows64.tar.bz2...
-- [download 0% complete]
-- [download 1% complete]
。。。
-- [download 98% complete]
-- [download 99% complete]
-- [download 100% complete]
-- Extracting F:/wv/src/third_party/cef/cef_binary_3.3497.1831.g461fa1f_windows64.tar.bz2...
CMake Warning (dev) at CMakeLists.txt:153 (find_package):
  Policy CMP0074 is not set: find_package uses <PackageName>_ROOT variables.
  Run "cmake --help-policy CMP0074" for policy details.  Use the cmake_policy
  command to set the policy and suppress this warning.

  CMake variable CEF_ROOT is set to:

    F:/wv/src/third_party/cef/cef_binary_3.3497.1831.g461fa1f_windows64

  For compatibility, CMake is ignoring the variable.
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found PythonInterp: C:/Python27/python.exe (found version "2.7.15")
-- Found JNI: C:/Java/jdk/lib/jawt.lib (Required is at least version "1.7")
-- Generating native/jcef_version.h file...
File native/jcef_version.h updated.
-- Downloading clang-format from Google Storage...
0> Failed to fetch file gs://chromium-clang-format/6ddedd571c56b8c184f30a3c1fc36984e8c10ccd for tools/buildtools/win/clang-format.exe, skipping. [Err: F:\wv\src\tools\buildtools\external_bin\gsutil\gsutil_4.15\gsutil\third_party\boto\boto\pyami\config.py:71: UserWarning: Unable to load AWS_CREDENTIAL_FILE ()
  warnings.warn('Unable to load AWS_CREDENTIAL_FILE (%s)' % full_path)
Failure: Unable to find the server at www.googleapis.com.
]
Downloading 1 files took 3153.963000 second(s)
Failed to fetch file gs://chromium-clang-format/6ddedd571c56b8c184f30a3c1fc36984e8c10ccd for tools/buildtools/win/clang-format.exe. [Err: F:\wv\src\tools\buildtools\external_bin\gsutil\gsutil_4.15\gsutil\third_party\boto\boto\pyami\config.py:71: UserWarning: Unable to load AWS_CREDENTIAL_FILE ()
  warnings.warn('Unable to load AWS_CREDENTIAL_FILE (%s)' % full_path)
Failure: Unable to find the server at www.googleapis.com.
]
CMake Error at CMakeLists.txt:265 (message):
  Execution failed with unexpected result: 1


-- Configuring incomplete, errors occurred!
See also "F:/wv/src/jcef_build/CMakeFiles/CMakeOutput.log".

<<== https://my.oschina.net/penngo/blog/1538071
<<<<----https://storage.googleapis.com/chromium-clang-format/6ddedd571c56b8c184f30a3c1fc36984e8c10ccd
~~~~~~

F:\wv\src\jcef_build>cmake -G "Visual Studio 14 Win64" ..
-- Selecting Windows SDK version  to target Windows 10.0.16299.
CMake Warning (dev) at CMakeLists.txt:153 (find_package):
  Policy CMP0074 is not set: find_package uses <PackageName>_ROOT variables.
  Run "cmake --help-policy CMP0074" for policy details.  Use the cmake_policy
  command to set the policy and suppress this warning.

  CMake variable CEF_ROOT is set to:

    F:/wv/src/third_party/cef/cef_binary_3.3497.1831.g461fa1f_windows64

  For compatibility, CMake is ignoring the variable.
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Generating native/jcef_version.h file...
File native/jcef_version.h is already up to date.
-- Downloading clang-format from Google Storage...
0> File tools/buildtools/win/clang-format.exe exists and SHA1 matches. Skipping.
Success!
Downloading 1 files took 0.006000 second(s)
-- *** CEF CONFIGURATION SETTINGS ***
-- Generator:                    Visual Studio 14 2015 Win64
-- Platform:                     Windows
-- Project architecture:         x86_64
-- Binary distribution root:     F:/wv/src/third_party/cef/cef_binary_3.3497.1831.g461fa1f_windows64
-- CEF Windows sandbox:          ON
-- Visual Studio ATL support:    ON
-- Standard libraries:           comctl32.lib;rpcrt4.lib;shlwapi.lib;ws2_32.lib;dbghelp.lib;psapi.lib;version.lib;winmm.lib
-- Compile defines:              __STDC_CONSTANT_MACROS;__STDC_FORMAT_MACROS;WIN32;_WIN32;_WINDOWS;UNICODE;_UNICODE;WINVER=0x0601;_WIN32_WINNT=0x601;NOMINMAX;WIN32_LEAN_AND_MEAN;_HAS_EXCEPTIONS=0;PSAPI_VERSION=1;CEF_USE_SANDBOX;CEF_USE_ATL
-- Compile defines (Debug):
-- Compile defines (Release):    NDEBUG;_NDEBUG
-- C compile flags:              /MP;/Gy;/GR-;/W4;/WX;/wd4100;/wd4127;/wd4244;/wd4481;/wd4512;/wd4701;/wd4702;/wd4996;/Zi
-- C compile flags (Debug):      /MTd;/RTC1;/Od
-- C compile flags (Release):    /MT;/O2;/Ob2;/GF
-- C++ compile flags:            /MP;/Gy;/GR-;/W4;/WX;/wd4100;/wd4127;/wd4244;/wd4481;/wd4512;/wd4701;/wd4702;/wd4996;/Zi
-- C++ compile flags (Debug):    /MTd;/RTC1;/Od
-- C++ compile flags (Release):  /MT;/O2;/Ob2;/GF
-- Exe link flags:                /MANIFEST:NO;/LARGEADDRESSAWARE
-- Exe link flags (Debug):       /DEBUG
-- Exe link flags (Release):
-- Shared link flags:
-- Shared link flags (Debug):    /DEBUG
-- Shared link flags (Release):
-- CEF Binary files:             chrome_elf.dll;d3dcompiler_43.dll;d3dcompiler_47.dll;libcef.dll;libEGL.dll;libGLESv2.dll;natives_blob.bin;snapshot_blob.bin;v8_context_snapshot.bin;swiftshader
-- CEF Resource files:           cef.pak;cef_100_percent.pak;cef_200_percent.pak;cef_extensions.pak;devtools_resources.pak;icudtl.dat;locales
-- *** JCEF CONFIGURATION SETTINGS ***
-- Python executable:            C:/Python27/python.exe
-- Java directory:               C:/Java/jdk
-- JNI libraries:                C:/Java/jdk/lib/jawt.lib;C:/Java/jdk/lib/jvm.lib
-- JNI include directories:      C:/Java/jdk/include;C:/Java/jdk/include/win32;C:/Java/jdk/include
-- Configuring done
-- Generating done
-- Build files have been written to: F:/wv/src/jcef_build

F:\wv\src\jcef_build>

F:\wv\src\tools>make_distrib.bat win64
正在加载程序包org.cef.browser的源文件...
正在加载程序包org.cef.browser.mac的源文件...
正在加载程序包org.cef.callback的源文件...
正在加载程序包org.cef的源文件...
正在加载程序包org.cef.handler的源文件...
正在加载程序包org.cef.misc的源文件...
正在加载程序包org.cef.network的源文件...
正在构造 Javadoc 信息...
.\org\cef\browser\CefRenderer.java:11: 错误: 程序包com.jogamp.opengl不存在
import com.jogamp.opengl.GL2;
                        ^
.\org\cef\browser\CefRenderer.java:15: 错误: 找不到符号
    private GL2 initialized_context_ = null;
            ^
  符号:   类 GL2
  位置: 类 CefRenderer
.\org\cef\browser\CefRenderer.java:34: 错误: 找不到符号
    protected void initialize(GL2 gl2) {
                              ^
  符号:   类 GL2
  位置: 类 CefRenderer
.\org\cef\browser\CefRenderer.java:64: 错误: 找不到符号
    protected void cleanup(GL2 gl2) {
                           ^
  符号:   类 GL2
  位置: 类 CefRenderer
.\org\cef\browser\CefRenderer.java:69: 错误: 找不到符号
    protected void render(GL2 gl2) {
                          ^
  符号:   类 GL2
  位置: 类 CefRenderer
.\org\cef\browser\CefRenderer.java:161: 错误: 找不到符号
    protected void onPaint(GL2 gl2, boolean popup, Rectangle[] dirtyRects, ByteBuffer buffer,
                           ^
  符号:   类 GL2
  位置: 类 CefRenderer
.\org\cef\browser\CefBrowserOsr.java:23: 错误: 程序包com.jogamp.nativewindow不存在
import com.jogamp.nativewindow.NativeSurface;
                              ^
.\org\cef\browser\CefBrowserOsr.java:24: 错误: 程序包com.jogamp.opengl.awt不存在
import com.jogamp.opengl.awt.GLCanvas;
                            ^
.\org\cef\browser\CefBrowserOsr.java:25: 错误: 程序包com.jogamp.opengl不存在
import com.jogamp.opengl.GLAutoDrawable;
                        ^
.\org\cef\browser\CefBrowserOsr.java:26: 错误: 程序包com.jogamp.opengl不存在
import com.jogamp.opengl.GLEventListener;
                        ^
.\org\cef\browser\CefBrowserOsr.java:27: 错误: 程序包com.jogamp.opengl不存在
import com.jogamp.opengl.GLProfile;
                        ^
.\org\cef\browser\CefBrowserOsr.java:28: 错误: 程序包com.jogamp.opengl不存在
import com.jogamp.opengl.GLCapabilities;
                        ^
.\org\cef\browser\CefBrowserOsr.java:44: 错误: 找不到符号
    private GLCanvas canvas_;
            ^
  符号:   类 GLCanvas
  位置: 类 CefBrowserOsr
.\org\cef\browser\mac\CefBrowserWindowMac.java:9: 错误: 程序包sun.lwawt不存在
import sun.lwawt.LWComponentPeer;
                ^
.\org\cef\browser\mac\CefBrowserWindowMac.java:10: 错误: 程序包sun.lwawt不存在
import sun.lwawt.PlatformWindow;
                ^
.\org\cef\browser\mac\CefBrowserWindowMac.java:11: 错误: 程序包sun.lwawt.macosx不存在
import sun.lwawt.macosx.CFRetainedResource;
                       ^
.\org\cef\browser\mac\CefBrowserWindowMac.java:12: 错误: 程序包sun.lwawt.macosx不存在
import sun.lwawt.macosx.CPlatformWindow;
                       ^
标准 Doclet 版本 1.8.0_181
正在构建所有程序包和类的树...
正在生成..\out\docs\org\cef\browser\CefBrowser.html...
正在生成..\out\docs\org\cef\browser\CefBrowserFactory.html...
正在生成..\out\docs\org\cef\browser\CefBrowserWindow.html...
正在生成..\out\docs\org\cef\browser\CefFrame.html...
正在生成..\out\docs\org\cef\browser\CefMessageRouter.html...
.\org\cef\browser\CefMessageRouter.java:185: 警告 - @return 标记没有参数。
.\org\cef\browser\CefMessageRouter.java:185: 警告 - @param argument "config" 不是参数名称。
正在生成..\out\docs\org\cef\browser\CefMessageRouter.CefMessageRouterConfig.html...
正在生成..\out\docs\org\cef\browser\CefRequestContext.html...
。。。
正在生成..\out\docs\constant-values.html...
正在构建所有程序包和类的索引...
正在生成..\out\docs\overview-tree.html...
正在生成..\out\docs\index-all.html...
正在构建所有类的索引...
正在生成..\out\docs\allclasses-frame.html...
正在生成..\out\docs\allclasses-noframe.html...
正在生成..\out\docs\index.html...
正在生成..\out\docs\overview-summary.html...
正在生成..\out\docs\help-doc.html...
29 个警告
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
F:\wv\src\java\tests\detailed\BrowserFrame.java -> F:\wv\src\binary_distrib\win64\bin\tests\detailed\BrowserFrame.java
F:\wv\src\java\tests\detailed\MainFrame.java -> F:\wv\src\binary_distrib\win64\bin\tests\detailed\MainFrame.java
。。。
F:\wv\src\java\tests\detailed\ui\StatusPanel.java -> F:\wv\src\binary_distrib\win64\bin\tests\detailed\ui\StatusPanel.java
F:\wv\src\java\tests\simple\MainFrame.java -> F:\wv\src\binary_distrib\win64\bin\tests\simple\MainFrame.java
复制了 34 个文件
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
系统找不到指定的文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
已复制         1 个文件。
F:\wv\src\jcef_build\native\Release\cef.pak -> F:\wv\src\binary_distrib\win64\bin\lib\win64\cef.pak
F:\wv\src\jcef_build\native\Release\cef_100_percent.pak -> F:\wv\src\binary_distrib\win64\bin\lib\win64\cef_100_percent.pak
。。。
F:\wv\src\jcef_build\native\Release\locales\zh-CN.pak -> F:\wv\src\binary_distrib\win64\bin\lib\win64\locales\zh-CN.pak
F:\wv\src\jcef_build\native\Release\locales\zh-TW.pak -> F:\wv\src\binary_distrib\win64\bin\lib\win64\locales\zh-TW.pak
复制了 58 个文件
F:\wv\src\jcef_build\native\Release\swiftshader\libEGL.dll -> F:\wv\src\binary_distrib\win64\bin\lib\win64\swiftshader\libEGL.dll
F:\wv\src\jcef_build\native\Release\swiftshader\libGLESv2.dll -> F:\wv\src\binary_distrib\win64\bin\lib\win64\swiftshader\libGLESv2.dll
复制了 2 个文件
F:\wv\src\out\docs\allclasses-frame.html -> F:\wv\src\binary_distrib\win64\docs\allclasses-frame.html
F:\wv\src\out\docs\allclasses-noframe.html -> F:\wv\src\binary_distrib\win64\docs\allclasses-noframe.html
。。。
F:\wv\src\out\docs\org\cef\network\package-summary.html -> F:\wv\src\binary_distrib\win64\docs\org\cef\network\package-summary.html
F:\wv\src\out\docs\org\cef\network\package-tree.html -> F:\wv\src\binary_distrib\win64\docs\org\cef\network\package-tree.html
复制了 151 个文件
Creating README.TXT file.
已复制         1 个文件。
F:\wv\src\third_party\jogamp\gluegen.LICENSE.txt -> F:\wv\src\binary_distrib\win64\gluegen.LICENSE.txt
F:\wv\src\third_party\jogamp\jogl.LICENSE.txt -> F:\wv\src\binary_distrib\win64\jogl.LICENSE.txt
复制了 2 个文件
F:\wv\src\tools\distrib\win64\compile.bat -> F:\wv\src\binary_distrib\win64\compile.bat
F:\wv\src\tools\distrib\win64\run.bat -> F:\wv\src\binary_distrib\win64\run.bat
复制了 2 个文件

F:\wv\src\tools>

–END

Automatic Video Translation

Speech translation is being used more and more widely. The audio inside a video should be just as tractable: run it through the same speech pipeline to get subtitles and a translation.

The Google Translate page already has a voice-input button; all we have to do is route the audio of the video playing on the computer back in as the computer's input.

Speech recognition / translation link

https://speechlogger.appspot.com/zh/

Translation

How to do it

Audio-routing tools

Option 1 (recommended): VoiceMeeter

Option 2: virtual audio cable software

Detailed steps:

  1. Install one of the routing tools above (either will do)
  2. Set it as the system's audio playback device
  3. In the browser, click the record button, then click the microphone icon at the right of the address bar and choose the device to use in the microphone drop-down (e.g. VoiceMeeter Output)
  4. (Optional) If you also want to hear the audio while it is being translated, just open the VoiceMeeter application; it will automatically pick an output for you.

Remember: do not mute, and the system microphone must be turned on!!

–END

Flashing the Phicomm K2

I got a K2 from JD a long time ago. There was no real need for it back then, so I never tinkered with it. Recently I have been trying to bind a DDNS domain name to my dynamic IP so that I have a machine at home reachable over SSH. The old Raspberry Pi got broken, so I figured I would flash the K2 and install an SSH server on it.

While at it, also de-bloat the firmware provided on the official site.

Details of the original K2

Phicomm K2 1200M smart dual-band wireless router, wall-penetrating WiFi, PSG1218

Understanding the flashing process

  • Possible problems with the official firmware:

http://www.right.com.cn/forum/thread-208302-1-1.html

  • The flashing guide followed directly:

[2017-12-01] Phicomm K2 V22.5.9.163 customized official firmware, with Breed integrated, flashable straight from the stock firmware [V1.8]

Detailed steps

  1. Upgrade to V22.5.9.163

    Check the firmware offered on the official site and download the matching versions

    • K2_A2_V21.4.6.12.bin
    • K2_V22.5.9.163.bin
  2. Flash the cleaned-up build (with Breed) k2_163_v18_breed.rar

    • Download link

    • Flashing third-party firmware from Breed

      How to get into Breed; just good to know, no third-party firmware is flashed here.

      Unplug the network cable from the K2's WAN port, power the router off, hold down the reset button, power it back on, and release reset after 3 seconds. Then enter http://192.168.1.1 in the browser's address bar to reach the Breed Web console.

  3. Enable telnet / install SSH manually

3.1. Enable telnet

Run the command via Advanced Settings - System Settings - WebShell:

/www/cgi-bin# /usr/sbin/telnetd -l /bin/login.sh

It connects directly, no password needed!!

winse@DESKTOP-ADH7K1Q:~$ telnet 192.168.2.1

Also change the password while you are at it:

# change the root password to admin
echo -e 'admin\nadmin' | passwd root

3.2. Install SSH

This firmware build does not ship with opkg, so opkg has to be installed first.

Download opkg.zip directly, then start a local HTTP server to serve it.

winse@DESKTOP-ADH7K1Q:/mnt/e/SOFTWARE/k2$ python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...

Run this in the telnet session:

root@K2:/www/cgi-bin# cd /bin
root@K2:/bin# wget http://192.168.2.160:8000/opkg
--2018-06-20 22:50:18--  http://192.168.2.160:8000/opkg
Connecting to 192.168.2.160:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 130247 (127K) [application/octet-stream]
Saving to: 'opkg'

opkg                                  100%[=========================================================================>] 127.19K   176KB/s   in 0.7s

2018-06-20 22:50:18 (176 KB/s) - 'opkg' saved [130247/130247]

root@K2:/bin# chmod +x opkg
Note: delete it once you are done (`rm -rf /bin/opkg`), there is not enough space!! See [how to tell which packages occupy disk space](https://unix.stackexchange.com/questions/157097/how-to-know-disk-space-occupied-by-packages-in-openwrt)

```
rm -rf /bin/opkg

root@K2:/overlay# du -sh */*/*
root@K2:/overlay# rm -rf usr/lib/opkg
```

Then install SSH (dropbear):

opkg install http://downloads.openwrt.org/barrier_breaker/14.07/ramips/mt7620a/packages/base/dropbear_2014.63-2_ramips_24kec.ipk
# enable it to start at boot
/etc/init.d/dropbear enable

# https://openwrt.org/docs/guide-user/base-system/ssh_configuration
# https://wiki.openwrt.org/doc/uci/dropbear
vi /etc/config/dropbear
        option GatewayPorts '1'
        
# start it
/etc/init.d/dropbear start

uci show dropbear

# if the firewall needs to be opened up
iptables -I INPUT 1 -p tcp -m tcp --dport 22 -j ACCEPT


vi /etc/firewall.user
# delete unneeded files
rm -rf /etc/dropbear/dropbear_dss_host_key

Note: to make it persistent, write the iptables line that opens port 22 into /etc/firewall.user.
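
For example, roughly (my sketch, reusing the rule and the file mentioned above):

cat >> /etc/firewall.user <<'EOF'
iptables -I INPUT 1 -p tcp -m tcp --dport 22 -j ACCEPT
EOF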

Log in from the client:

winse@DESKTOP-ADH7K1Q:~$ ssh root@192.168.2.1
The authenticity of host '192.168.2.1 (192.168.2.1)' can't be established.
RSA key fingerprint is SHA256:vuAY65qk3Us4MyjYT8KPT8lYsTSTqru6W4e7My6CRkk.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.1' (RSA) to the list of known hosts.
root@192.168.2.1's password:


BusyBox v1.22.1 (2017-02-15 13:52:46 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

    ___  __ _______________  __  _____  ___  ________  ___
   / _ \/ // /  _/ ___/ __ \/  |/  /  |/  / / __/ __ \/ _ \
  / ___/ _  // // /__/ /_/ / /|_/ / /|_/ / _\ \/ /_/ / ___/
 /_/  /_//_/___/\___/\____/_/  /_/_/  /_/ /___/\____/_/
 ----------------------------------------------------------
 Barrier Breaker, unknown
 ----------------------------------------------------------
 PID=K2
 BUILD_TYPE=release
 BUILD_NUMBER=163
 BUILD_TIME=20170215-134532
 ----------------------------------------------------------
 MTK OpenWrt SDK V3.4
 revision : adab2180
 benchmark : APSoC SDK 5.0.1.0
 kernel : 144992
 ----------------------------------------------------------
root@K2:~#

Passwords are not recommended; public-key authentication is the better way. Public-key access had a small snag though: the .ssh directory permissions look like the troublemaker (in fact the file is simply in the wrong place!!).

Reference: Dropbear public-key authentication HowTo

ssh root@192.168.1.1 "tee -a /etc/dropbear/authorized_keys" < ~/.ssh/id_rsa.pub

Just move the authorized_keys file into /etc/dropbear and it works!

root@K2:~/.ssh# ls -la
drwx------    2 root     root             0 Jun 21 10:35 .
drwx------    1 root     root             0 Jun 21 08:57 ..
-rw-------    1 root     root           397 Jun 21 10:35 authorized_keys
root@K2:~/.ssh# mv authorized_keys /etc/dropbear/

Other extensions

Add storage by mounting a Windows shared directory

https://blog.vircloud.net/linux/openwrt-psg1218.html

The stock K2 has no USB port, which limits what you can do with it, but you can add storage by mounting an SMB share. Note that the Padavan ("老毛子") firmware mounts SMB differently from other OpenWRT builds, so a plain mount command will not work; the correct way is:

Location: Advanced Settings - Custom Settings - Scripts - "Run after router startup". Configure it as follows:

### Mount SMB shares (map a LAN share; adds storage space even without USB)
### Note: when writing the share path, every \ must be written as \\.
sleep 10
modprobe des_generic
modprobe cifs CIFSMaxBufSize=64512
#mkdir -p /media/cifs
#mount -t cifs \\\\{host}\\{share} /media/cifs -o username={user},password={pass}
mount -t cifs \\\\192.168.31.100\\移动磁盘-C /mnt -o username=guest,password=guest

sleep 10
mdev -s
sleep 5
stop_ftpsamba
sleep 2
run_ftpsamba
sleep 5

How to enter Breed

  1. Have the third-party firmware you want to flash ready.
  2. Power off, hold the reset button down, power on, and release reset after 5 seconds.
  3. Open http://192.168.1.1 in the browser to reach the Breed Web recovery console (remember to back up the EEPROM and the full flash image from the console's firmware backup page first; they may come in handy later).
  4. Before restoring a firmware, it is best to do a factory reset from the Breed Web console first, firmware type: Config region (stock layout).

References:

Other references

–END

Installing Mac OS X with VMware

References:

What I actually did:

  • Install VMware-workstation-full-12.5.7-5813279.
  • Download unlocker208.zip and run win-install.cmd with administrator rights.
  • Create a VM, choosing Apple Mac OS X(M) - OS X 10.9; then edit the vmx file and add smc.version = "0" right after smc.present = "TRUE" (see the sketch after this list).
  • Attach Mavericks_Install_13A603.cdr as the CD image and install the system. Format the disk via Utilities - Disk Utility.
  • Install VMware Tools: attach darwin6.0.3.iso from the darwiniso.zip archive as the CD.
  • Configure shared folders. Once inside the OS: Finder - Preferences - Connected Servers.
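
A minimal sketch of that vmx edit from the third bullet (the file name macos.vmx is just an example):

# with the VM powered off, append the SMC override if it is not already there
grep -q '^smc.version' macos.vmx || echo 'smc.version = "0"' >> macos.vmx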

–END