Winse Blog

Every stop along the way is scenery; all the hustle and bustle moves toward the best; all the busyness is for tomorrow, so what is there to fear?

Creating a Custom Vagrant BOX

I first ran into Vagrant + VirtualBox in 《奔跑吧Ansible》 (Ansible: Up and Running), but it didn't feel special at the time: it seemed about the same as installing a virtual machine by hand.

Later I read more about the two tools online and saw that many people use them to build development environments, and it suddenly clicked: you can use them that way too. Reinstalling the OS no longer has to be such an agonizing decision: with most software installed inside VirtualBox, after a reinstall you just start the VM and the whole development environment is back. Building clusters is also convenient, since Vagrant is driven from the command line together with a configuration file.

The official sites (Vagrantbox.es, Discover Vagrant Boxes) provide some ready-made images, for example CentOS 6:

But what is available online doesn't always meet your needs, so sometimes you have to roll up your sleeves and build your own box from scratch. A Vagrant box has to follow certain requirements/conventions; the official site provides documentation for this:

Why use Vagrant: https://www.oschina.net/translate/get-vagrant-up-and-running-in-no-time

Local development becomes pleasant. Vagrant is fast and simple, and helps you manage several development environments at once.

Imagine you are on a team of, say, 15 people building an application. The app is awesome! It uses the Laravel PHP framework, Redis and Memcached, the ImageMagick and GD PHP modules, curl, MySQL and PostgreSQL, and even MongoDB. On top of that, Laravel explicitly requires PHP 5.3.7 or later and the mcrypt PHP extension.

Ideally, you would want all 15 people to develop the application against the same environment. But not every team has a sysadmin expert or can train one, and getting everyone onto an identical setup can be a daunting task. On top of that, some people use Macs while others use Linux or Windows. Before long the developers are mired in endless configuration, ready to throw their machines at the wall in exhaustion.

In fact, the steps are neither many nor complicated, but you always run into some environment-specific problems. Below is my build process (Vagrant 1.9 + VirtualBox 5.1 + CentOS 6.9 i686).

Other advantages:

  • Once everything is configuration, it can be put under version control.
  • Easy to share.

Download and Install the OS

Don't install the LiveDVD version: it installs the desktop as well, the system grows by several GB, and the GUI isn't actually needed. The DVD installer, on the other hand, doesn't offer a minimal system option.

System Networking

After installing VirtualBox 5.1, the Windows host gains a local NIC called VirtualBox Host-Only Ethernet Adapter; you can delete the Host-Only network first via the VirtualBox menu [File - Preferences - Network].

Before installing, it helps to understand VirtualBox's NIC configuration; its options and their meanings differ somewhat from VMware's and are worth studying separately:

  • Not attached: as if the VM has no network cable plugged in; it cannot reach even the host.
  • NAT: the VM reaches the outside network only through the host via NAT, but nothing can reach the VM (it is one-way); the host can only connect to the VM through port forwarding. The simplest way to get a single VM online.
  • NAT Network
  • Bridged Adapter: the VM is bridged onto one of the host's NICs and exchanges packets with the outside directly, as if it did not go through the host at all. The VM can be given its own independent IP and behaves exactly like a real machine on the network (getting its address from the router).
  • Internal Network: a network visible only to VMs. VMs can reach each other as long as they are configured with the same network name.
  • Host-Only Adapter: a mix of internal networking and bridged mode, relying on a virtual host-side NIC. The VM can talk to the host and to the host's LAN, but not to the Internet. Judging by the F1 help it is much like an internal network, except that the host gains an extra NIC, so the VM can talk to the host (via the Host-Only adapter's IP); if the VM also needs Internet access, an additional bridged adapter still has to be configured.
  • Generic Driver

Some material found online:

Configuration

After installation the eth0 NIC is not enabled by default. Edit the network configuration and restart networking.

If the NIC fails to come up, run ifconfig -a to check whether the device really is eth0.
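A minimal sketch of that edit on CentOS 6, assuming DHCP on the NAT interface:

cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
EOF
service network restart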

Next, connect to the system and configure it for Vagrant.

To make the later configuration go smoothly, get the network working first. Typing in the VM's console window is very inconvenient, so add a port-forwarding rule and log in from the host with Putty/git-ssh or a similar tool (SSH to Vagrant box in Windows?).
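One way to add such a forwarding rule from the host command line (a sketch; the VM name centos6_i386 and host port 2222 are just examples, and the VM should be powered off when running modifyvm):

VBoxManage modifyvm "centos6_i386" --natpf1 "guestssh,tcp,,2222,,22"
# then connect with Putty/ssh to 127.0.0.1 port 2222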

Then configure things following the official documentation:

The steps are:

  1. Create an account whose username and password are both vagrant, and set the root password to vagrant as well
  2. Configure sudo
  3. Configure key-based passwordless login, writing the insecure vagrant public key into authorized_keys
  4. Install the Guest Additions (tools)
  5. Clean the yum cache, the contents of /tmp, and other temporary files
  6. Remove or disable devices the VM doesn't need
  7. Set the first NIC to NAT (Vagrant's port forwarding uses it, and it must be enabled at boot!); steps 6 and 7 can also be scripted, see the sketch after this list. boxes.html#virtual-machine
  8. Package: go to the directory where the VM is stored (it can be found from the snapshot/backup folder location under [Settings - Advanced]) and run vagrant package --base centos6_i386
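Steps 6 and 7 can also be done from the host command line instead of the GUI; a rough sketch (run with the VM powered off, the VM name is an example):

VBoxManage modifyvm "centos6_i386" --audio none --usb off
VBoxManage modifyvm "centos6_i386" --nic1 nat --boot1 disk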
[root@localhost ~]# passwd

[root@localhost ~]# useradd vagrant
[root@localhost ~]# passwd vagrant

[root@localhost ~]# echo 'vagrant ALL=(ALL) NOPASSWD: ALL' >/etc/sudoers

[root@localhost ~]# su - vagrant
[vagrant@localhost ~]$ mkdir .ssh && chmod 700 .ssh && cd .ssh
[vagrant@localhost .ssh]$ curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub -o authorized_keys 
[vagrant@localhost .ssh]$ chmod 600 authorized_keys 

The commands executed to install the Guest Additions (tools) are pulled out separately here:

# wget http://download.virtualbox.org/virtualbox/5.1.26/VBoxGuestAdditions_5.1.26.iso
curl -o VBoxGuestAdditions_5.1.26.iso http://download.virtualbox.org/virtualbox/5.1.26/VBoxGuestAdditions_5.1.26.iso
mkdir /media/VBoxGuestAdditions
mount -o loop,ro VBoxGuestAdditions_5.1.26.iso /media/VBoxGuestAdditions

Things never go completely smoothly; the dependencies have to be dealt with, as follows:

[root@localhost ~]# sh /media/VBoxGuestAdditions/VBoxLinuxAdditions.run 
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.26 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Starting the VirtualBox Guest Additions.
Failed to set up service vboxadd, please check the log file
/var/log/VBoxGuestAdditions.log for details.
[root@localhost ~]# cat /var/log/VBoxGuestAdditions.log

vboxadd.sh: failed: Look at /var/log/vboxadd-install.log to find out what went wrong.
vboxadd.sh: failed: Look at /var/log/vboxadd-install.log to find out what went wrong.
vboxadd.sh: failed: modprobe vboxguest failed.
[root@localhost ~]# cat /var/log/vboxadd-install.log
/tmp/vbox.0/Makefile.include.header:112: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again.  Stop.
Creating user for the Guest Additions.
Creating udev rule for the Guest Additions kernel module.

# Fix
[root@localhost ~]# yum install gcc make patch glibc-headers glibc-devel kernel-headers -y 
[root@localhost ~]# yum install kernel-devel # / yum install kernel-devel-2.6.32-696.el6.i686  
[root@localhost ~]# export KERN_DIR=/usr/src/kernels/2.6.32-696.6.3.el6.i686  <- adjust to match your kernel
[root@localhost ~]# sh /media/VBoxGuestAdditions/VBoxLinuxAdditions.run 
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.26 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Removing installed version 5.1.26 of VirtualBox Guest Additions...
vboxadd.sh: Stopping VirtualBox Additions.
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Starting the VirtualBox Guest Additions.

Could not find the X.Org or XFree86 Window System, skipping.

After the software you want is installed and configured (jdk/tomcat/mysql/pgsql/redis/…), clean the caches before packaging:

yum clean all
history -c
rm -rf ~/.bash_history
rm -rf /tmp/* /var/log/* /var/cache/*

Then open a Windows command prompt, change into the directory holding the VM's disk files, and package it:

C:\Users\XXXX\VirtualBox VMs\centos6_i386>vagrant package --base centos6_i386
2017/08/24 07:18:04 launcher: detected 32bit Windows installation
==> centos6_i386: Clearing any previously set forwarded ports...
==> centos6_i386: Exporting VM...
==> centos6_i386: Compressing package to: C:/Users/XXXX/VirtualBox VMs/centos6_i386/package.box

Setting Up the Development Environment

The actual commands used

Re-binding After an OS Reinstall

After reinstalling the system, the Vagrant and VirtualBox files under the user directory on the C: drive were not preserved, and the next start showed Vagrant spinning up a brand-new virtual machine.

A VM is, after all, still a machine; you don't treat it the way you treat docker, and plenty of files and configuration live inside the VM itself. At this point Vagrant and VirtualBox have become decoupled, and what we need to do is bind them back together:

  • First double-click the box to start the VM directly. This creates/updates VirtualBox.xml under the .VirtualBox directory in the user profile; open the file and find the uuid in the MachineEntry for this VM.
  • Open the id file under the original Vagrant project's .vagrant\machines\default\virtualbox directory and replace its content with the latest VirtualBox uuid (see the sketch after this list).
  • The previous steps link the two back together, but key-based login no longer works; write the public key from GitHub into the vagrant user's authorized_keys inside the VM again.
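A rough sketch of the re-binding from git-bash (the VM name and the project path are only examples):

VBoxManage list vms
#   "centos6_i386" {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}    <- the same uuid appears in VirtualBox.xml
echo -n 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' > /d/work/centos6/.vagrant/machines/default/virtualbox/id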

At this point the VM can be started with vagrant up again; the binding has been restored.

Miscellaneous

vagrant + virtualbox + nginx cache

vagrant + java develop env

git

–END

Deploying k8s with Kubeadm (with the resources already on hand)

The previous installation post needed a proxy along the way and ran into all sorts of interleaved problems, so it reads a bit messy. After getting everything working in a local VM, I upgraded the test environment today as well; this is a second, hopefully clearer, write-up.

The rpms and docker images needed for the installation can be downloaded from Baidu Netdisk: http://pan.baidu.com/s/1hrRs5MW

Work that needs to be done beforehand (all already configured here):
* time synchronization,
* hostnames,
* /etc/hosts,
* firewall,
* selinux,
* passwordless SSH login,
* docker-1.12.6 installed

Cluster layout:
* machines: cu[1-5]
* master node: cu3
* jump host: cu2 (has a public IP)

First, build a local YUM repository and load the docker images onto all nodes

Set up the local YUM repository on one host

[root@cu2 ~]# cd /var/www/html/kubernetes/
[root@cu2 kubernetes]# createrepo .
[root@cu2 kubernetes]# ll
total 42500
-rw-r--r-- 1 hadoop hadoop  8974214 Aug 10 15:22 1a6f5f73f43077a50d877df505481e5a3d765c979b89fda16b8b9622b9ebd9a4-kubeadm-1.7.2-0.x86_64.rpm
-rw-r--r-- 1 hadoop hadoop 17372710 Aug 10 15:22 1e508e26f2b02971a7ff5f034b48a6077d613e0b222e0ec973351117b4ff45ea-kubelet-1.7.2-0.x86_64.rpm
-rw-r--r-- 1 hadoop hadoop  9361006 Aug 10 15:22 dc8329515fc3245404fea51839241b58774e577d7736f99f21276e764c309db5-kubectl-1.7.2-0.x86_64.rpm
-rw-r--r-- 1 hadoop hadoop  7800562 Aug 10 15:22 e7a4403227dd24036f3b0615663a371c4e07a95be5fee53505e647fd8ae58aa6-kubernetes-cni-0.5.1-0.x86_64.rpm
drwxr-xr-x 2 root   root       4096 Aug 10 15:58 repodata

(All nodes) Load the new images

Run on cu2: load the docker images onto every node

docker load </home/hadoop/kubeadm.tar
ssh cu1 docker load </home/hadoop/kubeadm.tar 
ssh cu3 docker load </home/hadoop/kubeadm.tar
ssh cu4 docker load </home/hadoop/kubeadm.tar
ssh cu5 docker load </home/hadoop/kubeadm.tar

Loaded image: gcr.io/google_containers/etcd-amd64:3.0.17
Loaded image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
Loaded image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.2
Loaded image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
Loaded image: gcr.io/google_containers/heapster-amd64:v1.3.0
Loaded image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.2
Loaded image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.1
Loaded image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
Loaded image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
Loaded image: centos:centos6
Loaded image: gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
Loaded image: gcr.io/google_containers/pause-amd64:3.0
Loaded image: nginx:latest
Loaded image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.2
Loaded image: gcr.io/google_containers/kube-proxy-amd64:v1.7.2
Loaded image: quay.io/coreos/flannel:v0.8.0-amd64

YUM repository configuration

Run on cu2

cat > /etc/yum.repos.d/dta.repo  <<EOF
[K8S]
name=K8S Local
baseurl=http://cu2:801/kubernetes
enabled=1
gpgcheck=0
EOF

for h in cu1 cu{3..5} ; do scp /etc/yum.repos.d/dta.repo $h:/etc/yum.repos.d/ ; done

Install kubeadm and kubelet

pdsh -w cu[1-5] "yum clean all; yum install -y kubelet kubeadm; systemctl enable kubelet "

Deploy the cluster with kubeadm

Master node

Initialization

[root@cu3 ~]# kubeadm init --skip-preflight-checks --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.7.2 

After starting, it hangs at Created API client, waiting for the control plane to become ready. Do not close this window; open a new one to inspect and fix the errors:

Problem 1

In the new window, /var/log/messages shows the following error:

Aug 12 23:40:10 cu3 kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

The docker and kubelet cgroup drivers don't match, so change the kubelet configuration; while at it, change docker's ip-masq startup parameter as well.

[root@cu3 ~]# sed -i 's/KUBELET_CGROUP_ARGS=--cgroup-driver=systemd/KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
[root@cu3 ~]# sed -i 's#/usr/bin/dockerd.*#/usr/bin/dockerd --ip-masq=false#' /usr/lib/systemd/system/docker.service

[root@cu3 ~]# systemctl daemon-reload; systemctl restart docker kubelet 

Opening more windows to fix problems does not affect the running kubeadm. In other words, if kubeadm gets stuck partway because of some other issue, it carries on configuring until it succeeds once you have fixed that issue.

Once initialization finishes, the complete log in the window looks like this:

[root@cu3 ~]# kubeadm init --skip-preflight-checks --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.7.2 
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [cu3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.148]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[apiclient] Created API client, waiting for the control plane to become ready
 [apiclient] All control plane components are healthy after 494.001036 seconds
[token] Using token: ad430d.beff5be4b98dceec
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token ad430d.beff5be4b98dceec 192.168.0.148:6443

Then, following the hints above, set up the config file that kubectl needs:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point the core K8S services (controller-manager, apiserver, scheduler) are up, but DNS still has a problem:

[root@cu3 kubeadm]# kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-cu3                      1/1       Running   0          6m
kube-system   kube-apiserver-cu3            1/1       Running   0          5m
kube-system   kube-controller-manager-cu3   1/1       Running   0          6m
kube-system   kube-dns-2425271678-wwnkp     0/3       Pending   0          6m
kube-system   kube-proxy-ptnlx              1/1       Running   0          6m
kube-system   kube-scheduler-cu3            1/1       Running   0          6m

The DNS containers use the bridge network and can only run once the pod network is configured. The error log looks like this:

Aug 12 23:54:04 cu3 kubelet: W0812 23:54:04.800316   12886 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 12 23:54:04 cu3 kubelet: E0812 23:54:04.800472   12886 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Download the flannel configuration from the https://github.com/winse/docker-hadoop/tree/master/kube-deploy/kubeadm directory:

The flannel config is tweaked slightly: compared with the upstream file, "ipMasq": false is added to cni-conf.json.

# Configure the pod network
[root@cu3 kubeadm]# kubectl apply -f kube-flannel.yml 
kubectl apply -f kube-flannel-rbac.yml 
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
[root@cu3 kubeadm]# kubectl apply -f kube-flannel-rbac.yml 
clusterrole "flannel" created
clusterrolebinding "flannel" created

# after a short wait, the dns pods have started as well
[root@cu3 kubeadm]# kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-cu3                      1/1       Running   0          7m
kube-system   kube-apiserver-cu3            1/1       Running   0          7m
kube-system   kube-controller-manager-cu3   1/1       Running   0          7m
kube-system   kube-dns-2425271678-wwnkp     3/3       Running   0          8m
kube-system   kube-flannel-ds-dbvkj         2/2       Running   0          38s
kube-system   kube-proxy-ptnlx              1/1       Running   0          8m
kube-system   kube-scheduler-cu3            1/1       Running   0          7m

Node deployment

Configure kubelet and docker

sed -i 's/KUBELET_CGROUP_ARGS=--cgroup-driver=systemd/KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
sed -i 's#/usr/bin/dockerd.*#/usr/bin/dockerd --ip-masq=false#' /usr/lib/systemd/system/docker.service 

systemctl daemon-reload; systemctl restart docker kubelet 

Note: with ip-masq=false set, docker0 can no longer reach the Internet, which means containers started directly with the docker command lose outbound Internet access!

ExecStart=/usr/bin/dockerd --ip-masq=false

Join the cluster

kubeadm join --token ad430d.beff5be4b98dceec 192.168.0.148:6443 --skip-preflight-checks

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.148:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.148:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.0.148:6443"
[discovery] Successfully established connection with API Server "192.168.0.148:6443"
[bootstrap] Detected server version: v1.7.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

CU2 is the jump host; copy the kubectl config over and you can then run commands on CU2:
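A minimal sketch of that copy, run on cu2 (the admin.conf path is the one shown in the kubeadm init output above):

mkdir -p $HOME/.kube
scp cu3:/etc/kubernetes/admin.conf $HOME/.kube/config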

[root@cu2 kube-deploy]# kubectl get nodes
NAME      STATUS     AGE         VERSION
cu2       NotReady   <invalid>   v1.7.2
cu3       Ready      25m         v1.7.2

[root@cu2 kube-deploy]# kubectl proxy 
Starting to serve on 127.0.0.1:8001

My SecureCRT SOCKS proxy runs against this machine, so the local browser can open http://localhost:8001/ui. Nice.

After all 5 machines have joined successfully:

[root@cu3 ~]# kubectl get nodes 
NAME      STATUS    AGE       VERSION
cu1       Ready     32s       v1.7.2
cu2       Ready     3m        v1.7.2
cu3       Ready     29m       v1.7.2
cu4       Ready     26s       v1.7.2
cu5       Ready     20s       v1.7.2

Firewall configuration on all nodes (these are cloud hosts, so firewall rules are added):

firewall-cmd --zone=trusted --add-source=192.168.0.0/16 --permanent 
firewall-cmd --zone=trusted --add-source=10.0.0.0/8 --permanent 
firewall-cmd --complete-reload

SOURCE IP Test

Last time there was a source-IP problem; it should be gone now. Looking at the iptables-save output, there are no cni0/cbr0 related entries.
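A quick way to check (a simple sketch; no output is expected here):

iptables-save | grep -E 'cni0|cbr0'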

Still, let's test it once more:

kubectl run centos --image=cu.esw.cn/library/java:jdk8 --command -- vi 
kubectl scale --replicas=4 deployment/centos

[root@cu2 kube-deploy]# pods
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE         IP              NODE
default       centos-3954723268-62tpc                 1/1       Running   0          <invalid>   10.244.2.2      cu1
default       centos-3954723268-6cmf9                 1/1       Running   0          <invalid>   10.244.1.2      cu2
default       centos-3954723268-blfc4                 1/1       Running   0          <invalid>   10.244.3.2      cu4
default       centos-3954723268-tb1rn                 1/1       Running   0          <invalid>   10.244.4.2      cu5
default       nexus-djr9c                             1/1       Running   0          2m          192.168.0.37    cu1

# TEST: pod-to-pod ping works

[root@cu2 hadoop]# ./pod_bash centos-3954723268-62tpc default
[root@centos-3024873821-4490r /]# ping 10.244.4.2 -c 1

# TEST: the source IP is preserved

[root@centos-3954723268-62tpc opt]# yum install epel-release -y  
[root@centos-3954723268-62tpc opt]# yum install -y nginx 
[root@centos-3954723268-62tpc opt]# service nginx start

[root@centos-3954723268-blfc4 opt]# curl 10.244.2.2
[root@centos-3954723268-tb1rn opt]# curl 10.244.2.2

[root@centos-3954723268-62tpc opt]# less /var/log/nginx/access.log 

DNS/heapster

Strangely enough, reinstalling DNS went without a hitch this time, and heapster installed on the first try as well.

Running nslookup kubernetes.default from pods started on cu3 also works!

Monitoring

# -- heapster
[root@cu2 kubeadm]# kubectl apply -f heapster/influxdb/
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
[root@cu2 kubeadm]# kubectl apply -f heapster/rbac/
clusterrolebinding "heapster" created

# -- dashboard
[root@cu2 kubeadm]# kubectl apply -f kubernetes-dashboard.yaml 
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

[root@cu2 kubeadm]# kubectl get service --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.96.0.1       <none>        443/TCP         18m
kube-system   kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   18m
kube-system   kubernetes-dashboard   10.104.165.81   <none>        80/TCP          5m

After a short wait, list all the services:

[root@cu2 kubeadm]# kubectl get services --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.96.0.1        <none>        443/TCP         2h
kube-system   heapster               10.102.176.168   <none>        80/TCP          3m
kube-system   kube-dns               10.96.0.10       <none>        53/UDP,53/TCP   2h
kube-system   kubernetes-dashboard   10.110.2.118     <none>        80/TCP          2m
kube-system   monitoring-grafana     10.106.251.155   <none>        80/TCP          3m
kube-system   monitoring-influxdb    10.100.168.147   <none>        8086/TCP        3m

Visit 10.106.251.155 directly, or look at the monitoring pods' logs, to check heapster's status. The graphs on the dashboard take a little while to appear.

If the CLUSTER and POD graphs show up when you access the monitoring-grafana IP but the dashboard graphs just won't appear, redeploy the dashboard:

kubectl delete -f kubernetes-dashboard.yaml 
kubectl create -f kubernetes-dashboard.yaml 

With that, the whole K8S cluster is running again in the test environment.

I didn't install Harbor; it isn't used much day to day, and with only 5 machines a plain docker save/load isn't much work.

References

–END

Protecting/Encrypting JAVA Code

Because Java compiles to intermediate bytecode, javap and the usual decompilers can recover a rough view of the source, so code shipped to customers needs some treatment: obfuscation or encryption. Below, grouped by topic, is the material I consulted during the actual work, in the hope that it helps anyone who is interested.

Custom ClassLoader

Obfuscation + ClassLoader

Custom ClassLoader with decryption implemented in Java

Custom ClassLoader (JVMTI) with decryption implemented in C++

Some others

JNI

javah

Environment setup and getting started

When adding jni.h to the Additional Include Directories, make sure to pick the Configuration and Platform that you actually build!! They have to correspond. The JNI .h files need to be placed inside the C++ project itself; referencing them from an external location didn't seem to be found and caused problems.
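For reference, the JNI header consumed by the C++ project is generated with javah, roughly like this (a sketch; the class name is made up):

javac NativeCrypto.java      # a class declaring native methods (hypothetical name)
javah -jni NativeCrypto      # produces NativeCrypto.h to add to the C++ project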

Type conversions between Java and C++

Calling C++ encryption routines through JNI

OPENSSL

Installing/building OPENSSL on WINDOWS and using it from VS:

Installing OpenSSL 1.0.2 on VS2015 via NuGet (the approach I used):

GCC

DES

AES

AES CBC encryption/decryption interoperable across Java/PHP/C++; Java and C++ can decrypt each other's output

OPENSSL MD5: VS + GCC + JAVA + command line

# SHA256, used in chef cookbooks
openssl dgst -sha256 path/to/myfile
# MD5
openssl dgst -md5 path/to/myfile
echo -n 'text to be encrypted' | md5sum -
$ echo -n 123456 | md5sum | awk '{print $1}'
$ echo -n Welcome | md5sum

[root@cu2 ~]# gcc -Wall -lcrypto -lssl opensslmd5.cpp -o md5
[root@cu2 ~]# ./md5
md5 digest: 56ab24c15b72a457069c5ea42fcfc640

makefile

CC=g++
CFLAGS=-Wall -g -O2
LIBS=-lcrypto

all: aes

aes: aes.cc
    $(CC) $(CFLAGS) aes.cc -o $@ $(LIBS)

clean:
    @rm -f aes

OPENSSL command line

openssl des3 -nosalt -k abc123 -in file.txt -out file.des3 # encrypt with key abc123, no salt
openssl des3 -d -nosalt -in file.des3 -out f.txt -k abc123 # decrypt

# -salt is the default; without salt, the key and iv derived from the passphrase never change, for example:

You can get openssl to base64-encode the message by using the -a option:
stefano:~$ openssl aes-256-cbc -in attack-plan.txt -a

[root@cu2 ~]# echo -n DES | openssl aes-128-cbc -a -salt -k abcdefghijklmnop
[root@cu2 ~]# echo -n DES | openssl aes-128-cbc -k abcdefghijklmnop |  openssl aes-128-cbc -d -k abcdefghijklmnop

Other

Hex/binary conversion in the SHELL:

el@defiant ~ $ printf '%x\n' 26          # decimal -> hex (prints 1a)
el@defiant ~ $ echo $((0xAA))            # hex -> decimal (prints 170)
printf -v result1 "%x" "$decimal1"       # store the hex form of $decimal1 in result1
% xxd -l 16 -p /dev/random               # 16 random bytes as hex
193f6c54814f0576bc27d51ab39081dc
$ echo -n $'\x12\x34' | xxd -p           # raw bytes -> hex

$ echo -n $'\x12\x34' | hexdump -e '"%x"'

od -vt x1|awk '{$1="";print}'            # per-byte hex dump without the offset column
echo "obase=16; 34" | bc                 # decimal -> hex with bc (prints 22)

Keeping the C++ console window from closing right away: in the end I settled for a breakpoint instead; I never found a good way!!

File I/O

g++

git

The important reference articles, listed once more

TODO: build and packaging

–END

NFS on CentOS 7

References

Commands

Installation

[root@cu3 data]# yum install nfs-utils -y 
[root@cu3 data]# chmod -R 777 /data/k8s-dta

systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap

systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap

Configuration

[root@cu3 data]# vi /etc/exports
/data/k8s-dta 192.168.0.0/24(rw,sync,no_root_squash,no_all_squash)

Explanation:

/data/k8s-dta – the directory to share
192.168.0.0/24 – the client IP range allowed to access the NFS share
rw – allow read and write access to the share
sync – write changes to the share synchronously
no_root_squash – allow root access
no_all_squash – preserve the client users' identities
no_subtree_check – If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers. (Setting Up an NFS Server)

Then restart the service and open up the firewall (or turn it off):

systemctl restart nfs-server

firewall-cmd --permanent --zone=public --add-service=ssh
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload
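To verify that the export took effect, a quick check (both commands ship with nfs-utils):

[root@cu3 data]# exportfs -v
[root@cu2 opt]# showmount -e cu3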

Client configuration

[root@cu2 opt]# yum install -y nfs-utils

[root@cu2 opt]# mount cu3:/data/k8s-dta dta
[root@cu2 opt]# touch dta/abc
[root@cu2 opt]# ll dta
total 0
-rw-r--r-- 1 root root 0 Aug  3  2017 abc

[root@cu3 data]# ll k8s-dta/
total 0
-rw-r--r-- 1 root root 0 Aug  3 15:19 abc
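To make the mount persistent on the CentOS client, an fstab entry along these lines should work (a sketch; adjust the mount point to taste):

[root@cu2 opt]# echo 'cu3:/data/k8s-dta /opt/dta nfs defaults,_netdev 0 0' >> /etc/fstab
[root@cu2 opt]# mount -a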

on ubuntu

# In this post 10.8.133.83 will be the IP of our NFS Server.
$ apt update && sudo apt upgrade -y
$ sudo apt-get install nfs-kernel-server nfs-common -y

$ mkdir /vol
$ chown -R nobody:nogroup /vol

# We need to set in the exports file, the clients we would like to allow:
# 
# rw: Allows Client R/W Access to the Volume.
# sync: This option forces NFS to write changes to disk before replying. More stable and Consistent. Note, it does reduce the speed of file operations.
# no_subtree_check: This prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
# In order for the containers to be able to change permissions, you need to set (rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
$ echo '/vol 10.8.133.83(rw,sync,no_subtree_check) 10.8.166.19(rw,sync,no_subtree_check) 10.8.142.195(rw,sync,no_subtree_check)' >> /etc/exports

$ sudo systemctl restart nfs-kernel-server
$ sudo systemctl enable nfs-kernel-server

Client Side:

$ sudo apt-get install nfs-common -y

$ sudo mount 10.8.133.83:/vol /mnt
$ sudo umount /mnt
$ df -h

$ sudo bash -c "echo '10.8.133.83:/vol /mnt nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0' >> /etc/fstab"
$ sudo mount -a
$ df -h

Postscript

With the NFS service in place, it can serve as storage for k8s containers, so there is no longer any fear of losing data.
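For example, a minimal PersistentVolume pointing at this share could look roughly like the following (a sketch only; the PV name and size are made up):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-dta-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: cu3
    path: /data/k8s-dta
EOF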

–END

The EncFS Encrypted Filesystem

For data security, my manager recently sent me a link and asked me to look into eCryptfs. It failed to run whether installed via yum or built by hand: the CentOS 7 kernel on this system doesn't ship the ecryptfs module.

Through a link related to an ecryptfs write-up I learned that encfs does much the same thing as ecryptfs, so I downloaded and installed it, and it turns out it can be used on Windows as well (a nice surprise).

EPEL already ships an encfs rpm. These days, if a package exists in a repository I no longer build it myself (after surviving N rounds of upgrades, yum/rpm turned out to be the final answer).

[root@k8s ~]# yum install fuse 
[root@k8s ~]# yum install encfs

Mount and create the encrypted directory
[root@k8s shm]# encfs /dev/shm/.test /dev/shm/test
The directory "/dev/shm/.test/" does not exist. Should it be created? (y,n) y
The directory "/dev/shm/test/" does not exist. Should it be created? (y,n) y
Creating new encrypted volume.
Please choose from one of the following options:
 enter "x" for expert configuration mode,
 enter "p" for pre-configured paranoia mode,
 anything else, or an empty line will select standard mode.
?>

Standard configuration selected.

Configuration finished.  The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 3:0:2
Filename encoding: "nameio/block", version 4:0:2
Key Size: 192 bits
Block Size: 1024 bytes
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File holes passed through to ciphertext.

Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism.  However, the password can be changed
later using encfsctl.

New Encfs Password: 123456
Verify Encfs Password:

[root@k8s shm]# echo $(hostname) > test/hostname.txt
[root@k8s shm]# ll -R -a
.:
total 0
drwxrwxrwt.  4 root root   80 Aug  4 22:04 .
drwxr-xr-x. 20 root root 3260 Aug  4 21:16 ..
drwx------.  2 root root   80 Aug  4 22:06 test
drwx------.  2 root root   80 Aug  4 22:06 .test

./test:
total 4
drwx------. 2 root root 80 Aug  4 22:06 .
drwxrwxrwt. 4 root root 80 Aug  4 22:04 ..
-rw-r--r--. 1 root root  4 Aug  4 22:06 hostname.txt

./.test:
total 8
drwx------. 2 root root   80 Aug  4 22:06 .
drwxrwxrwt. 4 root root   80 Aug  4 22:04 ..
-rw-r--r--. 1 root root 1263 Aug  4 22:04 .encfs6.xml
-rw-r--r--. 1 root root   12 Aug  4 22:06 pAqhW671kQSK4kPLJM-TF6sp

Unmount
[root@k8s shm]# fusermount -u test
[root@k8s shm]# ll -R -a
.:
total 0
drwxrwxrwt.  4 root root   80 Aug  4 22:04 .
drwxr-xr-x. 20 root root 3260 Aug  4 21:16 ..
drwx------.  2 root root   40 Aug  4 22:04 test
drwx------.  2 root root   80 Aug  4 22:06 .test

./test:
total 0
drwx------. 2 root root 40 Aug  4 22:04 .
drwxrwxrwt. 4 root root 80 Aug  4 22:04 ..

./.test:
total 8
drwx------. 2 root root   80 Aug  4 22:06 .
drwxrwxrwt. 4 root root   80 Aug  4 22:04 ..
-rw-r--r--. 1 root root 1263 Aug  4 22:04 .encfs6.xml
-rw-r--r--. 1 root root   12 Aug  4 22:06 pAqhW671kQSK4kPLJM-TF6sp

Note: back up .encfs6.xml somewhere safe; if this file is corrupted or lost, the encrypted files cannot be recovered.
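For example (a small sketch; the backup destination is arbitrary):

[root@k8s shm]# cp /dev/shm/.test/.encfs6.xml /root/encfs6.xml.bak   # keep a copy of the volume metadata
[root@k8s shm]# encfsctl passwd /dev/shm/.test                       # the password can also be changed later this way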

Back the encrypted files up to a cloud drive, and mounting them locally shows the original content: a secure cloud drive, implemented just like that.

Install EncFSMP on Windows and you can work with encfs filesystems exactly as on Linux.

EncFS works on a different principle from TrueCrypt containers, which store the encrypted files inside a single large file; EncFS instead creates a separate encrypted file for every file you add. That suits cloud storage services much better, since a TrueCrypt container would have to be re-uploaded in full after every change.

Reference links

–END