[root@localhost ~]# sh /media/VBoxGuestAdditions/VBoxLinuxAdditions.run
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.26 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Starting the VirtualBox Guest Additions.
Failed to set up service vboxadd, please check the log file
/var/log/VBoxGuestAdditions.log for details.
[root@localhost ~]# cat /var/log/VBoxGuestAdditions.log
vboxadd.sh: failed: Look at /var/log/vboxadd-install.log to find out what went wrong.
vboxadd.sh: failed: Look at /var/log/vboxadd-install.log to find out what went wrong.
vboxadd.sh: failed: modprobe vboxguest failed.
[root@localhost ~]# cat /var/log/vboxadd-install.log
/tmp/vbox.0/Makefile.include.header:112: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop.
Creating user for the Guest Additions.
Creating udev rule for the Guest Additions kernel module.
# Fix
[root@localhost ~]# yum install gcc make patch glibc-headers glibc-devel kernel-headers -y
[root@localhost ~]# yum install kernel-devel -y   # or pin the exact version: yum install kernel-devel-2.6.32-696.el6.i686
[root@localhost ~]# export KERN_DIR=/usr/src/kernels/2.6.32-696.6.3.el6.i686   # <- adjust to your kernel version
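If the installed kernel-devel matches the running kernel, KERN_DIR can be derived instead of hard-coded; a small sketch (assumes kernel-devel for the running kernel is installed):
[root@localhost ~]# ls /usr/src/kernels/                          # list the available source trees
[root@localhost ~]# export KERN_DIR=/usr/src/kernels/$(uname -r)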
[root@localhost ~]# sh /media/VBoxGuestAdditions/VBoxLinuxAdditions.run
Verifying archive integrity... All good.
Uncompressing VirtualBox 5.1.26 Guest Additions for Linux...........
VirtualBox Guest Additions installer
Removing installed version 5.1.26 of VirtualBox Guest Additions...
vboxadd.sh: Stopping VirtualBox Additions.
Copying additional installer modules ...
Installing additional modules ...
vboxadd.sh: Starting the VirtualBox Guest Additions.
Could not find the X.Org or XFree86 Window System, skipping.
After the software (jdk/tomcat/mysql/pgsql/redis/…) is installed and configured, clean up caches before packaging:
yum clean all
history -c
rm -rf ~/.bash_history
rm -rf /tmp/* /var/log/* /var/cache/*
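An optional extra step: zeroing out free space first lets the packaged .box compress noticeably smaller:
dd if=/dev/zero of=/EMPTY bs=1M   # fills the disk; the resulting "no space left" error is expected
rm -f /EMPTY && sync              # remove the filler so the zeroed blocks become free space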
Then open a Windows command prompt, change into the VM's disk directory, and package it:
C:\Users\XXXX\VirtualBox VMs\centos6_i386>vagrant package --base centos6_i386
2017/08/24 07:18:04 launcher: detected 32bit Windows installation
==> centos6_i386: Clearing any previously set forwarded ports...
==> centos6_i386: Exporting VM...
==> centos6_i386: Compressing package to: C:/Users/XXXX/VirtualBox VMs/centos6_i386/package.box
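The resulting package.box can then be registered and used like any other Vagrant box; a sketch (the box name and the fresh project directory are examples):
C:\test> vagrant box add centos6_i386 "C:/Users/XXXX/VirtualBox VMs/centos6_i386/package.box"
C:\test> vagrant init centos6_i386
C:\test> vagrant up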
Run this on cu2:
cat > /etc/yum.repos.d/dta.repo <<EOF
[K8S]
name=K8S Local
baseurl=http://cu2:801/kubernetes
enabled=1
gpgcheck=0
EOF
for h in cu1 cu{3..5} ; do scp /etc/yum.repos.d/dta.repo $h:/etc/yum.repos.d/ ; done
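A quick check that every host now sees the repo (relies on the same passwordless SSH as the scp loop above):
for h in cu1 cu{3..5} ; do ssh $h 'yum clean all -q ; yum repolist | grep K8S' ; done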
After starting, kubeadm init hangs at "Created API client, waiting for the control plane to become ready". Do not close the current window; open a new one to inspect and fix the errors:
Problem 1
In a new window, /var/log/messages shows the following error:
Aug 12 23:40:10 cu3 kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
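The two drivers have to agree. One fix, assuming the kubelet flags live in the RPM's systemd drop-in (the usual location for the kubeadm 1.7 packages), is to switch the kubelet to cgroupfs to match Docker:
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
After the kubelet restarts, kubeadm init can be re-run: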
[root@cu3 ~]# kubeadm init --skip-preflight-checks --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.7.2
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [cu3 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.148]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 494.001036 seconds
[token] Using token: ad430d.beff5be4b98dceec
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token ad430d.beff5be4b98dceec 192.168.0.148:6443
Even after init succeeds, the nodes stay NotReady and /var/log/messages keeps logging CNI warnings until a pod network is deployed:
Aug 12 23:54:04 cu3 kubelet: W0812 23:54:04.800316 12886 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 12 23:54:04 cu3 kubelet: E0812 23:54:04.800472 12886 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
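Since init was given --pod-network-cidr=10.244.0.0/16 (flannel's default), applying flannel clears these warnings; the manifest URL below is the one current at the time, adjust as needed:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml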
On each worker node, run the join command (again skipping the preflight checks):
kubeadm join --token ad430d.beff5be4b98dceec 192.168.0.148:6443 --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.148:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.148:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.0.148:6443"
[discovery] Successfully established connection with API Server "192.168.0.148:6443"
[bootstrap] Detected server version: v1.7.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
CU2 is the jump host. Copy the kubectl config over (for example as below), and kubectl then works from CU2:
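One way to fetch the config (assumes root SSH from cu2 to the master cu3):
[root@cu2 ~]# mkdir -p ~/.kube
[root@cu2 ~]# scp cu3:/etc/kubernetes/admin.conf ~/.kube/config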
[root@cu2 kube-deploy]# kubectl get nodes
NAME      STATUS     AGE         VERSION
cu2       NotReady   <invalid>   v1.7.2
cu3       Ready      25m         v1.7.2
[root@cu2 kube-deploy]# kubectl proxy
Starting to serve on 127.0.0.1:8001
# -- heapster
[root@cu2 kubeadm]# kubectl apply -f heapster/influxdb/
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
[root@cu2 kubeadm]# kubectl apply -f heapster/rbac/
clusterrolebinding "heapster" created
# -- dashboard
[root@cu2 kubeadm]# kubectl apply -f kubernetes-dashboard.yaml
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@cu2 kubeadm]# kubectl get service --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             10.96.0.1       <none>        443/TCP         18m
kube-system   kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   18m
kube-system   kubernetes-dashboard   10.104.165.81   <none>        80/TCP          5m
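With kubectl proxy still running on cu2, the dashboard of this vintage was reachable through the proxy's /ui redirect (path valid for dashboard 1.6/1.7):
[root@cu2 kubeadm]# curl -sL http://127.0.0.1:8001/ui | head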
Next, export an NFS share on cu3:
[root@cu3 data]# vi /etc/exports
/data/k8s-dta 192.168.0.0/24(rw,sync,no_root_squash,no_all_squash)
Explanation:
/data/k8s-dta – the directory being shared
192.168.0.0/24 – client IP range allowed to access the NFS share
rw – allow read and write access to the share
sync – write changes to the share synchronously
no_root_squash – allow root on a client to act as root (no root squashing)
no_all_squash – preserve client user and group IDs (do not map all users to the anonymous user)
no_subtree_check – if only part of a volume is exported, a routine called subtree checking verifies that each file requested by the client lies in the exported part of the volume; if the entire volume is exported, disabling this check speeds up transfers
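After editing /etc/exports, reload it and verify the export (standard NFS utilities on CentOS 6):
[root@cu3 data]# exportfs -ra            # re-read /etc/exports on a running server
[root@cu3 data]# showmount -e localhost  # confirm /data/k8s-dta is listed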
Setting Up an NFS Server
# In this post 10.8.133.83 will be the IP of our NFS Server.
$ sudo apt update && sudo apt upgrade -y
$ sudo apt-get install nfs-kernel-server nfs-common -y
$ sudo mkdir /vol
$ sudo chown -R nobody:nogroup /vol
# In the exports file, list the clients we would like to allow:
#
# rw: Allows Client R/W Access to the Volume.
# sync: This option forces NFS to write changes to disk before replying. More stable and Consistent. Note, it does reduce the speed of file operations.
# no_subtree_check: This prevents subtree checking, which is a process where the host must check whether the file is actually still available in the exported tree for every request. This can cause many problems when a file is renamed while the client has it opened. In almost all cases, it is better to disable subtree checking.
# In order for the containers to be able to change permissions, you need to set (rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
$ echo '/vol 10.8.133.83(rw,sync,no_subtree_check) 10.8.166.19(rw,sync,no_subtree_check) 10.8.142.195(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
$ sudo systemctl restart nfs-kernel-server
$ sudo systemctl enable nfs-kernel-server
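A client can then mount the export; a minimal sketch (the mount point /mnt/vol is an example):
$ sudo apt-get install nfs-common -y
$ sudo mkdir -p /mnt/vol
$ sudo mount -t nfs 10.8.133.83:/vol /mnt/vol
$ df -h /mnt/vol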
Install FUSE and EncFS:
[root@k8s ~]# yum install fuse
[root@k8s ~]# yum install encfs
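If yum cannot find encfs, it is shipped in EPEL; enable that repo and retry:
[root@k8s ~]# yum install -y epel-release
[root@k8s ~]# yum install encfs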
Mount and create:
[root@k8s shm]# encfs /dev/shm/.test /dev/shm/test
The directory "/dev/shm/.test/" does not exist. Should it be created? (y,n) y
The directory "/dev/shm/test/" does not exist. Should it be created? (y,n) y
Creating new encrypted volume.
Please choose from one of the following options:
enter "x" for expert configuration mode,
enter "p" for pre-configured paranoia mode,
anything else, or an empty line will select standard mode.
?>
Standard configuration selected.
Configuration finished. The filesystem to be created has
the following properties:
Filesystem cipher: "ssl/aes", version 3:0:2
Filename encoding: "nameio/block", version 4:0:2
Key Size: 192 bits
Block Size: 1024 bytes
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File holes passed through to ciphertext.
Now you will need to enter a password for your filesystem.
You will need to remember this password, as there is absolutely
no recovery mechanism. However, the password can be changed
later using encfsctl.
New Encfs Password: 123456
Verify Encfs Password:
[root@k8s shm]# echo $(hostname) > test/hostname.txt
[root@k8s shm]# ll -R -a
.:
total 0
drwxrwxrwt. 4 root root 80 Aug 4 22:04 .
drwxr-xr-x. 20 root root 3260 Aug 4 21:16 ..
drwx------. 2 root root 80 Aug 4 22:06 test
drwx------. 2 root root 80 Aug 4 22:06 .test
./test:
total 4
drwx------. 2 root root 80 Aug 4 22:06 .
drwxrwxrwt. 4 root root 80 Aug 4 22:04 ..
-rw-r--r--. 1 root root 4 Aug 4 22:06 hostname.txt
./.test:
total 8
drwx------. 2 root root 80 Aug 4 22:06 .
drwxrwxrwt. 4 root root 80 Aug 4 22:04 ..
-rw-r--r--. 1 root root 1263 Aug 4 22:04 .encfs6.xml
-rw-r--r--. 1 root root 12 Aug 4 22:06 pAqhW671kQSK4kPLJM-TF6sp
Unmount:
[root@k8s shm]# fusermount -u test
[root@k8s shm]# ll -R -a
.:
total 0
drwxrwxrwt. 4 root root 80 Aug 4 22:04 .
drwxr-xr-x. 20 root root 3260 Aug 4 21:16 ..
drwx------. 2 root root 40 Aug 4 22:04 test
drwx------. 2 root root 80 Aug 4 22:06 .test
./test:
total 0
drwx------. 2 root root 40 Aug 4 22:04 .
drwxrwxrwt. 4 root root 80 Aug 4 22:04 ..
./.test:
total 8
drwx------. 2 root root 80 Aug 4 22:06 .
drwxrwxrwt. 4 root root 80 Aug 4 22:04 ..
-rw-r--r--. 1 root root 1263 Aug 4 22:04 .encfs6.xml
-rw-r--r--. 1 root root 12 Aug 4 22:06 pAqhW671kQSK4kPLJM-TF6sp
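Remounting with the same password brings the plaintext back; a quick check:
[root@k8s shm]# encfs /dev/shm/.test /dev/shm/test   # prompts for the password chosen earlier
[root@k8s shm]# cat test/hostname.txt                # the file written before unmounting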