[ec2-user@k8s nfs]$ kubectl describe pod nfs-web-7bc965b94f-k6mrj -n nfs
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43s default-scheduler Successfully assigned nfs/nfs-web-7bc965b94f-k6mrj to worker1
Warning FailedMount 1s (x7 over 35s) kubelet MountVolume.SetUp failed for volume "nfs" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs -o nfsvers=4.2 nfs-server.nfs.svc.cluster.local:/ /var/lib/kubelet/pods/03049217-ded3-4d31-86e8-6a13c0f5b12f/volumes/kubernetes.io~nfs/nfs
Output: mount.nfs: Failed to resolve server nfs-server.nfs.svc.cluster.local: Name or service not known
mount.nfs: Operation already in progress
The Service name does resolve inside containers; but the mount command is run on the node itself (by kubelet), and the node does not use the cluster DNS, so it cannot resolve the cluster-internal name.
[ec2-user@k8s ~]$ kubectl run -ti test --rm --image busybox -n nfs -- sh
/ # ping nfs-server.nfs.svc.cluster.local
[ec2-user@k8s nfs]$ kubectl run -ti test --rm --image busybox -- sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local localdomain
options ndots:5
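Because the node has no route to the cluster DNS (10.96.0.10 above is the in-cluster resolver), a common workaround is to put the NFS Service's ClusterIP in the volume spec instead of the DNS name. A hedged sketch, where the IP is a placeholder you would look up with `kubectl get svc nfs-server -n nfs`:

```yaml
volumes:
  - name: nfs
    nfs:
      # ClusterIP of the nfs-server Service (hypothetical value here);
      # it is stable for the Service's lifetime and needs no DNS lookup
      # on the node.
      server: 10.105.129.37
      path: /
```

Alternatively, the node's resolver can be pointed at the cluster DNS, but the ClusterIP approach avoids touching node configuration.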
# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml
[ec2-user@k8s ~]$ git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/
[ec2-user@k8s ~]$ cd nfs-subdir-external-provisioner/
[ec2-user@k8s nfs-subdir-external-provisioner]$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
[ec2-user@k8s nfs-subdir-external-provisioner]$ NAMESPACE=${NS:-default}
[ec2-user@k8s nfs-subdir-external-provisioner]$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl create -f deploy/rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
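The `sed` one-liner above rewrites every `namespace:` line in the manifests to the target namespace, and `${NS:-default}` falls back to `default` when the current context has no namespace column. A minimal sketch of the rewrite on an illustrative fragment (the file path and contents are made up for the demo):

```shell
# Create a tiny manifest fragment to demonstrate the rewrite.
cat > /tmp/rbac-demo.yaml <<'EOF'
metadata:
  name: nfs-client-provisioner
  namespace: default
EOF

# Force a target namespace to show the substitution in isolation.
NAMESPACE=nfs

# Same command as above: replace everything after "namespace:" on each line.
sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" /tmp/rbac-demo.yaml

grep 'namespace:' /tmp/rbac-demo.yaml   # now points at "nfs"
```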
$ vi deploy/deployment.yaml
- name: NFS_SERVER
value: 192.168.191.131
- name: NFS_PATH
value: /backup
volumes:
- name: nfs-client-root
nfs:
server: 192.168.191.131
path: /backup
$ vi deploy/class.yaml
parameters:
archiveOnDelete: "false"
#Specifies a template for creating a directory path via PVC metadata's such as labels, annotations, name or namespace. To specify metadata use ${.PVC.<metadata>}. Example: If folder should be named like <pvc-namespace>-<pvc-name>, use ${.PVC.namespace}-${.PVC.name} as pathPattern.
# pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # waits for nfs.io/storage-path annotation, if not specified will accept as empty string.
# onDelete: delete
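If the commented-out `pathPattern` above were enabled, a PVC would supply the `nfs.io/storage-path` annotation that the pattern reads. A hedged sketch of such a claim (the name, size, and path value are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim                      # hypothetical name
  annotations:
    nfs.io/storage-path: "test-path"    # consumed by ${.PVC.annotations.nfs.io/storage-path}
spec:
  storageClassName: nfs-client          # the class created from deploy/class.yaml
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

With the pattern `${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}`, this claim's directory would land under `<namespace>/test-path` on the export; without the annotation the second segment is an empty string.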
# Pull the image onto the nodes first: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl apply -f deploy/deployment.yaml
deployment.apps/nfs-client-provisioner created
[ec2-user@k8s nfs-subdir-external-provisioner]$ kubectl apply -f deploy/class.yaml
storageclass.storage.k8s.io/nfs-client created
[ec2-user@k8s nfs-subdir-external-provisioner]$
winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ python3 --version
Python 3.8.5
# https://github.com/acecilia/OpenWRTInvasion
winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ ls
Dockerfile readme requirements.txt set_english.py
extras README.md script.sh speedtest_urls_template.xml
firmwares remote_command_execution_vulnerability.py script_tools tcp_file_server.py
winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ sudo apt install python3-pip
winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ pip3 install -r requirements.txt
Requirement already satisfied: requests in /usr/lib/python3/dist-packages (from -r requirements.txt (line 1)) (2.22.0)
# Getting the stok: after logging in to the router's web UI, the stok parameter appears in the browser's address bar.
# See the referenced article for details.
winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ python3 remote_command_execution_vulnerability.py
Router IP address [press enter for using the default 'miwifi.com']:
Enter router admin password: __xxx__
There two options to provide the files needed for invasion:
1. Use a local TCP file server runing on random port to provide files in local directory `script_tools`.
2. Download needed files from remote github repository. (choose this option only if github is accessable inside router device.)
Which option do you prefer? (default: 1)
****************
router_ip_address: miwifi.com
stok: __xxx__
file provider: local file server
****************
start uploading config file...
start exec command...
local file server is runing on 0.0.0.0:1135. root='script_tools'
local file server is getting 'busybox-mipsel' for 192.168.31.1.
local file server is getting 'dropbearStaticMipsel.tar.bz2' for 192.168.31.1.
done! Now you can connect to the router using several options: (user: root, password: root)
* telnet miwifi.com
* ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -c 3des-cbc -o UserKnownHostsFile=/dev/null root@miwifi.com
* ftp: using a program like cyberduck
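Rather than repeating the legacy-algorithm flags on every `ssh` invocation, they can be persisted per host. A hedged `~/.ssh/config` sketch mirroring the flags above (newer OpenSSH clients may additionally need `HostKeyAlgorithms +ssh-rsa` for this old dropbear server):

```
Host miwifi.com
    KexAlgorithms +diffie-hellman-group1-sha1
    Ciphers 3des-cbc
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
```

After that, a plain `ssh root@miwifi.com` carries the same options.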
winse@LAPTOP-I9ECVAQ4:OpenWRTInvasion-master$ ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -c 3des-cbc -o UserKnownHostsFile=/dev/null root@miwifi.com
The authenticity of host 'miwifi.com (192.168.31.1)' can't be established.
RSA key fingerprint is SHA256:AT91yqVuqPnmOO5wmke6V0Hl67GKXdkb48W/FU3WfEM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'miwifi.com,192.168.31.1' (RSA) to the list of known hosts.
root@miwifi.com's password:
BusyBox v1.19.4 (2021-09-30 03:16:53 UTC) built-in shell (ash)
Enter 'help' for a list of built-in commands.
-----------------------------------------------------
Welcome to XiaoQiang!
-----------------------------------------------------
$$$$$$\ $$$$$$$\ $$$$$$$$\ $$\ $$\ $$$$$$\ $$\ $$\
$$ __$$\ $$ __$$\ $$ _____| $$ | $$ | $$ __$$\ $$ | $$ |
$$ / $$ |$$ | $$ |$$ | $$ | $$ | $$ / $$ |$$ |$$ /
$$$$$$$$ |$$$$$$$ |$$$$$\ $$ | $$ | $$ | $$ |$$$$$ /
$$ __$$ |$$ __$$< $$ __| $$ | $$ | $$ | $$ |$$ $$<
$$ | $$ |$$ | $$ |$$ | $$ | $$ | $$ | $$ |$$ |\$$\
$$ | $$ |$$ | $$ |$$$$$$$$\ $$$$$$$$$ | $$$$$$ |$$ | \$$\
\__| \__|\__| \__|\________| \_________/ \______/ \__| \__|
root@XiaoQiang:~#
[ec2-user@amazonlinux ~]$ minikube start
* minikube v1.25.2 on Amazon 2
* Automatically selected the docker driver. Other choices: none, ssh
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Downloading Kubernetes v1.23.3 preload ...
> preloaded-images-k8s-v17-v1...: 505.68 MiB / 505.68 MiB 100.00% 14.20 Mi
> index.docker.io/kicbase/sta...: 379.06 MiB / 379.06 MiB 100.00% 2.11 MiB
! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.30, but successfully downloaded docker.io/kicbase/stable:v0.0.30 as a fallback image
* Creating docker container (CPUs=2, Memory=2200MB) ...
! This container is having trouble accessing https://k8s.gcr.io
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
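The warning above means the kicbase container cannot reach k8s.gcr.io directly. A hedged sketch of the proxy workaround minikube suggests, where `proxy.example.com:3128` is a placeholder for a real proxy and the CIDRs are the usual minikube defaults:

```shell
# Export the proxy before starting minikube so it is passed into the
# container runtime. The proxy host/port here are placeholders.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=$HTTP_PROXY
# NO_PROXY must exclude in-cluster ranges, or pod/service traffic is
# sent to the proxy and breaks.
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.49.0/24

# minikube start   # re-run with the proxy environment in place
```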
* Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
- kubelet.housekeeping-interval=5m
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
* kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
[ec2-user@k8s ~]$ kubectl apply -f ingress-nginx-controller-v1.1.2.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
[ec2-user@k8s ~]$ kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
pod/ingress-nginx-controller-755447bb4d-rnxvl condition met
# Status reference:
# https://kubernetes.io/zh/docs/tasks/access-application-cluster/ingress-minikube/
[ec2-user@k8s ~]$ kubectl get pods --namespace=ingress-nginx
[ec2-user@k8s ~]$ kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-hbt9d 0/1 Completed 0 2m51s
pod/ingress-nginx-admission-patch-j8qfh 0/1 Completed 1 2m51s
pod/ingress-nginx-controller-755447bb4d-rnxvl 1/1 Running 0 2m51s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.104.8.155 <pending> 80:31031/TCP,443:31845/TCP 2m51s
service/ingress-nginx-controller-admission ClusterIP 10.108.67.255 <none> 443/TCP 2m51s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 2m51s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-755447bb4d 1 1 1 2m51s
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 3s 2m51s
job.batch/ingress-nginx-admission-patch 1/1 3s 2m51s
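With the controller Running and the `nginx` IngressClass created, a minimal Ingress can exercise it. A hedged sketch; the backend Service `web` and the host `example.local` are assumptions, not resources created above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx        # matches the IngressClass created above
  rules:
    - host: example.local        # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # hypothetical backend Service
                port:
                  number: 80
```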
MetalLB has two features that work together to provide this service: address allocation and external announcement.
After MetalLB has assigned an external IP address to a service, it needs to make the network beyond the cluster aware that the IP “lives” in the cluster. MetalLB uses standard routing protocols to achieve this: ARP, NDP, or BGP.
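For the address-allocation half, MetalLB needs a pool to draw external IPs from. A hedged sketch in the legacy ConfigMap format (used by MetalLB v0.12 and earlier; newer releases use `IPAddressPool`/`L2Advertisement` CRDs instead), with an address range assumed from the `192.168.191.200` assignment seen below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2            # announce via ARP/NDP from one node
        addresses:
          - 192.168.191.200-192.168.191.210
```

In layer2 mode one node (here `worker1`, per the `nodeAssigned` event further down) answers ARP for the assigned IP and forwards the traffic into the cluster.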
[ec2-user@k8s ~]$ kubectl get all -n metallb-system
NAME READY STATUS RESTARTS AGE
pod/controller-57fd9c5bb-kc5zt 1/1 Running 0 5m55s
pod/speaker-8pg4v 1/1 Running 0 5m55s
pod/speaker-95bs8 1/1 Running 0 5m55s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/speaker 2 2 2 2 2 kubernetes.io/os=linux 5m55s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/controller 1/1 1 1 5m55s
NAME DESIRED CURRENT READY AGE
replicaset.apps/controller-57fd9c5bb 1 1 1 5m55s
[ec2-user@k8s ~]$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.107.221.243 192.168.191.200 80:31443/TCP,443:30099/TCP 34m
[ec2-user@k8s ~]$ kubectl describe service ingress-nginx-controller --namespace=ingress-nginx
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.1.2
helm.sh/chart=ingress-nginx-4.0.18
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.107.221.243
IPs: 10.107.221.243
LoadBalancer Ingress: 192.168.191.200
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31443/TCP
Endpoints: 10.244.2.79:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30099/TCP
Endpoints: 10.244.2.79:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 31942
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 38m metallb-controller Assigned IP ["192.168.191.200"]
Normal nodeAssigned 38m metallb-speaker announcing from node "worker1"
[ec2-user@k8s ~]$ netstat -anp | grep 31443
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp 0 0 0.0.0.0:31443 0.0.0.0:* LISTEN -
[ec2-user@worker1 ~]$ netstat -anp | grep 31443
(No info could be read for "-p": geteuid()=1002 but you should be root.)
tcp 0 0 0.0.0.0:31443 0.0.0.0:* LISTEN -