Winse Blog

Every stop along the way is scenery; all the hustle and bustle heads toward the best; all the daily toil is for tomorrow. What is there to fear?

Installing CentOS 7 from a USB Drive

To install an operating system from a USB drive I had always used unetbootin-windows, but this time it did not work, even after reformatting the drive. A few issues I ran into:

  1. An optical drive is best; only fall back to a USB install when there is no optical drive!
  2. Can the USB drive even be recognized? When installing a system you have to accommodate the machine: if it does not recognize this drive, you can only swap in another one. Older servers have trouble recognizing USB 3.0.
  3. Go into the BIOS and check whether your USB drive appears in the boot options, and move it ahead of the HDD. This check goes together with item 2.
  4. Building the install media: download the Minimal ISO and write it to the USB drive with iso2usb on Windows.
  5. Note 1: if you see a message like ntldr is missing, go rewrite the USB drive!
  6. Note 2: the USB drive must be formatted as FAT32!!

During installation another problem showed up:

  • dracut_initqueue[599]: Warning: Could not boot

The installer could not find the image.

Fix: after waiting a while the installer drops into the Dracut shell. Check which disk devices exist under /dev; the largest/last disk device is usually your USB drive. In my case it was /dev/sdc1.
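
For example, a minimal check inside the dracut emergency shell (device names are illustrative and will differ per machine):

ls /dev/sd*       # list the disk devices; the last/largest one is usually the USB stick
blkid             # if available, confirm by label/filesystem which device is the install media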

Press CTRL+ALT+DELETE to reboot. At the CentOS install boot menu, press TAB on the boot entry and replace the options with the following:

vmlinuz initrd=initrd.img
inst.stage2=hd:/dev/sdc1 quiet

–END

Deploying Redmine and Installing Plugins

Redmine is a project/bug tracking tool similar to JIRA, written in Ruby. Installing it is a bit of a hassle if you are not familiar with the stack, since there is a pile of dependencies. There are two simple, foolproof ways to install it:

  • bitnami-redmine, essentially a one-click install;
  • docker + redmine, where Docker ships all the dependencies and you only need to configure Redmine.

Here I chose to install with docker-compose: sameersbn/redmine:3.4.2

Deployment

Get it running first, then tweak the configuration as needed. If things go wrong, reinstalling is also trivially easy, right?

mkdir -p /srv/docker/redmine/{redmine,postgresql}

wget https://raw.githubusercontent.com/sameersbn/docker-redmine/master/docker-compose.yml
docker-compose up

Once it is up, browse to http://HOSTED_IP:10083 and log in with admin/admin.

  • To start over and reinitialize:
docker-compose rm -f        # or: docker-compose down

rm -rf /srv/docker/redmine/redmine/tmp/*
rm -rf /srv/docker/redmine/postgresql/* 

docker-compose up --build

#docker-compose up -d
#docker-compose start

Themes

For a facelift, download a theme and drop it into /srv/docker/redmine/redmine/themes/. Then restart the container, log back in, and change Administration - Settings - Display - Theme - A1.
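
A rough sketch of that step (the archive name a1.zip is hypothetical):

cd /srv/docker/redmine/redmine/themes/
unzip ~/a1.zip -d .      # unpack the downloaded theme into the mounted themes directory
chown -R es:es a1        # match the ownership used for the mounted data (see the listing below)
# then, from the directory holding docker-compose.yml:
docker-compose restart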

[root@k8s redmine]# ll /srv/docker/redmine/redmine/themes/
total 0
drwxr-xr-x. 6 es es 69 Sep 18 23:38 a1

Plugins

Some plugins are not compatible with 3.4, so watch which versions you pick! The following plugins were installed and used under 3.4:

[root@k8s plugins]# sed -i '/haml/s/^/#/' redhopper/Gemfile    # comment out the haml line in redhopper's Gemfile
[root@k8s plugins]# mv apijs redmine_apijs                     # rename so the plugin directory matches the expected name

[root@k8s redmine]# ll /srv/docker/redmine/redmine/plugins/
total 0
drwxr-xr-x.  8 es es 118 Sep 18 14:05 clipboard_image_paste
drwxr-xr-x. 10 es es 212 Sep 18 19:18 deployer
drwxr-xr-x.  7 es es 160 Sep 18 12:00 issuefy
drwxr-xr-x.  4 es es  60 Sep 18 11:59 line_numbers
drwxr-xr-x.  8 es es 182 Sep 17 18:05 mega_calendar
drwxr-xr-x.  6 es es 158 Sep 18 12:00 open_flash_chart
drwxrwxr-x.  8 es es 225 Sep 18 22:15 redhopper
drwxr-xr-x.  9 es es 156 Sep  6 19:02 redmine_agile
drwxr-xr-x.  7 es es 133 Sep 18 22:00 redmine_apijs
drwxr-xr-x. 10 es es 119 Aug 30 21:46 redmine_checklists
drwxr-xr-x.  9 es es 158 Sep 18 19:19 redmine_ckeditor
drwxr-xr-x.  8 es es 221 Sep 18 12:01 redmine_code_review
drwxr-xr-x.  8 es es 252 Sep 18 12:01 redmine_dashboard
drwxr-xr-x.  3 es es  70 Sep 18 12:00 redmine_embedded_video
drwxr-xr-x.  2 es es  78 Sep 18 12:00 redmine_gist
drwxrwxr-x.  8 es es 129 Aug  5 10:52 redmine_issue_templates
drwxr-xr-x.  8 es es 170 Sep 18 17:46 redmine_lightbox2
drwxr-xr-x.  8 es es 160 Mar  5  2017 redmine_work_time

If you prefer not to restart the container, you can log into it, copy ~/data/plugins over to ~/redmine/plugins, and then run the update commands below:
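
A hedged sketch of that copy step (the container name is a placeholder; check docker ps for the real one), followed by the update commands from the post:

docker ps                                   # find the redmine container's name or id
docker exec -it <redmine-container> bash    # open a shell inside it (placeholder name)
cp -r ~/data/plugins/* ~/redmine/plugins/   # copy the mounted plugins into the app directory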

root@f0481f5f8cda:/home/redmine/redmine# 
bundle install --without development test
bundle exec rake redmine:plugins:migrate RAILS_ENV=production

supervisorctl restart unicorn

Some other plugins

References

–END

Getting Started with Docker Compose

I have been using Docker for a while now. At first I started containers directly with docker run on the command line, and later managed them with k8s, which is quite convenient for a multi-host environment. But if all you run are Docker containers on a single machine, installing a whole k8s stack feels rather awkward.

Docker provides orchestration through Compose: containers (including multi-container setups) are started and managed through a configuration file. It works a bit like a startup script, but it also carries some management of the container lifecycle.

[root@k8s composetest]# docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64
 
[root@k8s composetest]# docker-compose version
docker-compose version 1.16.1, build 6d1ac21
docker-py version: 2.5.1
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t  3 May 2016

The Docker version has to be compatible with the Compose file version you configure: https://github.com/docker/compose/releases . With docker-1.12, the Compose file version cannot be higher than 2.1 (Compose file version 2).

Let's first install and run the hello world example from the official documentation.

Installation:

# download docker-compose in a browser
https://github.com/docker/compose/releases/download/1.16.1/docker-compose-Linux-x86_64

[root@k8s opt]# cd /usr/local/bin/
[root@k8s bin]# rz
rz waiting to receive.
Starting zmodem transfer.  Press Ctrl+C to cancel.
Transferring docker-compose-Linux-x86_64 (1)...
  100%    8648 KB    4324 KB/sec    00:00:02       0 Errors  

[root@k8s bin]# mv docker-compose-Linux-x86_64 docker-compose
[root@k8s bin]# chmod +x docker-compose 
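
Alternatively, if the server has direct internet access, the same release can be fetched with curl:

curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-Linux-x86_64 \
     -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose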

Hello World:

The official example is a page-view counter, implemented as a small Python web app backed by Redis.

[root@k8s composetest]# ll
total 16
-rw-r--r--. 1 root root 303 Sep 17 08:09 app.py
-rw-r--r--. 1 root root 112 Sep 17 08:39 docker-compose.yml
-rw-r--r--. 1 root root 114 Sep 17 08:42 Dockerfile
-rw-r--r--. 1 root root  13 Sep 17 08:09 requirements.txt

[root@k8s composetest]# cat app.py 
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

@app.route('/')
def hello():
  count = redis.incr('hits')
  return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
  app.run(host="0.0.0.0", debug=True)

[root@k8s composetest]# cat requirements.txt 
flask
redis

[root@k8s composetest]# cat Dockerfile 
FROM python:3.4-alpine

ADD . /code
WORKDIR /code

RUN pip install -r requirements.txt

CMD ["python", "app.py"]

[root@k8s composetest]# cat docker-compose.yml 
version: '2.1'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

The images it depends on can be downloaded ahead of time, and this can be done without changing the Docker configuration; see docker-download-mirror.sh.
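
For instance, pulling the two images referenced by the Dockerfile and docker-compose.yml above:

docker pull python:3.4-alpine    # base image of the web service (FROM in the Dockerfile)
docker pull redis:alpine         # image used by the redis service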

With the configuration in place, run:

[root@k8s composetest]# docker-compose up --build
Building web
Step 1 : FROM python:3.4-alpine
 ---> 27a0e572c13a
Step 2 : ADD . /code
 ---> 84082044fb5e
Removing intermediate container 7c4675b618da
Step 3 : WORKDIR /code
 ---> Running in a014af85b748
 ---> 2ada42bd756c
Removing intermediate container a014af85b748
Step 4 : RUN pip install -r requirements.txt
 ---> Running in 4be6f8f5c8b8
Collecting flask (from -r requirements.txt (line 1))
  Downloading Flask-0.12.2-py2.py3-none-any.whl (83kB)
Collecting redis (from -r requirements.txt (line 2))
  Downloading redis-2.10.6-py2.py3-none-any.whl (64kB)
Collecting Jinja2>=2.4 (from flask->-r requirements.txt (line 1))
  Downloading Jinja2-2.9.6-py2.py3-none-any.whl (340kB)
Collecting click>=2.0 (from flask->-r requirements.txt (line 1))
  Downloading click-6.7-py2.py3-none-any.whl (71kB)
Collecting itsdangerous>=0.21 (from flask->-r requirements.txt (line 1))
  Downloading itsdangerous-0.24.tar.gz (46kB)
Collecting Werkzeug>=0.7 (from flask->-r requirements.txt (line 1))
  Downloading Werkzeug-0.12.2-py2.py3-none-any.whl (312kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.4->flask->-r requirements.txt (line 1))
  Downloading MarkupSafe-1.0.tar.gz
Building wheels for collected packages: itsdangerous, MarkupSafe
  Running setup.py bdist_wheel for itsdangerous: started
  Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/fc/a8/66/24d655233c757e178d45dea2de22a04c6d92766abfb741129a
  Running setup.py bdist_wheel for MarkupSafe: started
  Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/88/a7/30/e39a54a87bcbe25308fa3ca64e8ddc75d9b3e5afa21ee32d57
Successfully built itsdangerous MarkupSafe
Installing collected packages: MarkupSafe, Jinja2, click, itsdangerous, Werkzeug, flask, redis
Successfully installed Jinja2-2.9.6 MarkupSafe-1.0 Werkzeug-0.12.2 click-6.7 flask-0.12.2 itsdangerous-0.24 redis-2.10.6
 ---> ee3e476d4fad
Removing intermediate container 4be6f8f5c8b8
Step 5 : CMD python app.py
 ---> Running in f2f9eefe782e
 ---> 08e3065107b2
Removing intermediate container f2f9eefe782e
Successfully built 08e3065107b2
Recreating composetest_web_1 ... 
Recreating composetest_web_1
Starting composetest_redis_1 ... 
Recreating composetest_web_1 ... done
Attaching to composetest_redis_1, composetest_web_1
redis_1  | 1:C 17 Sep 00:43:45.012 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1  | 1:C 17 Sep 00:43:45.013 # Redis version=4.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1  | 1:C 17 Sep 00:43:45.013 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1  | 1:M 17 Sep 00:43:45.020 * Running mode=standalone, port=6379.
redis_1  | 1:M 17 Sep 00:43:45.020 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1  | 1:M 17 Sep 00:43:45.020 # Server initialized
redis_1  | 1:M 17 Sep 00:43:45.020 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1  | 1:M 17 Sep 00:43:45.020 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1  | 1:M 17 Sep 00:43:45.020 * DB loaded from disk: 0.000 seconds
redis_1  | 1:M 17 Sep 00:43:45.020 * Ready to accept connections
web_1    |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1    |  * Restarting with stat
web_1    |  * Debugger is active!
web_1    |  * Debugger PIN: 175-303-648

Check the container status:

[root@k8s opt]# curl http://0.0.0.0:5000/
Hello World! I have been seen 1 times.
[root@k8s opt]# curl http://0.0.0.0:5000/
Hello World! I have been seen 2 times.

[root@k8s composetest]# docker-compose ps 
       Name                      Command               State           Ports         
-------------------------------------------------------------------------------------
composetest_redis_1   docker-entrypoint.sh redis ...   Up      6379/tcp              
composetest_web_1     python app.py                    Up      0.0.0.0:5000->5000/tcp

##
docker-compose rm -f # Remove stopped containers
docker-compose down  # Stop and remove containers, networks, images, and volumes

Miscellaneous

Run in the background:

$ docker-compose up -d
$ docker-compose ps

Run a command inside a given service's container, somewhat like docker exec / kubectl exec:

$ docker-compose run web env

Rebuild and restart only the container whose content has changed:

$ docker-compose build web
$ docker-compose up --no-deps -d web

Configuration reuse / overrides

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# A: the webapp service, as defined in common-services.yml
webapp:
  build: .
  ports:
    - "8000:8000"
  volumes:
    - "/data"
   
# EA: a service that extends webapp from common-services.yml
web:
  extends:
    file: common-services.yml
    service: webapp
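
As a sketch, production-only settings can live in a hypothetical docker-compose.prod.yml that is layered on top of the base file when both are passed with -f (single-value keys are replaced, list-style keys are merged); the service format must match the base file:

cat > docker-compose.prod.yml <<'EOF'
# hypothetical overrides for the webapp service defined in the base file
webapp:
  restart: always
  environment:
    - DEBUG=0
EOF

docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d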
    

Further reading

–END

Zookeeper ACL

The cluster went through another security inspection. SSH inevitably needed upgrading, and this time hadoop security and a zookeeper ACL issue were flagged as well. I had never paid much attention to these, but since the security scan reported them they have to be dealt with.

ZooKeeper unauthorized access [principle-based scan]
Description: ZooKeeper is a distributed, open-source coordination service for distributed applications, an open-source implementation of Google's Chubby and an important component of Hadoop and HBase. It provides consistency services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.
ZooKeeper's goal is to encapsulate complex, error-prone key services and hand users a simple, easy-to-use interface on top of an efficient and stable system.
By default, ZooKeeper allows unauthenticated access.
Remediation: configure appropriate access permissions for ZooKeeper.

Option 1:
1) Add an authenticated user
addauth digest <username>:<plaintext password>
eg. addauth digest user1:password1
2) Set permissions
setAcl /path auth:<username>:<plaintext password>:<permissions>
eg. setAcl /test auth:user1:password1:cdrwa
3) View the ACL
getAcl /path

Option 2:
setAcl /path digest:<username>:<password digest>:<permissions>

Threat score: 5.0
Dangerous plugin: no
Published: 2015-02-10

Basic ZooKeeper ACL concepts and operations

Note also that an ACL pertains only to a specific znode. In particular it does not apply to children. In other words, ACLs are not inherited: a child znode does not inherit its parent's ACL.

  • world has a single id, anyone, that represents anyone.
  • auth doesn’t use any id, represents any authenticated user.
  • digest uses a username:password string to generate MD5 hash which is then used as an ACL ID identity. Authentication is done by sending the username:password in clear text. When used in the ACL the expression will be the username:base64 encoded SHA1 password digest.
  • ip uses the client host IP as an ACL ID identity. The ACL expression is of the form addr/bits(3.5+) where the most significant bits of addr are matched against the most significant bits of the client host IP.

ZooKeeper's ACL format is scheme:id:permissions. The schemes are the ones listed above, plus super. A newly created node defaults to world:anyone:cdrwa, meaning everyone has all permissions on it (a quick check follows the list below).

  • Create: allows Create on child nodes
  • Read: allows GetChildren and GetData on this node
  • Write: allows SetData on this node
  • Delete: allows Delete on child nodes
  • Admin: allows setAcl on this node
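
A quick zkCli check of the default ACL on a freshly created node (the node name /demo is just an illustration):

[zk: k8s(CONNECTED) 0] create /demo ''
Created /demo
[zk: k8s(CONNECTED) 1] getAcl /demo
'world,'anyone
: cdrwa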

The auth scheme

No id is needed: every user the current session has "logged in" as gets the permission (I am not familiar with the sasl and kerberos schemes). Even though no id is required, the expression still has to follow the scheme:id:perm form.

[zk: localhost:2181(CONNECTED) 15] setAcl /c auth:rwadc  
auth:rwadc does not have the form scheme:id:perm
Acl is not valid : /c

[zk: k8s(CONNECTED) 13] addauth digest a:a
[zk: k8s(CONNECTED) 14] addauth digest b:b
[zk: k8s(CONNECTED) 15] addauth digest c:c
[zk: k8s(CONNECTED) 16] create /e e
Created /e
[zk: k8s(CONNECTED) 17] setAcl /e auth::cdrwa
... node output omitted

[zk: k8s(CONNECTED) 18] getAcl /e
'digest,'a:mDmPUap4qvYwm+PZOtJ/scGyHLY=
: cdrwa
'digest,'b:+F8zPn3x1CLx3qpYHEaRwIheWcc=
: cdrwa
'digest,'c:K7CO7OxIfBOQxczG+7FI9BdZ6/s=
: cdrwa

The id part can be anything; ZooKeeper does not record it anyway.

[zk: localhost:2181(CONNECTED) 9] addauth digest hdfs:hdfs    
[zk: localhost:2181(CONNECTED) 10] setAcl /c auth:x:x:rwadc
...
[zk: localhost:2181(CONNECTED) 11] getAcl /c               
'digest,'user:tpUq/4Pn5A64fVZyQ0gOJ8ZWqkY=
: cdrwa
'digest,'hdfs:0wpra2yK6RCUB9sbo0BkElpzcl8=
: cdrwa

You can also set an ACL on the root /, so that clients can no longer create nodes under the root at will.

[zk: localhost:2181(CONNECTED) 9] addauth digest user:password    
[zk: localhost:2181(CONNECTED) 21] setAcl / auth::rawdc

log in again
[zk: localhost:2181(CONNECTED) 0] ls /
Authentication is not valid : /
[zk: localhost:2181(CONNECTED) 1] getAcl /
'digest,'user:tpUq/4Pn5A64fVZyQ0gOJ8ZWqkY=
: cdrwa

Restoring access

Use a user/session that still has permission. If everything has been forgotten, the last resort is to log in as the super administrator and reset the ACL back to world.

[zk: localhost:2181(CONNECTED) 26] setAcl / world:anyone:cdrwa

Digest

Digest is more straightforward to use than auth: you hand the digest (ciphertext) directly to ZooKeeper. First, generate the digest of the user's password.

[root@k8s zookeeper-3.4.10]# java -cp zookeeper-3.4.10.jar:lib/* org.apache.zookeeper.server.auth.DigestAuthenticationProvider user:password
user:password->user:tpUq/4Pn5A64fVZyQ0gOJ8ZWqkY=

[root@k8s zookeeper-3.4.10]# java -cp zookeeper-3.4.10.jar:lib/* org.apache.zookeeper.server.auth.DigestAuthenticationProvider es:es
es:es->es:KiHfMOSWCTgPKpz78IL/6qO8AEE=

When the scheme is digest, the id must be the digest. When adding authentication ("logging in") through the ZooKeeper client, the auth data for digest is the plaintext.

Granting the ACL uses setAcl as before:

$$ instance A
[zk: localhost:2181(CONNECTED) 17] setAcl /b digest:user:tpUq/4Pn5A64fVZyQ0gOJ8ZWqkY=:cdrwa
Much like an MD5-hashed password: if the database is stolen and the password is a common one, it can be guessed.
[zk: localhost:2181(CONNECTED) 18] getAcl /b
'digest,'user:tpUq/4Pn5A64fVZyQ0gOJ8ZWqkY=
: cdrwa

$$ instance B
log in again:
[zk: k8s:2181(CONNECTED) 2] ls /b
Authentication is not valid : /b

$$ instance A
[zk: localhost:2181(CONNECTED) 20] create /b/bb ''
Authentication is not valid : /b/bb
[zk: localhost:2181(CONNECTED) 21] addauth digest user:tpUq/4Pn5A64fVZyQ0gOJ8ZWqkY=
[zk: localhost:2181(CONNECTED) 22] create /b/bb ''                                 
Authentication is not valid : /b/bb

# must authenticate with the plaintext password
[zk: localhost:2181(CONNECTED) 23] addauth digest user:password
[zk: localhost:2181(CONNECTED) 24] create /b/bb '' 
Created /b/bb 

# permissions are not inherited
[zk: localhost:2181(CONNECTED) 25] getAcl /b/bb
'world,'anyone
: cdrwa

IP

The ip scheme is even simpler to configure. The logic is to match the client's IP address: only clients whose address falls within the permitted range can access the node.

$$ instance A
[zk: localhost:2181(CONNECTED) 18] setAcl /i ip:127.0.0.1:cdrwa
...
[zk: localhost:2181(CONNECTED) 19] getAcl /i
'ip,'127.0.0.1
: cdrwa
[zk: localhost:2181(CONNECTED) 24] get /i
Authentication is not valid : /i

What is going on, does the local client still have no permission? Sometimes localhost does not necessarily map to 127.0.0.1...

$$ instance B
[root@k8s zookeeper-3.4.10]# bin/zkCli.sh -server 127.0.0.1
[zk: 127.0.0.1(CONNECTED) 0] get /i
i
...
change it to the IP address of another network interface
[zk: 127.0.0.1(CONNECTED) 1] setAcl /i ip:192.168.191.138:cdrwa
...
[zk: 127.0.0.1(CONNECTED) 2] getAcl /i
'ip,'192.168.191.138
: cdrwa
[zk: 127.0.0.1(CONNECTED) 3] get /i
Authentication is not valid : /i

$$ instance C
a session connected via the hostname (which resolves to 191.138)
[zk: k8s(CONNECTED) 19] get /i
i

Super administrator

What if you set the ACL wrong?

[zk: k8s(CONNECTED) 21] setAcl /i ip:192.168.191.0/24:cdrwa                   
Acl is not valid : /i

[zk: k8s(CONNECTED) 25] setAcl /i ip:192.168.191.0:cdrwa

[zk: k8s(CONNECTED) 26] getAcl /i
'ip,'192.168.191.0
: cdrwa
[zk: k8s(CONNECTED) 27] get /i
Authentication is not valid : /i

Unless you change the client's IP address to 192.168.191.0, the node can no longer be accessed.

At this point only the super administrator can help, otherwise there is really no way out. (Not sure why, but) the node can still be deleted (at least in my environment), but then the data is gone!!

[zk: localhost:2181(CONNECTED) 26] getAcl /i
'ip,'192.168.191.0
: cdrwa
[zk: localhost:2181(CONNECTED) 27] delete /i
[zk: localhost:2181(CONNECTED) 28] ls /
[a, b, c, zookeeper, d, e]
[zk: localhost:2181(CONNECTED) 29] ls /i
Node does not exist: /i

If the data is important, restarting with the super administrator enabled in order to recover access is well worth it.

I will not run DigestAuthenticationProvider again here; I simply use es:es, whose digest es:es->es:KiHfMOSWCTgPKpz78IL/6qO8AEE= serves as the administrator account and password.

export SERVER_JVMFLAGS=-Dzookeeper.DigestAuthenticationProvider.superDigest=es:KiHfMOSWCTgPKpz78IL/6qO8AEE=

[root@k8s zookeeper-3.4.10]# bin/zkServer.sh stop
[root@k8s zookeeper-3.4.10]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

$$ instance A
[root@k8s zookeeper-3.4.10]# bin/zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] get /i
Authentication is not valid : /i
[zk: localhost:2181(CONNECTED) 1] getAcl /i
'ip,'192.168.191.0
: cdrwa
[zk: localhost:2181(CONNECTED) 2] addauth digest es:es
[zk: localhost:2181(CONNECTED) 3] get /i
i
...
[zk: localhost:2181(CONNECTED) 4] setAcl /i world:anyone:cdrwa
...

$$ instance B
[zk: localhost:2181(CONNECTED) 0] get /i
i
[zk: localhost:2181(CONNECTED) 1] getAcl /i
'world,'anyone
: cdrwa

Hands-on experiments

The ACL can be specified directly at creation time:

create /mynode content digest:user:tpUq/4Pn5A64fVZyQ0gOJ8ZWqkY=:cdrwa

Several ACL entries can also be set at once:

Note: all of the operations below are done in a session logged in as the super administrator, so there are no permission problems; change whatever you like.

setAcl /i ip:192.168.191.0:cdrwa,ip:127.0.0.1:cdrwa,ip:192.168.191.138:cdrwa

getAcl /i
'ip,'192.168.191.0
: cdrwa
'ip,'127.0.0.1
: cdrwa
'ip,'192.168.191.138
: cdrwa

However, resetting the ACL with ip, digest, or world overwrites the old entries:

[zk: localhost:2181(CONNECTED) 7] setAcl /i ip:0.0.0.0:cdrwa
[zk: localhost:2181(CONNECTED) 8] getAcl /i
'ip,'0.0.0.0
: cdrwa

[zk: localhost:2181(CONNECTED) 15] setAcl /i world:anyone:cdraw
[zk: localhost:2181(CONNECTED) 16] getAcl /i
'world,'anyone
: cdrwa

Version 3.4 does not support IP ranges (3.5 should be fine): IPAuthenticationProvider

public boolean isValid(String id) {
    return addr2Bytes(id) != null;
}

You can grab the source of the matching version and debug it (remotely):

[root@k8s zookeeper-3.4.10]# export SERVER_JVMFLAGS="-Dzookeeper.DigestAuthenticationProvider.superDigest=es:KiHfMOSWCTgPKpz78IL/6qO8AEE= -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
[root@k8s zookeeper-3.4.10]# bin/zkServer.sh start

The auth scheme behaves in an interesting way: it adds all of the current session's own identities and drops everyone else, and the permission bits always follow the latest setAcl.

[zk: localhost:2181(CONNECTED) 21] setAcl /i auth::cdrwa,ip:0.0.0.0:cd
...
[zk: localhost:2181(CONNECTED) 22] getAcl /i
'ip,'0.0.0.0
: cd
'digest,'es:KiHfMOSWCTgPKpz78IL/6qO8AEE=
: cdrwa

# auth add
[zk: localhost:2181(CONNECTED) 27] addauth digest m:m
[zk: localhost:2181(CONNECTED) 28] addauth digest n:n
[zk: localhost:2181(CONNECTED) 29] setAcl /i auth::cdrwa
...
[zk: localhost:2181(CONNECTED) 30] getAcl /i
'digest,'es:KiHfMOSWCTgPKpz78IL/6qO8AEE=
: cdrwa
'digest,'m:WZiIgWqJgd8EQVBh55Bslf/7JRc=
: cdrwa
'digest,'n:TZ3f1UF7B75EF5g6qWR0VmEvb/s=
: cdrwa

# perm
[zk: localhost:2181(CONNECTED) 31] addauth digest z:z
[zk: localhost:2181(CONNECTED) 32] addauth digest l:l
[zk: localhost:2181(CONNECTED) 33] setAcl /i auth:z:z:cd
...
[zk: localhost:2181(CONNECTED) 34] getAcl /i
'digest,'es:KiHfMOSWCTgPKpz78IL/6qO8AEE=
: cd
'digest,'m:WZiIgWqJgd8EQVBh55Bslf/7JRc=
: cd
'digest,'n:TZ3f1UF7B75EF5g6qWR0VmEvb/s=
: cd
'digest,'z:cOgtYxFOAwKiTCMigcN2j2fFI3c=
: cd
'digest,'l:gdlgatwJdq7uG8kFfIjcIZj0tnQ=
: cd

You can see that all entries have become cd

[zk: localhost:2181(CONNECTED) 35] setAcl /i auth:z:z:cdraw
...
[zk: localhost:2181(CONNECTED) 36] getAcl /i               
'digest,'es:KiHfMOSWCTgPKpz78IL/6qO8AEE=
: cdrwa
'digest,'m:WZiIgWqJgd8EQVBh55Bslf/7JRc=
: cdrwa
'digest,'n:TZ3f1UF7B75EF5g6qWR0VmEvb/s=
: cdrwa
'digest,'z:cOgtYxFOAwKiTCMigcN2j2fFI3c=
: cdrwa
'digest,'l:gdlgatwJdq7uG8kFfIjcIZj0tnQ=
: cdrwa

All entries become cdrwa

In my view, granting permissions with auth is the safest approach; you will not slip up and end up unable to access the node yourself.

Afterword

OK, that roughly covers the basics. There is also the option of implementing a custom authorization provider, which is more advanced; read the official documentation if you are interested.

But since ACLs are not inherited, how should an open-source project that uses ZooKeeper secure its data? Add an ACL to every child path one by one? Or use a custom root path (chroot) that outsiders cannot guess? (see the connection-string sketch below)
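
For the chroot idea, the ZooKeeper connection string itself can carry a root path, so a client only ever sees that subtree (the path /myapp is illustrative):

bin/zkCli.sh -server k8s:2181/myapp     # every path this client uses is resolved under /myapp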

And what about ZooKeeper's own /zookeeper path, how should its permissions be managed?

–END

Triggering Jenkins 2.63 Builds from the Command Line

Jenkins brings a lot of convenience to integration builds and lets colleagues who do not develop build packages easily. But developers and ops may still need to do extra work around the build, or process N builds in batch.

For developers, repetition is the hardest thing to put up with. Jenkins can be driven directly through its API to query and handle all kinds of requests.

There is actually plenty of material online, including scripts that handle the whole deployment in one go. But knowing the how as well as the why means taking the time to understand those scripts so you can use them well (never mind writing your own). There are three main steps:

  1. How to log in: JenkinsScriptConsole-Remoteaccess .|. RemoteaccessAPI-CSRFProtection
  2. Run a build: Running jenkins jobs via command line .|. Triggering Jenkins builds by URL
  3. Check the result: checkJenkins.sh

crumb

First, let's see what a crumb is:

[root@iZ9416vn227Z opt]# curl -X POST $JENKINS_PROJ_AUTH_URL/build
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
<title>Error 403 No valid crumb was included in the request</title>
</head>
<body><h2>HTTP ERROR 403</h2>
<p>Problem accessing /job/helloworld/build. Reason:
<pre>    No valid crumb was included in the request</pre></p><hr><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.4.z-SNAPSHOT</a><hr/>

</body>
</html>

Here CSRF protection acts as a kind of access control in Jenkins. There are two ways to deal with it:

Option 1: disable the protection

In the menu Manage Jenkins -> Configure Global Security, untick Prevent Cross Site Request Forgery exploits. If you insist on keeping CSRF protection enabled, you need to fetch the crumb dynamically first.

Option 2: fetch the token

Fetch the token's key and value from the crumbIssuer/api/json URL and attach it to the build request as a header.

Triggering a Jenkins build from the command line via URL


JENKINS_ID="admin:PASSWORD"
JENKINS_PROJ_AUTH_URL=http://$JENKINS_ID@localhost:18080/job/helloworld
JENKINS_PROJ_URL=http://localhost:18080/job/helloworld

curl $JENKINS_PROJ_AUTH_URL/lastBuild/api/json

#Get the current configuration and save it locally
curl -X GET $JENKINS_PROJ_URL/config.xml

curl 'http://'$JENKINS_ID'@localhost:18080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)'
Jenkins-Crumb:a4296173a91d900c11af07d932559fcd

curl -X POST -H "Jenkins-Crumb:a4296173a91d900c11af07d932559fcd"  $JENKINS_PROJ_AUTH_URL/build

curl -s $JENKINS_PROJ_AUTH_URL/lastBuild/api/json | jq .

# --- TODO ---

# "In progress" (queued) | "pending" (building): re-fetch the result every three seconds and check
while grep -qE "In progress|pending" build.tmp2;  

if grep -qE "Success" build.tmp2 ;then  
elif grep -qE "Unstable" build.tmp2 ;then  
elif grep -qE "Failed|Aborted" build.tmp2 ;then  
echo "#Open Link: ${jobPage}${newbuild}/console see details"  

BuildName

Jenkins use cases

References

API usage

Login / permission issues

–END