Winse Blog

Wherever we wander there is scenery; amid the bustle we aim for the best; all the busyness is for tomorrow — what is there to fear?

Puppetboard Install

For a Python novice like me, installing puppetboard with network access is fairly easy (resolving the dependencies for an offline install would likely be more troublesome).

# https://fedoraproject.org/wiki/EPEL/zh-cn
[root@cu2 ~]# yum search epel
[root@cu2 ~]# yum install epel-release


[root@cu2 ~]# yum repolist
Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
 * base: mirrors.skyshe.cn
 * centosplus: mirrors.pubyun.com
 * epel: mirror01.idc.hinet.net
 * extras: mirrors.skyshe.cn
 * updates: mirrors.skyshe.cn
193 packages excluded due to repository priority protections
repo id                                   repo name                                                                   status
base                                      CentOS-6 - Base                                                                  6,575
centosplus                                CentOS-6 - Centosplus                                                             0+76
epel                                      Extra Packages for Enterprise Linux 6 - x86_64                              12,127+117
extras                                    CentOS-6 - Extras                                                                   62
puppet-local                              Puppet Local                                                                         5
updates                                   CentOS-6 - Updates                                                               1,607
repolist: 20,376


[root@cu2 ~]# yum install python-pip -y


[root@cu2 ~]# pip install puppetboard
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
You are using pip version 7.1.0, however version 8.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting puppetboard
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
  Downloading puppetboard-0.1.3.tar.gz (598kB)
    100% |████████████████████████████████| 602kB 726kB/s 
Collecting Flask>=0.10.1 (from puppetboard)
  Downloading Flask-0.10.1.tar.gz (544kB)
    100% |████████████████████████████████| 544kB 734kB/s 
Collecting Flask-WTF<=0.9.5,>=0.9.4 (from puppetboard)
  Downloading Flask-WTF-0.9.5.tar.gz (245kB)
    100% |████████████████████████████████| 249kB 320kB/s 
Collecting WTForms<2.0 (from puppetboard)
  Downloading WTForms-1.0.5.zip (355kB)
    100% |████████████████████████████████| 356kB 1.3MB/s 
Collecting pypuppetdb<0.3.0,>=0.2.1 (from puppetboard)
  Downloading pypuppetdb-0.2.1.tar.gz
Collecting Werkzeug>=0.7 (from Flask>=0.10.1->puppetboard)
  Downloading Werkzeug-0.11.9-py2.py3-none-any.whl (306kB)
    100% |████████████████████████████████| 307kB 1.5MB/s 
Collecting Jinja2>=2.4 (from Flask>=0.10.1->puppetboard)
  Downloading Jinja2-2.8-py2.py3-none-any.whl (263kB)
    100% |████████████████████████████████| 266kB 2.3MB/s 
Collecting itsdangerous>=0.21 (from Flask>=0.10.1->puppetboard)
  Downloading itsdangerous-0.24.tar.gz (46kB)
    100% |████████████████████████████████| 49kB 7.2MB/s 
Collecting requests>=1.2.3 (from pypuppetdb<0.3.0,>=0.2.1->puppetboard)
  Downloading requests-2.10.0-py2.py3-none-any.whl (506kB)
    100% |████████████████████████████████| 507kB 920kB/s 
Collecting MarkupSafe (from Jinja2>=2.4->Flask>=0.10.1->puppetboard)
  Downloading MarkupSafe-0.23.tar.gz
Installing collected packages: Werkzeug, MarkupSafe, Jinja2, itsdangerous, Flask, WTForms, Flask-WTF, requests, pypuppetdb, puppetboard
  Running setup.py install for MarkupSafe
  Running setup.py install for itsdangerous
  Running setup.py install for Flask
  Running setup.py install for WTForms
  Running setup.py install for Flask-WTF
  Running setup.py install for pypuppetdb
  Running setup.py install for puppetboard
Successfully installed Flask-0.10.1 Flask-WTF-0.9.5 Jinja2-2.8 MarkupSafe-0.23 WTForms-1.0.5 Werkzeug-0.11.9 itsdangerous-0.24 puppetboard-0.1.3 pypuppetdb-0.2.1 requests-2.10.0


[root@cu2 ~]# pip show puppetboard
You are using pip version 7.1.0, however version 8.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
---
Metadata-Version: 1.0
Name: puppetboard
Version: 0.1.3
Summary: Web frontend for PuppetDB
Home-page: https://github.com/puppet-community/puppetboard
Author: Daniele Sluijters
Author-email: daniele.sluijters+pypi@gmail.com
License: Apache License 2.0
Location: /usr/lib/python2.6/site-packages
Requires: Flask, Flask-WTF, WTForms, pypuppetdb
[root@cu2 ~]# ll /usr/lib/python2.6/site-packages/puppetboard
total 100
-rw-r--r-- 1 root root 31629 May  5 09:12 app.py
-rw-r--r-- 1 root root 30481 May  5 09:12 app.pyc
-rw-r--r-- 1 root root  1206 May  5 09:12 default_settings.py
-rw-r--r-- 1 root root  1477 May  5 09:12 default_settings.pyc
-rw-r--r-- 1 root root  1025 May  5 09:12 forms.py
-rw-r--r-- 1 root root  1982 May  5 09:12 forms.pyc
-rw-r--r-- 1 root root     0 May  5 09:12 __init__.py
-rw-r--r-- 1 root root   143 May  5 09:12 __init__.pyc
drwxr-xr-x 9 root root  4096 May  5 09:12 static
drwxr-xr-x 2 root root  4096 May  5 09:12 templates
-rw-r--r-- 1 root root  2155 May  5 09:12 utils.py
-rw-r--r-- 1 root root  3433 May  5 09:12 utils.pyc


[root@cu2 ~]# pip install uwsgi
You are using pip version 7.1.0, however version 8.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting uwsgi
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
  Downloading uwsgi-2.0.12.tar.gz (784kB)
    100% |████████████████████████████████| 786kB 143kB/s 
Installing collected packages: uwsgi
  Running setup.py install for uwsgi
Successfully installed uwsgi-2.0.12


[root@cu2 ~]# mkdir -p /var/www/puppetboard
[root@cu2 ~]# cd /var/www/puppetboard/
[root@cu2 puppetboard]# cp /usr/lib/python2.6/site-packages/puppetboard/default_settings.py ./settings.py
# Edit the settings
# https://github.com/voxpupuli/puppetboard#settings
PUPPETDB_HOST = 'cu3'
PUPPETDB_PORT = 8080
REPORTS_COUNT = 21
ENABLE_CATALOG = True

[root@cu2 puppetboard]# vi wsgi.py 
from __future__ import absolute_import
import os

os.environ['PUPPETBOARD_SETTINGS'] = '/var/www/puppetboard/settings.py'
from puppetboard.app import app as application


# Option A: serve directly with uwsgi's built-in HTTP server
# http://yongqing.is-programmer.com/posts/43688.html
[root@cu2 puppetboard]# uwsgi --http :9091 --wsgi-file /var/www/puppetboard/wsgi.py 

# Manage it with supervisord
[root@cu2 supervisord.d]# cat uwsgi.ini 
[program:puppetboard]
command=uwsgi --http :9091 --wsgi-file /var/www/puppetboard/wsgi.py 
[root@cu2 supervisord.d]# supervisorctl update


# Option B: nginx + uwsgi socket
# Map the app to /, adding a new server listening on 9091
[root@cu2 puppetboard]# vi /home/hadoop/nginx/conf/nginx.conf
server {
  listen 9091;

  location /static {
    alias /usr/lib/python2.6/site-packages/puppetboard/static;
  }
  location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:9090;
  }
}

[root@cu2 puppetboard]# uwsgi --socket :9090 --wsgi-file /var/www/puppetboard/wsgi.py 

[root@cu2 puppetboard]# /home/hadoop/nginx/sbin/nginx -s reload

Configuring SSL access requires setting SSL_VERIFY to False.

# Reportedly this is not an issue on Python 2.7.9+
# http://stackoverflow.com/questions/29099404/ssl-insecureplatform-error-when-using-requests-package
# https://github.com/pypa/pip/issues/2681
[root@cu2 ~]# yum install -y  libffi-devel libffi 
[root@cu2 ~]# pip install 'requests[security]'

# [Important] These two links have the same content:
# * https://groups.google.com/forum/#!msg/puppet-users/m7Sakf4bQ7Q/y6uAa0AUsZIJ
# * http://grokbase.com/t/gg/puppet-users/1428vjkncr/puppetboard-and-ssl
# You have two choices now, set SSL_VERIFY to False and trust that you're
# always talking to your actual PuppetDB or copy from the Puppet CA
# $vardir/ssl/ca_crt.pem to /etc/puppetboard and set SSL_VERIFY to the path
# of ca_crt.pem. In that case the file SSL_VERIFY points to will be used to
# verify PuppetDB's server certificate instead of the OS truststore.
[root@cu2 puppetboard]# vi settings.py 
PUPPETDB_HOST = 'cu3.esw.cn'
PUPPETDB_PORT = 8081
PUPPETDB_SSL_VERIFY = False  # set to False here
PUPPETDB_KEY = '/etc/puppetlabs/puppet/ssl/private_keys/cu2.esw.cn.pem'
PUPPETDB_CERT = '/etc/puppetlabs/puppet/ssl/ca/signed/cu2.esw.cn.pem'

# Restart the uwsgi HTTP service
[root@cu2 ~]# supervisorctl restart puppetboard

If puppetboard and puppetdb are installed on the same machine, you can use the SSL files under puppetdb/ssl (which are themselves copied over from puppet/ssl):

[root@cu3 ~]# puppetdb ssl-setup -f
PEM files in /etc/puppetlabs/puppetdb/ssl are missing, we will move them into place for you
Copying files: /etc/puppetlabs/puppet/ssl/certs/ca.pem, /etc/puppetlabs/puppet/ssl/private_keys/cu3.esw.cn.pem and /etc/puppetlabs/puppet/ssl/certs/cu3.esw.cn.pem to /etc/puppetlabs/puppetdb/ssl
...

[root@cu3 ~]# tree /etc/puppetlabs/puppetdb/ssl/
/etc/puppetlabs/puppetdb/ssl/
├── ca.pem
├── private.pem
└── public.pem

–END

Hiera and Facts

Why use hiera: https://docs.puppet.com/hiera/3.1/#why-hiera

  • hierarchy: a layered hierarchy of data sources. You can set common attributes and also override them!
  • "Injecting" values into class parameters.
  • hiera_include: achieves what site.pp does, but through configuration, and is more powerful and flexible than node definitions (array values can be merged).

Basic concepts:

  • hiera.yaml: the default config file lives at $codedir/hiera.yaml. When used with puppet, a custom config file can be set via hiera_config in puppet.conf.
  • hierarchy: a well-designed hierarchy saves a lot of work, e.g. adapting per operating system with %{::osfamily}.
  • datasource: see the introduction to the YAML format.
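As a concrete illustration of an osfamily-based hierarchy, a hiera.yaml could look like the sketch below (the paths and datadir are hypothetical, not taken from this setup):

```yaml
---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - "osfamily/%{::osfamily}"   # e.g. osfamily/RedHat.yaml overrides common
  - common
:yaml:
  :datadir: "/etc/puppetlabs/code/environments/%{::environment}/hieradata"
```

Levels are consulted top to bottom, so a node-specific file wins over the per-OS file, which in turn wins over common.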

Setting up the command line under Windows Cygwin

winse@Lenovo-PC ~
$ cat bin/hiera
#!/bin/sh

# default puppetlabs config in c:\Users\winse\Puppetlabs
export HOME=/cygdrive/c/Users/winse

name=`basename $0`

# execute
"C:/Progra~1/Puppet~1/Puppet/bin"/$name.bat "$@"


winse@Lenovo-PC ~
$ cat .bash_profile
...
function hiera_look(){
  code_dir=`puppet config print codedir | sed 's/\r//' `
  ~/bin/hiera -c "$code_dir/hiera.yaml" --debug "$@" ::environment=production
}

HelloWorld

winse@Lenovo-PC /cygdrive/d/esw-shells/puppet/dta/code
$ cat /cygdrive/c/Users/winse/.puppetlabs/etc/puppet/puppet.conf
[main]
codedir = D:/esw-shells/puppet/dta/code
hiera_config = $codedir/hiera.yaml

certname = winse

winse@Lenovo-PC /cygdrive/d/esw-shells/puppet/dta/code
$ cat hiera.yaml
---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - common

:yaml:
  :datadir: "D:/esw-shells/puppet/dta/code/environments/%{::environment}/hieradata"


winse@Lenovo-PC /cygdrive/d/esw-shells/puppet/dta/code
$ cat environments/production/hieradata/common.yaml
whoami: winse


winse@Lenovo-PC ~
$ hiera_look whoami
DEBUG: 2016-05-03 11:27:41 +0100: Hiera YAML backend starting
DEBUG: 2016-05-03 11:27:41 +0100: Looking up whoami in YAML backend
DEBUG: 2016-05-03 11:27:41 +0100: Looking for data source common
DEBUG: 2016-05-03 11:27:41 +0100: Found whoami in common
winse

Using Hiera with Puppet

Reference examples

Main features

  • hiera reading puppet/facter fact values
  • puppet reading values from hiera
  • hiera injecting values into puppet module parameters: the first match found is returned (just like hiera), and for strings, arrays and hashes it cannot merge values from multiple hierarchy levels; use hiera_array or hiera_hash instead.
  • hiera_include
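The priority-versus-merge behavior above can be sketched in plain Python. This is a toy model of Hiera's lookup semantics, not its real implementation, and the hierarchy data is made up:

```python
# A hierarchy is an ordered list of data sources, highest priority first
# (e.g. nodes/<certname>.yaml before common.yaml).
hierarchy = [
    {"classes": ["helloworld::hello"]},            # node-level data
    {"classes": ["base::ntp"], "whoami": "winse"}, # common data
]

def hiera(key, hierarchy):
    """Priority lookup: the first level that defines the key wins."""
    for level in hierarchy:
        if key in level:
            return level[key]
    return None

def hiera_array(key, hierarchy):
    """Merge lookup: concatenate array values from every level."""
    merged = []
    for level in hierarchy:
        merged.extend(level.get(key, []))
    return merged

print(hiera("classes", hierarchy))        # only the node-level array
print(hiera_array("classes", hierarchy))  # arrays from all levels, merged
```

This is why automatic parameter injection (which uses the plain lookup) only ever sees the highest-priority value, while hiera_array/hiera_hash see them all.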

Hands-on practice

winse@Lenovo-PC /cygdrive/d/esw-shells/puppet/dta/code/environments/production
$ tree .
.
├── hieradata
│   ├── common.yaml
│   └── nodes
│       └── winse.yaml
├── manifests
│   └── site.pp
└── modules
    └── helloworld
        └── manifests
            └── init.pp

$ cat hieradata/common.yaml
whoami: "%{calling_module} - %{calling_class} - %{calling_class_path} - %{::domain}"

$ cat hieradata/nodes/winse.yaml  | iconv -f gbk -t utf8
---
classes:
  - helloworld::hello
  - helloworld::world

# The file encoding must match the environment; on Windows it needs to be GBK
helloworld::hello::hello: 你好

$ cat modules/helloworld/manifests/init.pp

class helloworld::hello ($hello = "hello"){

  notify {$hello :
  }

}

class helloworld::world {

  notify {hiera('whoami') : # calling hiera() inside a module is not recommended; done here only to demonstrate calling_module etc.
  }

}

$ cat manifests/site.pp

hiera_include('classes')

$ puppet apply environments/production/manifests/site.pp
Notice: Compiled catalog for winse in environment production in 0.28 seconds
Notice: 你好
Notice: /Stage[main]/Helloworld::Hello/Notify[你好]/message: defined 'message' as '你好'
Notice: helloworld - helloworld::world - helloworld/world - DHCP HOST
Notice: /Stage[main]/Helloworld::World/Notify[helloworld - helloworld::world - helloworld/world - DHCP HOST]/message: defined 'message' as 'helloworld - helloworld::world - helloworld/world - DHCP HOST'
Notice: Applied catalog in 0.02 seconds

# Test looking up hiera variables
$ puppet apply -e "notice(hiera('whoami'))"
Notice: Scope(Class[main]):  -  -  - DHCP HOST
Notice: Compiled catalog for winse in environment production in 0.05 seconds
Notice: Applied catalog in 0.03 seconds

$ puppet apply -e "notice(hiera('classes'))"
Notice: Scope(Class[main]): [helloworld::hello, helloworld::world]
Notice: Compiled catalog for winse in environment production in 0.05 seconds
Notice: Applied catalog in 0.02 seconds

facts

Custom facts

Three approaches:

  • Files: yaml/json/txt, recommended under the modules/[name]/facts.d directory
  • Executable scripts that print key=value pairs, also under modules/[name]/facts.d — they must be executable!
  • Ruby: placed under modules/[name]/lib/facter; custom facts should go in lib/facter/
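The second approach can be sketched as a tiny shell script. The file name, fact names and values here are made-up examples; what matters is that Facter parses each key=value line on stdout:

```shell
# Hypothetical external fact script, e.g. modules/helloworld/facts.d/role.sh
# (must be chmod +x). Each key=value line printed becomes a fact.
emit_facts() {
  echo "role=webserver"    # 'role' is an example fact name
  echo "datacenter=dc1"    # so is 'datacenter'
}
emit_facts
```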
$ cat environments/production/modules/helloworld/facts.d/simple_facts.yaml
---
my_fact: my_value

$ puppet facts find --debug | grep -iE "my_fact|helloworld"
Debug: Loading external facts from D:/esw-shells/puppet/dta/code/environments/production/modules/helloworld/facts.d
Debug: Facter: searching "D:/esw-shells\puppet\dta\code\environments\production\modules\helloworld\facts.d" for external facts.
Debug: Facter: resolving facts from YAML file "D:/esw-shells\puppet\dta\code\environments\production\modules\helloworld\facts.d\simple_facts.yaml".
Debug: Facter: fact "my_fact" has resolved to "my_value".
Debug: Facter: completed resolving facts from YAML file "D:/esw-shells\puppet\dta\code\environments\production\modules\helloworld\facts.d\simple_facts.yaml".
    "my_fact": "my_value",


winse@Lenovo-PC /cygdrive/d/esw-shells/puppet/dta/code
$ cat environments/production/modules/helloworld/lib/facter/users.rb
Facter.add('users') do
  setcode do
    ["winse", "winseliu"]
  end
end

$ puppet facts | grep -3 users
    "uptime_days": 0,
    "uptime_hours": 1,
    "uptime_seconds": 6980,
    "users": [
      "winse",
      "winseliu"
    ],

–END

MCollective Plugins

The previous post covered installing mcollective. Riding that momentum, here is how to get the mco command line and plugin installation working, written down for the record.

Basic command usage

[root@hadoop-master2 ~]# mco help
The Marionette Collective version 2.8.8

  completion      Helper for shell completion systems
  describe_filter Display human readable interpretation of filters
  facts           Reports on usage for a specific fact
  find            Find hosts using the discovery system matching filter criteria
  help            Application list and help
  inventory       General reporting tool for nodes, collectives and subcollectives
  ping            Ping all nodes
  plugin          MCollective Plugin Application
  rpc             Generic RPC agent client application

The built-in plugins can only inspect the state of the environment (the commands listed below were all covered in the previous post, MCollective安装配置).

mco ping
mco inventory [server_host]
mco facts [fact]

mcollective's filter (node targeting) feature is very powerful; see the docs for details: Selecting Request Targets Using Filters

Filters work together with facts, so each node's information must first be written to the mcollective facts.yaml file.
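One common way to keep that file populated (a sketch; the cron schedule and paths are assumptions, adjust to taste) is a cron entry that periodically dumps Facter output:

```shell
# /etc/cron.d/mcollective-facts (sketch): refresh facts.yaml every 10 minutes.
# -p includes puppet-provided facts, -y emits YAML.
*/10 * * * * root /opt/puppetlabs/bin/facter -p -y > /etc/puppetlabs/mcollective/facts.yaml
```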

[hadoop@hadoop-master1 ~]$ sudo mco ping -I /^hadoop/
[hadoop@hadoop-master1 ~]$ sudo mco puppet runall 8 -I /^hadoop/
[hadoop@hadoop-master1 ~]$ sudo mco service iptables status -I "/cu-ud.*/"

[root@hadoop-master1 manifests]# mco ping -S "hostname=hadoop-master2"
[hadoop@hadoop-master1 ~]$ sudo mco ping -S 'hostname=/hadoop.*/'

[hadoop@hadoop-master1 ~]$ sudo mco facts hostname

Plugin installation

The docs describe two ways to install plugins: Use packages and Put files directly into the libdir. But the packages all live in the old repo, so here we take the second approach: download the source from GitHub and drop it into the libdir.

Installing mcollective-puppet-agent

# Usage docs: https://github.com/puppetlabs/mcollective-puppet-agent#readme
# Download the release tarball directly
[root@hadoop-master2 ~]# cd /usr/libexec/mcollective/
[root@hadoop-master2 mcollective]# ll
total 44
-rw-r--r-- 1 root root 44759 Apr 29 11:53 mcollective-puppet-agent-1.10.0.tar.gz
[root@hadoop-master2 mcollective]# tar zxf mcollective-puppet-agent-1.10.0.tar.gz  
[root@hadoop-master2 mcollective]# ll mcollective-puppet-agent-1.10.0
total 60
drwxrwxr-x 2 root root  4096 Apr 13  2015 agent
drwxrwxr-x 2 root root  4096 Apr 13  2015 aggregate
drwxrwxr-x 2 root root  4096 Apr 13  2015 application
-rw-rw-r-- 1 root root  3456 Apr 13  2015 CHANGELOG.md
drwxrwxr-x 2 root root  4096 Apr 13  2015 data
drwxrwxr-x 4 root root  4096 Apr 13  2015 ext
-rw-rw-r-- 1 root root   349 Apr 13  2015 Gemfile
-rw-rw-r-- 1 root root  3036 Apr 13  2015 Rakefile
-rw-rw-r-- 1 root root 14739 Apr 13  2015 README.md
drwxrwxr-x 9 root root  4096 Apr 13  2015 spec
drwxrwxr-x 3 root root  4096 Apr 13  2015 util
drwxrwxr-x 2 root root  4096 Apr 13  2015 validator
# The official examples distinguish server-side and client-side files. Extra files do no harm, so just drop everything in...
[root@hadoop-master2 mcollective]# mv mcollective-puppet-agent-1.10.0 mcollective

# Verify
# The puppet command has appeared!
[root@hadoop-master2 mcollective]# mco help
The Marionette Collective version 2.8.8

  completion      Helper for shell completion systems
  describe_filter Display human readable interpretation of filters
  facts           Reports on usage for a specific fact
  find            Find hosts using the discovery system matching filter criteria
  help            Application list and help
  inventory       General reporting tool for nodes, collectives and subcollectives
  ping            Ping all nodes
  plugin          MCollective Plugin Application
  puppet          Schedule runs, enable, disable and interrogate the Puppet Agent
  rpc             Generic RPC agent client application


# Sync to the mcollective servers (172.17.0.2 is hadoop-slaver1)
[root@hadoop-master2 mcollective]# rsync -az /usr/libexec/mcollective 172.17.0.2:/usr/libexec/

# After adding plugins on an mcollective server, restart the mcollective service
# You can also reload the agents instead: service mcollective reload-agents
[root@hadoop-slaver1 libexec]# service mcollective restart
Shutting down mcollective:                                 [  OK  ]
Starting mcollective:                                      [  OK  ]


# Verify on the server side: the newly added puppet command is now visible
[root@hadoop-master2 mcollective]# mco inventory hadoop-slaver1
Inventory for hadoop-slaver1:

   Server Statistics:
                      Version: 2.8.8
                   Start Time: 2016-04-29 12:01:40 +0800
                  Config File: /etc/puppetlabs/mcollective/server.cfg
                  Collectives: mcollective
              Main Collective: mcollective
                   Process ID: 123
               Total Messages: 1
      Messages Passed Filters: 1
            Messages Filtered: 0
             Expired Messages: 0
                 Replies Sent: 0
         Total Processor Time: 0.67 seconds
                  System Time: 0.8 seconds

   Agents:
      discovery       puppet          rpcutil        

   Data Plugins:
      agent           collective      fact           
      fstat           puppet          resource       

[root@hadoop-master2 mcollective]# mco help puppet

[root@hadoop-master2 mcollective]# mco puppet status    

 * [ ============================================================> ] 3 / 3

   hadoop-slaver1: Currently stopped; last completed run 10 hours 57 minutes 20 seconds ago
   hadoop-master1: Currently stopped; last completed run 11 hours 1 minutes 05 seconds ago
   hadoop-slaver2: Currently stopped; last completed run 10 hours 57 minutes 16 seconds ago
...


# Configure server.cfg
# Note: to actually run puppet commands, these settings must be added/changed for puppet4
-bash-4.1# cat /etc/puppetlabs/mcollective/server.cfg 
...
plugin.puppet.command = /opt/puppetlabs/bin/puppet agent
plugin.puppet.config = /etc/puppetlabs/puppet/puppet.conf

# Restart all mcollective services (or skip the restart and reload the agents: mco shell run service mcollective reload-agents)

[root@hadoop-master2 mcollective]# mco puppet runall 1
2016-04-29 16:52:46: Running all nodes with a concurrency of 1
2016-04-29 16:52:46: Discovering enabled Puppet nodes to manage
2016-04-29 16:52:49: Found 3 enabled nodes
2016-04-29 16:52:50: hadoop-slaver1 schedule status: Started a Puppet run using the '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize --color=false --show_diff --verbose --no-splay' command
2016-04-29 16:52:55: hadoop-slaver2 schedule status: Started a Puppet run using the '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize --color=false --show_diff --verbose --no-splay' command
2016-04-29 16:52:59: hadoop-master1 schedule status: Started a Puppet run using the '/opt/puppetlabs/bin/puppet agent --onetime --no-daemonize --color=false --show_diff --verbose --no-splay' command
2016-04-29 16:52:59: Iteration complete. Initiated a Puppet run on 3 nodes.

[root@hadoop-master2 puppetlabs]# mco puppet status

 * [ ============================================================> ] 3 / 3

   hadoop-master1: Currently stopped; last completed run 10 seconds ago
   hadoop-slaver1: Currently stopped; last completed run 15 seconds ago
   hadoop-slaver2: Currently stopped; last completed run 04 seconds ago
...
# Alternatively, check each node's last run time in puppetexplorer

Installing the package / service plugins

For better management, add the package and service plugins as well.

# http://stackoverflow.com/questions/8488253/how-to-force-cp-to-overwrite-without-confirmation
[root@hadoop-master2 mcollective]# unalias cp
[root@hadoop-master2 mcollective]# cp -rf mcollective-service-agent-3.1.3/* mcollective/   
[root@hadoop-master2 mcollective]# cp -rf mcollective-package-agent-4.4.0/* mcollective/

[root@hadoop-master2 mcollective]# rsync -az /usr/libexec/mcollective 172.17.0.2:/usr/libexec/

# Restart the mcollective service (or reload: mco shell run service mcollective reload-agents)

# updated 2016-5-11 17:15:08
# A restart turns out to be better after all: reload-agents does not reload the Data Plugins
[root@hadoop-master1 puppet]# mco inventory hadoop-master2
Inventory for hadoop-master2:

   Server Statistics:
                      Version: 2.8.8
                   Start Time: 2016-05-11 17:12:45 +0800
                  Config File: /etc/puppetlabs/mcollective/server.cfg
                  Collectives: mcollective
              Main Collective: mcollective
                   Process ID: 39878
               Total Messages: 1
      Messages Passed Filters: 1
            Messages Filtered: 0
             Expired Messages: 0
                 Replies Sent: 0
         Total Processor Time: 1.17 seconds
                  System Time: 0.1 seconds

   Agents:
      discovery       package         puppet         
      rpcutil         service         shell          

   Data Plugins:
      agent           collective      fact           
      fstat           puppet          resource       
      service                                        

   Configuration Management Classes:
      No classes applied

   Facts:
      mcollective => 1
[root@hadoop-master1 puppet]# mco inventory hadoop-slaver2
Inventory for hadoop-slaver2:

   Server Statistics:
                      Version: 2.8.8
                   Start Time: 2016-05-11 16:56:09 +0800
                  Config File: /etc/puppetlabs/mcollective/server.cfg
                  Collectives: mcollective
              Main Collective: mcollective
                   Process ID: 14062
               Total Messages: 9
      Messages Passed Filters: 7
            Messages Filtered: 2
             Expired Messages: 0
                 Replies Sent: 6
         Total Processor Time: 1.31 seconds
                  System Time: 0.23 seconds

   Agents:
      discovery       package         puppet         
      rpcutil         service         shell          

   Data Plugins:
      agent           collective      fact           
      fstat                                          

   Configuration Management Classes:
      No classes applied

   Facts:
      mcollective => 1

Put the package agent through its paces:

[root@hadoop-master2 mcollective]# mco package lrzsz status

 * [ ============================================================> ] 3 / 3

   hadoop-slaver1: lrzsz-0.12.20-27.1.el6.x86_64
   hadoop-master1: -purged.
   hadoop-slaver2: -purged.

Summary of Arch:

   x86_64 = 1

Summary of Ensure:

             purged = 2
   0.12.20-27.1.el6 = 1


Finished processing 3 / 3 hosts in 1488.41 ms

[root@hadoop-master2 mcollective]# mco rpc package install package=lrzsz
Discovering hosts using the mc method for 2 second(s) .... 3

 * [ ============================================================> ] 3 / 3


hadoop-slaver1                           Unknown Request Status
   Package is already installed


Summary of Ensure:

   0.12.20-27.1.el6 = 3


Finished processing 3 / 3 hosts in 14525.03 ms
[root@hadoop-master2 mcollective]# mco package lrzsz status

 * [ ============================================================> ] 3 / 3

   hadoop-master1: lrzsz-0.12.20-27.1.el6.x86_64
   hadoop-slaver2: lrzsz-0.12.20-27.1.el6.x86_64
   hadoop-slaver1: lrzsz-0.12.20-27.1.el6.x86_64

Summary of Arch:

   x86_64 = 3

Summary of Ensure:

   0.12.20-27.1.el6 = 3


Finished processing 3 / 3 hosts in 572.13 ms

There are many more plugins available:

With the service, package, shell and puppet plugins added, managing the cluster with mco is a joy!!

Installing everything in one pass later on

[root@hadoop-master1 mcollective]# ll
total 100
-rw-r--r-- 1 root root 17101 Apr 29 12:15 mcollective-package-agent-4.4.0.tar.gz
-rw-r--r-- 1 root root 44759 Apr 29 11:53 mcollective-puppet-agent-1.10.0.tar.gz
-rw-r--r-- 1 root root 12483 Apr 29 12:15 mcollective-service-agent-3.1.3.tar.gz
-rw-r--r-- 1 root root 17984 Apr 29 19:24 mcollective-shell-agent-0.0.2.tar.gz
[root@hadoop-master1 mcollective]# ls | xargs -I{} tar zxf {}

# TODO: consider packaging these as an RPM
[root@hadoop-master1 mcollective]# mkdir mcollective
[root@hadoop-master1 mcollective]# unalias cp
[root@hadoop-master1 mcollective]# cp -rf mcollective-package-agent-4.4.0/* mcollective/
[root@hadoop-master1 mcollective]# cp -rf mcollective-puppet-agent-1.10.0/* mcollective/
[root@hadoop-master1 mcollective]# cp -rf mcollective-service-agent-3.1.3/* mcollective/
[root@hadoop-master1 mcollective]# cp -rf mcollective-shell-agent-0.0.2/lib/mcollective/* mcollective/
[root@hadoop-master1 mcollective]# rm -rf mcollective-*

# Verify
[root@hadoop-master1 mcollective]# mco help
The Marionette Collective version 2.8.8

  completion      Helper for shell completion systems
  describe_filter Display human readable interpretation of filters
  facts           Reports on usage for a specific fact
  find            Find hosts using the discovery system matching filter criteria
  help            Application list and help
  inventory       General reporting tool for nodes, collectives and subcollectives
  package         Install, uninstall, update, purge and perform other actions to packages
  ping            Ping all nodes
  plugin          MCollective Plugin Application
  puppet          Schedule runs, enable, disable and interrogate the Puppet Agent
  rpc             Generic RPC agent client application
  service         Manages system services
  shell           Run shell commands

# Sync
[root@hadoop-master1 mcollective]# cd ..
[root@hadoop-master1 libexec]# rsync -az mcollective hadoop-master2:/usr/libexec/

# filter
[root@hadoop-master1 manifests]# mco shell run hostname -S "hostname=hadoop-master2"
[root@hadoop-master1 manifests]# mco ping -S "not hostname=hadoop-master1"
[root@hadoop-master1 manifests]# mco ping -S "! hostname=hadoop-master1"

[hadoop@hadoop-master1 ~]$ sudo mco shell --sort -I /cu-ud[1234]{1}$/ run -- ' ls /home/ud/ftpxdr | wc -l  '

# This setting was missed earlier; append it
[root@hadoop-master1 manifests]# mco shell run "echo -e '\n\nplugin.puppet.command = /opt/puppetlabs/bin/puppet agent\nplugin.puppet.config = /etc/puppetlabs/puppet/puppet.conf' >> /etc/puppetlabs/mcollective/server.cfg" 


# Restart the mcollective service
[root@hadoop-master1 manifests]# mco shell run "echo service mcollective restart >/tmp/mcollective_restart.sh ; nohup sh /tmp/mcollective_restart.sh "


# ---
[root@hadoop-master1 dtarepo]# mco rpc package install package=lrzsz -I cu-omc1
[root@hadoop-master1 dtarepo]# mco rpc package install package=gmetad -I cu-omc1

mco rpc service start service=puppet
=> mco rpc --agent service --action start --argument service=puppet

mco plugin doc package

mco rpc service status service=puppet -S "environment=development"

mco puppet status
mco rpc puppet status


[root@hadoop-master1 gmond]# mco shell -I cu-ud2 run -- "/opt/puppetlabs/bin/puppet agent -t"
[root@hadoop-master1 production]# mco shell -I /^cu-omc2/ run -- "/opt/puppetlabs/bin/puppet agent -t"

# gmond: confirm the multicast route on multi-NIC hosts
[root@hadoop-master1 production]# route add -host 239.2.11.71 dev bond0

# puppet language basics
# https://docs.puppet.com/puppet/latest/reference/lang_conditional.html#if-statements
# https://docs.puppet.com/puppet/latest/reference/lang_relationships.html

# puppet tags allow more flexible targeting
# https://docs.puppet.com/puppet/latest/reference/lang_tags.html
apache::vhost {'docs.puppetlabs.com':
  port => 80,
  tag  => ['us_mirror1', 'us_mirror2'],
}

$ sudo puppet agent --test --tags apache,us_mirror1

Filters, once more: Selecting Request Targets Using Filters

–END

MCollective安装配置

puppet agent updates the local system by polling on a schedule, which cannot satisfy real-time update needs. mcollective works through message middleware: the mclient publishes and the mservers subscribe, so the mservers execute requests submitted by the mclient in real time. (The m prefix marks mcollective components!)

Apart from the official site, there is little material to learn from for installing the latest version (other sources are reference only). Start with the official docs:

Excerpt from the official install guide: [Installing MCollective requires the following steps]

  • Make sure your middleware is up and running and your firewalls are in order.
  • Install the mcollective package on servers, then make sure the mcollective service is running.
  • Install the mcollective-client package on admin workstations.
  • Most Debian-like and Red Hat-like systems can use the official Puppet Labs packages. Enable the Puppet Labs repos, or import the packages into your own repos.
    • If you’re on Debian/Ubuntu, mind the missing package dependency.
  • If your systems can’t use the official packages, check the system requirements and either build your own or run from source.

mcollective对于puppet来说是一个锦上添花的组件,没有puppet一样正常运转。部署主要由两个部分组成:

  • 部署消息中间件
  • 配置mcollective(puppet4.4 agent已经安装该功能,redhat也自带装了Stomp包:/opt/puppetlabs/puppet/lib/ruby/gems/2.1.0/gems/ 目录下面)
    • 配置mclient/mserver
    • 配置Stomp with TLS
    • 配置security

本文先简单实现连接远程主机,然后配置安全功能,最后用puppet来重新实现 mcollective 的安装和配置。

Environment

  • hadoop-master2:
    • 172.17.42.1
    • puppetserver, activemq-server, mcollective-client
  • hadoop-master1/hadoop-slaver1/hadoop-slaver2:
    • 172.17.0.2/3/4
    • puppet-agent, mcollective-server

Deploying ActiveMQ

The activemq server is a spring-jetty application: just unpack the tarball and run the startup script.

# http://activemq.apache.org/download-archives.html
# download the latest tar.gz directly

# unpack and start
On Unix:
From a command shell, change to the installation directory and run ActiveMQ as a foreground process:
cd [activemq_install_dir]/bin
./activemq console
From a command shell, change to the installation directory and run ActiveMQ as a daemon process:
cd [activemq_install_dir]/bin
./activemq start

# verify it is up
URL: http://127.0.0.1:8161/admin/
Login: admin
Password: admin
# it opens quite a few ports; check one of them
netstat -nl|grep 61616
netstat -anp|grep PID

# data/log directory
[root@hadoop-master2 apache-activemq-5.13.2]# ll data/
total 16
-rw-r--r-- 1 root users 4276 Apr 27 21:36 activemq.log
-rw-r--r-- 1 root root     5 Apr 27 21:36 activemq.pid
-rw-r--r-- 1 root root     0 Apr 27 21:36 audit.log
drwxr-xr-x 2 root root  4096 Apr 27 21:36 kahadb

Check the broker's connection credentials:

[root@hadoop-master2 conf]# cat credentials.properties
...
activemq.username=system
activemq.password=manager
guest.password=password
[root@hadoop-master2 conf]#

Basic configuration (unencrypted Stomp)

After installing puppet 4.4, mcollective is already on the box! Just point its configuration at activemq.

[root@hadoop-master2 puppetlabs]# chkconfig --list | grep mco
mcollective     0:off   1:off   2:off   3:off   4:off   5:off   6:off

# the puppetserver acts as the mcollective-client
[root@hadoop-master2 mcollective]# cat client.cfg                    
...
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = hadoop-master2.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = system
plugin.activemq.pool.1.password = manager
...

[root@hadoop-master2 mcollective]# mco ping


---- ping statistics ----
No responses received

# each puppet agent acts as an mcollective-server
-bash-4.1# cat server.cfg 
...
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = hadoop-master2.example.com
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = system
plugin.activemq.pool.1.password = manager
...

-bash-4.1# service mcollective start
Starting mcollective:                                      [  OK  ]
-bash-4.1# service mcollective status
mcollectived (pid  202) is running...

# apply the same configuration on the other two agent hosts

# 1. test from the mcollective-client (puppetserver)
[root@hadoop-master2 ~]# mco find
hadoop-master1
hadoop-slaver2
hadoop-slaver1
[root@hadoop-master2 mcollective]# mco ping
hadoop-master1                           time=148.29 ms
hadoop-slaver2                           time=187.99 ms
hadoop-slaver1                           time=190.21 ms


---- ping statistics ----
3 replies max: 190.21 min: 148.29 avg: 175.50 

# 2. inspect/scan the node state first. (once facts are configured, this prints a long list of Facts!)
[root@hadoop-master2 ssl]# mco inventory hadoop-master1
Inventory for hadoop-master1:

   Server Statistics:
                      Version: 2.8.8
                   Start Time: 2016-04-29 00:21:31 +0800
                  Config File: /etc/puppetlabs/mcollective/server.cfg
                  Collectives: mcollective
              Main Collective: mcollective
                   Process ID: 155
               Total Messages: 13
      Messages Passed Filters: 3
            Messages Filtered: 0
             Expired Messages: 0
                 Replies Sent: 2
         Total Processor Time: 2.32 seconds
                  System Time: 0.3 seconds

   Agents:
      discovery       rpcutil                        

   Data Plugins:
      agent           collective      fact           
      fstat                                          

   Configuration Management Classes:
      No classes applied

   Facts:
      mcollective => 1

# 3. fetching node facts requires puppet's cooperation
# on the puppetserver, configure puppet to keep each agent's facts.yaml updated
[root@hadoop-master2 manifests]# cat site.pp 
file{'/etc/puppetlabs/mcollective/facts.yaml':
  owner    => root,
  group    => root,
  mode     => '400',
  loglevel => debug, # reduce noise in Puppet reports
  content  => inline_template("<%= scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|timestamp|free)/ }.to_yaml %>"), # exclude rapidly changing facts
}
# read the facts
[root@hadoop-master2 manifests]# mco facts hostname
Report for fact: hostname

        hadoop-master1                           found 1 times
        hadoop-slaver1                           found 1 times
        hadoop-slaver2                           found 1 times

Finished processing 3 / 3 hosts in 579.93 ms

The built-in plugins are limited; to really put mcollective to work you need to install plugins: puppet, service, package and so on. This post only records the installation process; plugin installation and usage will be written up after I have put them into practice.
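For reference, on Red Hat-family systems the common agent plugins are published as yum packages in the Puppet Labs repo; a rough sketch (package names assumed from the repo layout, so verify them against your own repo before relying on this):

```shell
# on every mcollective-server: the agent-side plugins (assumed package names)
yum install mcollective-service-agent mcollective-package-agent mcollective-puppet-agent
service mcollective restart

# on the admin workstation: the matching client-side plugins
yum install mcollective-service-client mcollective-package-client mcollective-puppet-client

# confirm the new agents are discovered
mco plugin doc
```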

For an internal production network, getting this far is already about enough! Treat the security configuration below as a deeper dive.

Configuring Stomp with TLS

The Anonymous TLS procedure is a bit simpler and is not covered here; see the official documentation: Anonymous TLS
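For orientation, the anonymous variant boils down to a couple of connector settings on each mserver/mclient; a minimal sketch, assuming the broker exposes a stomp+ssl connector without needClientAuth (check the linked docs for your version):

```ini
# server.cfg / client.cfg - hypothetical Anonymous TLS fragment
connector = activemq
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.ssl = true
# fallback lets the connection proceed without verifying the broker certificate
plugin.activemq.pool.1.ssl.fallback = true
```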

# CA-Verified TLS

# 1 configure activemq manually

# 1.1 you could reuse puppet's cert/private keys directly; here I generate a new certificate for activemq
[root@hadoop-master2 puppetlabs]# puppet master --configprint ssldir
/etc/puppetlabs/puppet/ssl
# any non-conflicting name will do; it need not be a hostname/FQDN
[root@hadoop-master2 puppetlabs]# puppet cert generate activemq
Notice: activemq has a waiting certificate request
Notice: Signed certificate request for activemq
Notice: Removing file Puppet::SSL::CertificateRequest activemq at '/etc/puppetlabs/puppet/ssl/ca/requests/activemq.pem'
Notice: Removing file Puppet::SSL::CertificateRequest activemq at '/etc/puppetlabs/puppet/ssl/certificate_requests/activemq.pem'
[root@hadoop-master2 puppetlabs]# tree /etc/puppetlabs/puppet/ssl/
/etc/puppetlabs/puppet/ssl/
...
├── certificate_requests
├── certs
│   ├── activemq.pem
│   ├── ca.pem
│   └── hadoop-master2.example.com.pem
├── crl.pem
├── private
├── private_keys
│   ├── activemq.pem
│   └── hadoop-master2.example.com.pem
└── public_keys
    ├── activemq.pem
    └── hadoop-master2.example.com.pem

9 directories, 22 files

# certs/activemq.pem, certs/ca.pem and private_keys/activemq.pem are the files we need.


# 1.2 create the truststore
[root@hadoop-master2 puppetlabs]# which keytool
/opt/jdk1.7.0_60/bin/keytool
[root@hadoop-master2 puppetlabs]# cd /etc/puppetlabs/puppet/ssl            
[root@hadoop-master2 ssl]# keytool -import -alias "CU CA" -file certs/ca.pem -keystore truststore.jks
Enter keystore password:  
Re-enter new password: 
Owner: CN=Puppet CA: hadoop-master2.example.com
Issuer: CN=Puppet CA: hadoop-master2.example.com
...
Trust this certificate? [no]:  y
Certificate was added to keystore
[root@hadoop-master2 ssl]# ll
total 32
drwxr-xr-x 5 puppet puppet 4096 Apr 23 00:01 ca
drwxr-xr-x 2 puppet puppet 4096 Apr 28 19:53 certificate_requests
drwxr-xr-x 2 puppet puppet 4096 Apr 28 19:53 certs
-rw-r--r-- 1 puppet puppet  979 Apr 28 10:33 crl.pem
drwxr-x--- 2 puppet puppet 4096 Apr 22 23:51 private
drwxr-x--- 2 puppet puppet 4096 Apr 28 19:53 private_keys
drwxr-xr-x 2 puppet puppet 4096 Apr 28 19:53 public_keys
-rw-r--r-- 1 root   root   1496 Apr 28 20:01 truststore.jks
# verify the fingerprints
[root@hadoop-master2 ssl]# keytool -list -keystore truststore.jks 
Enter keystore password:  

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

cu ca, Apr 28, 2016, trustedCertEntry, 
Certificate fingerprint (SHA1): 40:2C:45:37:6B:C7:9C:92:E7:4D:1E:4F:2B:C4:17:F4:A3:5F:EB:56
[root@hadoop-master2 ssl]# openssl x509 -in certs/ca.pem -fingerprint -sha1
SHA1 Fingerprint=40:2C:45:37:6B:C7:9C:92:E7:4D:1E:4F:2B:C4:17:F4:A3:5F:EB:56


# 1.3 create the keystore
[root@hadoop-master2 ssl]# cat private_keys/activemq.pem certs/activemq.pem >activemq.pem
# All of these passwords must be the same!!
[root@hadoop-master2 ssl]# openssl pkcs12 -export -in activemq.pem -out activemq.p12 -name activemq      
Enter Export Password:
Verifying - Enter Export Password:
[root@hadoop-master2 ssl]# keytool -importkeystore -destkeystore keystore.jks -srckeystore activemq.p12 \
> -srcstoretype PKCS12 -alias activemq
Enter destination keystore password:  XXX
Re-enter new password: XXX
Enter source keystore password:  XXX
[root@hadoop-master2 ssl]# ll -t
total 52
-rw-r--r-- 1 root   root   3918 Apr 28 20:12 keystore.jks
-rw-r--r-- 1 root   root   4230 Apr 28 20:08 activemq.p12
-rw-r--r-- 1 root   root   5203 Apr 28 20:07 activemq.pem
-rw-r--r-- 1 root   root   1496 Apr 28 20:01 truststore.jks
...
# verify the fingerprint
[root@hadoop-master2 ssl]# keytool -list -keystore keystore.jks 
Enter keystore password:  

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

activemq, Apr 28, 2016, PrivateKeyEntry, 
Certificate fingerprint (SHA1): 4F:DF:DE:64:13:36:0E:74:8B:7F:D3:61:78:29:C4:AA:4F:A4:ED:D8
[root@hadoop-master2 ssl]# openssl x509 -in certs/activemq.pem -fingerprint -sha1
SHA1 Fingerprint=4F:DF:DE:64:13:36:0E:74:8B:7F:D3:61:78:29:C4:AA:4F:A4:ED:D8


# 1.4 configure activemq
# http://activemq.apache.org/how-do-i-use-ssl.html
# https://docs.puppet.com/mcollective/deploy/middleware/activemq.html#tls-credentials
# https://docs.puppet.com/mcollective/deploy/middleware/activemq.html#stomp
[root@hadoop-master2 ssl]# mv keystore.jks truststore.jks /opt/puppetlabs/apache-activemq-5.13.2/conf
[root@hadoop-master2 ssl]# cd /opt/puppetlabs/apache-activemq-5.13.2/conf/
# fill in the passwords set in the previous steps
[root@hadoop-master2 conf]# vi activemq.xml 
...
<sslContext>
  <sslContext keyStore="keystore.jks" keyStorePassword="XXXX"
              trustStore="truststore.jks" trustStorePassword="XXXX" />
</sslContext>

<transportConnectors>
  <!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
  <transportConnector name="stomp+nio+ssl" uri="stomp+nio+ssl://0.0.0.0:61614?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600&amp;needClientAuth=true&amp;transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>
</transportConnectors>

[root@hadoop-master2 apache-activemq-5.13.2]# chmod 600 conf/activemq.xml 
[root@hadoop-master2 apache-activemq-5.13.2]# bin/activemq stop
[root@hadoop-master2 apache-activemq-5.13.2]# bin/activemq start
# check the logs
[root@hadoop-master2 apache-activemq-5.13.2]# less data/activemq.log 


# 2 puppetserver(mcollective client)
# https://docs.puppet.com/mcollective/configure/client.html
[root@hadoop-master2 ~]# cd /etc/puppetlabs/mcollective/
[root@hadoop-master2 mcollective]# cat client.cfg
...
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = hadoop-master2.example.com
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = system
plugin.activemq.pool.1.password = manager
plugin.activemq.pool.1.ssl = true
plugin.activemq.pool.1.ssl.ca = /etc/puppetlabs/puppet/ssl/certs/ca.pem
plugin.activemq.pool.1.ssl.key = /etc/puppetlabs/puppet/ssl/private_keys/hadoop-master2.example.com.pem
plugin.activemq.pool.1.ssl.cert = /etc/puppetlabs/puppet/ssl/certs/hadoop-master2.example.com.pem
...
[root@hadoop-master2 mcollective]# mco ping -v


---- ping statistics ----
No responses received

# 3 puppet agents(mcollective servers)
# https://docs.puppet.com/mcollective/configure/server.html
-bash-4.1# puppet agent --configprint confdir
/etc/puppetlabs/puppet
-bash-4.1# puppet agent --configprint ssldir
/etc/puppetlabs/puppet/ssl
-bash-4.1# puppet agent --configprint hostprivkey
/etc/puppetlabs/puppet/ssl/private_keys/hadoop-master1.example.com.pem
-bash-4.1# puppet agent --configprint hostcert
/etc/puppetlabs/puppet/ssl/certs/hadoop-master1.example.com.pem
-bash-4.1# puppet agent --configprint localcacert
/etc/puppetlabs/puppet/ssl/certs/ca.pem

-bash-4.1# cd /etc/puppetlabs/mcollective/
-bash-4.1# cat server.cfg 
...
connector = activemq
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = hadoop-master2.example.com
plugin.activemq.pool.1.port = 61614
plugin.activemq.pool.1.user = system
plugin.activemq.pool.1.password = manager
plugin.activemq.pool.1.ssl = true
plugin.activemq.pool.1.ssl.ca = /etc/puppetlabs/puppet/ssl/certs/ca.pem
plugin.activemq.pool.1.ssl.key = /etc/puppetlabs/puppet/ssl/private_keys/hadoop-master1.example.com.pem
plugin.activemq.pool.1.ssl.cert = /etc/puppetlabs/puppet/ssl/certs/hadoop-master1.example.com.pem
...
-bash-4.1# service mcollective restart
Shutting down mcollective: 
Starting mcollective:                                      [  OK  ]

# same steps on the other two hosts

# test
[root@hadoop-master2 mcollective]# mco ping -v
hadoop-master1                           time=41.99 ms
hadoop-slaver2                           time=84.87 ms
hadoop-slaver1                           time=85.46 ms


---- ping statistics ----
3 replies max: 85.46 min: 41.99 avg: 70.77 

For more activemq settings, see the official docs: ActiveMQ Config Reference for MCollective Users example activemq.xml

SSL Security plugin

Stomp with TLS (transport layer security) encrypts the data on the wire. The security plugin, by contrast, mainly provides:

  • An mcollective server executes a client's request only after the client is authorized.
  • A token that uniquely identifies the client, based on the filename of its public key.
  • A creation timestamp and TTL added to each request to protect its integrity (against interception, tampering and replay).
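The sign-and-verify idea behind the plugin can be reproduced with plain openssl; a rough sketch only (the filenames are made up for the demo, and this is not mcollective's actual wire format, which serializes and signs requests internally):

```shell
#!/bin/sh
# Sketch: an RSA key pair lets a server verify that a request really
# came from a known client whose public key it holds in ssl/clients/.
set -e
tmp=$(mktemp -d)

# client key pair (what "openssl genrsa" / "openssl rsa -pubout" produce)
openssl genrsa -out "$tmp/winse-private.pem" 2048 2>/dev/null
openssl rsa -in "$tmp/winse-private.pem" -pubout -out "$tmp/winse-public.pem" 2>/dev/null

printf 'service restart puppet' > "$tmp/request"
# the client signs the request with its private key
openssl dgst -sha256 -sign "$tmp/winse-private.pem" -out "$tmp/request.sig" "$tmp/request"
# a server holding only winse-public.pem can verify it; prints "Verified OK"
openssl dgst -sha256 -verify "$tmp/winse-public.pem" -signature "$tmp/request.sig" "$tmp/request"

rm -rf "$tmp"
```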

Reference:

# 1 generate the server key pair (public and private keys)
[root@hadoop-master2 mcollective-security]# openssl genrsa -out server-private.pem 1024
...
[root@hadoop-master2 mcollective-security]# openssl rsa -in server-private.pem -out server-public.pem -outform PEM -pubout  
writing RSA key
[root@hadoop-master2 mcollective-security]# ll
total 12
-rw-r--r-- 1 root root 7915 Apr 29 00:06 server-private.pem
-rw-r--r-- 1 root root 1836 Apr 29 00:07 server-public.pem

# copy the private/public keys to every mcollective-server node
# copy the public key to the mcollective-client nodes
[root@hadoop-master2 mcollective-security]# ssh 172.17.0.2 mkdir -p /etc/puppetlabs/mcollective/ssl/clients
[root@hadoop-master2 mcollective-security]# scp * 172.17.0.2:/etc/puppetlabs/mcollective/ssl/
server-private.pem   100% 7915     7.7KB/s   00:00    
server-public.pem    100% 1836     1.8KB/s   00:00    

[root@hadoop-master2 mcollective-security]# mkdir -p /etc/puppetlabs/mcollective/ssl
[root@hadoop-master2 mcollective-security]# cp server-public.pem /etc/puppetlabs/mcollective/ssl/

# 2 configure the mcollective-servers. Do not sync this config between nodes: the certificate names in the TLS section differ per host!!
-bash-4.1# vi /etc/puppetlabs/mcollective/server.cfg 
...
# Plugins
#securityprovider = psk
#plugin.psk = unset

securityprovider = ssl
plugin.ssl_server_private = /etc/puppetlabs/mcollective/ssl/server-private.pem
plugin.ssl_server_public = /etc/puppetlabs/mcollective/ssl/server-public.pem
plugin.ssl_client_cert_dir = /etc/puppetlabs/mcollective/ssl/clients/
plugin.ssl.enforce_ttl = 0
...

-bash-4.1# service mcollective restart
Shutting down mcollective:                                 [  OK  ]
Starting mcollective:                                      [  OK  ]
# detailed logs are in /var/log/puppetlabs/mcollective.log

# after configuring just this one node, mco ping no longer lists hadoop-master1!!

# 3 generate the client key pair
[root@hadoop-master2 mcollective-security]# cd /etc/puppetlabs/mcollective/ssl
[root@hadoop-master2 ssl]# ll
total 8
drwxr-xr-x 2 root root 4096 Apr 29 00:15 clients
-rw-r--r-- 1 root root 1836 Apr 29 00:15 server-public.pem
[root@hadoop-master2 ssl]# openssl genrsa -out winse-private.pem 1024    
...
[root@hadoop-master2 ssl]# openssl rsa -in winse-private.pem -out winse-public.pem -outform PEM -pubout
writing RSA key
[root@hadoop-master2 ssl]# ll
total 16
drwxr-xr-x 2 root root 4096 Apr 29 00:15 clients
-rw-r--r-- 1 root root 1836 Apr 29 00:15 server-public.pem
-rw-r--r-- 1 root root  887 Apr 29 00:26 winse-private.pem
-rw-r--r-- 1 root root  272 Apr 29 00:26 winse-public.pem

# copy the client user's public key into ssl/clients on every mcollective-server
[root@hadoop-master2 ssl]# scp winse-public.pem 172.17.0.2:/etc/puppetlabs/mcollective/ssl/clients
winse-public.pem 100%  272     0.3KB/s   00:00    

# 4 configure the clients
[root@hadoop-master2 ~]# vi /etc/puppetlabs/mcollective/client.cfg 
...
# Plugins
#connector=activemq
#direct_addressing=1   (controls point-to-point messaging; enabled here)

#securityprovider = psk
#plugin.psk = unset
securityprovider = ssl
plugin.ssl_server_public = /etc/puppetlabs/mcollective/ssl/server-public.pem
plugin.ssl_client_private = /etc/puppetlabs/mcollective/ssl/winse-private.pem
plugin.ssl_client_public = /etc/puppetlabs/mcollective/ssl/winse-public.pem
...

# the mcollective-servers need no restart! test the client connection
[root@hadoop-master2 ssl]# mco ping -v
hadoop-master1                           time=561.29 ms
hadoop-slaver2                           time=601.91 ms
hadoop-slaver1                           time=608.31 ms


---- ping statistics ----
3 replies max: 608.31 min: 561.29 avg: 590.50 

Once you understand what each part does, configuring it methodically really isn't that hard. When something goes wrong, check the logs first!!

Best Practices

The official docs recommend a site-management tool such as puppet to install and manage everything uniformly. Below, mcollective itself is configured with puppet:

TODO

–END

[Notes] Getting Started with Hadoop

1. Environment Preparation

A workman who wants to do a good job must first sharpen his tools. Don't skimp on the hardware; find an environment that suits you!

2. Installing and Deploying hadoop/spark

Building and Installing

Feature Tuning

Maintenance

Installing Older Versions

3. Going Deeper

Understanding the Configuration

Troubleshooting

Reading the Source

Miscellaneous

4. The Hadoop Platform

5. Monitoring and Automated Deployment

Monitoring

Automation

–END