https://github.com/ist0ne/salt-states
This article uses a small e-commerce site (www.mall.com) as an example of applying Saltstack in a real-world scenario: managing the site's various services with Salt, automatically monitoring applications based on their roles, and deploying code with a system built on Saltstack runners.
The Salt state modules were tested on CentOS 6.5 with Saltstack 2014.1.4.
Site architecture overview
Network architecture
- HAProxy provides load balancing in an active/standby pair; when the active server goes down, the standby automatically takes over and keeps serving traffic.
- The web front end uses Nginx + PHP to serve dynamic pages; all front-end servers mount shared storage over NFS, product images are uploaded to that storage, and image requests are accelerated through a Varnish cache.
- memcached serves as a caching layer to speed up access and take load off the database; Redis provides the queue service.
- MySQL is the persistence layer, running in master/slave mode; splitting reads from writes improves access speed.
- Salt handles configuration management for the whole system; Zabbix handles monitoring; all servers are logged into through a jump host.
- Code and configuration are managed together in SVN.
Note: the network diagram above does not reflect the actual number of servers; see the role assignment section for the concrete hosts.
System architecture
- Unified management: the whole system is configured through Salt, and all configuration files are stored in SVN, so version control makes it easy to roll back to the last working revision when something breaks.
- Code deployment: a command-line deployment tool checks the code out of SVN and pushes it to the web front end, keeping deployments simple.
- Application architecture: a classic three-tier design of code execution, caching, and persistence. The caching layer keeps user data in memory so the database is not queried on every request; the database runs as master/slave with reads split from writes, reducing the load on the master and speeding up access for users.
- Static/dynamic separation: images, CSS, and JS are served separately from the dynamic code and accelerated through Varnish, improving the user experience.
- Zabbix monitoring: a role-based auto-monitoring mechanism lets Zabbix track system and application state automatically.
Role assignment (hostnames and IP addresses)
Note: every server has two NICs, eth0 on the internal network and eth1 on the external network. All hosts run 64-bit CentOS 6.5.
Load balancers (ha)
Two servers act as load balancers in an active/standby pair; when the active server goes down, the standby takes over automatically.
- ha1.grid.mall.com 60.60.60.11 172.16.100.11
- ha2.grid.mall.com 60.60.60.12 172.16.100.12
Web front end (web)
Three servers act as web front ends; they provide the web service and are reached through the load balancer.
- web1.grid.mall.com 60.60.60.21 172.16.100.21
- web2.grid.mall.com 60.60.60.22 172.16.100.22
- web3.grid.mall.com 60.60.60.23 172.16.100.23
Image cache (cache)
Two servers cache the product images; they are reached through the load balancer.
- cache1.grid.mall.com 60.60.60.31 172.16.100.31
- cache2.grid.mall.com 60.60.60.32 172.16.100.32
Cache and queue services (mc)
Two servers provide the caching and queue services.
- mc1.grid.mall.com 60.60.60.41 172.16.100.41
- mc2.grid.mall.com 60.60.60.42 172.16.100.42
Database (db)
Two servers provide the database service; their data is kept in sync through master/slave replication.
- db1.grid.mall.com 60.60.60.51 172.16.100.51
- db2.grid.mall.com 60.60.60.52 172.16.100.52
Search (search)
Two servers provide the search service.
- search1.grid.mall.com 60.60.60.61 172.16.100.61
- search2.grid.mall.com 60.60.60.62 172.16.100.62
Shared storage (storage)
One server provides the storage service.
- storage1.grid.mall.com 60.60.60.71 172.16.100.71
Admin host (admin)
One admin host runs the management services: Salt master, zabbix, svn, and so on.
- admin.grid.mall.com 60.60.60.81 172.16.100.81
Installing Saltstack
Saltstack source: https://github.com/saltstack/salt; the latest release at the time of writing is v2014.1.4. You have to build the RPM yourself; the spec file is at https://github.com/saltstack/salt/blob/develop/pkg/rpm/salt.spec. Recent Salt releases also depend on python-libcloud, so build that package ahead of time as well. If you are installing Salt on CentOS 5.x, upgrade zeromq to 3.x first. Put all the built RPMs into a yum repository and start installing.
Installing the Salt master
Note: before installing, make sure the hostname has been set as described in the role assignment section.
Install salt-master:
# yum install -y salt-master
Edit the configuration file /etc/salt/master so that salt listens on the internal interface:
interface: 172.16.100.81
Start the Salt master:
# /etc/init.d/salt-master start
Check that it started; port 4505 should be in the LISTEN state:
# netstat -tunlp |grep 4505
Installing the Salt minion
Note: before installing, make sure the hostname has been set as described in the role assignment section.
Install salt-minion:
# yum install -y salt-minion
Edit the configuration file /etc/salt/minion so that it connects to the master:
master: 172.16.100.81
Start the Salt minion:
# /etc/init.d/salt-minion start
Check that it started. The minion itself does not listen on a port; instead it opens a connection to the master's port 4505, which you can see with:
# netstat -tunp |grep 4505
Accepting the minion key on the Salt master
Check whether the minion has submitted its key to the master:
# salt-key -L
Accepted Keys:
Unaccepted Keys:
admin.grid.mall.com
Accept the minion's key:
# salt-key -a admin.grid.mall.com
Deploying the base services
Managing the base services covers the layout of the configuration management system itself, user accounts, yum configuration, the hosts file, time synchronization, and DNS configuration.
Configuration management layout
The configuration management system is modular: each service is a module, and modules are grouped into roles (/srv/salt/roles/). All modules live under /srv/salt, with /srv/salt/top.sls as the entry point. The variables used by the modules live under /srv/pillar, with /srv/pillar/top.sls as the entry point. Variables are split into three levels by scope: module level (/srv/pillar/<module>), role level (/srv/pillar/roles/), and node level (/srv/pillar/nodes). The individual files are not all reproduced here; see the salt configuration in the repository.
The entry point /srv/salt/top.sls simply references the roles (a sketch of one such role file follows the listing):
base:
  '*':
    - roles.common
  'admin.grid.mall.com':
    - roles.admin
  'ha*.grid.mall.com':
    - roles.ha
  'web*.grid.mall.com':
    - roles.web
  'cache*.grid.mall.com':
    - roles.cache
  'mc*.grid.mall.com':
    - roles.mc
  'db*.grid.mall.com':
    - roles.db
  'search*.grid.mall.com':
    - roles.search
  'storage*.grid.mall.com':
    - roles.storage
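The roles referenced here are themselves nothing more than include lists over the modules described in the rest of this article. For orientation, a minimal sketch of what /srv/salt/roles/common.sls could look like is shown below; the exact include list is an assumption, not a copy from the repository.
# /srv/salt/roles/common.sls (sketch; the real include list may differ)
include:
  - base.repo       # /srv/salt/base/repo.sls, yum repository
  - base.hosts      # /srv/salt/base/hosts.sls, /etc/hosts entries
  - base.crons      # /srv/salt/base/crons.sls, ntpdate cron job
  - base.resolv     # /srv/salt/base/resolv.sls, /etc/resolv.conf
  - users.user      # /srv/salt/users/user.sls, accounts and ssh keys
  - salt.minion     # roles file and mine update
  - zabbix.agent    # monitoring agent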
The pillar entry point /srv/pillar/top.sls:
base:
  '*':
    - roles.common
  # reference role-level variables;
  # module-level variables are pulled in from the role-level files
  'admin.grid.mall.com':
    - roles.admin
  'ha*.grid.mall.com':
    - roles.ha
  'web*.grid.mall.com':
    - roles.web
  'cache*.grid.mall.com':
    - roles.cache
  'mc*.grid.mall.com':
    - roles.mc
  'db*.grid.mall.com':
    - roles.db
  'search*.grid.mall.com':
    - roles.search
  'storage*.grid.mall.com':
    - roles.storage
  # reference node-level variables
  'ha1.grid.mall.com':
    - nodes.ha1
  'ha2.grid.mall.com':
    - nodes.ha2
  'mc1.grid.mall.com':
    - nodes.mc1
  'mc2.grid.mall.com':
    - nodes.mc2
  'db1.grid.mall.com':
    - nodes.db1
  'db2.grid.mall.com':
    - nodes.db2
User account management
User management module: /srv/salt/users
This module relies on pillar. Both pillar and grains can supply variables, but grains lean toward information about the client itself (hardware architecture, CPU count, OS version, and so on), much like puppet's facter, while pillar holds user-defined variables; passing data in through pillar is what makes the Salt state modules easy to reuse, much like puppet's hiera. After changing pillar definitions, run salt '*' saltutil.refresh_pillar to make them take effect. Use salt 'admin.grid.mall.com' pillar.item users to read the users variable:
# salt 'admin.grid.mall.com' pillar.item users
admin.grid.mall.com:
----------
users:
----------
dongliang:
----------
fullname:
Shi Dongliang
gid:
1000
group:
dongliang
password:
$6$BZpX5dWZ$./TKqv8ZL3eLNAAmuiGWeT0SvwvpPtk5Nhgf8.xeyFd5XVMJ0QRh8HmiJOpJi7qPCo.mfXIIrbQSGdAJVmZxW.
shell:
/bin/bash
ssh_auth:
----------
comment:
dongliang@leju.com
key:
AAAAB3NzaC1yc2EAAAABIwAAAQEAmCqNHfK6VACeXsAnRfzq3AiSN+U561pSF8qoLOh5Ez38UqtsFLBaFdC/pTTxGQBYhwO2KkgWL9TtWOEp+LxYLskXUeG24pIe8y8r+edHC8fhmHGXWXQVmZwRERl+ygTdFt3ojhDu1FYA0WmKU07KgAqUrvJW1zwJsa/DaXExfwSzALAgm2jwx68hP9CO1msTAhtElUJWeLTlQTZr0ZGWvmlKzcwqxDX58HpA69qgccaOzO5n5qsQYXx8JmnCV18XW9bkxMvn5q8Y9o/to+BQ1440hKcsm9rNpJlIrnQaIbMZs/Sy2QnT+bVx9JyucDvaVJmsfJ+qZlfnhdRkm6eosw==
sudo:
True
uid:
1000
To list all pillar variables defined for admin.grid.mall.com:
# salt 'admin.grid.mall.com' pillar.items
Adding users:
/srv/salt/users/user.sls manages the users:
include:
- users.sudo
{% for user, args in pillar['users'].iteritems() %}
{{user}}:
group.present:
- gid: {{args['gid']}}
user.present:
- home: /home/{{user}}
- shell: {{args['shell']}}
- uid: {{args['uid']}}
- gid: {{args['gid']}}
- fullname: {{args['fullname']}}
{% if 'password' in args %}
- password: {{args['password']}}
{% endif %}
- require:
- group: {{user}}
{% if 'sudo' in args %}
{% if args['sudo'] %}
sudoer-{{user}}:
file.append:
- name: /etc/sudoers
- text:
- '{{user}} ALL=(ALL) NOPASSWD: ALL'
- require:
- file: sudoers
- user: {{user}}
{% endif %}
{% endif %}
{% if 'ssh_auth' in args %}
/home/{{user}}/.ssh:
file.directory:
- user: {{user}}
- group: {{args['group']}}
- mode: 700
- require:
- user: {{user}}
/home/{{user}}/.ssh/authorized_keys:
file.managed:
- user: {{user}}
- group: {{args['group']}}
- mode: 600
- require:
- file: /home/{{user}}/.ssh
{{ args['ssh_auth']['key'] }}:
ssh_auth.present:
- user: {{user}}
- comment: {{args['ssh_auth']['comment']}}
- require:
- file: /home/{{user}}/.ssh/authorized_keys
{% endif %}
{% endfor %}
users/sudo.sls manages the /etc/sudoers file that the per-user sudo entries above are appended to:
sudoers:
file.managed:
- name: /etc/sudoers
/srv/salt/users/user.sls reads the users variable from /srv/pillar/users/init.sls:
users:
  dongliang:                # user name
    group: dongliang        # primary group
    uid: 1000               # uid
    gid: 1000               # gid
    fullname: Shi Dongliang
    password: $6$BZpX5dWZ$./TKqv8ZL3eLNAAmuiGWeT0SvwvpPtk5Nhgf8.xeyFd5XVMJ0QRh8HmiJOpJi7qPCo.mfXIIrbQSGdAJVmZxW.  # password, note it must be the hashed value
    shell: /bin/bash        # login shell
    sudo: true              # whether to grant sudo
    ssh_auth:               # key-based login, optional
      key: AAAAB3NzaC1yc2EAAAABIwAAAQEAmCqNHfK6VACeXsAnRfzq3AiSN+U561pSF8qoLOh5Ez38UqtsFLBaFdC/pTTxGQBYhwO2KkgWL9TtWOEp+LxYLskXUeG24pIe8y8r+edHC8fhmHGXWXQVmZwRERl+ygTdFt3ojhDu1FYA0WmKU07KgAqUrvJW1zwJsa/DaXExfwSzALAgm2jwx68hP9CO1msTAhtElUJWeLTlQTZr0ZGWvmlKzcwqxDX58HpA69qgccaOzO5n5qsQYXx8JmnCV18XW9bkxMvn5q8Y9o/to+BQ1440hKcsm9rNpJlIrnQaIbMZs/Sy2QnT+bVx9JyucDvaVJmsfJ+qZlfnhdRkm6eosw==
      comment: dongliang@mall.com
Run the following commands on the salt master to make the configuration take effect:
# salt '*' saltutil.refresh_pillar
# salt '*' state.highstate
Yum configuration management
Yum configuration is managed by /srv/salt/base/repo.sls.
Repo file: /srv/salt/base/files/mall.repo # this file is pushed to the clients over the salt protocol
/srv/salt/base/repo.sls manages the mall.repo file and runs yum clean all to flush the cache whenever the file changes, so the new configuration takes effect:
/etc/yum.repos.d/mall.repo:
file.managed:
- source: salt://base/files/mall.repo
- user: root
- group: root
- mode: 644
- order: 1
cmd.wait:
- name: yum clean all
- watch:
- file: /etc/yum.repos.d/mall.repo
Hosts file management
The hosts file is managed by /srv/salt/base/hosts.sls.
/srv/salt/base/hosts.sls defines the hostname-to-IP mapping for each host, for example:
admin.grid.mall.com:
host.present:
- ip: 172.16.100.81
- order: 1
- names:
- admin.grid.mall.com
Time synchronization
Time synchronization runs as a cron job, defined in /srv/salt/base/crons.sls:
# pull in the ntp module
include:
- ntp
'/usr/sbin/ntpdate 1.cn.pool.ntp.org 1.asia.pool.ntp.org':
cron.present:
- user: root
- minute: 0
- hour: 2
The ntp module, ntp/init.sls:
# install the ntpdate package
ntpdate:
pkg.installed:
- name: ntpdate
DNS configuration management
The DNS servers are configured in resolv.sls, which manages the /etc/resolv.conf file:
/etc/resolv.conf:
file.managed:
- source: salt://base/files/resolv.conf
- user: root
- group: root
- mode: 644
- template: jinja
Service deployment
This section uses the web servers as an example of deploying a service with Salt. The services are organized into roles, and the roles are applied to the nodes; splitting things up by role keeps the different service modules clearly organized. All role definitions live under /srv/salt/roles, and the variables they use live under /srv/pillar/roles and /srv/pillar/nodes, with /srv/pillar/nodes holding the variables tied to specific nodes.
Roles and configuration files
/srv/salt/roles/web.sls is shown below; it pulls in the nginx, rsync, and limits modules plus nfs.client:
include:
- nginx
- rsync
- limits
- nfs.client
The role-level variables in /srv/pillar/roles/web.sls are shown below; nothing in this role needs to be set per node. (A sketch of how nfs.client can consume the mounts data follows the listing.)
hostgroup: web                # zabbix host group; see the automated monitoring section below
vhostsdir: /data1/vhosts      # code deployment directory
vhostscachedir: /data1/cache  # application cache/temp directory
logdir: /data1/logs           # nginx log directory
vhosts:                       # virtual host names, used to create the matching deployment directories
  - www.mall.com
  - static.mall.com
limit_users:                  # per-user open file limits
  nginx:
    limit_hard: 65535
    limit_soft: 65535
    limit_type: nofile
mounts:                       # NFS shared storage mounts
  /data1/vhosts/static.mall.com/htdocs:
    device: 172.16.100.71:/data1/share
    fstype: nfs
    mkmnt: True
    opts: async,noatime,noexec,nosuid,soft,timeo=3,retrans=3,intr,retry=3,rsize=16384,wsize=16384
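The mounts dictionary is consumed by the nfs.client module pulled in by roles/web.sls, and limit_users is consumed by the limits module in the same way. The repository's own nfs.client state is not reproduced in this article; a minimal sketch of one way to write it, iterating over the pillar data with mount.mounted, might be:
# /srv/salt/nfs/client.sls (sketch; assumes the mounts pillar layout shown above)
nfs-utils:
  pkg.installed:
    - name: nfs-utils
{% for point, args in salt['pillar.get']('mounts', {}).iteritems() %}
{{point}}:
  mount.mounted:
    - device: {{args['device']}}
    - fstype: {{args['fstype']}}
    - mkmnt: {{args['mkmnt']}}
    - opts: {{args['opts']}}
    - require:
      - pkg: nfs-utils
{% endfor %}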
Nginx + PHP configuration
Managed by the /srv/salt/nginx/ module.
Nginx configuration files: /srv/salt/nginx/files/etc/nginx/, which holds the main configuration, the virtual host configurations, and the environment configuration.
PHP configuration files: the main file /srv/salt/nginx/files/etc/php.ini and the extension configuration under /srv/salt/nginx/files/etc/php.d/.
php-fpm configuration files: the main file /srv/salt/nginx/files/etc/php-fpm.conf and the remaining configuration under /srv/salt/nginx/files/etc/php-fpm.d/.
Role variables: /srv/pillar/roles/web.sls
Details
/srv/salt/nginx/init.sls ties the whole nginx module together:
include:
  - nginx.server     # nginx package, configuration and service
  - nginx.php        # php / php-fpm
  - nginx.monitor    # monitoring hooks
/srv/salt/nginx/server.sls configures the nginx service:
include:
- zabbix.agent
- salt.minion
nginx:
pkg:
- name: nginx
- installed
service:
- name: nginx
- running
- require:
- pkg: nginx
- watch:
- pkg: nginx
- file: /etc/nginx/nginx.conf
- file: /etc/nginx/conf.d/
/etc/nginx/nginx.conf:
file.managed:
- source: salt://nginx/files/etc/nginx/nginx.conf
- template: jinja
- user: root
- group: root
- mode: 644
- backup: minion
/etc/nginx/conf.d/:
file.recurse:
- source: salt://nginx/files/etc/nginx/conf.d/
- template: jinja
- user: root
- group: root
- dir_mode: 755
- file_mode: 644
{% set logdir = salt['pillar.get']('logdir', '/var/log/nginx') %}
{{logdir}}:
cmd.run:
- name: mkdir -p {{logdir}}
- unless: test -d {{logdir}}
- require:
- pkg: nginx
{% if salt['pillar.get']('vhosts', false) %}
{% set dir = salt['pillar.get']('vhostsdir', '/var/www/html') %}
{% set cachedir = salt['pillar.get']('vhostscachedir', '/var/www/cache') %}
{% for vhost in pillar['vhosts'] %}
{{dir}}/{{vhost}}/htdocs:
cmd.run:
- name: mkdir -p {{dir}}/{{vhost}}/htdocs && chown -R nobody.nobody {{dir}}/{{vhost}}/htdocs
- unless: test -d {{dir}}/{{vhost}}/htdocs
- require:
- pkg: nginx
{{cachedir}}/{{vhost}}:
cmd.run:
- name: mkdir -p {{cachedir}}/{{vhost}} && chown -R nginx.nginx {{cachedir}}/{{vhost}}
- unless: test -d {{cachedir}}/{{vhost}}
- require:
- pkg: nginx
{% endfor %}
{% endif %}
nginx-role:
file.append:
- name: /etc/salt/roles
- text:
- 'nginx'
- require:
- file: roles
- service: nginx
- service: salt-minion
- watch_in:
- module: sync_grains
This manages everything nginx-related: it installs the nginx package, puts the configuration files in place, starts the service, and creates the log, code deployment, and code cache directories. It also records the service role, which later drives monitoring of the service (see the automated monitoring section).
/srv/salt/nginx/php.sls configures PHP: it installs the PHP packages, manages the related configuration files, starts the php-fpm service, and records the service role:
php-fpm:
pkg:
- name: php-fpm
- pkgs:
- php-fpm
- php-common
- php-cli
- php-devel
- php-pecl-memcache
- php-pecl-memcached
- php-gd
- php-pear
- php-mbstring
- php-mysql
- php-xml
- php-bcmath
- php-pdo
- installed
service:
- running
- require:
- pkg: php-fpm
- watch:
- pkg: php-fpm
- file: /etc/php.ini
- file: /etc/php.d/
- file: /etc/php-fpm.conf
- file: /etc/php-fpm.d/
/etc/php.ini:
file.managed:
- source: salt://nginx/files/etc/php.ini
- user: root
- group: root
- mode: 644
/etc/php.d/:
file.recurse:
- source: salt://nginx/files/etc/php.d/
- user: root
- group: root
- dir_mode: 755
- file_mode: 644
/etc/php-fpm.conf:
file.managed:
- source: salt://nginx/files/etc/php-fpm.conf
- user: root
- group: root
- mode: 644
/etc/php-fpm.d/:
file.recurse:
- source: salt://nginx/files/etc/php-fpm.d/
- user: root
- group: root
- dir_mode: 755
- file_mode: 644
php-fpm-role: # record the php-fpm role
file.append:
- name: /etc/salt/roles
- text:
- 'php-fpm'
- require:
- file: roles
- service: php-fpm
- service: salt-minion
- watch_in:
- module: sync_grains
/srv/salt/nginx/monitor.sls configures monitoring for the service:
include:
  - zabbix.agent
  - nginx
nginx-monitor:
  pkg.installed:                # package required by the monitoring script
    - name: perl-libwww-perl
php-fpm-monitor-script:         # manage the monitoring script; its directory is created if it does not exist
  file.managed:
    - name: /etc/zabbix/ExternalScripts/php-fpm_status.pl
    - source: salt://nginx/files/etc/zabbix/ExternalScripts/php-fpm_status.pl
    - user: root
    - group: root
    - mode: 755
    - require:
      - service: php-fpm
      - pkg: nginx-monitor
      - cmd: php-fpm-monitor-script
  cmd.run:
    - name: mkdir -p /etc/zabbix/ExternalScripts
    - unless: test -d /etc/zabbix/ExternalScripts
php-fpm-monitor-config:         # zabbix agent drop-in configuration
  file.managed:
    - name: /etc/zabbix/zabbix_agentd.conf.d/php_fpm.conf
    - source: salt://nginx/files/etc/zabbix/zabbix_agentd.conf.d/php_fpm.conf
    - require:
      - file: php-fpm-monitor-script
      - service: php-fpm
    - watch_in:
      - service: zabbix-agent
nginx-monitor-config:           # zabbix agent drop-in configuration
  file.managed:
    - name: /etc/zabbix/zabbix_agentd.conf.d/nginx.conf
    - source: salt://nginx/files/etc/zabbix/zabbix_agentd.conf.d/nginx.conf
    - template: jinja
    - require:
      - service: nginx
    - watch_in:
      - service: zabbix-agent
Deployment of the other roles is similar to web and is not listed here one by one.
Building the code deployment system
The deployment system is built on Salt runners. A runner is a command-line tool executed with the salt-run command, and it is easy to build one on top of the Salt client API. Runners look a lot like Salt execution modules, but they run on the Salt master rather than on the minions.
Configuring the Salt master
The configuration file (/etc/salt/master.d/publish.conf) looks like this:
svn:
  username: 'publish'            # svn account used to check the code out
  password: '#1qaz@WSX#ht'       # svn password
publish:
  master: 'admin.grid.mall.com'  # salt master hostname
  cwd: '/data1/vhosts'           # checkout directory
projects:
  www.mall.com:                                  # project name
    remote: 'svn://172.16.100.81/www.mall.com'   # svn location
    target:                                      # deployment targets, ip::rsync module
      - '172.16.100.21::www_mall_com'
      - '172.16.100.22::www_mall_com'
      - '172.16.100.23::www_mall_com'
You also need to configure where runners live, runner_dirs: [/srv/salt/_runners], and restart the Salt master once the configuration is done.
Deploying the rsync service on the web front ends
The rsync service is managed by the /srv/salt/rsync module; the rsync configuration file (/etc/rsyncd.conf) is shown below, followed by a sketch of a state that could manage it:
# File Managed by Salt
uid = nobody
gid = nobody
use chroot = yes
max connections = 150
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log
transfer logging = yes
log format = %t %a %m %f %b
syslog facility = local3
timeout = 300
incoming chmod = Du=rwx,Dog=rx,Fu=rw,Fgo=r
hosts allow=172.16.100.0/24
[www_mall_com]
path=/data1/vhosts/www.mall.com/htdocs/
read only=no
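The /srv/salt/rsync module itself is not listed in this article. The sketch below shows roughly what it has to provide: the rsync package, the configuration file above, and a running rsync daemon. Starting the daemon with cmd.run is an assumption made for the sketch, and the repository's actual file.managed for /etc/rsyncd.conf, which selects the source file through a pillar variable, is shown later in the module extension section.
# /srv/salt/rsync/init.sls (sketch; how the daemon is started here is an assumption)
rsync:
  pkg.installed:
    - name: rsync
/etc/rsyncd.conf:
  file.managed:
    - source: salt://rsync/files/etc/rsyncd.conf
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: rsync
rsync-daemon:
  cmd.run:
    - name: rsync --daemon
    - unless: test -f /var/run/rsyncd.pid
    - require:
      - file: /etc/rsyncd.conf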
Writing the runner script
On the Salt master the deployment system checks the code out of SVN and then pushes it to the web front ends with rsync. The runner script (/srv/salt/_runners/publish.py) is as follows:
# -*- coding: utf-8 -*-
'''
Functions to publish code on the master
'''
# Import salt libs
import salt.client
import salt.output
def push(project, output=True):
'''
publish code to web server.
CLI Example:
.. code-block:: bash
salt-run publish.push project
'''
client = salt.client.LocalClient(__opts__['conf_file'])
ret = client.cmd(__opts__['publish']['master'],
'svn.checkout',
[
__opts__['publish']['cwd'],
__opts__['projects'][project]['remote']
],
kwarg={
'target':project,
'username':__opts__['svn']['username'],
'password':__opts__['svn']['password']
}
)
if ret:
msg = 'URL: %s\n%s' %(__opts__['projects'][project]['remote'], ret[__opts__['publish']['master']])
ret = {'Check out code': msg}
else:
ret = {'Check out code': 'Timeout, try again.'}
if output:
salt.output.display_output(ret, '', __opts__)
for target in __opts__['projects'][project]['target']:
cmd = '/usr/bin/rsync -avz --exclude=".svn" %s/%s/trunk/* %s/' %(__opts__['publish']['cwd'], project, target)
ret[target] = client.cmd(__opts__['publish']['master'],
'cmd.run',
[
cmd,
],
)
title = '\nSending file to %s' %target.split(':')[0]
ret = {title: ret[target][__opts__['publish']['master']]}
if output:
salt.output.display_output(ret, '', __opts__)
return ret
Note that a project (svn://172.16.100.81/www.mall.com) normally has the three standard SVN subdirectories trunk, branches and tags; the script above only deploys the code under trunk to the web front ends.
Deploying code
# salt-run publish.push www.mall.com
Here publish is the name of the runner script above, push is the function defined in it, and www.mall.com is the project name defined in the salt master configuration.
References:
Salt Runners
Python client API
Automated monitoring
This section draws on 綠肥's article《記saltstack和zabbix的一次聯姻》(on marrying saltstack and zabbix); the script that adds monitors to Zabbix (add_monitors.py) is a partially modified version of the one described there, and it is a higher-level wrapper around the zapi written by @超大杯摩卡星冰樂, to whom thanks are due.
The automated monitoring flow is:
1. Deploy the Zabbix server, Zabbix web, and Zabbix api with Saltstack;
2. After the installation, import the Zabbix monitoring templates by hand;
3. Deploy each service together with the Zabbix agent through Saltstack;
4. Once a service is installed, Saltstack reports its role to the Salt master through Salt Mine;
5. The Zabbix api picks up the roles and adds the corresponding monitoring to the Zabbix server.
Salt Mine stores information from the minions on the Salt master so that other minions can use it.
The rest of this section walks through monitoring of the nginx module as an example. The Zabbix services (Zabbix server, Zabbix web, Zabbix api) are installed and managed through /srv/salt/zabbix and deployed on admin.grid.mall.com; the Zabbix agent is managed by the same /srv/salt/zabbix module, and nginx by the /srv/salt/nginx module.
After nginx and php are installed, the corresponding roles are recorded:
nginx-role:
file.append:
- name: /etc/salt/roles
- text:
- 'nginx'
- require:
- file: roles
- service: nginx
- service: salt-minion
- watch_in:
- module: sync_grains
php-fpm-role: # record the php-fpm role
file.append:
- name: /etc/salt/roles
- text:
- 'php-fpm'
- require:
- file: roles
- service: php-fpm
- service: salt-minion
- watch_in:
- module: sync_grains
/srv/salt/nginx/monitor.sls sets up the zabbix agent and the monitoring scripts:
include:
  - zabbix.agent
  - nginx
nginx-monitor:
  pkg.installed:                # package required by the monitoring script
    - name: perl-libwww-perl
php-fpm-monitor-script:         # manage the monitoring script; its directory is created if it does not exist
  file.managed:
    - name: /etc/zabbix/ExternalScripts/php-fpm_status.pl
    - source: salt://nginx/files/etc/zabbix/ExternalScripts/php-fpm_status.pl
    - user: root
    - group: root
    - mode: 755
    - require:
      - service: php-fpm
      - pkg: nginx-monitor
      - cmd: php-fpm-monitor-script
  cmd.run:
    - name: mkdir -p /etc/zabbix/ExternalScripts
    - unless: test -d /etc/zabbix/ExternalScripts
php-fpm-monitor-config:         # zabbix agent drop-in configuration
  file.managed:
    - name: /etc/zabbix/zabbix_agentd.conf.d/php_fpm.conf
    - source: salt://nginx/files/etc/zabbix/zabbix_agentd.conf.d/php_fpm.conf
    - require:
      - file: php-fpm-monitor-script
      - service: php-fpm
    - watch_in:
      - service: zabbix-agent
nginx-monitor-config:           # zabbix agent drop-in configuration
  file.managed:
    - name: /etc/zabbix/zabbix_agentd.conf.d/nginx.conf
    - source: salt://nginx/files/etc/zabbix/zabbix_agentd.conf.d/nginx.conf
    - template: jinja
    - require:
      - service: nginx
    - watch_in:
      - service: zabbix-agent
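Both monitor states watch_in the zabbix-agent service, which comes from the zabbix.agent module included at the top; that module is not reproduced in this article either. A minimal sketch of what it needs to provide, namely the agent package, a main configuration that includes the conf.d drop-in directory, and a running service, might be:
# /srv/salt/zabbix/agent.sls (sketch; package name and file layout are assumptions)
zabbix-agent:
  pkg.installed:
    - name: zabbix-agent
  service.running:
    - name: zabbix-agent
    - enable: True
    - require:
      - pkg: zabbix-agent
    - watch:
      - file: /etc/zabbix/zabbix_agentd.conf
/etc/zabbix/zabbix_agentd.conf:
  # the main configuration must Include /etc/zabbix/zabbix_agentd.conf.d/ so the drop-ins above are read
  file.managed:
    - source: salt://zabbix/files/etc/zabbix/zabbix_agentd.conf
    - template: jinja
    - require:
      - pkg: zabbix-agent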
The minion collects its roles in /etc/salt/roles and turns them into a grain; Salt Mine reads the role information from the roles grain, and whenever the roles change Salt Mine is told to update:
roles:
file.managed:
- name: /etc/salt/roles
sync_grains:
module.wait:
- name: saltutil.sync_grains
mine_update:
module.run:
- name: mine.update
- require:
- module: sync_grains
/srv/pillar/salt/minion.sls defines the Salt Mine functions:
mine_functions:
test.ping: []
grains.item: [id, hostgroup, roles, ipv4]
Grains are similar to puppet's facter and collect information about the client. The grains script used here (/srv/salt/_grains/roles.py) builds the roles grain by reading the /etc/salt/roles file:
import os.path
def roles():
'''define host roles'''
roles_file = "/etc/salt/roles"
roles_list = []
if os.path.isfile(roles_file):
roles_fd = open(roles_file, "r")
for eachroles in roles_fd:
roles_list.append(eachroles[:-1])
return {'roles': roles_list}
if __name__ == "__main__":
print roles()
The Zabbix api side is managed through /srv/salt/zabbix/api.sls, which installs zapi, records the zabbix-api role, manages the Zabbix api configuration file and the add-monitor script, and refreshes the monitoring configuration and adds the monitors. It does not automate importing the Zabbix templates, so the templates (/srv/salt/zabbix/files/etc/zabbix/api/templates/zbx_export_templates.xml) have to be imported by hand.
include:
- salt.minion
python-zabbix-zapi:
file.recurse:
- name: /usr/lib/python2.6/site-packages/zabbix
- source: salt://zabbix/files/usr/lib/python2.6/site-packages/zabbix
- include_empty: True
zabbix-api-role:
file.append:
- name: /etc/salt/roles
- text:
- 'zabbix-api'
- require:
- file: roles
- service: salt-minion
- file: python-zabbix-zapi
- watch_in:
- module: sync_grains
zabbix-api-config:
file.managed:
- name: /etc/zabbix/api/config.yaml
- source: salt://zabbix/files/etc/zabbix/api/config.yaml
- makedirs: True
- template: jinja
- defaults:
Monitors_DIR: {{pillar['zabbix-api']['Monitors_DIR']}}
Templates_DIR: {{pillar['zabbix-api']['Templates_DIR']}}
Zabbix_User: {{pillar['zabbix-api']['Zabbix_User']}}
Zabbix_Pass: {{pillar['zabbix-api']['Zabbix_Pass']}}
Zabbix_URL: {{pillar['zabbix-api']['Zabbix_URL']}}
zabbix-templates:
file.recurse:
- name: {{pillar['zabbix-api']['Templates_DIR']}}
- source: salt://zabbix/files/etc/zabbix/api/templates
- require:
- file: python-zabbix-zapi
- file: zabbix-api-config
zabbix-add-monitors-script:
file.managed:
- name: /etc/zabbix/api/add_monitors.py
- source: salt://zabbix/files/etc/zabbix/api/add_monitors.py
- makedirs: True
- mode: 755
- require:
- file: python-zabbix-zapi
- file: zabbix-api-config
{% for each_minion, each_mine in salt['mine.get']('*', 'grains.item').iteritems() %}
monitor-{{each_minion}}:
file.managed:
- name: {{pillar['zabbix-api']['Monitors_DIR']}}/{{each_minion}}
- source: salt://zabbix/files/etc/zabbix/api/monitors/minion
- makedirs: True
- template: jinja
- defaults:
IP: {{each_mine.ipv4[1]}}
Hostgroup: {{each_mine.hostgroup}}
Roles: {{each_mine.roles}}
Templates: {{pillar['zabbix-templates']}}
- order: last
- require:
- module: mine_update
cmd.wait:
- name: python /etc/zabbix/api/add_monitors.py {{each_minion}}
- require:
- file: zabbix-add-monitors-script
- watch:
- file: monitor-{{each_minion}}
{% endfor %}
The state above reads its settings from /srv/pillar/zabbix/api.sls:
zabbix-api:
Zabbix_URL: http://172.16.100.81/zabbix
Zabbix_User: admin
Zabbix_Pass: zabbix
Monitors_DIR: /etc/zabbix/api/monitors/
Templates_DIR: /etc/zabbix/api/templates/
zabbix-base-templates:
{% if grains['os_family'] == 'RedHat' or grains['os_family'] == 'Debian' %}
- 'Template OS Linux'
{% endif %}
zabbix-templates:
memcached: 'Template App Memcached'
zabbix-server: 'Template App Zabbix Server'
web-server: 'Template App HTTP Service'
mysql: 'Template App MySQL'
mysql-master: 'Template App MySQL'
mysql-slave: 'Template App MySQL Slave'
php-fpm: 'Template App PHP FPM'
nginx: 'Template App Nginx'
varnish: 'Template App Varnish'
redis: 'Template App Redis'
zabbix-api defines the Zabbix URL, user name and password plus the monitor and template directories. zabbix-base-templates lists the base templates that are linked to every machine, and zabbix-templates maps each role to its template.
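The monitor-{{each_minion}} state in api.sls renders one file per minion into Monitors_DIR from the monitors/minion template, and these files are what add_monitors.py reads. The template itself is not listed here, but judging from the keys the script looks up (IP, Hostgroup, Templates, Usermacros), the rendered file for a web node would look roughly like this sketch:
# /etc/zabbix/api/monitors/web1.grid.mall.com (illustrative rendered result, not copied from the repository)
IP: 172.16.100.21
Hostgroup:
  - web
Templates:
  - 'Template OS Linux'
  - 'Template App Nginx'
  - 'Template App PHP FPM'
Usermacros: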
The script that adds the monitors (/srv/salt/zabbix/files/etc/zabbix/api/add_monitors.py) is as follows:
#!/bin/env python
#coding=utf8
##########################################################
# Add Monitor To Zabbix
##########################################################
import sys, os.path
import yaml
from zabbix.zapi import *
def _config(config_file):
'''get config'''
config_fd = open(config_file)
config = yaml.load(config_fd)
return config
def _get_templates(api_obj, templates_list):
'''get templates ids'''
templates_id = {}
templates_result = api_obj.Template.getobjects({"host": templates_list})
for each_template in templates_result:
template_name = each_template['name']
template_id = each_template['templateid']
templates_id[template_name] = template_id
return templates_id
def _get_host_templates(api_obj, hostid):
'''get the host has linked templates'''
templates_id = []
templates_result = api_obj.Template.get({'hostids': hostid})
for each_template in templates_result:
template_id = each_template['templateid']
templates_id.append(template_id)
return templates_id
def _create_hostgroup(api_obj, group_name):
'''create hostgroup'''
##check hostgroup exists
hostgroup_status = api_obj.Hostgroup.exists({"name": "%s" %(group_name)})
if hostgroup_status:
print "Hostgroup(%s) is already exists" %(group_name)
group_id = api_obj.Hostgroup.getobjects({"name": "%s" %(group_name)})[0]["groupid"]
else:
hostgroup_status = api_obj.Hostgroup.create({"name": "%s" %(group_name)})
if hostgroup_status:
print "Hostgroup(%s) create success" %(group_name)
group_id = hostgroup_status["groupids"][0]
else:
sys.stderr.write("Hostgroup(%s) create failed, please connect administrator\n" %(group_name))
exit(2)
return group_id
def _create_host(api_obj, hostname, hostip, group_ids):
'''create host'''
##check host exists
host_status = api_obj.Host.exists({"name": "%s" %(hostname)})
if host_status:
print "Host(%s) is already exists" %(hostname)
hostid = api_obj.Host.getobjects({"name": "%s" %(hostname)})[0]["hostid"]
##update host groups
groupids = [group['groupid'] for group in api_obj.Host.get({"output": ["hostid"], "selectGroups": "extend", "filter": {"host": ["%s" %(hostname)]}})[0]['groups']]
is_hostgroup_update = 0
for groupid in group_ids:
if groupid not in groupids:
is_hostgroup_update = 1
groupids.append(groupid)
if is_hostgroup_update == 1:
groups = []
for groupid in groupids:
groups.append({"groupid": "%s" %(groupid)})
host_status = api_obj.Host.update({"hostid": "%s" %(hostid), "groups": groups})
if host_status:
print "Host(%s) group update success" %(hostname)
else:
sys.stderr.write("Host(%s) group update failed, please connect administrator\n" %(hostname))
exit(3)
else:
groups = []
for groupid in group_ids:
groups.append({"groupid": "%s" %(groupid)})
host_status = api_obj.Host.create({"host": "%s" %(hostname), "interfaces": [{"type": 1, "main": 1, "useip": 1, "ip": "%s" %(hostip), "dns": "", "port": "10050"}], "groups": groups})
if host_status:
print "Host(%s) create success" %(hostname)
hostid = host_status["hostids"][0]
else:
sys.stderr.write("Host(%s) create failed, please connect administrator\n" %(hostname))
exit(3)
return hostid
def _create_host_usermacro(api_obj, hostname, usermacro):
'''create host usermacro'''
for macro in usermacro.keys():
value = usermacro[macro]
##check host exists
host_status = api_obj.Host.exists({"name": "%s" %(hostname)})
if host_status:
hostid = api_obj.Host.getobjects({"name": "%s" %(hostname)})[0]["hostid"]
##check usermacro exists
usermacros = api_obj.Usermacro.get({"output": "extend", "hostids": "%s" %(hostid)})
is_macro_exists = 0
if usermacros:
for usermacro in usermacros:
if usermacro["macro"] == macro:
is_macro_exists = 1
if usermacro["value"] == str(value):
print "Host(%s) usermacro(%s) is already exists" %(hostname, macro)
hostmacroid = usermacro["hostmacroid"]
else:
##usermacro exists, but value is not the same, update
usermacro_status = api_obj.Usermacro.update({"hostmacroid": usermacro["hostmacroid"], "value": "%s" %(value)})
if usermacro_status:
print "Host(%s) usermacro(%s) update success" %(hostname, macro)
hostmacroid = usermacro_status["hostmacroids"][0]
else:
sys.stderr.write("Host(%s) usermacro(%s) update failed, please connect administrator\n" %(hostname, macro))
exit(3)
break
if is_macro_exists == 0:
usermacro_status = api_obj.Usermacro.create({"hostid": "%s" %(hostid), "macro": "%s" %(macro), "value": "%s" %(value)})
if usermacro_status:
print "Host(%s) usermacro(%s) create success" %(hostname, macro)
hostmacroid = usermacro_status["hostmacroids"][0]
else:
sys.stderr.write("Host(%s) usermacro(%s) create failed, please connect administrator\n" %(hostname, macro))
exit(3)
else:
sys.stderr.write("Host(%s) is not exists" %(hostname))
exit(3)
return hostmacroid
def _link_templates(api_obj, hostname, hostid, templates_list, donot_unlink_templates):
'''link templates'''
all_templates = []
clear_templates = []
##get templates id
if donot_unlink_templates is None:
donot_unlink_templates_id = {}
else:
donot_unlink_templates_id = _get_templates(api_obj, donot_unlink_templates)
templates_id = _get_templates(api_obj, templates_list)
##get the host currently linked tempaltes
curr_linked_templates = _get_host_templates(api_obj, hostid)
for each_template in templates_id:
if templates_id[each_template] in curr_linked_templates:
print "Host(%s) is already linked %s" %(hostname, each_template)
else:
print "Host(%s) will link %s" %(hostname, each_template)
all_templates.append(templates_id[each_template])
##merge templates list
for each_template in curr_linked_templates:
if each_template not in all_templates:
if each_template in donot_unlink_templates_id.values():
all_templates.append(each_template)
else:
clear_templates.append(each_template)
##convert to zabbix api style
templates_list = []
clear_templates_list = []
for each_template in all_templates:
templates_list.append({"templateid": each_template})
for each_template in clear_templates:
clear_templates_list.append({"templateid": each_template})
##update host to link templates
update_status = api_obj.Host.update({"hostid": hostid, "templates": templates_list})
if update_status:
print "Host(%s) link templates success" %(hostname)
else:
print "Host(%s) link templates failed, please contact administrator" %(hostname)
##host unlink templates
if clear_templates_list != []:
clear_status = api_obj.Host.update({"hostid": hostid, "templates_clear": clear_templates_list})
if clear_status:
print "Host(%s) unlink templates success" %(hostname)
else:
print "Host(%s) unlink templates failed, please contact administrator" %(hostname)
def _main():
'''main function'''
hosts = []
if len(sys.argv) > 1:
hosts = sys.argv[1:]
config_dir = os.path.dirname(sys.argv[0])
if config_dir:
config_file = config_dir+"/"+"config.yaml"
else:
config_file = "config.yaml"
###get config options
config = _config(config_file)
Monitor_DIR = config["Monitors_DIR"]
Zabbix_URL = config["Zabbix_URL"]
Zabbix_User = config["Zabbix_User"]
Zabbix_Pass = config["Zabbix_Pass"]
Zabbix_Donot_Unlink_Template = config["Zabbix_Donot_Unlink_Template"]
if not hosts:
hosts = os.listdir(Monitor_DIR)
###Login Zabbix
zapi = ZabbixAPI(url=Zabbix_URL, user=Zabbix_User, password=Zabbix_Pass)
zapi.login()
for each_host in hosts:
each_config_fd = open(Monitor_DIR+"/"+each_host)
each_config = yaml.load(each_config_fd)
##Get config options
each_ip = each_config["IP"]
hostgroups = each_config["Hostgroup"]
each_templates = each_config["Templates"]
each_usermacros = each_config["Usermacros"]
###Create Hostgroup
groupids = []
for each_hostgroup in hostgroups:
group_id = _create_hostgroup(zapi, each_hostgroup)
groupids.append(group_id)
##Create Host
hostid = _create_host(zapi, each_host, each_ip, groupids)
if each_usermacros:
##Create Host Usermacros
for usermacro in each_usermacros:
if usermacro:
usermacrosid = _create_host_usermacro(zapi, each_host, usermacro)
if each_templates:
##Link tempaltes
_link_templates(zapi, each_host, hostid, each_templates, Zabbix_Donot_Unlink_Template)
if __name__ == "__main__":
_main()
Reference: zabbix api
The configuration file the script reads (/srv/salt/zabbix/files/etc/zabbix/api/config.yaml):
Monitors_DIR: {{Monitors_DIR}}
Templates_DIR: {{Templates_DIR}}
Zabbix_URL: {{Zabbix_URL}}
Zabbix_User: {{Zabbix_User}}
Zabbix_Pass: {{Zabbix_Pass}}
Zabbix_Donot_Unlink_Template: # templates linked to hosts by hand must be listed here so the script does not unlink them
- 'Template OS Linux'
Extending Salt modules
The point of designing the Salt tree as modules is extensibility, and pulling the variables out into pillar is what makes the modules reusable. So what do you do when you need to configure two web platforms that differ slightly from each other? Do you have to write another nginx module for the new platform?
No. There is no need to write a new nginx module; for the new platform we only need to hand over a different configuration file, or feed different parameters into the same template.
Using a different configuration file
When the two platforms' configurations differ substantially, shipping a different configuration file is usually the better fit, for example:
/etc/rsyncd.conf:
file.managed:
- source: salt://rsync/files/etc/{{salt['pillar.get']('rsync_template', 'rsyncd.conf')}}
- template: jinja
- user: root
- group: root
- mode: 644
Just set a different rsync_template pillar variable for the nodes that need the other file, as in the sketch below.
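For example, a node belonging to the second platform could carry something like the following in its pillar file under /srv/pillar/nodes/ (the file name rsyncd-platform2.conf is only illustrative):
rsync_template: rsyncd-platform2.conf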
Using the same template with different parameters
/etc/keepalived/keepalived.conf:
file.managed:
- source: salt://keepalived/files/etc/keepalived/keepalived.conf
- template: jinja
- user: root
- group: root
- mode: 644
The master load balancer (/srv/pillar/nodes/ha1.sls) uses the following pillar variables:
keepalived:
notification_email: 'dongliang@mall.com'
notification_email_from: 'haproxy@mall.com'
smtp_server: 127.0.0.1
state: MASTER
priority: 100
auth_type: PASS
auth_pass: mall
virtual_ipaddress_internal: 172.16.100.100
virtual_ipaddress_external: 60.60.60.100
The backup load balancer (/srv/pillar/nodes/ha2.sls) uses the following pillar variables:
keepalived:
notification_email: 'dongliang@mall.com'
notification_email_from: 'haproxy@mall.com'
smtp_server: 127.0.0.1
state: BACKUP
priority: 99
auth_type: PASS
auth_pass: mall
virtual_ipaddress_internal: 172.16.100.100
virtual_ipaddress_external: 60.60.60.100