Ansible Playbooks: Basic Usage
What you will learn
- How to run a playbook
- How to write a playbook
- How to use roles
Using Playbooks
Base environment
```
### On 64-bit Ubuntu 16.04 LTS, create a CentOS LXC container named web to act as the managed node
# ssh-keygen -t rsa
# apt-get install lxc
# apt-get install yum
# lxc-create -n centos -t centos -- -R 7
### Change the root password of the centos template
# chroot /var/lib/lxc/centos/rootfs passwd
# lxc-copy -n centos -N web -B aufs -s
# lxc-start -n web -d
### Enter the container
# lxc-console -n web
### Run the following commands inside the container; change the IP address to 10.0.3.200
# vi ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
HOSTNAME=centos
NM_CONTROLLED=no
TYPE=Ethernet
NAME=eth0
IPADDR=10.0.3.200
NETMASK=255.255.255.0
GATEWAY=10.0.3.1
DNS1=114.114.114.114
```
A simple playbook
```
# mkdir playbook
# cd playbook
# vim hosts
[web]
192.168.124.240
# vim site.yml
- name: Sample
  hosts: web
  # Gather host facts
  gather_facts: True
  tasks:
    # Create sample.txt on the Ansible managed node
    - name: Web
      command: /bin/sh -c "echo 'web' > ~/sample.txt"
    # Create sample.txt on the Ansible control host
    - name: Local Web
      local_action: command /bin/sh -c "echo 'local web' > ~/sample.txt"
```
Run the playbook
```shell
# ansible-playbook -i hosts site.yml
```
Sample playbook
Download the sample
```shell
### Download the Ansible examples on the control host
$ git clone https://github.com/ansible/ansible-examples.git
```
Edit the sample configuration files
```
$ cd ansible-examples/tomcat-standalone
$ vim hosts
[tomcat-servers]
10.0.3.200
### Configure the SSH login password
$ vim group_vars/tomcat-servers
# Here are variables related to the Tomcat installation
http_port: 8080
https_port: 8443
# This will configure a default manager-gui user:
admin_username: admin
admin_password: 123456
ansible_ssh_pass: 123456
```
Run the playbook
```shell
### If a run fails, simply run it again, adding the --limit @/home/ubuntu/ansible-examples/tomcat-standalone/site.retry parameter shown in the error message
# ansible-playbook -i hosts site.yml
```
Troubleshooting
- Problem 1
```
TASK [selinux : Install libselinux-python] *************************************
fatal: [10.0.3.200]: FAILED! => {"changed": false, "failed": true, "msg": "Failure talking to yum: Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again"}
        to retry, use: --limit @/home/ubuntu/ansible-examples/tomcat-standalone/site.retry
```
Solution
```shell
### Run yum update once inside the container to refresh the repository metadata; it only rebuilds the cache, no packages need to be installed
```
- Problem 2
```
TASK [tomcat : insert firewalld rule for tomcat http port] *********************
fatal: [10.0.3.200]: FAILED! => {"changed": false, "failed": true, "msg": "firewalld and its python 2 module are required for this module"}
RUNNING HANDLER [tomcat : restart tomcat] **************************************
        to retry, use: --limit @/home/ubuntu/ansible-examples/tomcat-standalone/site.retry
```
Solution
```shell
### Install firewalld in the container
# yum search firewalld | grep python
python-firewall.noarch : Python2 bindings for firewalld
# yum install python-firewall.noarch
# systemctl enable firewalld
# systemctl start firewalld
```
Using Roles
Standard role structure
```
# tree ansible-sshd/
ansible-sshd/
├── CHANGELOG
├── defaults
│   └── main.yml
├── handlers
│   └── main.yml
├── LICENSE
├── meta
│   ├── 10_top.j2
│   ├── 20_middle.j2
│   ├── 30_bottom.j2
│   ├── main.yml
│   ├── make_option_list
│   ├── options_body
│   └── options_match
├── README.md
├── tasks
│   └── main.yml
├── templates
│   └── sshd_config.j2
├── tests
│   ├── inventory
│   ├── roles
│   │   └── ansible-sshd -> ../../.
│   └── test.yml
├── Vagrantfile
└── vars
    ├── Amazon.yml
    ├── Archlinux.yml
    ├── Debian_8.yml
    ├── Debian.yml
    ├── default.yml
    ├── Fedora.yml
    ├── FreeBSD.yml
    ├── OpenBSD.yml
    ├── RedHat_6.yml
    ├── RedHat_7.yml
    ├── Suse.yml
    ├── Ubuntu_12.yml
    ├── Ubuntu_14.yml
    └── Ubuntu_16.yml
```
Directory | Description |
---|---|
defaults | Default variables for the role; should contain a main.yml file |
handlers | Should contain a main.yml file defining the handlers used by this role; any additional handler files pulled in with include should also live in this directory |
meta | Should contain a main.yml file defining special settings for the role and its dependencies |
tasks | Should contain at least a main.yml file defining the role's task list; this file may use include to pull in other task files located in this directory |
templates | The template module automatically looks for Jinja2 template files in this directory |
vars | Variables used by the role |
files | Files referenced by modules such as copy or script |
tests | An example of using the role in a playbook |
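To make the layout concrete, here is a minimal sketch of how these directories work together, using a hypothetical nginx role (the variable, template, and handler names are illustrative, not taken from ansible-sshd):

```yaml
# roles/nginx/defaults/main.yml -- default variable, overridable by the caller
nginx_port: 80          # referenced as {{ nginx_port }} inside nginx.conf.j2

# roles/nginx/tasks/main.yml -- the role's task list
- name: Render the nginx configuration
  template:
    src: nginx.conf.j2              # looked up in roles/nginx/templates/
    dest: /etc/nginx/nginx.conf
  notify: restart nginx             # triggers the handler below

# roles/nginx/handlers/main.yml
- name: restart nginx
  service:
    name: nginx
    state: restarted
```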
Running a role
```
# cat ansible-sshd/tests/test.yml
---
- hosts: localhost
  become: true
  roles:
    - ansible-sshd
# cd ansible-sshd/tests/
# ansible-playbook test.yml
```
Task execution order in a role
```shell
### The content of meta/main.yml (the role's dependencies) runs first
### The content of tasks/main.yml runs next
```
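A minimal sketch of what meta/main.yml might contain — the dependencies listed there run before the role's own tasks/main.yml (the role name and parameter below are hypothetical):

```yaml
# meta/main.yml
dependencies:
  - role: common               # runs before this role's tasks/main.yml
    some_setting: some_value   # illustrative parameter passed to the dependency
```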
Ansible Playbooks: Advanced Usage
File operations
Creating files
- file
Sets attributes of files, symlinks, and directories, or removes them.
```yaml
### With state: directory, the directory is created if it does not exist; with state: file, a missing file is NOT created
- name: Create log dir
  file:
    path: "{{ item.src }}"
    state: directory
  with_items: "{{ log_dirs }}"
  when: is_metal | bool
  tags:
    - common-log

- name: Mask lxc-net systemd service
  file:
    src: /dev/null
    path: /etc/systemd/system/lxc-net.service
    state: link
  when:
    - ansible_service_mgr == 'systemd'
  tags:
    - lxc-files
    - lxc-net
```
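Because state: file does not create a missing file, the other common states are worth a quick sketch — touch creates an empty file and absent deletes it (the path here is illustrative):

```yaml
- name: Create an empty marker file (state: file would fail if it were missing)
  file:
    path: /tmp/app.initialized    # illustrative path
    state: touch

- name: Remove a file or directory (directories are removed recursively)
  file:
    path: /tmp/app.initialized
    state: absent
```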
Modifying files
- lineinfile
Ensures a particular line is present in a file, or replaces an existing line matched by a (back-referenced) regular expression.
```yaml
- name: Extra lxc config
  lineinfile:
    dest: "/var/lib/lxc/{{ inventory_hostname }}/config"
    line: "{{ item.split('=')[0] }} = {{ item.split('=', 1)[1] }}"
    insertafter: "^{{ item.split('=')[0] }}"
    backup: "true"
  with_items: "{{ extra_container_config | default([]) }}"
  delegate_to: "{{ physical_host }}"
  register: _ec
  when: not is_metal | bool
  tags:
    - common-lxc
```
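The task above inserts lines; a minimal sketch of the regexp form mentioned above, which replaces an existing line in place (the sshd option is only an illustration):

```yaml
- name: Disable password authentication over SSH
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'   # match the existing line, commented or not
    line: 'PasswordAuthentication no'     # replace it with this exact line
```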
- replace
The multi-line counterpart of lineinfile: all content in the file that matches a regular expression is replaced with the given replacement text.
```yaml
### Replace everything from the [ml2] section up to the [ml2_type_vlan] section of ml2_conf.ini with a new block of content
- name: Enable ovn in neutron-server
  replace:
    dest: "{{ node_config_directory }}/neutron-server/ml2_conf.ini"
    regexp: '\[ml2\][\S\s]*(?=\[ml2_type_vlan\])'
    replace: |+
      [ml2]
      type_drivers = local,flat,vlan,geneve
      tenant_network_types = geneve
      mechanism_drivers = ovn
      extension_drivers = port_security
      overlay_ip_version = 4
      [ml2_type_geneve]
      vni_ranges = 1:65536
      max_header_size = 38
      [ovn]
      ovn_nb_connection = tcp:{{ api_interface_address }}:{{ ovn_northdb_port }}
      ovn_sb_connection = tcp:{{ api_interface_address }}:{{ ovn_sourthdb_port }}
      ovn_l3_mode = False
      ovn_l3_scheduler = chance
      ovn_native_dhcp = True
      neutron_sync_mode = repair
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in groups['network']
  notify:
    - Restart neutron-server container
```
- ini_file
Modifies files in INI format.
```yaml
### Set the external_network_bridge option in the [DEFAULT] section of l3_agent.ini to br-ex
- name: Set the external network bridge
  vars:
    agent: "{{ 'neutron-vpnaas-agent' if enable_neutron_vpnaas | bool else 'neutron-l3-agent' }}"
  ini_file:
    dest: "{{ node_config_directory }}/{{ agent }}/l3_agent.ini"
    section: "DEFAULT"
    option: "external_network_bridge"
    value: "{{ neutron_bridge_name | default('br-ex') }}"
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in ovn_central_address
  delegate_to: "{{ item }}"
  with_items: "{{ groups['neutron-server'] }}"
  notify:
    - Restart {{ agent }} container
```
- assemble
Concatenates several files into a single file.
```yaml
### Assemble the files under /etc/haproxy/conf.d into /etc/haproxy/haproxy.cfg
- name: Regenerate haproxy configuration
  assemble:
    src: "/etc/haproxy/conf.d"
    dest: "/etc/haproxy/haproxy.cfg"
  notify: Restart haproxy
  tags:
    - haproxy-general-config
```
Loops
- with_items
The standard loop for repeating a task; {{ item }} is expanded much like a macro.
```yaml
- name: add several users
  user:
    name: "{{ item.name }}"
    state: present
    groups: "{{ item.groups }}"
  with_items:
    - { name: 'testuser1', groups: 'wheel' }
    - { name: 'testuser2', groups: 'root' }
```
- with_nested
Nested loops.
```yaml
### Set the corresponding options in ml2_conf.ini on every host in the neutron-server group
- name: Enable ovn in neutron-server
  vars:
    params:
      - { section: 'ml2', option: 'type_drivers', value: 'local,flat,vlan,geneve' }
      - { section: 'ml2', option: 'tenant_network_types', value: 'geneve' }
      - { section: 'ml2', option: 'mechanism_drivers', value: 'ovn' }
      - { section: 'ml2', option: 'extension_drivers', value: 'port_security' }
      - { section: 'ml2', option: 'overlay_ip_version', value: '4' }
      - { section: 'securitygroup', option: 'enable_security_group', value: 'True' }
  ini_file:
    dest: "{{ node_config_directory }}/neutron-server/ml2_conf.ini"
    section: "{{ item[0].section }}"
    option: "{{ item[0].option }}"
    value: "{{ item[0].value }}"
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in ovn_central_address
  delegate_to: "{{ item[1] }}"
  with_nested:
    - "{{ params }}"
    - "{{ groups['neutron-server'] }}"
  notify:
    - Restart neutron-server container
```
Flow control
- tags
Assigns tags to tasks.
```yaml
tasks:
  - yum: name={{ item }} state=installed
    with_items:
      - httpd
      - memcached
    tags:
      - packages
  - template: src=templates/src.j2 dest=/etc/foo.conf
    tags:
      - configuration

### When running the playbook you can run only the tasks with the given tags, or skip them
# ansible-playbook example.yml --tags "configuration,packages"
# ansible-playbook example.yml --skip-tags "notification"
```
- failed_when
Defines when a task is considered failed, which controls whether the playbook aborts.
```yaml
- name: Check if firewalld is installed
  command: rpm -q firewalld
  register: firewalld_check
  failed_when: firewalld_check.rc > 1
  when: ansible_os_family == 'RedHat'
```
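In the task above, rpm -q returns 1 when the package is simply not installed, so only return codes above 1 count as failures. Another common pattern, sketched here with a hypothetical command and error string, also inspects the command output:

```yaml
- name: Create a resource that may already exist
  command: some_create_command                     # hypothetical command
  register: result
  failed_when:
    - result.rc != 0
    - "'already exists' not in result.stderr"      # tolerate this specific error
```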
- pre_tasks/post_tasks
Define tasks that run before and after the roles of a play.
```yaml
- name: Install the aodh components
  hosts: aodh_all
  gather_facts: "{{ gather_facts | default(True) }}"
  max_fail_percentage: 20
  user: root
  pre_tasks:
    - include: common-tasks/os-lxc-container-setup.yml
    - include: common-tasks/rabbitmq-vhost-user.yml
      static: no
      vars:
        user: "{{ aodh_rabbitmq_userid }}"
        password: "{{ aodh_rabbitmq_password }}"
        vhost: "{{ aodh_rabbitmq_vhost }}"
        _rabbitmq_host_group: "{{ aodh_rabbitmq_host_group }}"
      when:
        - inventory_hostname == groups['aodh_api'][0]
        - groups[aodh_rabbitmq_host_group] | length > 0
    - include: common-tasks/os-log-dir-setup.yml
      vars:
        log_dirs:
          - src: "/openstack/log/{{ inventory_hostname }}-aodh"
            dest: "/var/log/aodh"
    - include: common-tasks/mysql-db-user.yml
      static: no
      vars:
        user_name: "{{ aodh_galera_user }}"
        password: "{{ aodh_container_db_password }}"
        login_host: "{{ aodh_galera_address }}"
        db_name: "{{ aodh_galera_database }}"
      when: inventory_hostname == groups['aodh_all'][0]
    - include: common-tasks/package-cache-proxy.yml
  roles:
    - role: "os_aodh"
      aodh_venv_tag: "{{ openstack_release }}"
      aodh_venv_download_url: "{{ openstack_repo_url }}/venvs/{{ openstack_release }}/{{ ansible_distribution | lower }}/aodh-{{ openstack_release }}-{{ ansible_architecture | lower }}.tgz"
    - role: "openstack_openrc"
      tags:
        - openrc
    - role: "rsyslog_client"
      rsyslog_client_log_rotate_file: aodh_log_rotate
      rsyslog_client_log_dir: "/var/log/aodh"
      rsyslog_client_config_name: "99-aodh-rsyslog-client.conf"
      tags:
        - rsyslog
  vars:
    is_metal: "{{ properties.is_metal|default(false) }}"
    aodh_rabbitmq_userid: aodh
    aodh_rabbitmq_vhost: /aodh
    aodh_rabbitmq_servers: "{{ rabbitmq_servers }}"
    aodh_rabbitmq_port: "{{ rabbitmq_port }}"
    aodh_rabbitmq_use_ssl: "{{ rabbitmq_use_ssl }}"
  tags:
    - aodh
```
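The real-world play above only uses pre_tasks; a minimal sketch showing where both hooks sit relative to the roles (the role and the marker file are illustrative):

```yaml
- hosts: webservers
  pre_tasks:
    - name: Flag the node as in maintenance before any role runs
      file:
        path: /tmp/maintenance.on    # illustrative marker file
        state: touch
  roles:
    - nginx                          # hypothetical role
  post_tasks:
    - name: Clear the maintenance flag after all roles have finished
      file:
        path: /tmp/maintenance.on
        state: absent
```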
Running tasks on other hosts
- delegate_to
Runs the current task on a different host.
```yaml
### This fragment comes from a playbook that runs inside a container; it needs to check that a directory exists on the container's physical host, so the task is delegated out of the container to the host
- name: Ensure mount directories exists
  file:
    path: "{{ item['mount_path'] }}"
    state: "directory"
  with_items:
    - "{{ lxc_default_bind_mounts | default([]) }}"
    - "{{ list_of_bind_mounts | default([]) }}"
  delegate_to: "{{ physical_host }}"
  when:
    - not is_metal | bool
  tags:
    - common-lxc
```
- local_action
Runs the task on the Ansible control host (the machine running ansible-playbook).
```yaml
- name: Check if the git cache exists on deployment host
  local_action:
    module: stat
    path: "{{ repo_build_git_cache }}"
  register: _local_git_cache
  when: repo_build_git_cache is defined
```
Users and groups
- group
Creates a group.
```yaml
### Create the haproxy system group; state: present creates it if it does not exist, state: absent deletes it if it does
- name: Create the haproxy system group
  group:
    name: "haproxy"
    state: "present"
    system: "yes"
  tags:
    - haproxy-group
```
- user
Creates a user.
```yaml
### Create the haproxy user in the haproxy group, including its home directory
- name: Create the haproxy system user
  user:
    name: "haproxy"
    group: "haproxy"
    comment: "haproxy user"
    shell: "/bin/false"
    system: "yes"
    createhome: "yes"
    home: "/var/lib/haproxy"
  tags:
    - haproxy-user
```
Miscellaneous
- authorized_key
Adds an SSH authorized key for a user.
```yaml
- name: Create authorized keys file from host vars
  authorized_key:
    user: "{{ repo_service_user_name }}"
    key: "{{ hostvars[item]['repo_pubkey'] | b64decode }}"
  with_items: "{{ groups['repo_all'] }}"
  when: hostvars[item]['repo_pubkey'] is defined
  tags:
    - repo-key
    - repo-key-store
```
- slurp
Reads a file from a remote host; the content is returned base64-encoded.
```yaml
### Read the contents of id_rsa.pub and store them in the variable repo_pub
- name: Get public key contents and store as var
  slurp:
    src: "{{ repo_service_home_folder }}/.ssh/id_rsa.pub"
  register: repo_pub
  changed_when: false
  tags:
    - repo-key
    - repo-key-create
```
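Because slurp returns the file content base64-encoded, it must be decoded before use; a small sketch using the variable registered above:

```yaml
- name: Show the decoded public key
  debug:
    msg: "{{ repo_pub.content | b64decode }}"   # .content holds the base64 payload
```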
- uri
Makes web requests, similar to running curl.
```yaml
- name: test proxy URL for connectivity
  uri:
    url: "{{ repo_pkg_cache_url }}/acng-report.html"
    method: "HEAD"
  register: proxy_check
  failed_when: false
  tags:
    - common-proxy
```
- wait_for
Waits for a port or a file to become available.
```yaml
- name: Wait for container ssh
  wait_for:
    port: "22"
    delay: "{{ ssh_delay }}"
    search_regex: "OpenSSH"
    host: "{{ ansible_host }}"
  delegate_to: "{{ physical_host }}"
  register: ssh_wait_check
  until: ssh_wait_check | success
  retries: 3
  when:
    - (_mc is defined and _mc | changed) or (_ec is defined and _ec | changed)
    - not is_metal | bool
  tags:
    - common-lxc
```
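The task above waits on a port; waiting on a file works the same way (the path and timeout are illustrative):

```yaml
- name: Wait until the application has written its pid file
  wait_for:
    path: /var/run/myapp.pid    # illustrative path
    timeout: 300                # give up after 5 minutes
```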
- command
Runs a command on the remote node.
```yaml
### ignore_errors: True means the playbook does not abort even if the command fails
- name: Check if clean is needed
  command: docker exec openvswitch_vswitchd ovs-vsctl br-exists br-tun
  register: result
  ignore_errors: True
```
Switching users
```yaml
### With become, Ansible switches to the apache user before running the command; become_user defaults to root (if your Ansible connection already logs in as root with key-based authentication, become is unnecessary)
- name: Run a command as the apache user
  command: somecommand
  become: true
  become_user: apache
```
Checking whether a list is empty
```yaml
### pip_wheel_install is a list variable; the task only runs when it is non-empty
- name: Install wheel packages
  shell: cd /tmp/wheels && pip install {{ item }}*
  with_items:
    - "{{ pip_wheel_install | default([]) }}"
  when: pip_wheel_install | default([]) | length > 0
```
Using Jinja2 with Ansible
Common techniques
- ternary
Chooses between two return values depending on whether the expression is true or false.
```yaml
- name: Set container backend to "dir" or "lvm" based on whether the lxc VG was found
  set_fact:
    lxc_container_backing_store: "{{ (vg_result.rc != 0) | ternary('dir', 'lvm') }}"
  when: vg_result.rc is defined
  tags:
    - lxc-container-vg-detect
```
If vg_result.rc is non-zero the expression returns dir; otherwise it returns lvm.
- if expressions
An inline if expression likewise chooses a value based on a condition.
```yaml
- name: Set the external network bridge
  vars:
    agent: "{{ 'neutron-vpnaas-agent' if enable_neutron_vpnaas | bool else 'neutron-l3-agent' }}"
  ini_file:
    dest: "{{ node_config_directory }}/{{ agent }}/l3_agent.ini"
    section: "DEFAULT"
    option: "external_network_bridge"
    value: "{{ neutron_bridge_name | default('br-ex') }}"
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in ovn_central_address
  delegate_to: "{{ item }}"
  with_items: "{{ groups['neutron-server'] }}"
  notify:
    - Restart {{ agent }} container
```
- Jinja2 in when
Do not wrap variables in {{ }} inside a when expression; when is already evaluated as a Jinja2 expression. If a value needs to be treated as a string, pipe it through the string filter, as in the dictionary lookup below.
```yaml
- name: Checking free port for OVN
  vars:
    service: "{{ neutron_services[item.name] }}"
  wait_for:
    host: "{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}"
    port: "{{ item.port }}"
    connect_timeout: 1
    state: stopped
  when:
    - container_facts[ item.facts | string ] is not defined
    - service.enabled | bool
    - service.host_in_groups | bool
  with_items:
    - { name: "ovn-nb-db-server", port: "{{ ovn_northdb_port }}", facts: "ovn_nb_db" }
    - { name: "ovn-sb-db-server", port: "{{ ovn_sourthdb_port }}", facts: "ovn_sb_db" }
```
Ansible: Basic Usage
What you will learn
- How to set up the Ansible runtime environment
- How to run ansible commands
- How to configure the Inventory
Environment
Role | Operating system | Network address |
---|---|---|
Control host | Ubuntu 14.04 LTS | 192.168.200.250 |
Managed nodes | Ubuntu 16.04 LTS | 192.168.200.11, 192.168.200.12 |
Control host setup
- Install Ansible
```shell
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
```
- Define the managed nodes
```
### Edit the configuration file and add the managed nodes' network addresses
# vim /etc/ansible/hosts
...
[webservers]
192.168.200.11
[compute]
192.168.200.12
...
```
- Generate an SSH key pair
```shell
# ssh-keygen -t rsa
# ssh-agent bash
# ssh-add ~/.ssh/id_rsa
```
Managed node setup
- Add the control host's public key
```shell
# ssh-keygen -t rsa
# scp root@192.168.200.250:~/.ssh/id_rsa.pub ./
# cat id_rsa.pub >> ~/.ssh/authorized_keys
# chmod 600 ~/.ssh/authorized_keys
```
- Install Python
```shell
# apt-get install python
```
Test ansible commands
```
# ansible all -m ping
192.168.200.11 | success >> {
    "changed": false,
    "ping": "pong"
}
192.168.200.12 | success >> {
    "changed": false,
    "ping": "pong"
}

# ansible compute -m ping
192.168.200.12 | success >> {
    "changed": false,
    "ping": "pong"
}

# ansible webservers -m ping
192.168.200.11 | success >> {
    "changed": false,
    "ping": "pong"
}

# ansible all -a "/bin/echo hello"
192.168.200.12 | success | rc=0 >>
hello

192.168.200.11 | success | rc=0 >>
hello
```
Configuring the Inventory
Static inventory
This is the approach shown earlier: hosts and groups listed in the /etc/ansible/hosts file.
Dynamic inventory
An external script produces the host list and returns it to the ansible command in the format Ansible expects. The script that generates the JSON must support two options:
- `--list`: returns a JSON hash/dictionary containing all of the groups to be managed. Each group's value should be either a hash/dictionary of the hosts/IPs it contains, its child groups, and its group variables, or simply a list of hosts/IPs. For example:

  ```json
  {
      "databases": {
          "hosts": ["host1.example.com", "host2.example.com"],
          "vars": { "a": true }
      },
      "webservers": ["host2.example.com", "host3.example.com"],
      "atlanta": {
          "hosts": ["host1.example.com", "host4.example.com", "host5.example.com"],
          "vars": { "b": false },
          "children": ["marietta", "5points"]
      },
      "marietta": ["host6.example.com"],
      "5points": ["host7.example.com"]
  }
  ```
- `--host <hostname>`: returns either an empty JSON hash/dictionary, or a hash/dictionary of variables for that host to be used in templates and playbooks. Returning variables is optional; if the script does not want to do so, it can simply return an empty hash/dictionary:

  ```json
  {
      "favcolor": "red",
      "ntpserver": "wolf.example.com",
      "monitoring": "pack.example.com"
  }
  ```
Write a sample inventory file named inventory-script (note that as written below this is a plain static INI inventory; for Ansible to treat it as a dynamic inventory, the file must be executable and print JSON in the format above):
```
[web]
192.168.200.100
```
Invoke it as follows:
```shell
### By default Ansible obtains the host list by calling the script with the --list option
# ansible web -i inventory-script -m ping
```