Using an Ansible Playbook to Automate Ceph Cluster Installation on Huawei Cloud


ansible, playbook, Huawei Cloud, Ceph

First, purchase the cloud servers needed to build the Ceph cluster on Huawei Cloud:
Next, purchase the storage disks Ceph requires.
Attach the purchased disks to the cloud servers that will run Ceph.

Install Ansible on the jump host (for example, `yum -y install ansible` on a CentOS image with EPEL enabled).

Check the Ansible version with `ansible --version` to confirm the installation succeeded.

Configure the host groups in the Ansible inventory.

Test the result.
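
The inventory grouping is only shown as a screenshot in the original. Based on the group name and hostnames used in the playbook below, it presumably looks something like this (a sketch; the exact file is an assumption):

```ini
# /etc/ansible/hosts -- host group covering the six Ceph nodes
[ceph]
ceph-0001
ceph-0002
ceph-0003
ceph-0004
ceph-0005
ceph-0006
```

Connectivity to the group can then be checked with an ad-hoc `ansible ceph -m ping`.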

The playbook content is as follows:

---
# Sync the yum repo file to every node
- hosts: ceph
  remote_user: root
  tasks:
    - copy:
        src: /etc/yum.repos.d/ceph.repo
        dest: /etc/yum.repos.d/ceph.repo
    - shell: yum clean all
# Install ceph-deploy on ceph-0001, create the working directory,
# and initialize the configuration files
- hosts: ceph-0001
  remote_user: root
  tasks:
    - yum:
        name: ceph-deploy
        state: installed
    - file:
        path: /root/ceph-cluster
        state: directory
        mode: '0755'
# Install the Ceph packages on all ceph nodes
- hosts: ceph
  remote_user: root
  tasks:
    - yum:
        name: ceph-osd,ceph-mds
        state: installed
# Install ceph-mon on ceph-0001, ceph-0002 and ceph-0003
- hosts: ceph-0001,ceph-0002,ceph-0003
  remote_user: root
  tasks:
    - yum:
        name: ceph-mon
        state: installed
# Initialize the mon service
- hosts: ceph-0001
  remote_user: root
  tasks:
    - shell: 'chdir=/root/ceph-cluster ceph-deploy new ceph-0001 ceph-0002 ceph-0003'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy mon create-initial'
# Prepare the disk partitions: create the journal partition on /dev/vdb and make
# the device ownership persistent; then zap the data disks with ceph-deploy,
# create the OSDs, and deploy the Ceph filesystem
- hosts: ceph
  remote_user: root
  tasks:
    - shell: parted /dev/vdb mklabel gpt
    - shell: parted /dev/vdb mkpart primary 1 100%
    - shell: chown ceph.ceph /dev/vdb1
    # The udev rule keeps the ownership across reboots; its content is not shown
    # in the original, but is presumably something like:
    #   ENV{DEVNAME}=="/dev/vdb1",OWNER="ceph",GROUP="ceph"
    - copy:
        src: /etc/udev/rules.d/70-vdb.rules
        dest: /etc/udev/rules.d/70-vdb.rules
- hosts: ceph-0001
  remote_user: root
  tasks:
    - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0001:vdc'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0002:vdc'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0003:vdc'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0004:vdc'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0005:vdc'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap ceph-0006:vdc'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0001:vdc:/dev/vdb1'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0002:vdc:/dev/vdb1'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0003:vdc:/dev/vdb1'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0004:vdc:/dev/vdb1'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0005:vdc:/dev/vdb1'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create ceph-0006:vdc:/dev/vdb1'
    - shell: 'chdir=/root/ceph-cluster ceph-deploy mds create ceph-0006'
    - shell: 'chdir=/root/ceph-cluster ceph osd pool create cephfs_data 128'
    - shell: 'chdir=/root/ceph-cluster ceph osd pool create cephfs_metadata 128'
    - shell: 'chdir=/root/ceph-cluster ceph fs new myfs1 cephfs_metadata cephfs_data'
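
The twelve repeated `disk zap` / `osd create` shell tasks could be collapsed with a loop. This is not the author's version, just an equivalent rewrite sketch using `with_items`, assuming the six node names form the inventory group `ceph`:

```yaml
- hosts: ceph-0001
  remote_user: root
  tasks:
    # Zap each node's data disk, then create an OSD on it,
    # reusing the local journal partition /dev/vdb1
    - shell: 'chdir=/root/ceph-cluster ceph-deploy disk zap {{ item }}:vdc'
      with_items: "{{ groups['ceph'] }}"
    - shell: 'chdir=/root/ceph-cluster ceph-deploy osd create {{ item }}:vdc:/dev/vdb1'
      with_items: "{{ groups['ceph'] }}"
```

Besides being shorter, this picks up new nodes automatically when the inventory group grows.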

The playbook execution proceeds as follows:
Go to the admin node ceph-0001 to verify: the cluster has been built successfully (for example, `ceph -s` should report `HEALTH_OK`).
