Highly Available OpenStack (Queens) Cluster - Part 3: High Availability Configuration (pacemaker & haproxy)


References:

  1. Install-guide:https://docs.openstack.org/install-guide/
  2. OpenStack High Availability Guide:https://docs.openstack.org/ha-guide/index.html
  3. Understanding Pacemaker: http://www.cnblogs.com/sammyliu/p/5025362.html

VI. Pacemaker cluster stack

The official OpenStack documentation uses the open-source Pacemaker cluster stack as the high-availability resource management software for the cluster.

Details: https://docs.openstack.org/ha-guide/controller-ha-pacemaker.html

1. Install pacemaker

# Install the related packages on all controller nodes; controller01 is used as the example;
# pacemaker: the cluster resource manager (CRM); starts and stops services and sits at the resource-management / resource-agent layer of the HA stack
# corosync: the messaging layer; manages membership, messaging and quorum, provides communication services for the HA environment, and sits at the bottom of the HA stack, providing heartbeat information between nodes;
# resource-agents: resource agents; tools (usually scripts) on each node that manage a given resource under the CRM's direction;
# pcs: the command-line toolset;
# fence-agents: fencing shuts down a node that is unstable or unresponsive so that it cannot damage other cluster resources; its main purpose is to prevent split-brain
[root@controller01 ~]# yum install pacemaker pcs corosync fence-agents resource-agents -y

2. Build the cluster

# Start the pcsd service on all controller nodes; controller01 is used as the example
[root@controller01 ~]# systemctl enable pcsd
[root@controller01 ~]# systemctl start pcsd

# Change the password of the cluster administrator hacluster (created by default) on all controller nodes; controller01 is used as the example
[root@controller01 ~]# echo pacemaker_pass | passwd --stdin hacluster

# Authentication is configured on any one node; controller01 is used as the example;
# Authenticate the nodes to form the cluster, using the password set in the previous step
[root@controller01 ~]# pcs cluster auth controller01 controller02 controller03 -u hacluster -p pacemaker_pass --force

# Create and name the cluster on any one node; controller01 is used as the example;
# This generates the configuration file /etc/corosync/corosync.conf
[root@controller01 ~]# pcs cluster setup --force --name openstack-cluster-01 controller01 controller02 controller03
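
# For reference, the generated /etc/corosync/corosync.conf is roughly of the following shape (an illustrative sketch only; the exact keys and values depend on the pcs/corosync versions in use):
totem {
    version: 2
    secauth: off
    cluster_name: openstack-cluster-01
    transport: udpu
}

nodelist {
    node {
        ring0_addr: controller01
        nodeid: 1
    }
    node {
        ring0_addr: controller02
        nodeid: 2
    }
    node {
        ring0_addr: controller03
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
}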

3. Start the cluster

# Start the cluster; controller01 is used as the example
[root@controller01 ~]# pcs cluster start --all

# Enable the cluster at boot
[root@controller01 ~]# pcs cluster enable --all

# Check the cluster status; the "crm_mon -1" command can also be used;
# "DC": Designated Controller;
# Node configuration can be viewed with "cibadmin --query --scope nodes"
[root@controller01 ~]# pcs status cluster

# Check the corosync status;
# corosync provides the underlying synchronization of membership and state information
[root@controller01 ~]# pcs status corosync

# List the member nodes;
# Or: corosync-cmapctl runtime.totem.pg.mrp.srp.members
[root@controller01 ~]# corosync-cmapctl | grep members

# View cluster resources
[root@controller01 ~]# pcs resource 

Alternatively, access any controller node via the web UI: https://172.30.200.31:2224

Username/password (the password set when building the cluster): hacluster/pacemaker_pass

4. Set cluster properties

# Set the properties on any one controller node; controller01 is used as the example;
# Set a sensible retention for processed inputs and for the errors and warnings generated by the policy engine; useful for troubleshooting
[root@controller01 ~]# pcs property set pe-warn-series-max=1000 \
pe-input-series-max=1000 \
pe-error-series-max=1000 

# Pacemaker handles state in a time-driven manner; "cluster-recheck-interval" defines the interval at which certain pacemaker operations occur and defaults to 15min; setting it to 5min or 3min is recommended
[root@controller01 ~]# pcs property set cluster-recheck-interval=5min

# Pacemaker enables stonith by default, but the stonith mechanism (shutting a node down via ipmi or ssh) has no corresponding stonith devices configured here (verify the configuration with "crm_verify -L -V"; no output means it is correct); in this state pacemaker refuses to start any resource;
# In production adjust this as appropriate; in a test environment it can be disabled
[root@controller01 ~]# pcs property set stonith-enabled=false
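# As referenced above, the configuration can be validated at any time; no output means it is valid
[root@controller01 ~]# crm_verify -L -V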

# By default, when more than half of the nodes are online the cluster considers itself to have quorum and to be "legitimate", i.e. it satisfies total_nodes < 2 * active_nodes;
# For a 3-node cluster, if 2 nodes fail the formula no longer holds and the cluster loses quorum; with only 2 nodes, the failure of a single node already breaks quorum, so a so-called "two-node cluster" is meaningless in this respect;
# In production, for a 2-node cluster the quorum check can be ignored when arbitration is impossible; for a 3-node cluster, set this flexibly according to the desired availability threshold for the cluster nodes
[root@controller01 ~]# pcs property set no-quorum-policy=ignore
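# Optionally, the current quorum state (expected votes, total votes, quorate flag) can be inspected with:
[root@controller01 ~]# corosync-quorumtool -s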

# To support multi-node clusters, heartbeat v2 introduced a scoring policy to control how resources fail over between the nodes of the cluster; each node's total score is calculated, and the node with the highest score becomes active and manages a given resource (or resource group);
# By default the initial score of every resource (the global parameter default-resource-stickiness, viewable with "pcs property list --all") is 0, and the score subtracted from a resource after each failure (the global parameter default-resource-failure-stickiness) is also 0; in that case, no matter how many times a resource fails, heartbeat only restarts it and never fails it over to another node;
# If "resource-stickiness" or "resource-failure-stickiness" is set on an individual resource, the per-resource value takes precedence;
# Generally, resource-stickiness is a positive value and resource-failure-stickiness is a negative one; the special values positive infinity (INFINITY) and negative infinity (-INFINITY) mean "never fail over" and "always fail over on failure", simple settings for extreme policies;
# If a node's score is negative, that node will never take over the resource under any circumstances (a cold-standby node); if some node's score exceeds that of the node currently running the resource, heartbeat performs a switchover: the current node releases the resource and the higher-scoring node takes it over
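# The stickiness defaults are left unchanged in this deployment; purely as an illustration (example value only), a positive default stickiness could be set as follows
# [root@controller01 ~]# pcs resource defaults resource-stickiness=100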

# pcs property list shows only the modified property values; the "--all" parameter shows all properties including defaults;
# The property settings can also be inspected in /var/lib/pacemaker/cib/cib.xml, with "pcs cluster cib", or with "cibadmin --query --scope crm_config"; resource configuration can be viewed with "cibadmin --query --scope resources"
[root@controller01 ~]# pcs property list

5. Configure the VIP

# Set the vip on any one controller node (the resource_id attribute); it is simply named "vip";
# ocf (the standard attribute): one class of resource agent; others include systemd, lsb, service, etc.;
# heartbeat: the provider of the resource script (the provider attribute); the ocf specification allows multiple vendors to provide the same resource agent, and most ocf resource agents use heartbeat as the provider;
# IPaddr2: the name of the resource agent (the type attribute); IPaddr2 is the type of this resource;
# Defining the resource attributes (standard:provider:type) locates the RA script that backs the "vip" resource;
# On CentOS, RA scripts conforming to the ocf specification live under /usr/lib/ocf/resource.d/; that directory holds all providers, and each provider directory contains multiple types;
# op: stands for Operations
[root@controller01 ~]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.30.200.30 cidr_netmask=24 op monitor interval=30s
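# Optionally, the parameters accepted by the IPaddr2 resource agent (ip, cidr_netmask, nic, etc.) can be listed with:
[root@controller01 ~]# pcs resource describe ocf:heartbeat:IPaddr2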

# View cluster resources
[root@controller01 ~]# pcs resource

# Querying with "pcs resource" shows that the vip resource is running on the controller01 node;
# The vip can be verified with "ip a show"
[root@controller01 ~]# ip a show eth0

6. High availability management

Access any controller node via the web UI: https://172.30.200.31:2224

Username/password (the password set when building the cluster): hacluster/pacemaker_pass

Although the cluster was set up via the CLI, the web UI does not display it by default, so the cluster has to be added manually; in practice, adding any one node of the already-formed cluster is enough, as follows:

# If the APIs are split into admin/internal/public endpoints and only the public endpoint is exposed to clients, two VIPs are usually configured, e.g. named vip_management and vip_public;
# It is recommended to constrain vip_management and vip_public to the same node (see the sketch below)
# [root@controller01 ~]# pcs constraint colocation add vip_management with vip_public
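# A minimal sketch of the two-VIP variant (not created in this deployment), reusing 172.30.200.30 as the management VIP and a hypothetical 172.30.200.40 as the public VIP, followed by the colocation constraint above:
# [root@controller01 ~]# pcs resource create vip_management ocf:heartbeat:IPaddr2 ip=172.30.200.30 cidr_netmask=24 op monitor interval=30s
# [root@controller01 ~]# pcs resource create vip_public ocf:heartbeat:IPaddr2 ip=172.30.200.40 cidr_netmask=24 op monitor interval=30s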

VII. HAProxy

1. Install haproxy

# Install haproxy on all controller nodes; controller01 is used as the example;
# To install the latest version, see: http://www.cnblogs.com/netonline/p/7593762.html
[root@controller01 ~]# yum install haproxy -y

2. Configure haproxy.cfg

# Configure haproxy.cfg on all controller nodes; controller01 is used as the example;
# haproxy relies on rsyslog for log output; enable logging as appropriate;
# Back up the original haproxy.cfg file
[root@controller01 ~]# cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

# The cluster haproxy configuration covers many services; here all the OpenStack services involved are configured in one pass, as follows:
[root@controller01 ~]# grep -v ^# /etc/haproxy/haproxy.cfg
global
  chroot  /var/lib/haproxy
  daemon
  group  haproxy
  user  haproxy
  maxconn  4000
  pidfile  /var/run/haproxy.pid

defaults
  log  global
  maxconn  4000
  option  redispatch
  retries  3
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

# haproxy stats page
listen stats
  bind 0.0.0.0:1080
  mode http
  stats enable
  stats uri /
  stats realm OpenStack\ Haproxy
  stats auth admin:admin
  stats  refresh 30s
  stats  show-node
  stats  show-legends
  stats  hide-version

# horizon service
 listen dashboard_cluster
  bind 172.30.200.30:80
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 172.30.200.31:80 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:80 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:80 check inter 2000 rise 2 fall 5

# mariadb service;
# controller01 is set as master and controller02/03 as backups; a single-master, multi-backup layout avoids data inconsistency;
# The official example checks port 9200 (the clustercheck health-check port). In testing, when the mariadb service was down the "/usr/bin/clustercheck" script could no longer detect it, but the xinetd-managed port 9200 still answered, so haproxy kept forwarding requests to the node whose mariadb was down; for now the check is changed to port 3306 (see the commented alternative below)
listen galera_cluster
  bind 172.30.200.30:3306
  balance  source
  mode    tcp
  server controller01 172.30.200.31:3306 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:3306 backup check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:3306 backup check inter 2000 rise 2 fall 5
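  # Commented alternative mentioned above: assuming the official clustercheck/xinetd health check on port 9200 is installed and working, the checks would instead look roughly like this
  # option httpchk
  # server controller01 172.30.200.31:3306 check port 9200 inter 2000 rise 2 fall 5
  # server controller02 172.30.200.32:3306 backup check port 9200 inter 2000 rise 2 fall 5
  # server controller03 172.30.200.33:3306 backup check port 9200 inter 2000 rise 2 fall 5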

# Provide an HA cluster access port for rabbitmq, used by the OpenStack services;
# If the OpenStack services connect to the rabbitmq cluster directly, load balancing for rabbitmq does not need to be configured here
 listen rabbitmq_cluster
   bind 172.30.200.30:5673
   mode tcp
   option tcpka
   balance roundrobin
   timeout client  3h
   timeout server  3h
   option  clitcpka
   server controller01 172.30.200.31:5672 check inter 10s rise 2 fall 5
   server controller02 172.30.200.32:5672 check inter 10s rise 2 fall 5
   server controller03 172.30.200.33:5672 check inter 10s rise 2 fall 5

# glance_api service
 listen glance_api_cluster
  bind 172.30.200.30:9292
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 172.30.200.31:9292 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:9292 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:9292 check inter 2000 rise 2 fall 5

# glance_registry service
 listen glance_registry_cluster
  bind 172.30.200.30:9191
  balance  source
  option  tcpka
  option  tcplog
  server controller01 172.30.200.31:9191 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:9191 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:9191 check inter 2000 rise 2 fall 5

# keystone_admin_internal_api service
 listen keystone_admin_cluster
  bind 172.30.200.30:35357
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 172.30.200.31:35357 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:35357 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:35357 check inter 2000 rise 2 fall 5

# keystone_public_api service
 listen keystone_public_cluster
  bind 172.30.200.30:5000
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 172.30.200.31:5000 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:5000 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:5000 check inter 2000 rise 2 fall 5

# AWS EC2-compatible API
 listen nova_ec2_api_cluster
  bind 172.30.200.30:8773
  balance  source
  option  tcpka
  option  tcplog
  server controller01 172.30.200.31:8773 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:8773 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:8773 check inter 2000 rise 2 fall 5

 listen nova_compute_api_cluster
  bind 172.30.200.30:8774
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 172.30.200.31:8774 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:8774 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:8774 check inter 2000 rise 2 fall 5

 listen nova_placement_cluster
  bind 172.30.200.30:8778
  balance  source
  option  tcpka
  option  tcplog
  server controller01 172.30.200.31:8778 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:8778 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:8778 check inter 2000 rise 2 fall 5

 listen nova_metadata_api_cluster
  bind 172.30.200.30:8775
  balance  source
  option  tcpka
  option  tcplog
  server controller01 172.30.200.31:8775 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:8775 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:8775 check inter 2000 rise 2 fall 5

 listen nova_vncproxy_cluster
  bind 172.30.200.30:6080
  balance  source
  option  tcpka
  option  tcplog
  server controller01 172.30.200.31:6080 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:6080 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:6080 check inter 2000 rise 2 fall 5

 listen neutron_api_cluster
  bind 172.30.200.30:9696
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 172.30.200.31:9696 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:9696 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:9696 check inter 2000 rise 2 fall 5

 listen cinder_api_cluster
  bind 172.30.200.30:8776
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 172.30.200.31:8776 check inter 2000 rise 2 fall 5
  server controller02 172.30.200.32:8776 check inter 2000 rise 2 fall 5
  server controller03 172.30.200.33:8776 check inter 2000 rise 2 fall 5
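
# Optionally, before restarting haproxy, the syntax of the new configuration can be checked on each node:
[root@controller01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg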

3. Configure kernel parameters

# Modify kernel parameters on all controller nodes; controller01 is used as the example;
# net.ipv4.ip_nonlocal_bind: whether binding to a non-local IP is allowed; determines whether the haproxy instances can bind to the vip and fail over with it;
# net.ipv4.ip_forward: whether IP forwarding is allowed
[root@controller01 ~]# echo "net.ipv4.ip_nonlocal_bind = 1" >>/etc/sysctl.conf
[root@controller01 ~]# echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
[root@controller01 ~]# sysctl -p
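# Optionally confirm that both parameters are now in effect
[root@controller01 ~]# sysctl net.ipv4.ip_nonlocal_bind net.ipv4.ip_forward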

4. Start haproxy

# Enabling haproxy at boot is optional; once the haproxy resource has been added to pacemaker (below), pacemaker controls whether the haproxy service runs on each node
# [root@controller01 ~]# systemctl enable haproxy
[root@controller01 ~]# systemctl restart haproxy
[root@controller01 ~]# systemctl status haproxy

Access: http://172.30.200.30:1080/

5. Configure pcs resources

# Operate on any one controller node; controller01 is used as the example;
# Add the lb-haproxy-clone resource
[root@controller01 ~]# pcs resource create lb-haproxy systemd:haproxy --clone 
[root@controller01 ~]# pcs resource

# Set the resource start order: vip first, then lb-haproxy-clone;
# Resource constraints can be viewed with "cibadmin --query --scope constraints"
[root@controller01 ~]# pcs constraint order start vip then lb-haproxy-clone kind=Optional

# The official guide recommends running the vip on the node where haproxy is active; colocating lb-haproxy-clone with vip constrains the two resources to one node;
# Once constrained, from the resource point of view pcs stops haproxy on the nodes that do not currently hold the vip
[root@controller01 ~]# pcs constraint colocation add lb-haproxy-clone with vip
[root@controller01 ~]# pcs resource

The resource settings can be reviewed via the High Availability Management web UI, as follows:

