This article uses virtual machines with ZeroTier installed to create a virtual NIC, so that VMs running in different VMware hosts can communicate with one another; look up the ZeroTier installation procedure yourself.
Reference: ① OpenStack高可用集群部署方案(train版)—基礎配置 - 簡書 (jianshu.com)
一、Node planning
controller01
Configuration:
6 GB RAM, 4 vCPUs, 30 GB disk, 1 NAT NIC, 1 ZeroTier virtual NIC
Network:
NAT: 192.168.200.10
ZeroTier: 192.168.100.10
controller02
Configuration:
6 GB RAM, 4 vCPUs, 30 GB disk, 1 NAT NIC, 1 ZeroTier virtual NIC
Network:
NAT: 192.168.200.11
ZeroTier: 192.168.100.11
compute01
Configuration:
4 GB RAM, 4 vCPUs, 30 GB disk, 1 NAT NIC, 1 ZeroTier virtual NIC
Network:
NAT: 192.168.200.20
ZeroTier: 192.168.100.20
compute02
Configuration:
4 GB RAM, 4 vCPUs, 30 GB disk, 1 NAT NIC, 1 ZeroTier virtual NIC
Network:
NAT: 192.168.200.21
ZeroTier: 192.168.100.21
#High availability uses pacemaker + haproxy: pacemaker provides resource management and the VIP (virtual IP), haproxy provides reverse proxying and load balancing
Pacemaker HA VIP: 192.168.100.100
| Node | controller01 | controller02 | compute01 | compute02 |
| Components | mysql | mysql | libvirtd.service, openstack-nova-compute | libvirtd.service, openstack-nova-compute |
| | Keepalived | Keepalived | neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent | neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent |
| | RabbitMQ | RabbitMQ | | |
| | memcached | memcached | | |
| | Etcd | Etcd | | |
| | pacemaker | pacemaker | | |
| | haproxy | haproxy | | |
| | keystone | keystone | | |
| | glance | glance | | |
| | placement | placement | | |
| | openstack-nova-api, openstack-nova-scheduler, openstack-nova-conductor, openstack-nova-novncproxy | openstack-nova-api, openstack-nova-scheduler, openstack-nova-conductor, openstack-nova-novncproxy | | |
| | neutron-server | neutron-server | | |
| | dashboard | dashboard | | |
| Service | User | Password |
| MySQL database | root | 000000 |
| | backup | backup |
| | keystone | KEYSTONE_DBPASS |
| | glance | GLANCE_DBPASS |
| | placement | PLACEMENT_DBPASS |
| | neutron | NEUTRON_DBPASS |
| keystone | admin | admin |
| | keystone | keystone |
| | glance | glance |
| | placement | placement |
| | nova | nova |
| rabbitmq | openstack | 000000 |
| pacemaker web https://192.168.100.10:2224/ | hacluster | 000000 |
| HAProxy web http://192.168.100.100:1080/ | admin | admin |
二、Basic HA environment configuration
1、Environment initialization
First configure the NAT NIC address on all four nodes and make sure they can reach the Internet, then proceed as follows.
#####All nodes#####
#Set the hostname of the corresponding node
hostnamectl set-hostname controller01
hostnamectl set-hostname controller02
hostnamectl set-hostname compute01
hostnamectl set-hostname compute02
#Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's\SELINUX=enforcing\SELINUX=disabled\' /etc/selinux/config
setenforce 0
#Install basic tools
yum install vim wget net-tools lsof -y
#Install and configure ZeroTier
curl -s https://install.zerotier.com | sudo bash
zerotier-cli join a0cbf4b62a1c903e
//Replace this with your own network ID (requested on the ZeroTier website)
//After joining, the node must be approved in the ZeroTier web console, otherwise it gets no IP address
#Configure a moon node to improve connection speed
mkdir /var/lib/zerotier-one/moons.d
cd /var/lib/zerotier-one/moons.d
wget --no-check-certificate https://baimafeima1.coding.net/p/linux-openstack-jiaoben/d/openstack-T/git/raw/master/000000986a8a957f.moon
systemctl restart zerotier-one.service
systemctl enable zerotier-one.service
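To confirm that each node has actually joined the network and received its managed address, a quick check could look like the following (a sketch; the expected 192.168.100.x address is the one from the planning table):
zerotier-cli status //should report ONLINE
zerotier-cli listnetworks //the joined network should show OK and the assigned 192.168.100.x address
ip a | grep 192.168.100. //the managed address should be present on the zt* interface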
#Configure hosts
cat >> /etc/hosts <<EOF
192.168.100.10 controller01
192.168.100.11 controller02
192.168.100.20 compute01
192.168.100.21 compute02
EOF
#Configure passwordless SSH between the nodes (a minimal sketch follows)
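The original does not list the commands for this step; a minimal sketch, assuming root logins and the hostnames configured above, is:
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa //generate a key pair without a passphrase
for host in controller01 controller02 compute01 compute02; do ssh-copy-id root@$host; done //push the public key to every node (repeat on each node)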
##Set up time synchronization; controller01 acts as the time server for the other nodes
#Configuration on the controller01 node
yum install chrony -y
sed -i '3,6d' /etc/chrony.conf
sed -i '3a\server ntp3.aliyun.com iburst\' /etc/chrony.conf
sed -i 's\#allow 192.168.0.0/16\allow all\' /etc/chrony.conf
sed -i 's\#local stratum 10\local stratum 10\' /etc/chrony.conf
systemctl enable chronyd.service
systemctl restart chronyd.service
#Configuration on the other nodes
yum install chrony -y
sed -i '3,6d' /etc/chrony.conf
sed -i '3a\server controller01 iburst\' /etc/chrony.conf
systemctl restart chronyd.service
systemctl enable chronyd.service
chronyc sources -v
#Kernel parameter tuning on the control nodes: allow binding to non-local IPs so that a running HAProxy instance can bind to the VIP
modprobe br_netfilter
echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >>/etc/sysctl.conf
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
sysctl -p
#Install the Train release repositories and packages
yum install centos-release-openstack-train -y
yum upgrade
yum clean all
yum makecache
yum install python-openstackclient -y
yum install openstack-utils -y
#openstack-selinux (optional; originally skipped)
yum install openstack-selinux -y
If a [*] appears in the chronyc output (as in the screenshot), time synchronization succeeded.
2、MariaDB dual-master HA configuration (MySQL-HA)
Dual master + keepalived
2.1、Install the database
#####controller#####
Install the database on all controller nodes
yum install mariadb mariadb-server python2-PyMySQL -y
systemctl restart mariadb.service
systemctl enable mariadb.service
#Install some required build dependencies
yum -y install gcc gcc-c++ gcc-g77 ncurses-devel bison libaio-devel cmake libnl* libpopt* popt-static openssl-devel
2.2 Initialize MariaDB: set the database password on all control nodes
######All controllers#####
mysql_secure_installation
#Enter the current root password (press Enter, there is none yet)
Enter current password for root (enter for none):
#Set a root password?
Set root password? [Y/n] y
#New password:
New password:
#Re-enter the new password:
Re-enter new password:
#Remove anonymous users?
Remove anonymous users? [Y/n] y
#Disallow remote root login?
Disallow root login remotely? [Y/n] n
#Remove the test database and access to it?
Remove test database and access to it? [Y/n] y
#Reload the privilege tables now?
Reload privilege tables now? [Y/n] y
2.3、Modify the MariaDB configuration file
Modify /etc/my.cnf on all control nodes
######controller01######
Make sure /etc/my.cnf contains the following parameters; add them manually if missing, then restart the mysql service.
[mysqld]
log-bin=mysql-bin #enable binary logging
server-id=1 #server ID (must be different on the two nodes)
systemctl restart mariadb #restart the database
mysql -uroot -p000000 #log in to the database
grant replication slave on *.* to 'backup'@'%' identified by 'backup'; flush privileges; #create a user for replication
show master status; #check the master status; note down File and Position, they are needed when configuring controller02 to replicate from controller01.
######Configure controller02######
Make sure /etc/my.cnf contains the following parameters; add them manually if missing, then restart the mysql service.
[mysqld]
log-bin=mysql-bin #enable binary logging
server-id=2 #server ID (must be different on the two nodes)
systemctl restart mariadb #restart the database
mysql -uroot -p000000 #log in to the database
grant replication slave on *.* to 'backup'@'%' identified by 'backup'; flush privileges; #create a user for replication
show master status; #check controller02's master status; note down File and Position, they are needed later when configuring controller01 to replicate from controller02.
change master to master_host='192.168.100.10',master_user='backup',master_password='backup',master_log_file='mysql-bin.000001',master_log_pos=639;
#Configure controller02 to replicate from controller01; master_log_file='mysql-bin.000001',master_log_pos=639 are the values recorded on controller01 above (adjust to your own output)
exit; #leave the database
systemctl restart mariadb #restart the database
mysql -uroot -p000000
show slave status \G #check the slave status
//Slave_IO_Running: Yes
//Slave_SQL_Running: Yes
//When both show Yes, controller02 is replicating from controller01 successfully.
//At this point the master/slave setup with controller01 as master and controller02 as slave is complete.
######controller02######
Enter the database and check its master status
mysql -uroot -p000000
show master status;
######controller01######
#Make controller01 and controller02 masters of each other (i.e. dual master)
mysql -uroot -p000000;
change master to master_host='192.168.100.11',master_user='backup',master_password='backup',master_log_file='mysql-bin.000002',master_log_pos=342;
#master_log_file='mysql-bin.000002',master_log_pos=342 must match the values shown by controller02 above
exit;
######controller01#######
systemctl restart mariadb #restart the database
mysql -uroot -p000000
show slave status \G #check the slave status
2.4、Test replication (do it yourself)
Create a database on controller01 and check that controller02 has replicated it; then do the reverse, create a database on controller02 and check it on controller01.
Replication succeeded
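A minimal sketch of such a test (test_ha is a hypothetical database name):
mysql -uroot -p000000 -e "create database test_ha;" //run on controller01
mysql -uroot -p000000 -e "show databases like 'test_ha';" //run on controller02, the database should appear
mysql -uroot -p000000 -e "drop database test_ha;" //clean up on either node; the drop replicates as well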
2.5、Test high availability with keepalived
#####controller01######
ip link set multicast on dev ztc3q6qjqu //enable multicast on the ZeroTier virtual NIC
wget http://www.keepalived.org/software/keepalived-2.1.5.tar.gz
tar -zxvf keepalived-2.1.5.tar.gz
cd keepalived-2.1.5
./configure --prefix=/usr/local/keepalived #install into /usr/local/keepalived
make && make install
#Edit the configuration file
Note: keepalived has a single configuration file, keepalived.conf, with three main sections:
global_defs, vrrp_instance and virtual_server.
global_defs: notification targets on failure and the machine identifier.
vrrp_instance: defines the VIP exposed to clients and its properties.
virtual_server: virtual server definition
mkdir -p /etc/keepalived/
vim /etc/keepalived/keepalived.conf
#Write the following configuration
! Configuration File for keepalived
global_defs {
router_id MySQL-ha
}
vrrp_instance VI_1 {
state BACKUP #both machines are configured as BACKUP here
interface ztc3q6qjqu
virtual_router_id 51
priority 100 #priority; set it to 90 on the other machine
advert_int 1
nopreempt #non-preemptive; set only on the higher-priority machine, not on the lower-priority one
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.100.200
}
}
virtual_server 192.168.100.200 3306 {
delay_loop 2 #check real_server state every 2 seconds
lb_algo wrr #LVS scheduling algorithm
lb_kind DR #LVS mode
persistence_timeout 60 #session persistence time
protocol TCP
real_server 192.168.100.10 3306 {
weight 3
notify_down /usr/local/MySQL/bin/MySQL.sh #script executed when the service is detected as down
TCP_CHECK {
connect_timeout 10 #connection timeout
nb_get_retry 3 #number of retries
delay_before_retry 3 #interval between retries
connect_port 3306 #health-check port
}
}
}
#####controller01
#Write the check script
mkdir -p /usr/local/MySQL/bin/
vi /usr/local/MySQL/bin/MySQL.sh
#Script content:
#!/bin/sh
pkill keepalived
#Make it executable
chmod +x /usr/local/MySQL/bin/MySQL.sh
#Start keepalived
/usr/local/keepalived/sbin/keepalived -D
systemctl enable keepalived.service //start at boot
#####controller02#####
#Install keepalived-2.1.5
ip link set multicast on dev ztc3q6qjqu //enable multicast on the ZeroTier virtual NIC
wget http://www.keepalived.org/software/keepalived-2.1.5.tar.gz
tar -zxvf keepalived-2.1.5.tar.gz
cd keepalived-2.1.5
./configure --prefix=/usr/local/keepalived #install into /usr/local/keepalived
make && make install
mkdir -p /etc/keepalived/
vim /etc/keepalived/keepalived.conf
#Write the following configuration
! Configuration File for keepalived
global_defs {
router_id MySQL-ha
}
vrrp_instance VI_1 {
state BACKUP
interface ztc3q6qjqu
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.100.200
}
}
virtual_server 192.168.100.200 3306 {
delay_loop 2
lb_algo wrr
lb_kind DR
persistence_timeout 60
protocol TCP
real_server 192.168.100.11 3306 {
weight 3
notify_down /usr/local/MySQL/bin/MySQL.sh
TCP_CHECK {
connect_timeout 10
nb_get_retry 3
delay_before_retry 3
connect_port 3306
}
}
}
#####controller02#####
#Write the check script
mkdir -p /usr/local/MySQL/bin/
vi /usr/local/MySQL/bin/MySQL.sh
#Script content:
#!/bin/sh
pkill keepalived
#Make it executable
chmod +x /usr/local/MySQL/bin/MySQL.sh
#Start keepalived
/usr/local/keepalived/sbin/keepalived -D
systemctl enable keepalived.service //start at boot
2.6 Test logging in to MySQL through the VIP
mysql -h192.168.100.200 -ubackup -pbackup
#Stop keepalived again afterwards
systemctl stop keepalived
systemctl disable keepalived
The database HA setup works.
This section only tested whether HA over ZeroTier is feasible; the rest of this article uses Pacemaker to manage the VIP, and keepalived is dropped.
3、RabbitMQ cluster (control nodes)
#####All controller nodes######
yum install erlang rabbitmq-server python-memcached -y
systemctl enable rabbitmq-server.service
#####controller01#####
systemctl start rabbitmq-server.service
rabbitmqctl cluster_status
scp /var/lib/rabbitmq/.erlang.cookie controller02:/var/lib/rabbitmq/
######controller02######
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie #fix the owner/group of the .erlang.cookie file on controller02
systemctl start rabbitmq-server #start the rabbitmq service on controller02
#Build the cluster; controller02 joins as a ram node
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller01
rabbitmqctl start_app
#Check the cluster status
rabbitmqctl cluster_status
#####controller01#####
# Create the account and set its password on any node, controller01 in this example
rabbitmqctl add_user openstack 000000
rabbitmqctl set_user_tags openstack administrator
rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
#Enable mirrored (HA) queues
rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
#List the queue policies
rabbitmqctl list_policies
4、Memcached (control nodes)
#####All control nodes
#Install the memcached packages
yum install memcached python-memcached -y
#Listen on the local addresses plus the node's own hostname
//controller01
sed -i 's\OPTIONS="-l 127.0.0.1,::1"\OPTIONS="-l 127.0.0.1,::1,controller01"\' /etc/sysconfig/memcached
sed -i 's\CACHESIZE="64"\CACHESIZE="1024"\' /etc/sysconfig/memcached
//controller02
sed -i 's\OPTIONS="-l 127.0.0.1,::1"\OPTIONS="-l 127.0.0.1,::1,controller02"\' /etc/sysconfig/memcached
sed -i 's\CACHESIZE="64"\CACHESIZE="1024"\' /etc/sysconfig/memcached
#Enable at boot (all controller nodes)
systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service
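To verify that both instances answer on their own hostnames, memcached's stats command can be queried by hand (a sketch, assuming nc is installed):
printf 'stats\r\nquit\r\n' | nc controller01 11211 | head -n 5
printf 'stats\r\nquit\r\n' | nc controller02 11211 | head -n 5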
5、Etcd cluster (control nodes)
#####All controller nodes#####
yum install -y etcd
cp -a /etc/etcd/etcd.conf{,.bak} //back up the configuration file
#####controller01#####
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.10:2379,http://127.0.0.1:2379"
ETCD_NAME="controller01"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.10:2379"
ETCD_INITIAL_CLUSTER="controller01=http://192.168.100.10:2380,controller02=http://192.168.100.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#####controller02#####
cat > /etc/etcd/etcd.conf <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.11:2379,http://127.0.0.1:2379"
ETCD_NAME="controller02"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.11:2379"
ETCD_INITIAL_CLUSTER="controller01=http://192.168.100.10:2380,controller02=http://192.168.100.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
#####controller01######
#Modify etcd.service
vim /usr/lib/systemd/system/etcd.service
#Change it to the following
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd \
--name=\"${ETCD_NAME}\" \
--data-dir=\"${ETCD_DATA_DIR}\" \
--listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" \
--listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" \
--initial-advertise-peer-urls=\"${ETCD_INITIAL_ADVERTISE_PEER_URLS}\" \
--advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" \
--initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" \
--initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" \
--initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""
Restart=on-failure
LimitNOFILE=65536
#Copy the unit file to controller02
scp -rp /usr/lib/systemd/system/etcd.service controller02:/usr/lib/systemd/system/
#####All controllers#####
#Enable at boot
systemctl enable etcd
systemctl restart etcd
systemctl status etcd
#Verify
etcdctl cluster-health
etcdctl member list
Both nodes are healthy and controller01 has become the leader.
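Besides cluster-health, writing a key on one node and reading it back on the other confirms replication (v2 API, as used above; /test is an arbitrary key):
etcdctl set /test ha-ok //write on controller01
etcdctl get /test //read on controller02; the value should match
etcdctl rm /test //clean up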
6、Use the open-source Pacemaker cluster stack as the HA resource manager
#####All controllers#####
yum install pacemaker pcs corosync fence-agents resource-agents -y
# Start the pcs service
systemctl enable pcsd
systemctl start pcsd
#Set the password of the cluster administrator hacluster
echo 000000 | passwd --stdin hacluster
# Authentication (on controller01)
#Authenticate the nodes and build the cluster, using the password set in the previous step
pcs cluster auth controller01 controller02 -u hacluster -p 000000 --force
#Create and name the cluster
pcs cluster setup --force --name openstack-cluster-01 controller01 controller02
Start the pacemaker cluster
#####controller01#####
pcs cluster start --all
pcs cluster enable --all
#Useful commands
pcs cluster status //show the cluster status
pcs status corosync //corosync is the underlying layer that synchronizes state information
corosync-cmapctl | grep members //list the members
pcs resource //list the resources
Set the high-availability properties
#####controller01#####
#Set sensible limits on the retained policy-engine inputs, errors and warnings
pcs property set pe-warn-series-max=1000 \
pe-input-series-max=1000 \
pe-error-series-max=1000
#cluster-recheck-interval defines how often certain pacemaker operations run and defaults to 15min; 5min or 3min is recommended
pcs property set cluster-recheck-interval=5min
pcs property set stonith-enabled=false
#Only two control nodes are used here because of limited resources, so quorum cannot be established; the no-quorum policy must be ignored
pcs property set no-quorum-policy=ignore
Configure the VIP
#####controller01#####
pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.100.100 cidr_netmask=24 op monitor interval=30s
#Check the cluster resources and the generated VIP
pcs resource
ip a
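An optional failover test: putting the node that currently holds the VIP into standby should move 192.168.100.100 to the other controller within a few seconds (a sketch; pcs 0.9.x on CentOS 7 uses `pcs cluster standby`, newer releases use `pcs node standby`):
pcs cluster standby controller01 //evacuate resources from controller01
ip a | grep 192.168.100.100 //run on controller02: the VIP should now be there
pcs cluster unstandby controller01 //bring the node back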
High-availability management
The web UI can be reached on any control node at https://192.168.100.10:2224 with the account/password generated when building the cluster: hacluster/000000. Although the cluster was set up from the command line, it is not shown in the web UI by default; add the existing cluster manually, which only requires entering any one node of the already-built cluster, as follows.

7、Deploy HAProxy
#####All controllers#####
yum install haproxy -y
#Enable logging
mkdir /var/log/haproxy
chmod a+w /var/log/haproxy
#Edit the rsyslog configuration
vim /etc/rsyslog.conf
Uncomment the following lines:
15 $ModLoad imudp
16 $UDPServerRun 514
19 $ModLoad imtcp
20 $InputTCPServerRun 514
Add at the end:
local0.=info -/var/log/haproxy/haproxy-info.log
local0.=err -/var/log/haproxy/haproxy-err.log
local0.notice;local0.!=err -/var/log/haproxy/haproxy-notice.log
#Restart rsyslog
systemctl restart rsyslog
Configure the entries for all components (all control nodes)
VIP: 192.168.100.100
#####All controller nodes#####
cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
vim /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local0
chroot /var/lib/haproxy
daemon
group haproxy
user haproxy
maxconn 4000
pidfile /var/run/haproxy.pid
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 4000
# haproxy stats page
listen stats
bind 0.0.0.0:1080
mode http
stats enable
stats uri /
stats realm OpenStack\ Haproxy
stats auth admin:admin
stats refresh 30s
stats show-node
stats show-legends
stats hide-version
# horizon service
listen dashboard_cluster
bind 192.168.100.100:80
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:80 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:80 check inter 2000 rise 2 fall 5
#Expose an HA access port for the rabbitmq cluster, used by the OpenStack services;
#if the OpenStack services connect to the rabbitmq cluster directly, this rabbitmq load-balancing entry can be omitted
listen rabbitmq_cluster
bind 192.168.100.100:5673
mode tcp
option tcpka
balance roundrobin
timeout client 3h
timeout server 3h
option clitcpka
server controller01 192.168.100.10:5672 check inter 10s rise 2 fall 5
server controller02 192.168.100.11:5672 check inter 10s rise 2 fall 5
# glance_api service
listen glance_api_cluster
bind 192.168.100.100:9292
balance source
option tcpka
option httpchk
option tcplog
timeout client 3h
timeout server 3h
server controller01 192.168.100.10:9292 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:9292 check inter 2000 rise 2 fall 5
# keystone_public_api service
listen keystone_public_cluster
bind 192.168.100.100:5000
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:5000 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:5000 check inter 2000 rise 2 fall 5
listen nova_compute_api_cluster
bind 192.168.100.100:8774
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:8774 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:8774 check inter 2000 rise 2 fall 5
listen nova_placement_cluster
bind 192.168.100.100:8778
balance source
option tcpka
option tcplog
server controller01 192.168.100.10:8778 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:8778 check inter 2000 rise 2 fall 5
listen nova_metadata_api_cluster
bind 192.168.100.100:8775
balance source
option tcpka
option tcplog
server controller01 192.168.100.10:8775 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:8775 check inter 2000 rise 2 fall 5
listen nova_vncproxy_cluster
bind 192.168.100.100:6080
balance source
option tcpka
option tcplog
server controller01 192.168.100.10:6080 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:6080 check inter 2000 rise 2 fall 5
listen neutron_api_cluster
bind 192.168.100.100:9696
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:9696 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:9696 check inter 2000 rise 2 fall 5
listen cinder_api_cluster
bind 192.168.100.100:8776
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.100.10:8776 check inter 2000 rise 2 fall 5
server controller02 192.168.100.11:8776 check inter 2000 rise 2 fall 5
# mariadb service;
#controller01 is the master and controller02 the backup; a single-master layout avoids data inconsistency;
#the official example checks port 9200 (the heartbeat), but in testing, when the mariadb service is down the /usr/bin/clustercheck script no longer detects it while the xinetd-controlled port 9200 stays open, so haproxy keeps forwarding requests to the dead node; for now the check is done on port 3306 instead
#listen galera_cluster
# bind 192.168.100.100:3306
# balance source
# mode tcp
# server controller01 192.168.100.10:3306 check inter 2000 rise 2 fall 5
# server controller02 192.168.100.11:3306 backup check inter 2000 rise 2 fall 5
#Copy the configuration to controller02
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
Configure kernel parameters
#####All controllers#####
#net.ipv4.ip_nonlocal_bind = 1: allow binding to non-local IPs, needed for the haproxy instances to bind to and fail over the VIP
#net.ipv4.ip_forward: allow forwarding
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
sysctl -p
Start the service
#####All controllers######
systemctl enable haproxy
systemctl restart haproxy
systemctl status haproxy
netstat -lntup | grep haproxy
The VIP's ports can be seen in the LISTEN state.
Visit http://192.168.100.100:1080 with username/password admin/admin.
rabbitmq is already installed so its entries show green; the other services are not installed yet and show red at this stage.
Set up pcs resources
#####controller01#####
#Add the lb-haproxy-clone resource
pcs resource create lb-haproxy systemd:haproxy clone
pcs resource
#####controller01#####
#Set the resource start order: vip first, then lb-haproxy-clone;
pcs constraint order start vip then lb-haproxy-clone kind=Optional
#The official recommendation is to run the vip on the node where haproxy is active, so lb-haproxy-clone is colocated with the vip;
#after this constraint, from the resource point of view, pcs stops haproxy on the nodes that currently do not hold the vip
pcs constraint colocation add lb-haproxy-clone with vip
pcs resource
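The ordering and colocation constraints just created can be reviewed with:
pcs constraint show //should list "start vip then lb-haproxy-clone" and the colocation rule
pcs status //vip and lb-haproxy-clone should be running on the same node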

Review the resource-related settings in the pacemaker HA web UI.
#####controller01#####
#Add the havip entry to /etc/hosts and distribute it to all nodes
sed -i '$a 192.168.100.100 havip' /etc/hosts
scp /etc/hosts controller02:/etc/hosts
scp /etc/hosts compute01:/etc/hosts
scp /etc/hosts compute02:/etc/hosts
三、Deploying the OpenStack Train components
1、Keystone deployment
1.1、Configure the keystone database
#####Any control node (e.g. controller01)#####
mysql -u root -p000000
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
flush privileges;
exit
Check on controller02 that the new database has been replicated.
Replication succeeded.
#####All controller nodes#####
yum install openstack-keystone httpd mod_wsgi -y
yum install openstack-utils -y
yum install python-openstackclient -y
cp /etc/keystone/keystone.conf{,.bak}
egrep -v '^$|^#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/keystone/keystone.conf cache enabled true
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller01:11211,controller02:11211
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@havip/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
1.2、Sync the database
#####controller01#####
#Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
#Check that the sync succeeded
mysql -hhavip -ukeystone -pKEYSTONE_DBPASS keystone -e "show tables";
#####controller01#####
#Generate the fernet keys and related directories under /etc/keystone/
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
#Copy the initialized keys to the other control nodes
scp -rp /etc/keystone/fernet-keys /etc/keystone/credential-keys controller02:/etc/keystone/
#####controller02#####
#After copying, fix the ownership of the fernet keys on controller02
chown -R keystone:keystone /etc/keystone/credential-keys/
chown -R keystone:keystone /etc/keystone/fernet-keys/
1.3、Bootstrap the identity service
#####controller01#####
keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-admin-url http://havip:5000/v3/ \
--bootstrap-internal-url http://havip:5000/v3/ \
--bootstrap-public-url http://havip:5000/v3/ \
--bootstrap-region-id RegionOne
1.4、Configure the HTTP server
#####controller01#####
cp /etc/httpd/conf/httpd.conf{,.bak}
sed -i "s/#ServerName www.example.com:80/ServerName ${HOSTNAME}/" /etc/httpd/conf/httpd.conf
sed -i "s/Listen\ 80/Listen\ 192.168.100.10:80/g" /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
sed -i "s/Listen\ 5000/Listen\ 192.168.100.10:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s#*:5000#192.168.100.10:5000#g" /etc/httpd/conf.d/wsgi-keystone.conf
systemctl enable httpd.service
systemctl restart httpd.service
systemctl status httpd.service
#####controller02#####
cp /etc/httpd/conf/httpd.conf{,.bak}
sed -i "s/#ServerName www.example.com:80/ServerName ${HOSTNAME}/" /etc/httpd/conf/httpd.conf
sed -i "s/Listen\ 80/Listen\ 192.168.100.11:80/g" /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
sed -i "s/Listen\ 5000/Listen\ 192.168.100.11:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s#*:5000#192.168.100.11:5000#g" /etc/httpd/conf.d/wsgi-keystone.conf
systemctl enable httpd.service
systemctl restart httpd.service
systemctl status httpd.service
1.5、Write the environment variable script
#####controller01#####
touch ~/admin-openrc.sh
cat >> ~/admin-openrc.sh<< EOF
#admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://havip:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
source ~/admin-openrc.sh
scp -rp ~/admin-openrc.sh controller02:~/
scp -rp ~/admin-openrc.sh compute01:~/
scp -rp ~/admin-openrc.sh compute02:~/
#Verify
openstack token issue
1.6、Create a new domain, project, user and role
openstack project create --domain default --description "Service Project" service //create the service project in the default domain
openstack project create --domain default --description "demo Project" demo //create the demo project in the default domain (for non-admin use)
openstack user create --domain default --password demo demo //create the demo user with password demo
openstack role create user //create the user role
openstack role add --project demo --user demo user
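As a quick check that the new demo user works, a token can be requested with its credentials (a sketch; demo-openrc.sh is a helper script introduced here, the password is the one chosen above):
cat > ~/demo-openrc.sh << EOF
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_PROJECT_NAME=demo
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://havip:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF
source ~/demo-openrc.sh
openstack token issue //a token table is returned if the user, project and role were created correctly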
1.7、Set up pcs resources
#####Any controller node#####
pcs resource create openstack-keystone systemd:httpd clone interleave=true
pcs resource
2、Glance
2.1、Create the database, user, role and endpoints
#####Any controller node#####
#Create the database
mysql -u root -p000000
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
flush privileges;
exit;
#Create the user and role
openstack user create --domain default --password glance glance //create the glance user with password glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image //create the glance service entity
#Create the glance endpoints
openstack endpoint create --region RegionOne image public http://havip:9292
openstack endpoint create --region RegionOne image internal http://havip:9292
openstack endpoint create --region RegionOne image admin http://havip:9292
2.2、Install and configure glance
#####All controller nodes#####
yum install openstack-glance -y
mkdir /var/lib/glance/images/
chown glance:nobody /var/lib/glance/images
#Back up the glance configuration file
cp /etc/glance/glance-api.conf{,.bak}
egrep -v '^$|^#' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
#controller01 parameters
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.100.10
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@havip/glance
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://havip:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://havip:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
#controller02 configuration
scp -rp /etc/glance/glance-api.conf controller02:/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.100.11
2.3、Sync the database
#####Any controller#####
su -s /bin/sh -c "glance-manage db_sync" glance
#Check the tables
mysql -hhavip -uglance -pGLANCE_DBPASS -e "use glance;show tables;"
2.4、Start the service
#####All controller nodes#####
systemctl enable openstack-glance-api.service
systemctl restart openstack-glance-api.service
systemctl status openstack-glance-api.service
lsof -i:9292
2.5、Download a cirros image to verify the glance service
######Any controller node#####
wget -c http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
openstack image create --file ~/cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros-qcow2
openstack image list
2.6、Add the pcs resource
#####Any controller#####
pcs resource create openstack-glance-api systemd:openstack-glance-api clone interleave=true
pcs resource
3、Placement
3.1、Configure the Placement database
#####Any controller
mysql -u root -p000000
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
flush privileges;
exit;
#Create the placement user and add the role
openstack user create --domain default --password placement placement //password placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
#Create the endpoints
openstack endpoint create --region RegionOne placement public http://havip:8778
openstack endpoint create --region RegionOne placement internal http://havip:8778
openstack endpoint create --region RegionOne placement admin http://havip:8778
3.2、Install and configure the placement packages
######All controllers#####
yum install openstack-placement-api -y
#Back up the placement configuration
cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
#Edit the configuration file
openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@havip/placement
openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url http://havip:5000/v3
openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
openstack-config --set /etc/placement/placement.conf keystone_authtoken password placement
scp /etc/placement/placement.conf controller02:/etc/placement/
3.3、Sync the database
#####Any controller#####
su -s /bin/sh -c "placement-manage db sync" placement //the output can be ignored
mysql -uroot -p000000 placement -e " show tables;"
3.4、Modify placement's apache configuration file
#####All controllers#####
#Back up 00-placement-api.conf
##On controller01
cp /etc/httpd/conf.d/00-placement-api.conf{,.bak}
sed -i "s/Listen\ 8778/Listen\ 192.168.100.10:8778/g" /etc/httpd/conf.d/00-placement-api.conf
sed -i "s/*:8778/192.168.100.10:8778/g" /etc/httpd/conf.d/00-placement-api.conf
##On controller02
cp /etc/httpd/conf.d/00-placement-api.conf{,.bak}
sed -i "s/Listen\ 8778/Listen\ 192.168.100.11:8778/g" /etc/httpd/conf.d/00-placement-api.conf
sed -i "s/*:8778/192.168.100.11:8778/g" /etc/httpd/conf.d/00-placement-api.conf
#Append the following to 00-placement-api.conf
vim /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
#Restart httpd
systemctl restart httpd.service
netstat -lntup|grep 8778
lsof -i:8778
3.5、Check the Placement health status
placement-status upgrade check
3.6、Set up pcs resources
The httpd service was already registered with pcs in the keystone section, and placement also runs under httpd, so nothing more needs to be added; the haproxy web UI shows the backend has come up.
4、Nova control-node cluster deployment
4.1、Create and configure the nova databases
#####Any controller node######
#Create the nova_api, nova and nova_cell0 databases and grant privileges
mysql -u root -p000000
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
flush privileges;
4.2、Create the service credentials
#####Any controller node#####
#Create the nova user
openstack user create --domain default --password nova nova //password nova
#Add the admin role to the nova user
openstack role add --project service --user nova admin
#Create the compute service
openstack service create --name nova --description "OpenStack Compute" compute
#Create the endpoints
openstack endpoint create --region RegionOne compute public http://havip:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://havip:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://havip:8774/v2.1
4.3、Install and configure the nova packages
#####All controller nodes#####
yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y
#nova-api (the main nova API service)
#nova-scheduler (the nova scheduler)
#nova-conductor (the nova database service, provides database access)
#nova-novncproxy (the nova VNC service, provides the instance console)
######controller01######
#Back up /etc/nova/nova.conf
cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.100.10
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
#The rabbitmq VIP port configured in haproxy is 5673; the haproxy-fronted rabbitmq is not used here, the rabbitmq cluster is connected directly
#openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:Zx*****@10.15.253.88:5673
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:000000@controller01:5672,openstack:000000@controller02:5672
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen_port 8774
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 8775
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.100.10
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.100.10
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@havip/nova_api
openstack-config --set /etc/nova/nova.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/nova/nova.conf cache enabled True
openstack-config --set /etc/nova/nova.conf cache memcache_servers controller01:11211,controller02:11211
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@havip/nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri http://havip:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://havip:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 192.168.100.10
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address 192.168.100.10
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.100.10
openstack-config --set /etc/nova/nova.conf vnc novncproxy_port 6080
openstack-config --set /etc/nova/nova.conf glance api_servers http://havip:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://havip:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password placement
#####controller01#####
#Copy the configuration file to controller02
scp -rp /etc/nova/nova.conf controller02:/etc/nova/
#####controller02#####
sed -i "s\192.168.100.10\192.168.100.11\g" /etc/nova/nova.conf
4.4、Sync the databases and verify
#####Any controller#####
#Sync the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
#Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
#Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
#Sync the nova database
su -s /bin/sh -c "nova-manage db sync" nova
#Verify that cell0 and cell1 are registered correctly
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
#Cells are Nova's internal scheme for partitioning compute nodes to work around database and message-queue bottlenecks; cells v2 was introduced in Newton and became a mandatory component in Ocata.
#Verify the databases
mysql -hhavip -unova -pNOVA_DBPASS -e "use nova_api;show tables;"
mysql -hhavip -unova -pNOVA_DBPASS -e "use nova;show tables;"
mysql -hhavip -unova -pNOVA_DBPASS -e "use nova_cell0;show tables;"
4.5、Start the nova services and enable them at boot
#####All controller nodes#####
systemctl enable openstack-nova-api.service
systemctl enable openstack-nova-scheduler.service
systemctl enable openstack-nova-conductor.service
systemctl enable openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service
systemctl restart openstack-nova-scheduler.service
systemctl restart openstack-nova-conductor.service
systemctl restart openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service
systemctl status openstack-nova-scheduler.service
systemctl status openstack-nova-conductor.service
systemctl status openstack-nova-novncproxy.service
netstat -tunlp | egrep '8774|8775|8778|6080'
curl http://havip:8774
4.6、Verify
#####controller#####
#List the service components and check their state;
openstack compute service list
#Show the API endpoints;
openstack catalog list
#Check the cells and the placement API; everything should report success
nova-status upgrade check
4.7、Set up pcs resources
#####Any controller node####
pcs resource create openstack-nova-api systemd:openstack-nova-api clone interleave=true
pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler clone interleave=true
pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor clone interleave=true
pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy clone interleave=true
#It is recommended to run the stateless services openstack-nova-api, openstack-nova-conductor and openstack-nova-novncproxy in active/active mode;
#services such as openstack-nova-scheduler run in active/passive mode
5、Nova compute-node cluster deployment
5.1、Install Nova
compute01:192.168.100.20
compute02:192.168.100.21
#####All compute nodes#####
yum install openstack-nova-compute -y
yum install openstack-utils -y
#Back up /etc/nova/nova.conf
cp /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
#Check whether the compute node supports hardware acceleration for virtual machines
egrep -c '(vmx|svm)' /proc/cpuinfo
0
#If the command returns a value other than 0, the node supports hardware acceleration and the setting below is not needed.
#If it returns 0, the node does not support hardware acceleration and libvirt must be configured to use QEMU instead of KVM
#Edit the [libvirt] section of /etc/nova/nova.conf; because this test environment runs inside virtual machines, set it to qemu
5.2、Deploy and configure
######compute01######
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:000000@controller01:5672,openstack:000000@controller02:5672
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.100.20
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri http://havip:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://havip:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address 192.168.100.20
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://havip:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://havip:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://havip:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password placement
#Copy the configuration file to compute02
scp -rp /etc/nova/nova.conf compute02:/etc/nova/
#####compute02#####
sed -i "s\192.168.100.20\192.168.100.21\g" /etc/nova/nova.conf
5.3、Start nova
######All compute nodes######
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
#Run on any control node: list the compute services
openstack compute service list --service nova-compute
5.4、Discover the compute hosts from the control nodes
#####Run on a controller node#####
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
####All controller nodes#####
#On all control nodes, set the automatic discovery interval to 10 minutes; adjust it to your environment
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 600
systemctl restart openstack-nova-api.service
5.5、Verify
openstack compute service list
6、Neutron control-node cluster deployment
6.1 Create the neutron database and credentials (control nodes)
#Create the database and grant privileges
mysql -u root -p000000
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
flush privileges;
exit;
#Create the user, role assignment and service
openstack user create --domain default --password neutron neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
#Create the endpoints
openstack endpoint create --region RegionOne network public http://havip:9696
openstack endpoint create --region RegionOne network internal http://havip:9696
openstack endpoint create --region RegionOne network admin http://havip:9696
6.2、Install the Neutron server (control nodes)
6.2.1、Configure neutron.conf
#####All controllers#####
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
yum install conntrack-tools -y
#####controller01######
##Configure the parameter file
#Back up /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
#Configure neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.100.10
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
#Connect to the rabbitmq cluster directly
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:000000@controller01:5672,openstack:000000@controller02:5672
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
#Enable the l3 HA feature
openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
#Maximum number of l3 agents on which an HA router is created
openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
#Minimum number of healthy l3 agents required to create an HA router
openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 1
#DHCP high availability: run a DHCP server for each network on 2 network nodes
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@havip/neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://havip:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://havip:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://havip:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password nova
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
#Copy the configuration file to the controller02 node
scp -rp /etc/neutron/neutron.conf controller02:/etc/neutron/
######controller02######
sed -i "s\192.168.100.10\192.168.100.11\g" /etc/neutron/neutron.conf
6.2.2、Configure ml2_conf.ini
#####All controllers#####
#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
#####controller01######
#Edit the configuration file
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true
scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini controller02:/etc/neutron/plugins/ml2/ml2_conf.ini
6.2.3、Configure nova to interact with neutron
#####All controllers#####
#Modify /etc/nova/nova.conf
#On all control nodes, configure the nova service to interact with the networking service
openstack-config --set /etc/nova/nova.conf neutron url http://havip:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://havip:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret neutron
6.3、Sync the database
#####controller01#####
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
mysql -hhavip -u neutron -pNEUTRON_DBPASS -e "use neutron;show tables;"
#####All controller nodes#####
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
systemctl enable neutron-server.service
systemctl restart neutron-server.service
systemctl status neutron-server.service
7、Neutron compute-node cluster deployment
7.1、Install and configure the Neutron agents (compute nodes = network nodes)
#####All compute nodes#####
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
#Back up /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
#####compute01#####
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.100.20
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:000000@controller01:5672,openstack:000000@controller02:5672
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
#Set the RPC response timeout; the default of 60s can cause timeout exceptions, raise it to 180s
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://havip:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://havip:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
scp -rp /etc/neutron/neutron.conf compute02:/etc/neutron/neutron.conf
#####compute02#####
sed -i "s\192.168.100.20\192.168.100.21\g" /etc/neutron/neutron.conf
7.2、Deploy and configure (compute nodes)
7.2.1、Configure nova.conf
#####All compute nodes#####
openstack-config --set /etc/nova/nova.conf neutron url http://havip:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://havip:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron
7.2.2、Configure ml2_conf.ini
#####compute01#####
#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true
scp -rp /etc/neutron/plugins/ml2/ml2_conf.ini compute02:/etc/neutron/plugins/ml2/ml2_conf.ini
7.2.3、Configure linuxbridge_agent.ini
######compute01######
#Back up the configuration file
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
#This environment cannot provide four separate NICs; in production, configure each network type on its own interface
#The provider network maps to the planned ens33 interface
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
#VTEP endpoint for the tenant (vxlan) tunnel network; here the node's ZeroTier address (192.168.100.x) is used, adjust it per node
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.100.20
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
scp -rp /etc/neutron/plugins/ml2/linuxbridge_agent.ini compute02:/etc/neutron/plugins/ml2/
#######On compute02######
sed -i "s#192.168.100.20#192.168.100.21#g" /etc/neutron/plugins/ml2/linuxbridge_agent.ini
7.2.4、Configure l3_agent.ini
- The l3 agent provides routing and NAT services for tenant virtual networks
#####All compute nodes#####
#Back up the configuration file
cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
7.2.5、Configure dhcp_agent.ini
#####All compute nodes#####
#Back up the configuration file
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
7.2.6、Configure metadata_agent.ini
- The metadata agent provides configuration information such as instance credentials; the metadata_proxy_shared_secret here must match the value set in /etc/nova/nova.conf on the control nodes
#####All compute nodes#####
#Back up the configuration file
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host havip
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret neutron
openstack-config --set /etc/neutron/metadata_agent.ini cache memcache_servers controller01:11211,controller02:11211
7.3、Add Linux kernel parameters
- Make sure the kernel supports bridge filtering by verifying that all of the sysctl values below are set to 1;
#####All control and compute nodes#####
#Bridge filtering requires the br_netfilter kernel module to be loaded first, otherwise the sysctl keys do not exist
modprobe br_netfilter
echo 'net.ipv4.ip_nonlocal_bind = 1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >>/etc/sysctl.conf
sysctl -p
7.4、Restart the nova-api and neutron agent services
#####All compute nodes#####
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
systemctl restart neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
systemctl status neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
#####controller01######
#Neutron service verification (control node)
#List the loaded extensions to verify that the neutron-server process started successfully
openstack extension list --network
#List the agents to verify they registered successfully
openstack network agent list
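At this point a throwaway network can be created to exercise the neutron server and agents end to end (a sketch; the names test-net/test-subnet and the 172.16.1.0/24 range are arbitrary):
source ~/admin-openrc.sh
openstack network create test-net
openstack subnet create --network test-net --subnet-range 172.16.1.0/24 test-subnet
openstack network list //the new network should be listed
openstack subnet delete test-subnet; openstack network delete test-net //remove the test objects afterwards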
7.5、Add the pcs resource
#####controller01#####
pcs resource create neutron-server systemd:neutron-server clone interleave=true
8、Horizon dashboard cluster deployment
8.1、Install and configure the dashboard
######All controllers######
yum install openstack-dashboard memcached python-memcached -y
#Back up /etc/openstack-dashboard/local_settings
cp -a /etc/openstack-dashboard/local_settings{,.bak}
grep -Ev '^$|#' /etc/openstack-dashboard/local_settings.bak >/etc/openstack-dashboard/local_settings
######controller01#########
#Configure on controller01, then copy to controller02 via scp
#Uncomment all of the settings below in the configuration file
vim /etc/openstack-dashboard/local_settings
#Specify where the dashboard is served from the web server, add:
WEBROOT = '/dashboard/'
#Configure the dashboard to use the OpenStack services on the controller nodes
sed -i 's\OPENSTACK_HOST = "127.0.0.1"\OPENSTACK_HOST = "192.168.100.100"\' /etc/openstack-dashboard/local_settings
#Allow all hosts to access the dashboard; this is insecure and should not be used in production
ALLOWED_HOSTS = ['*']
#Configure the memcached session storage service, add:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller01:11211,controller02:11211',
}
}
#Enable the Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
#Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
#Configure the API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
#Configure Default as the default domain for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
#Configure user as the default role for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
#If networking option 1 was chosen, disable support for layer-3 networking services; with option 2 they can be enabled
OPENSTACK_NEUTRON_NETWORK = {
#automatically allocated networks
'enable_auto_allocated_network': False,
#Neutron distributed virtual router (DVR)
'enable_distributed_router': False,
#FIP topology check
'enable_fip_topology_check': False,
#HA router mode
'enable_ha_router': True,
#IPv6 networks
'enable_ipv6': True,
#Neutron quota feature
'enable_quotas': True,
#RBAC policies
'enable_rbac_policy': True,
#router menu and floating IP features; can be enabled because this Neutron deployment includes layer-3 support
'enable_router': True,
#default DNS name servers
'default_dns_nameservers': [],
#provider network types offered when creating a network
'supported_provider_types': ['*'],
#provider network segmentation ID ranges, only relevant for VLAN, GRE and VXLAN network types
'segmentation_id_range': {},
#extra provider network types
'extra_provider_types': {},
#supported vnic types, used by the port-binding extension
'supported_vnic_types': ['*'],
#physical networks
'physical_networks': [],
}
#Set the time zone to Asia/Shanghai
TIME_ZONE = "Asia/Shanghai"
#Copy to controller02
scp -rp /etc/openstack-dashboard/local_settings controller02:/etc/openstack-dashboard/
8.2、Configure openstack-dashboard.conf
#####All controllers#####
cp /etc/httpd/conf.d/openstack-dashboard.conf{,.bak}
#Create a symlink for the policy files (policy.json); otherwise logging in to the dashboard fails with permission errors and a broken layout
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
#Grant access: insert WSGIApplicationGroup %{GLOBAL} after line 3
sed -i '3a WSGIApplicationGroup\ %{GLOBAL}' /etc/httpd/conf.d/openstack-dashboard.conf
######All controller nodes######
systemctl restart httpd.service memcached.service
systemctl enable httpd.service memcached.service
systemctl status httpd.service memcached.service
8.3、Test logging in
http://192.168.100.100/dashboard
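A quick reachability check of the dashboard through the haproxy VIP (the actual login has to be done in a browser with the admin/admin account created earlier):
curl -I http://192.168.100.100/dashboard/ //an HTTP 200 or a redirect to the login page means horizon is being served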