OpenStack Mitaka HA Deployment Notes


[TOC]

https://github.com/wanstack/AutoMitaka # An OpenStack HA installation script, contributed with love. Built with Python + shell, it implements the basic core functionality (flat layer-2 networking only). Feel free to fork it, and please remember to star it if you like it. Many thanks.

---
title: OpenStack Mitaka cluster installation and deployment
date: 2017-03-04 23:37
tags: OpenStack
---
==OpenStack operations & development QQ group: 94776000 — welcome==


### OpenStack Mitaka HA Deployment and Test Notes

#### I. Environment

##### 1. Host environment

```
controller(VIP) 192.168.10.100
controller01 192.168.10.101, 10.0.0.1
controller02 192.168.10.102, 10.0.0.2
controller03 192.168.10.103, 10.0.0.3
compute01 192.168.10.104, 10.0.0.4
compute02 192.168.10.105, 10.0.0.5
```
This environment is a test environment only, used mainly to verify the HA functionality. For a production deployment, segment the networks properly.

 

#### II. Base environment configuration


##### 1. Hostname resolution


```
# Set the hostname on each corresponding node:

hostnamectl set-hostname controller01
hostname controller01

hostnamectl set-hostname controller02
hostname controller02

hostnamectl set-hostname controller03
hostname controller03

hostnamectl set-hostname compute01
hostname compute01

hostnamectl set-hostname compute02
hostname compute02
```


```
# Configure host resolution on controller01:

[root@controller01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.100 controller
192.168.10.101 controller01
192.168.10.102 controller02
192.168.10.103 controller03

192.168.10.104 compute01
192.168.10.105 compute02
```

##### 2. Set up passwordless SSH trust


```
# Configure on controller01:

ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
ssh-copy-id -i .ssh/id_rsa.pub root@controller02
ssh-copy-id -i .ssh/id_rsa.pub root@controller03
ssh-copy-id -i .ssh/id_rsa.pub root@compute01
ssh-copy-id -i .ssh/id_rsa.pub root@compute02
```


```
# Copy the hosts file to the other nodes
scp /etc/hosts controller02:/etc/hosts
scp /etc/hosts controller03:/etc/hosts
scp /etc/hosts compute01:/etc/hosts
scp /etc/hosts compute02:/etc/hosts
```

##### 3. Yum repository configuration

All nodes in this test setup have working network access, so the Aliyun base and OpenStack repositories are used.


```
# Enable the yum cache on all controller and compute nodes
[root@controller01 ~]# cat /etc/yum.conf 
[main]
cachedir=/var/cache/yum/$basearch/$releasever
# keepcache=1 enables the cache, keepcache=0 disables it; the default is 0
keepcache=1
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release

# Base repository
yum install wget -y
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# OpenStack Mitaka repository
yum install centos-release-openstack-mitaka -y
# The default baseurl points at the CentOS mirrors; switching to Aliyun is recommended for speed
[root@controller01 yum.repos.d]# vim CentOS-OpenStack-mitaka.repo 
# CentOS-OpenStack-mitaka.repo
#
# Please see http://wiki.centos.org/SpecialInterestGroup/Cloud for more
# information

[centos-openstack-mitaka]
name=CentOS-7 - OpenStack mitaka
baseurl=http://mirrors.aliyun.com//centos/7/cloud/$basearch/openstack-mitaka/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud


# Galera (MariaDB) repository
vim mariadb.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos7-amd64/
enabled=1
gpgcheck=1
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
```

Copy both repo files to all other nodes with scp:

```
scp CentOS-OpenStack-mitaka.repo controller02:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
scp CentOS-OpenStack-mitaka.repo controller03:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
scp CentOS-OpenStack-mitaka.repo compute01:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo
scp CentOS-OpenStack-mitaka.repo compute02:/etc/yum.repos.d/CentOS-OpenStack-mitaka.repo

scp mariadb.repo controller02:/etc/yum.repos.d/mariadb.repo
scp mariadb.repo controller03:/etc/yum.repos.d/mariadb.repo
scp mariadb.repo compute01:/etc/yum.repos.d/mariadb.repo
scp mariadb.repo compute02:/etc/yum.repos.d/mariadb.repo
```
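The eight scp commands above can be collapsed into one loop. A minimal sketch in dry-run form (it only prints the commands; remove the leading `echo` to actually copy):

```
# Print the scp command for every (node, repo file) pair.
# Drop the leading "echo" to perform the copies for real.
for node in controller02 controller03 compute01 compute02; do
  for repo in CentOS-OpenStack-mitaka.repo mariadb.repo; do
    echo scp "/etc/yum.repos.d/$repo" "$node:/etc/yum.repos.d/$repo"
  done
done
```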

 

##### 4. NTP configuration
An NTP server (192.168.2.161) already exists in this environment, so it is used directly. If you do not have one, using controller as the NTP server is recommended.

```
yum install ntpdate -y
echo "*/5 * * * * /usr/sbin/ntpdate 192.168.2.161 >/dev/null 2>&1" >> /var/spool/cron/root
/usr/sbin/ntpdate 192.168.2.161
```

##### 5. Disable the firewall and SELinux

```
systemctl disable firewalld.service
systemctl stop firewalld.service
sed -i -e "s#SELINUX=enforcing#SELINUX=disabled#g" /etc/selinux/config
sed -i -e "s#SELINUXTYPE=targeted#\#SELINUXTYPE=targeted#g" /etc/selinux/config
setenforce 0
systemctl stop NetworkManager
systemctl disable NetworkManager
```

##### 6. Install and configure Pacemaker


```
# Install the following packages on all controller nodes
yum install -y pcs pacemaker corosync fence-agents-all resource-agents
# Edit the corosync configuration file
[root@controller01 ~]# cat /etc/corosync/corosync.conf
totem {
version: 2
secauth: off
cluster_name: openstack-cluster
transport: udpu
}

nodelist {
node {
ring0_addr: controller01
nodeid: 1
}
node {
ring0_addr: controller02
nodeid: 2
}
node {
ring0_addr: controller03
nodeid: 3
}
}

quorum {
provider: corosync_votequorum
# two_node: 1 applies only to 2-node clusters; do not set it with three nodes
}

logging {
to_syslog: yes
}
```

```
# Copy the configuration file to the other controller nodes
scp /etc/corosync/corosync.conf controller02:/etc/corosync/corosync.conf
scp /etc/corosync/corosync.conf controller03:/etc/corosync/corosync.conf
```


```
# Check cluster membership
corosync-cmapctl runtime.totem.pg.mrp.srp.members
```

 

```


# Enable and start pcsd on all controller nodes
systemctl enable pcsd
systemctl start pcsd

# Set the hacluster user's password on all controller nodes
echo hacluster | passwd --stdin hacluster

# [controller01] Authenticate to the cluster nodes
pcs cluster auth controller01 controller02 controller03 -u hacluster -p hacluster --force
# [controller01] Create and start the cluster
pcs cluster setup --force --name openstack-cluster controller01 controller02 controller03
pcs cluster start --all
# [controller01] Set cluster properties
pcs property set pe-warn-series-max=1000 pe-input-series-max=1000 pe-error-series-max=1000 cluster-recheck-interval=5min
# [controller01] Temporarily disable STONITH, otherwise resources cannot start
pcs property set stonith-enabled=false

# [controller01] Ignore quorum loss (acceptable for this test cluster; reconsider for a 3-node production cluster)
pcs property set no-quorum-policy=ignore

# [controller01] Create the VIP resource; the VIP can float between cluster nodes
pcs resource create vip ocf:heartbeat:IPaddr2 params ip=192.168.10.100 cidr_netmask="24" op monitor interval="30s"
```

##### 7. Install HAProxy


```
# [all controller nodes] Install the package
yum install -y haproxy

# [all controller nodes] Create the /etc/rsyslog.d/haproxy.conf file
echo "\$ModLoad imudp" >> /etc/rsyslog.d/haproxy.conf;
echo "\$UDPServerRun 514" >> /etc/rsyslog.d/haproxy.conf;
echo "local3.* /var/log/haproxy.log" >> /etc/rsyslog.d/haproxy.conf;
echo "&~" >> /etc/rsyslog.d/haproxy.conf;

# [all controller nodes] Edit /etc/sysconfig/rsyslog
sed -i -e 's#SYSLOGD_OPTIONS=\"\"#SYSLOGD_OPTIONS=\"-c 2 -r -m 0\"#g' /etc/sysconfig/rsyslog

# [all controller nodes] Restart rsyslog
systemctl restart rsyslog

# Create the base HAProxy configuration
vim /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local3
chroot /var/lib/haproxy
daemon
group haproxy
maxconn 4000
pidfile /var/run/haproxy.pid
user haproxy


#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
log global
maxconn 4000
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s

# Note: HAProxy has no include directive; the service sections below are appended to this same haproxy.cfg
```
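When debugging the backend sections added later, HAProxy's statistics page makes it easy to see which servers each health check marks up or down. A hypothetical stats section (the bind address, port, and credentials are placeholders, not part of the original setup):

```
listen stats
bind 192.168.10.100:8888
mode http
stats enable
stats uri /stats
stats auth admin:admin
```

After reloading HAProxy, the page would be served at http://192.168.10.100:8888/stats.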

```
# Copy to the other controller nodes
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
```

```
# [controller01] Add the haproxy resource to the pacemaker cluster
pcs resource create haproxy systemd:haproxy --clone
# kind=Optional: the order constraint only takes effect when both resources are being started and/or stopped at the same time; changes to the first resource do not otherwise affect the second, and the resource listed first is brought up first.
pcs constraint order start vip then haproxy-clone kind=Optional
# Colocation: the vip resource determines where haproxy-clone may run
pcs constraint colocation add haproxy-clone with vip
ping -c 3 192.168.10.100
```


##### 8. Install and configure Galera


```
# Basic steps on all controller nodes: install the packages and adjust the configuration
yum install -y MariaDB-server xinetd

# Configure on all controller nodes
vim /usr/lib/systemd/system/mariadb.service
# Add the following two lines under the [Service] section:
LimitNOFILE=10000
LimitNPROC=10000

systemctl --system daemon-reload 
systemctl restart mariadb.service

# Initialize the database; running this on controller01 is enough
systemctl start mariadb
mysql_secure_installation

# Check the connection limit
show variables like 'max_connections';

# Stop the service before editing the configuration
systemctl stop mariadb

# Back up the original configuration file
cp /etc/my.cnf.d/server.cnf /etc/my.cnf.d/bak.server.cnf

```


```
# Configuration file on controller01
cat /etc/my.cnf.d/server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
max_connections = 4096
bind-address= 192.168.10.101

default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name= controller01
wsrep_node_address= 192.168.10.101
wsrep_sst_method=rsync
```


```
# Configuration file on controller02
cat /etc/my.cnf.d/server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
max_connections = 4096
bind-address= 192.168.10.102

default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name= controller02
wsrep_node_address= 192.168.10.102
wsrep_sst_method=rsync
```


```
# Configuration file on controller03
cat /etc/my.cnf.d/server.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
max_connections = 4096
bind-address= 192.168.10.103

default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name= controller03
wsrep_node_address= 192.168.10.103
wsrep_sst_method=rsync
```
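The three server.cnf files above differ only in `bind-address`, `wsrep_node_name`, and `wsrep_node_address`, so they can be generated from a single template. A sketch (hypothetical helper; it writes to /tmp/galera-cnf purely for illustration — copy each result to /etc/my.cnf.d/server.cnf on the matching node):

```
# Generate one server.cnf per controller from a shared template.
mkdir -p /tmp/galera-cnf
cat > /tmp/galera-cnf/server.cnf.tpl <<'EOF'
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
binlog_format=ROW
max_connections = 4096
bind-address= @IP@

default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=122M

wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
wsrep_cluster_name="galera_cluster"
wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name= @NODE@
wsrep_node_address= @IP@
wsrep_sst_method=rsync
EOF
# Substitute the per-node name and IP into a copy of the template.
for pair in controller01:192.168.10.101 controller02:192.168.10.102 controller03:192.168.10.103; do
  node=${pair%%:*}; ip=${pair#*:}
  sed -e "s/@NODE@/$node/" -e "s/@IP@/$ip/" \
    /tmp/galera-cnf/server.cnf.tpl > "/tmp/galera-cnf/server.cnf.$node"
done
```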


```
# Run on controller01 to bootstrap the cluster
galera_new_cluster

# Watch the logs
tail -f /var/log/messages

# Start MariaDB on the other controller nodes
systemctl enable mariadb
systemctl start mariadb
```


```
# Add the health-check user and verify the cluster
mysql -uroot -popenstack -e "use mysql;INSERT INTO user(Host, User) VALUES('192.168.10.100', 'haproxy_check');FLUSH PRIVILEGES;"
mysql -uroot -popenstack -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller01' IDENTIFIED BY '"openstack"'";
mysql -uroot -popenstack -h 192.168.10.100 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
```


```
# Configure HAProxy for Galera
# Append the following to /etc/haproxy/haproxy.cfg on all controller nodes

cat /etc/haproxy/haproxy.cfg
listen galera_cluster
bind 192.168.10.100:3306
balance source
#option mysql-check user haproxy_check
server controller01 192.168.10.101:3306 check port 9200 inter 2000 rise 2 fall 5
server controller02 192.168.10.102:3306 check port 9200 inter 2000 rise 2 fall 5
server controller03 192.168.10.103:3306 check port 9200 inter 2000 rise 2 fall 5


# Copy the configuration file to the other controller nodes
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/
```
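The `check port 9200` lines above rely on a MySQL health-check service listening on TCP 9200 on each controller; this is also why `xinetd` was installed alongside MariaDB-server. The document does not show that service, so here is a commonly used sketch, assuming a `clustercheck` script (for example the one distributed with Percona's Galera tooling) exists at /usr/bin/clustercheck:

```
# /etc/xinetd.d/mysqlchk on every controller node (hypothetical file)
service mysqlchk
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    port            = 9200
    wait            = no
    user            = nobody
    server          = /usr/bin/clustercheck
    log_on_failure  += USERID
    only_from       = 0.0.0.0/0
    per_source      = UNLIMITED
}
```

Add `mysqlchk 9200/tcp` to /etc/services if it is not already listed, then restart xinetd (`systemctl restart xinetd`). HAProxy will then route traffic only to nodes that clustercheck reports as synced.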


```
# Script to restart the pacemaker/corosync cluster
vim restart-pcs-cluster.sh
#!/bin/sh
pcs cluster stop --all
sleep 10
#ps aux|grep "pcs cluster stop --all"|grep -v grep|awk '{print $2 }'|xargs kill
for i in 01 02 03; do ssh controller$i pcs cluster kill; done
pcs cluster stop --all
pcs cluster start --all
sleep 5
watch -n 0.5 pcs resource
echo "pcs resource"
pcs resource
pcs resource|grep Stop
pcs resource|grep FAILED


# Run the script
bash restart-pcs-cluster.sh 
```

##### 9. Install and configure the rabbitmq-server cluster

```
# All controller nodes
yum install -y rabbitmq-server


# Copy the Erlang cookie from controller01 to the other controller nodes
scp /var/lib/rabbitmq/.erlang.cookie root@controller02:/var/lib/rabbitmq/.erlang.cookie
scp /var/lib/rabbitmq/.erlang.cookie root@controller03:/var/lib/rabbitmq/.erlang.cookie

# On the nodes other than controller01, fix ownership and permissions
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie


# Enable and start the service
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

# Check the cluster status on any controller node
rabbitmqctl cluster_status

# On the nodes other than controller01, join the cluster
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@controller01
rabbitmqctl start_app


# On any node, set ha-mode so queues are mirrored
rabbitmqctl cluster_status;
rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'

# On any node, create the openstack user
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```

##### 10. Install and configure memcached

```
yum install -y memcached

# Configuration on controller01
cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.101,::1"

# Configuration on controller02
cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.102,::1"

# Configuration on controller03
cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.10.103,::1"

# Enable and start the service on all nodes
systemctl enable memcached.service
systemctl start memcached.service
```

#### III. Install and configure the OpenStack services


```
# Install the OpenStack base packages on all controller and compute nodes
yum upgrade -y
yum install -y python-openstackclient openstack-selinux openstack-utils
```

##### 1. Install OpenStack Identity (Keystone)

```

# Create the keystone database on any node
mysql -uroot -popenstack -e "CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '"keystone"';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '"keystone"';
FLUSH PRIVILEGES;"


# Install the keystone packages on all controller nodes
yum install -y openstack-keystone httpd mod_wsgi

# Generate a temporary admin token on any node
openssl rand -hex 10
8464d030a1f7ac3f7207

# Edit the keystone configuration
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 8464d030a1f7ac3f7207
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@controller/keystone
#openstack-config --set /etc/keystone/keystone.conf token provider fernet

openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_durable_queues true

# Copy the configuration to the other controller nodes
scp /etc/keystone/keystone.conf controller02:/etc/keystone/keystone.conf
scp /etc/keystone/keystone.conf controller03:/etc/keystone/keystone.conf


# Set ServerName on each node (run only the matching line on each host):
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller01"'#g' /etc/httpd/conf/httpd.conf
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller02"'#g' /etc/httpd/conf/httpd.conf
sed -i -e 's#\#ServerName www.example.com:80#ServerName '"controller03"'#g' /etc/httpd/conf/httpd.conf

 

# Configuration on controller01
vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 192.168.10.101:5000
Listen 192.168.10.101:35357
<VirtualHost 192.168.10.101:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

<VirtualHost 192.168.10.101:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

# Configuration on controller02
vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 192.168.10.102:5000
Listen 192.168.10.102:35357
<VirtualHost 192.168.10.102:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

<VirtualHost 192.168.10.102:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

# Configuration on controller03
vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 192.168.10.103:5000
Listen 192.168.10.103:35357
<VirtualHost 192.168.10.103:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

<VirtualHost 192.168.10.103:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>

 

 

# Add the HAProxy configuration
vim /etc/haproxy/haproxy.cfg
listen keystone_admin_cluster
bind 192.168.10.100:35357
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:35357 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:35357 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:35357 check inter 2000 rise 2 fall 5
listen keystone_public_internal_cluster
bind 192.168.10.100:5000
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:5000 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:5000 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:5000 check inter 2000 rise 2 fall 5

# Copy the HAProxy configuration to the other controller nodes
scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg

# [any node] Populate the database
su -s /bin/sh -c "keystone-manage db_sync" keystone


# [any node / controller01] Initialize the Fernet keys and distribute them to the other nodes
#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# On the other controller nodes
#mkdir -p /etc/keystone/fernet-keys/

# On controller01
#scp /etc/keystone/fernet-keys/* root@controller02:/etc/keystone/fernet-keys/
#scp /etc/keystone/fernet-keys/* root@controller03:/etc/keystone/fernet-keys/

# On the other controller nodes
chown keystone:keystone /etc/keystone/fernet-keys/*

# [any node] Add the pacemaker resource; OpenStack resources are independent of the haproxy resource and can run active/active
# interleave=true interleaves start/stop of the clone copies, changing the order constraints between clone instances: each copy can start or stop as soon as its own dependencies are satisfied, without waiting for the clones on other nodes.
# With interleave=false, order constraints are evaluated cluster-wide and a copy is affected by the state on other nodes; with true it is not.
pcs resource create openstack-keystone systemd:httpd --clone interleave=true
bash restart-pcs-cluster.sh

# Export the temporary token on any node
export OS_TOKEN=8464d030a1f7ac3f7207
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

# [any node] Create the service entity and API endpoints
openstack service create --name keystone --description "OpenStack Identity" identity

openstack endpoint create --region RegionOne identity public http://controller:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller:35357/v3

# [any node] Create the admin project and user
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password admin admin
openstack role create admin
openstack role add --project admin --user admin admin

### [any node] Create the service project
openstack project create --domain default --description "Service Project" service

# On any node, create the demo project and user
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password demo demo
openstack role create user
openstack role add --project demo --user demo user


# Generate the keystonerc_admin script
echo "export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_admin)]\$ '
">/root/keystonerc_admin
chmod +x /root/keystonerc_admin

# Generate the keystonerc_demo script
echo "export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export PS1='[\u@\h \W(keystone_demo)]\$ '
">/root/keystonerc_demo
chmod +x /root/keystonerc_demo


source keystonerc_admin
### check
openstack token issue

source keystonerc_demo
### check
openstack token issue
```

##### 2. Install the OpenStack Image (Glance) cluster

```
# [any node] Create the database
mysql -uroot -popenstack -e "CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '"glance"';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '"glance"';
FLUSH PRIVILEGES;"

 

# [any node] Create the user, service, and endpoints
source keystonerc_admin 
openstack user create --domain default --password glance glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

# Install the glance packages on all controller nodes
yum install -y openstack-glance

# [all controller nodes] Configure /etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:glance@controller/glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glance

openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host controller
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host controller01

# [all controller nodes] Configure /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:glance@controller/glance

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glance

openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_durable_queues true

openstack-config --set /etc/glance/glance-registry.conf DEFAULT registry_host controller
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host controller01

scp /etc/glance/glance-api.conf controller02:/etc/glance/glance-api.conf
scp /etc/glance/glance-api.conf controller03:/etc/glance/glance-api.conf
# On controller02/controller03, change bind_host to the corresponding host

scp /etc/glance/glance-registry.conf controller02:/etc/glance/glance-registry.conf
scp /etc/glance/glance-registry.conf controller03:/etc/glance/glance-registry.conf
# On controller02/controller03, change bind_host to the corresponding host

vim /etc/haproxy/haproxy.cfg
# Add the following configuration
listen glance_api_cluster
bind 192.168.10.100:9292
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:9292 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:9292 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:9292 check inter 2000 rise 2 fall 5
listen glance_registry_cluster
bind 192.168.10.100:9191
balance source
option tcpka
option tcplog
server controller01 192.168.10.101:9191 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:9191 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:9191 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg

# [any node] Populate the database
su -s /bin/sh -c "glance-manage db_sync" glance

# [any node] Add the pacemaker resources
pcs resource create openstack-glance-registry systemd:openstack-glance-registry --clone interleave=true
pcs resource create openstack-glance-api systemd:openstack-glance-api --clone interleave=true
# The two constraints below start openstack-keystone-clone first, then openstack-glance-registry-clone, then openstack-glance-api-clone
pcs constraint order start openstack-keystone-clone then openstack-glance-registry-clone
pcs constraint order start openstack-glance-registry-clone then openstack-glance-api-clone
# The api is colocated with the registry: if the registry cannot start, the api cannot start either
pcs constraint colocation add openstack-glance-api-clone with openstack-glance-registry-clone

# Restart the pacemaker cluster from any node
bash restart-pcs-cluster.sh

# Upload a test image
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list
```


##### 3. Install the OpenStack Compute (Nova) cluster (controller nodes)

```
# Install the packages on all controller nodes
yum install -y openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler


# [any node] Create the databases
mysql -uroot -popenstack -e "CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '"nova"';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '"nova"';
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '"nova"';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '"nova"';
FLUSH PRIVILEGES;"

# [any node] Create the user, service, and endpoints
source keystonerc_admin
openstack user create --domain default --password nova nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

# [all controller nodes] Configure nova via /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
# openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:nova@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:nova@controller/nova

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.101
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.101
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.101
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.101
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.101
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.101

scp /etc/nova/nova.conf controller02:/etc/nova/nova.conf
scp /etc/nova/nova.conf controller03:/etc/nova/nova.conf
# On the other controller nodes, change my_ip, vncserver_listen, vncserver_proxyclient_address, novncproxy_host, osapi_compute_listen and metadata_listen. On controller02:
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.102
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.102
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.102
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.102
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.102
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.102


# On controller03:
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.10.103
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 192.168.10.103
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address 192.168.10.103
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host 192.168.10.103
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen 192.168.10.103
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen 192.168.10.103
# Configure haproxy
vim /etc/haproxy/haproxy.cfg
listen nova_compute_api_cluster
bind 192.168.10.100:8774
balance source
option tcpka
option httpchk
option tcplog

server controller01 192.168.10.101:8774 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:8774 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:8774 check inter 2000 rise 2 fall 5
listen nova_metadata_api_cluster
bind 192.168.10.100:8775
balance source
option tcpka
option tcplog
server controller01 192.168.10.101:8775 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:8775 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:8775 check inter 2000 rise 2 fall 5
listen nova_vncproxy_cluster
bind 192.168.10.100:6080
balance source
option tcpka
option tcplog
server controller01 192.168.10.101:6080 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:6080 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:6080 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg


# [any node] populate the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

# [any node] add pacemaker resources
pcs resource create openstack-nova-consoleauth systemd:openstack-nova-consoleauth --clone interleave=true
pcs resource create openstack-nova-novncproxy systemd:openstack-nova-novncproxy --clone interleave=true
pcs resource create openstack-nova-api systemd:openstack-nova-api --clone interleave=true
pcs resource create openstack-nova-scheduler systemd:openstack-nova-scheduler --clone interleave=true
pcs resource create openstack-nova-conductor systemd:openstack-nova-conductor --clone interleave=true
# The order constraints below mean: start openstack-keystone-clone first, then openstack-nova-consoleauth-clone,
# then openstack-nova-novncproxy-clone, then openstack-nova-api-clone, then openstack-nova-scheduler-clone,
# and finally openstack-nova-conductor-clone.
# The colocation constraints below tie each resource's placement to its predecessor: for example, consoleauth
# constrains where novncproxy runs, so if consoleauth stops, novncproxy stops too; the rest follow the same pattern.
pcs constraint order start openstack-keystone-clone then openstack-nova-consoleauth-clone

pcs constraint order start openstack-nova-consoleauth-clone then openstack-nova-novncproxy-clone
pcs constraint colocation add openstack-nova-novncproxy-clone with openstack-nova-consoleauth-clone

pcs constraint order start openstack-nova-novncproxy-clone then openstack-nova-api-clone
pcs constraint colocation add openstack-nova-api-clone with openstack-nova-novncproxy-clone

pcs constraint order start openstack-nova-api-clone then openstack-nova-scheduler-clone
pcs constraint colocation add openstack-nova-scheduler-clone with openstack-nova-api-clone

pcs constraint order start openstack-nova-scheduler-clone then openstack-nova-conductor-clone
pcs constraint colocation add openstack-nova-conductor-clone with openstack-nova-scheduler-clone

bash restart-pcs-cluster.sh

### [any node] verify
source keystonerc_admin
openstack compute service list
```
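
The per-node overrides above (my_ip, the vnc listeners, osapi_compute_listen, metadata_listen) follow one pattern and differ only in the node IP, so they can be generated instead of retyped. A minimal sketch; the `nova_node_overrides` helper name is invented here for illustration:

```shell
# Print the per-node openstack-config override commands for a given node IP.
# Sketch only: it echoes the commands instead of running them.
nova_node_overrides() {
    ip=$1
    for opt in "DEFAULT my_ip" \
               "vnc vncserver_listen" \
               "vnc vncserver_proxyclient_address" \
               "vnc novncproxy_host" \
               "DEFAULT osapi_compute_listen" \
               "DEFAULT metadata_listen"; do
        echo "openstack-config --set /etc/nova/nova.conf $opt $ip"
    done
}

nova_node_overrides 192.168.10.102   # review the output, then run it on controller02
```

Piping the output to `ssh controller02 bash` would apply it remotely, but reviewing it first is safer.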

##### 4、Install and configure the neutron cluster (controller nodes)


```
# [any node] create the database
mysql -uroot -popenstack -e "CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '"neutron"';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '"neutron"';
FLUSH PRIVILEGES;"

# [any node] create the user, service and endpoints
source /root/keystonerc_admin
openstack user create --domain default --password neutron neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

# All controller nodes
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ebtables


# [all controller nodes] configure neutron server, /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT bind_host 192.168.10.101
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password nova

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp


# [all controller nodes] configure the ML2 plugin, /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges external:1:4090
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver iptables_hybrid

# [all controller nodes] configure the Open vSwitch agent, /etc/neutron/plugins/ml2/openvswitch_agent.ini. Note: local_ip must be the second (tunnel) NIC address

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.0.0.1
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings external:br-ex

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True

# [all controller nodes] configure the L3 agent, /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge

# [all controller nodes] configure the DHCP agent, /etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True

# [all controller nodes] configure the metadata agent, /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 192.168.10.100
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret openstack

# [all controller nodes] configure nova/neutron integration, /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron

openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret openstack

# [all controller nodes] configure L3 agent HA, /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_automatic_l3agent_failover True
openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2

# [all controller nodes] configure DHCP agent HA, /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3

# [all controller nodes] start the Open vSwitch (OVS) service; the bridges and ports are created below
systemctl enable openvswitch.service
systemctl start openvswitch.service

# [all controller nodes] create the ML2 config file symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

vim /etc/haproxy/haproxy.cfg
listen neutron_api_cluster
bind 192.168.10.100:9696
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:9696 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:9696 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:9696 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg


# Back up the original NIC config file
cp /etc/sysconfig/network-scripts/ifcfg-ens160 /etc/sysconfig/network-scripts/bak-ifcfg-ens160
echo "DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep IPADDR|awk -F '=' '{print $2}')
NETMASK=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep NETMASK|awk -F '=' '{print $2}')
GATEWAY=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep GATEWAY|awk -F '=' '{print $2}')
DNS1=$(cat /etc/sysconfig/network-scripts/ifcfg-ens160 |grep DNS1|awk -F '=' '{print $2}')
DNS2=218.2.2.2
ONBOOT=yes">/etc/sysconfig/network-scripts/ifcfg-br-ex

echo "TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
NAME=ens160
DEVICE=ens160
ONBOOT=yes">/etc/sysconfig/network-scripts/ifcfg-ens160

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens160

systemctl restart network.service

# Copy the config files to the other controller nodes and adjust the node-specific values there
scp /etc/neutron/neutron.conf controller02:/etc/neutron/neutron.conf
scp /etc/neutron/neutron.conf controller03:/etc/neutron/neutron.conf

scp /etc/neutron/plugins/ml2/ml2_conf.ini controller02:/etc/neutron/plugins/ml2/ml2_conf.ini
scp /etc/neutron/plugins/ml2/ml2_conf.ini controller03:/etc/neutron/plugins/ml2/ml2_conf.ini

scp /etc/neutron/plugins/ml2/openvswitch_agent.ini controller02:/etc/neutron/plugins/ml2/openvswitch_agent.ini
scp /etc/neutron/plugins/ml2/openvswitch_agent.ini controller03:/etc/neutron/plugins/ml2/openvswitch_agent.ini

scp /etc/neutron/l3_agent.ini controller02:/etc/neutron/l3_agent.ini
scp /etc/neutron/l3_agent.ini controller03:/etc/neutron/l3_agent.ini

scp /etc/neutron/dhcp_agent.ini controller02:/etc/neutron/dhcp_agent.ini
scp /etc/neutron/dhcp_agent.ini controller03:/etc/neutron/dhcp_agent.ini

scp /etc/neutron/metadata_agent.ini controller02:/etc/neutron/metadata_agent.ini
scp /etc/neutron/metadata_agent.ini controller03:/etc/neutron/metadata_agent.ini


# [any node] populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

# [any node] add pacemaker resources
pcs resource create neutron-server systemd:neutron-server op start timeout=90 --clone interleave=true
pcs constraint order start openstack-keystone-clone then neutron-server-clone

# Globally unique clone: with globally-unique=true, every clone instance is distinct from every other,
# whether the instances run on different nodes or on the same node.
# clone-max: how many clone instances may run in the cluster (defaults to the number of nodes);
# clone-node-max: how many clone instances may run on one node (defaults to 1).
pcs resource create neutron-scale ocf:neutron:NeutronScale --clone globally-unique=true clone-max=3 interleave=true
pcs constraint order start neutron-server-clone then neutron-scale-clone

pcs resource create neutron-ovs-cleanup ocf:neutron:OVSCleanup --clone interleave=true
pcs resource create neutron-netns-cleanup ocf:neutron:NetnsCleanup --clone interleave=true
pcs resource create neutron-openvswitch-agent systemd:neutron-openvswitch-agent --clone interleave=true
pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent --clone interleave=true

pcs constraint order start neutron-scale-clone then neutron-ovs-cleanup-clone
pcs constraint colocation add neutron-ovs-cleanup-clone with neutron-scale-clone
pcs constraint order start neutron-ovs-cleanup-clone then neutron-netns-cleanup-clone
pcs constraint colocation add neutron-netns-cleanup-clone with neutron-ovs-cleanup-clone
pcs constraint order start neutron-netns-cleanup-clone then neutron-openvswitch-agent-clone
pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-netns-cleanup-clone
pcs constraint order start neutron-openvswitch-agent-clone then neutron-dhcp-agent-clone
pcs constraint colocation add neutron-dhcp-agent-clone with neutron-openvswitch-agent-clone
pcs constraint order start neutron-dhcp-agent-clone then neutron-l3-agent-clone
pcs constraint colocation add neutron-l3-agent-clone with neutron-dhcp-agent-clone
pcs constraint order start neutron-l3-agent-clone then neutron-metadata-agent-clone
pcs constraint colocation add neutron-metadata-agent-clone with neutron-l3-agent-clone

bash restart-pcs-cluster.sh

# [any node] verify
source keystonerc_admin
neutron ext-list
neutron agent-list
ovs-vsctl show
```
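
Several commands in this document derive a node's IP with the same `grep "inet " | sed … | head` pipeline. A testable sketch of that pipeline, wrapped in a hypothetical `first_inet4` helper and fed canned `ip addr` output so it runs offline (real usage would be `ip addr show dev br-ex scope global | first_inet4`):

```shell
# Read `ip addr` output on stdin and print the first global IPv4 address.
first_inet4() {
    grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g' | head -n 1
}

sample='2: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.10.101/24 brd 192.168.10.255 scope global br-ex
    inet6 fe80::1/64 scope link'
echo "$sample" | first_inet4   # prints 192.168.10.101
```

Note that `inet6` lines are skipped because the pattern requires `inet ` followed by a space.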

##### 5、Install and configure the dashboard cluster


```
# Install on all controller nodes
yum install -y openstack-dashboard


# [all controller nodes] edit /etc/openstack-dashboard/local_settings
sed -i \
-e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.101"'"#g' \
-e "s#ALLOWED_HOSTS.*#ALLOWED_HOSTS = ['*',]#g" \
-e "s#^CACHES#SESSION_ENGINE = 'django.contrib.sessions.backends.cache'\nCACHES#g" \
-e "s#locmem.LocMemCache'#memcached.MemcachedCache',\n 'LOCATION' : [ 'controller01:11211', 'controller02:11211', 'controller03:11211', ]#g" \
-e 's#^OPENSTACK_KEYSTONE_URL =.*#OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST#g' \
-e "s/^#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT.*/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True/g" \
-e 's/^#OPENSTACK_API_VERSIONS.*/OPENSTACK_API_VERSIONS = {\n "identity": 3,\n "image": 2,\n "volume": 2,\n}\n#OPENSTACK_API_VERSIONS = {/g' \
-e "s/^#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN.*/OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'/g" \
-e 's#^OPENSTACK_KEYSTONE_DEFAULT_ROLE.*#OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"#g' \
-e "s#^LOCAL_PATH.*#LOCAL_PATH = '/var/lib/openstack-dashboard'#g" \
-e "s#^SECRET_KEY.*#SECRET_KEY = '4050e76a15dfb7755fe3'#g" \
-e "s#'enable_ha_router'.*#'enable_ha_router': True,#g" \
-e 's#TIME_ZONE = .*#TIME_ZONE = "'"Asia/Shanghai"'"#g' \
/etc/openstack-dashboard/local_settings

scp /etc/openstack-dashboard/local_settings controller02:/etc/openstack-dashboard/local_settings
scp /etc/openstack-dashboard/local_settings controller03:/etc/openstack-dashboard/local_settings

# On controller02:
sed -i -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.102"'"#g' /etc/openstack-dashboard/local_settings
# On controller03:
sed -i -e 's#OPENSTACK_HOST =.*#OPENSTACK_HOST = "'"192.168.10.103"'"#g' /etc/openstack-dashboard/local_settings


# [all controller nodes]
echo "COMPRESS_OFFLINE = True" >> /etc/openstack-dashboard/local_settings
python /usr/share/openstack-dashboard/manage.py compress

# [all controller nodes] make httpd listen only on the node's own IP
sed -i -e 's/^Listen.*/Listen '"$(ip addr show dev br-ex scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g'|head -n 1)"':80/g' /etc/httpd/conf/httpd.conf


vim /etc/haproxy/haproxy.cfg
listen dashboard_cluster
bind 192.168.10.100:80
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:80 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:80 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:80 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg
```
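
The `sed` above rewrites httpd's `Listen` directive to the node's own address, which frees port 80 on the VIP for haproxy. The substitution can be checked offline against a canned line (the IP here is just an example):

```shell
ip=192.168.10.101
echo 'Listen 80' | sed -e "s/^Listen.*/Listen ${ip}:80/g"   # prints: Listen 192.168.10.101:80
```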


##### 6、Install and configure cinder


```
# All controller nodes
yum install -y openstack-cinder

# [any node] create the database
mysql -uroot -popenstack -e "CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '"cinder"';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '"cinder"';
FLUSH PRIVILEGES;"

# [any node] create the user, services and endpoints
. /root/keystonerc_admin

openstack user create --domain default --password cinder cinder
openstack role add --project service --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

# Create the API endpoints for the cinder service
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

# [all controller nodes] edit /etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:cinder@controller/cinder
openstack-config --set /etc/cinder/cinder.conf database max_retries -1

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cinder

openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/cinder/cinder.conf DEFAULT control_exchange cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen $(ip addr show dev br-ex scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g'|head -n 1)
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip $(ip addr show dev br-ex scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g'|head -n 1)
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292


# [any node] populate the database
su -s /bin/sh -c "cinder-manage db sync" cinder

# All controller nodes: configure nova to use the block storage service
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

# Restart nova-api (handled later by the pacemaker restart script)
# pcs resource restart openstack-nova-api-clone


# Install and configure the storage nodes; here they are co-located with the controller nodes
# All nodes
yum install lvm2 -y
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

yum install openstack-cinder targetcli python-keystone -y


# All controller nodes: configure the LVM backend
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm

# Add to the haproxy.cfg config file
vim /etc/haproxy/haproxy.cfg
listen cinder_api_cluster
bind 192.168.10.100:8776
balance source
option tcpka
option httpchk
option tcplog
server controller01 192.168.10.101:8776 check inter 2000 rise 2 fall 5
server controller02 192.168.10.102:8776 check inter 2000 rise 2 fall 5
server controller03 192.168.10.103:8776 check inter 2000 rise 2 fall 5

scp /etc/haproxy/haproxy.cfg controller02:/etc/haproxy/haproxy.cfg
scp /etc/haproxy/haproxy.cfg controller03:/etc/haproxy/haproxy.cfg


# [any node] add pacemaker resources
pcs resource create openstack-cinder-api systemd:openstack-cinder-api --clone interleave=true
pcs resource create openstack-cinder-scheduler systemd:openstack-cinder-scheduler --clone interleave=true
pcs resource create openstack-cinder-volume systemd:openstack-cinder-volume

pcs constraint order start openstack-keystone-clone then openstack-cinder-api-clone
pcs constraint order start openstack-cinder-api-clone then openstack-cinder-scheduler-clone
pcs constraint colocation add openstack-cinder-scheduler-clone with openstack-cinder-api-clone
pcs constraint order start openstack-cinder-scheduler-clone then openstack-cinder-volume
pcs constraint colocation add openstack-cinder-volume with openstack-cinder-scheduler-clone

# Restart the cluster
bash restart-pcs-cluster.sh
# [any node] verify
. /root/keystonerc_admin
cinder service-list
```
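
The haproxy `listen` stanzas repeated throughout this document differ only in the service name, port and backend list, so they can be generated from one template. A sketch; `render_listen` is a name invented here, and it mirrors the cinder stanza above (other stanzas in this guide omit `option httpchk`):

```shell
# Print a haproxy listen stanza for a VIP service with the given backend IPs.
render_listen() {
    name=$1; port=$2; shift 2
    printf 'listen %s\nbind 192.168.10.100:%s\nbalance source\noption tcpka\noption httpchk\noption tcplog\n' "$name" "$port"
    i=1
    for ip in "$@"; do
        printf 'server controller%02d %s:%s check inter 2000 rise 2 fall 5\n' "$i" "$ip" "$port"
        i=$((i+1))
    done
}

render_listen cinder_api_cluster 8776 192.168.10.101 192.168.10.102 192.168.10.103
```

Appending its output to /etc/haproxy/haproxy.cfg keeps the three-node backend lists consistent if the cluster grows.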
##### 7、Install and configure the ceilometer and aodh clusters
###### 7.1 Install and configure the ceilometer cluster

Honestly, I have no energy left to rant about this project, so I'm not writing it up.

###### 7.2 Install and configure the aodh cluster

Honestly, I have no energy left to rant about this project, so I'm not writing it up.


#### 四、Install and configure the compute nodes
##### 4.1 OpenStack Compute service
```

# All compute nodes
yum install -y openstack-nova-compute

# Edit the /etc/nova/nova.conf config file
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $(ip addr show dev ens160 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT memcached_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $(ip addr show dev ens160 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.10.100:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf libvirt virt_type $(count=$(egrep -c '(vmx|svm)' /proc/cpuinfo); if [ $count -eq 0 ];then echo "qemu"; else echo "kvm"; fi)


# Open the listening ports used for VM live migration
sed -i -e "s#\#listen_tls *= *0#listen_tls = 0#g" /etc/libvirt/libvirtd.conf
sed -i -e "s#\#listen_tcp *= *1#listen_tcp = 1#g" /etc/libvirt/libvirtd.conf
sed -i -e "s#\#auth_tcp *= *\"sasl\"#auth_tcp = \"none\"#g" /etc/libvirt/libvirtd.conf
sed -i -e "s#\#LIBVIRTD_ARGS *= *\"--listen\"#LIBVIRTD_ARGS=\"--listen\"#g" /etc/sysconfig/libvirtd

# Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
```
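
The `virt_type` one-liner above falls back to qemu when the CPU lacks the vmx/svm virtualization flags. The same flag test, wrapped so it can run against canned `/proc/cpuinfo` text (`pick_virt_type` is a name invented here; real usage would pipe in `/proc/cpuinfo`):

```shell
# Read /proc/cpuinfo-style text on stdin; print kvm if vmx/svm flags exist, else qemu.
pick_virt_type() {
    if [ "$(grep -E -c '(vmx|svm)')" -eq 0 ]; then echo qemu; else echo kvm; fi
}

echo 'flags : fpu vme de pse vmx' | pick_virt_type   # prints kvm
echo 'flags : fpu vme de pse'     | pick_virt_type   # prints qemu
```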


##### 4.2 OpenStack Network service


```
# Install components
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ebtables ipset


# Edit /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller01:5672,controller02:5672,controller03:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_interval 1
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_retry_backoff 2
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_max_retries 0
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_durable_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password openstack

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

### Configure the Open vSwitch agent, /etc/neutron/plugins/ml2/openvswitch_agent.ini. Note: local_ip must come from the second NIC.
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver iptables_hybrid

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip $(ip addr show dev ens192 scope global | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population True

### Configure nova to use neutron, /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

systemctl restart openstack-nova-compute.service
systemctl start openvswitch.service
systemctl restart neutron-openvswitch-agent.service

systemctl enable openvswitch.service
systemctl enable neutron-openvswitch-agent.service
```
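The `local_ip` and `vncserver_proxyclient_address` values above are extracted from `ip addr show` with a grep/sed pipeline. A sketch of that extraction against a canned line of `ip addr` output (interface name and address are example values):

```shell
#!/bin/sh
# One line as produced by `ip addr show dev ens192 scope global`.
sample='    inet 10.0.0.4/24 brd 10.0.0.255 scope global ens192'

# First sed expression strips everything up to and including "inet ",
# the second strips the /prefix and everything after it, leaving the
# bare IPv4 address -- identical to the pipeline used above.
addr=$(printf '%s\n' "$sample" | grep "inet " | sed -e 's#.*inet ##g' -e 's#/.*##g')
echo "$addr"    # 10.0.0.4
```

The same pipeline is reused on every node, which is why only the interface name (`ens160` vs `ens192`) changes between the management and tunnel networks.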

#### 5. Fixes
On the controller nodes:

```
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller01' IDENTIFIED BY "openstack";
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller02' IDENTIFIED BY "openstack";
GRANT ALL PRIVILEGES ON *.* TO 'root'@'controller03' IDENTIFIED BY "openstack";
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.101' IDENTIFIED BY "openstack";
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.102' IDENTIFIED BY "openstack";
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.10.103' IDENTIFIED BY "openstack";
```

RabbitMQ cluster related:

```
/sbin/service rabbitmq-server stop
/sbin/service rabbitmq-server start
```

 

```
# Set the default resource operation timeout
pcs resource op defaults timeout=90s

# Clean up resource failures
pcs resource cleanup openstack-keystone-clone
```

 


##### MariaDB cluster troubleshooting

```
Symptom: a node fails to start; tailf /var/log/messages shows the following error:
[ERROR] WSREP: gcs/src/gcs_group.cpp:group_post_state_exchange():321
Fix: rm -f /var/lib/mysql/grastate.dat
Then restart the service.
```
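For context: `grastate.dat` records the node's last known Galera cluster state; removing it forces the node to request a full state transfer (SST) from a running donor on the next start, which is why the error clears. The file looks roughly like this (values are placeholders):

```
# GALERA saved state
version: 2.1
uuid:    <cluster-uuid>
seqno:   -1
safe_to_bootstrap: 0
```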


#### 6. Enabling DVR
##### 6.1 Controller node configuration

```
vim /etc/neutron/neutron.conf
[DEFAULT]
router_distributed = true

vim /etc/neutron/plugins/ml2/ml2_conf.ini 
mechanism_drivers = openvswitch,linuxbridge,l2population

vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
enable_distributed_routing = true
l2_population = True

vim /etc/neutron/l3_agent.ini 
[DEFAULT]
agent_mode = dvr_snat


vim /etc/openstack-dashboard/local_settings 
'enable_distributed_router': True,

Restart the neutron services and httpd on the controller nodes.
```


##### 6.2 Compute node configuration

```
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
mechanism_drivers = openvswitch,l2population


vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
enable_distributed_routing = true
l2_population = True

vim /etc/neutron/l3_agent.ini 
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
agent_mode = dvr


Restart the neutron services:

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens160
openstack-service restart neutron
```


On the RabbitMQ connection (file descriptor) limit:


```
[root@controller01 ~]# cat /etc/security/limits.d/20-nproc.conf 
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
* soft nproc 4096
root soft nproc unlimited
*    soft    nofile    10240
*    hard    nofile    10240

[root@controller01 ~]#ulimit -n 10240

[root@controller01 ~]#cat /usr/lib/systemd/system/rabbitmq-server.service
[Service]
LimitNOFILE=10240  # add this parameter to the unit file
[root@controller01 ~]#systemctl daemon-reload
[root@controller01 ~]#systemctl restart rabbitmq-server.service

[root@controller01 ~]# rabbitmqctl status
{file_descriptors,[{total_limit,10140},
{total_used,2135},
{sockets_limit,9124},
{sockets_used,2133}]}
```
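To confirm the new limits actually took effect for a shell (or, via `systemctl show`, for the rabbitmq-server unit), read them back with `ulimit`. A quick check, not part of the original steps:

```shell
#!/bin/sh
# Soft limit: what the process currently gets.
# Hard limit: the ceiling an unprivileged process may raise the soft limit to.
# RabbitMQ's file_descriptors total_limit is derived from the soft limit.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft nofile limit: $soft"
echo "hard nofile limit: $hard"
```

If the soft limit still reads 1024 after editing the unit file, the usual cause is a missed `systemctl daemon-reload` before the restart.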


#### On highly available routers
HA or DVR (distributed) routers can only be created from the administrator pages.


#### Sharing the glance image store
Share out the /var/lib/glance/images image directory of the controller nodes.


```
yum install -y nfs-utils rpcbind
mkdir -p /opt/glance/images/

vim /etc/exports
/opt/glance/images/ 10.128.246.0/23(rw,no_root_squash,no_all_squash,sync)

exportfs -r
systemctl enable rpcbind.service
systemctl start rpcbind.service
systemctl enable nfs-server.service
systemctl start nfs-server.service

# Check the export from the two nova nodes
showmount -e 10.128.247.153

# Mount on the three controller nodes
mount -t nfs 10.128.247.153:/opt/glance/images/ /var/lib/glance/images/

chown -R glance.glance /opt/glance/images/
```


##### Creating an HA router as a regular user
```
neutron router-create router_demo --ha True
```

 

