OpenStack High-Availability Cluster Deployment (Centralized Routing) (Train Release)


Contents

1. Planning

Host plan

# ha-node
10.10.10.21 ha01
10.10.10.22 ha02

# controller-node
10.10.10.31 controller01
10.10.10.32 controller02
10.10.10.33 controller03

# compute-node
10.10.10.41 compute01
10.10.10.42 compute02

# ceph-node
10.10.10.51 ceph01
10.10.10.52 ceph02
10.10.10.53 ceph03
  • ha-node: haproxy + keepalived provide high availability; VIP 10.10.10.10
  • controller-node: runs the controller components plus the network-node neutron server and agent components
  • OS: CentOS 7.9, kernel: 5.4.152-1.el7.elrepo.x86_64

System topology

(topology diagram)

  1. The controller nodes run keystone, glance, horizon, the nova/neutron/cinder management components, and the other OpenStack supporting services.
  2. The compute nodes run nova-compute, neutron-linuxbridge-agent (or neutron-openvswitch-agent), cinder-volume, etc. (Later verification showed that with a shared-storage backend cinder-volume is better deployed on the controller nodes, where pacemaker can control its run mode; in this test environment, however, cinder-volume runs on the compute nodes.)
  3. Controller + network nodes
  • The controller and network roles are deployed on the same machines; they can also be split (controller nodes run neutron-server, network nodes run the neutron agents).
  • Management network (red): host OS management, API traffic, etc. If the production environment allows, give each logical network its own physical network, split the API into admin/internal/public interfaces, and expose only the public interface to clients.
  • External network (blue, External Network): mainly for guest OS access to the internet / floating IPs.
  • Tenant (VM) tunnel network (choose one of tunnel or VLAN) (purple): traffic between guest OSes, using vxlan/gre.
  • Tenant (VM) VLAN network (choose one of tunnel or VLAN) (yellow, no IP required): traffic between guest OSes using VLANs (planned here, but not used when instances are created later).
  • Storage network (green): communication with the storage cluster; used for glance to talk to ceph.
  4. Compute node networks
  • Management network (red): host OS management, API traffic, etc.
  • Storage network (green): communication with the storage cluster.
  • Tenant (VM) tunnel network (choose one of tunnel or VLAN) (purple): traffic between guest OSes, using vxlan/gre.
  • Tenant (VM) VLAN network (choose one of tunnel or VLAN) (yellow, no IP required): traffic between guest OSes using VLANs (planned here, but not used when instances are created later).
  5. Storage nodes
  • Management network (red): host OS management, API traffic, etc.
  • Storage network (green): communication with external storage clients.
  • Storage cluster network (black): internal cluster traffic, data replication and synchronization; no direct connection to the outside.
  6. Stateless services such as xxx-api run active/active. Stateful services such as neutron-xxx-agent and cinder-volume are best run active/passive (because haproxy sits in front, successive client requests may be forwarded to different controller nodes; if a request lands on a node that lacks the state, the operation can fail). Services with their own clustering mechanism, such as rabbitmq and memcached, simply use that mechanism.

VMware Virtual Machine Network Configuration

Virtual network settings

(screenshot: VMware virtual network editor)

  • The external network ens34 is attached to VMnet2, so logically VMnet2 should be the NAT network (VMware allows only one NAT network). However, because all hosts need to install packages via yum, the management network VMnet1 is temporarily set to NAT mode; VMnet2 will be switched to NAT mode later when the external network functionality is tested.

ha node

(screenshot: ha node VM settings)

controller + network node

(screenshot: controller + network node VM settings)

compute node

(screenshot: compute node VM settings)

ceph node

(screenshot: ceph node VM settings)

Overall plan

host: ha01-02
  ip: ens33:10.10.10.21-22
  service: 1. haproxy  2. keepalived
  remark: 1. high availability, VIP: 10.10.10.10

host: controller01-03
  ip: ens33:10.10.10.31-33
      ens34:10.10.20.31-33
      ens35:10.10.30.31-33
      ens36:Vlan Tenant Network
      ens37:10.10.50.31-33
  service: 1. keystone
           2. glance-api, glance-registry
           3. nova-api, nova-conductor, nova-consoleauth, nova-scheduler, nova-novncproxy
           4. neutron-api, neutron-linuxbridge-agent, neutron-dhcp-agent, neutron-metadata-agent, neutron-l3-agent
           5. cinder-api, cinder-scheduler
           6. dashboard
           7. mariadb, rabbitmq, memcached, etc.
  remark: 1. controller node: keystone, glance, horizon, nova & neutron management components
          2. network node: VM networking, L2 (virtual switch) / L3 (virtual router), dhcp, routing, NAT, etc.
          3. OpenStack supporting services

host: compute01-02
  ip: ens33:10.10.10.41-42
      ens34:10.10.50.41-42
      ens35:10.10.30.41-42
      ens36:Vlan Tenant Network
  service: 1. nova-compute
           2. neutron-linuxbridge-agent
           3. cinder-volume (with a shared-storage backend this is better deployed on the controller nodes)
  remark: 1. compute node: hypervisor (kvm)
          2. network node: VM networking, L2 (virtual switch)

host: ceph01-03
  ip: ens33:10.10.10.51-53
      ens34:10.10.50.51-53
      ens35:10.10.60.51-53
  service: 1. ceph-mon, ceph-mgr
           2. ceph-osd
  remark: 1. storage node: scheduling and monitoring (ceph) components
          2. storage node: volume service components

NIC configuration reference

[root@controller01 ~]# tail /etc/sysconfig/network-scripts/ifcfg-ens*
==> /etc/sysconfig/network-scripts/ifcfg-ens33 <==
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=7fff7303-8b35-4728-a4f2-f33d20aefdf4
DEVICE=ens33
ONBOOT=yes
IPADDR=10.10.10.31
NETMASK=255.255.255.0
GATEWAY=10.10.10.2
DNS1=10.10.10.2

==> /etc/sysconfig/network-scripts/ifcfg-ens34 <==
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
UUID=8f98810c-a504-4d16-979d-4829501a8c7c
DEVICE=ens34
ONBOOT=yes
IPADDR=10.10.20.31
NETMASK=255.255.255.0

==> /etc/sysconfig/network-scripts/ifcfg-ens35 <==
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens35
UUID=ba3ac372-df26-4226-911e-4a48031f80a8
DEVICE=ens35
ONBOOT=yes
IPADDR=10.10.30.31
NETMASK=255.255.255.0

==> /etc/sysconfig/network-scripts/ifcfg-ens36 <==
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens36
UUID=d7ab5617-a38f-4c28-b30a-f49a1cfd0060
DEVICE=ens36
ONBOOT=yes

==> /etc/sysconfig/network-scripts/ifcfg-ens37 <==
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens40
UUID=662b80cb-31f1-386d-b293-c86cfe98d755
ONBOOT=yes
IPADDR=10.10.50.31
NETMASK=255.255.255.0
  • Only one of the NICs is configured with the default gateway and DNS

Upgrade the kernel

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Load the elrepo-kernel metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
# List the available kernel packages
yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
# Install the long-term support kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64
# Remove the old kernel tools packages
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
# Install the new kernel tools packages
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64

# Check the boot menu entries
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg  

# Entries are numbered from 0 and the new kernel is inserted at the top (position 0, the old kernel drops to 1), so select 0.
grub2-set-default 0
  • Kernel: 5.4.152-1.el7.elrepo.x86_64

Configure firewalld, selinux, NTP time synchronization, hostname, and the hosts file

echo "# ha-node
10.10.10.21 ha01
10.10.10.22 ha02

# controller-node
10.10.10.31 controller01
10.10.10.32 controller02
10.10.10.33 controller03

# compute-node
10.10.10.41 compute01
10.10.10.42 compute02

# ceph-node
10.10.10.51 ceph01
10.10.10.52 ceph02
10.10.10.53 ceph03
" >> /etc/hosts

Configure passwordless SSH trust across the cluster

# Generate a key pair
ssh-keygen -t rsa -P ''

# Copy the public key to the local host
ssh-copy-id -i .ssh/id_rsa.pub root@localhost

# Copy the whole .ssh directory to every other node in the cluster
scp -rp .ssh/ root@ha01:/root
scp -rp .ssh/ root@ha02:/root
scp -rp .ssh/ root@controller01:/root
scp -rp .ssh/ root@controller02:/root
scp -rp .ssh/ root@controller03:/root
scp -rp .ssh/ root@compute01:/root
scp -rp .ssh/ root@compute02:/root
scp -rp .ssh/ root@ceph01:/root
scp -rp .ssh/ root@ceph02:/root
scp -rp .ssh/ root@ceph03:/root

Once this is done, all hosts in the cluster can ssh to each other without a password. A loop that drives the same copies is sketched below.
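A minimal sketch of that loop, assuming the host names from the plan above and that root login over SSH is allowed (it still prompts for each node's root password once):

for node in ha01 ha02 controller01 controller02 controller03 compute01 compute02 ceph01 ceph02 ceph03; do
  # push the same key material to every cluster member
  scp -rp ~/.ssh/ root@${node}:/root/
done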

Optimize SSH login speed

sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config 
systemctl restart sshd

Kernel parameter tuning

All nodes

echo 'modprobe br_netfilter' >> /etc/rc.d/rc.local
chmod 755 /etc/rc.d/rc.local
modprobe br_netfilter
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1'  >>/etc/sysctl.conf
sysctl -p

On ha01 and ha02, additionally allow binding to IP addresses that do not exist locally, so a running HAProxy instance can bind its ports to the VIP

echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf
sysctl -p
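A quick way to confirm the settings took effect (run on the ha nodes; drop ip_nonlocal_bind on the other nodes, where it was not set):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.ipv4.ip_nonlocal_bind
lsmod | grep br_netfilter   # confirm the module is loaded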

Install base packages

All nodes

yum install epel-release -y
yum install centos-release-openstack-train -y
yum clean all
yum makecache
yum install python-openstackclient -y
  • The ha nodes can skip this step

openstack-utils makes the OpenStack installation easier by allowing configuration files to be edited directly from the command line (all nodes)

mkdir -p /opt/tools
yum install wget crudini -y
wget --no-check-certificate -P /opt/tools https://cbs.centos.org/kojifiles/packages/openstack-utils/2017.1/1.el7/noarch/openstack-utils-2017.1-1.el7.noarch.rpm
rpm -ivh /opt/tools/openstack-utils-2017.1-1.el7.noarch.rpm
  • The ha nodes can skip this step
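For reference, openstack-config (a crudini wrapper) sets or reads an option under an INI section; a minimal illustration against a throw-away file (the file name here is just an example, not part of any service):

openstack-config --set /tmp/example.conf DEFAULT foo bar   # writes: [DEFAULT] foo = bar
openstack-config --get /tmp/example.conf DEFAULT foo       # prints: bar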

2. Base Services

MariaDB Cluster

MariaDB + Galera form three active nodes; external access goes through HAProxy in an active + backup arrangement. Node A normally serves as the primary; when A fails, traffic switches to B or C. In this test the three MariaDB nodes are deployed on the controller nodes.

Official recommendation: for a three-node MariaDB/Galera cluster, give each node 4 vCPUs and 8 GB RAM.

(architecture diagram)

Installation and configuration

Install mariadb on all controller nodes; controller01 is used as the example

yum install mariadb mariadb-server python2-PyMySQL -y

Install the galera-related packages on all controller nodes; galera is used to build the cluster

yum install mariadb-server-galera mariadb-galera-common galera xinetd rsync -y

systemctl restart mariadb.service
systemctl enable mariadb.service

Initialize the mariadb root password on all controller nodes; controller01 as the example

[root@controller01 ~]# mysql_secure_installation
# Enter the current root password (none yet, just press Enter)
Enter current password for root (enter for none): 
# Set a root password?
Set root password? [Y/n] y
# New password:
New password: 
# Re-enter the new password:
Re-enter new password: 
# Remove anonymous users?
Remove anonymous users? [Y/n] y
# Disallow remote root login?
Disallow root login remotely? [Y/n] n
# Remove the test database and access to it?
Remove test database and access to it? [Y/n] y
# Reload the privilege tables now?
Reload privilege tables now? [Y/n] y 

Edit the mariadb configuration

On every controller node, add an openstack.cnf file under /etc/my.cnf.d/ that mainly sets the cluster replication parameters. controller01 is shown as the example; adjust the IP address / host name parameters to each node.

Create and edit /etc/my.cnf.d/openstack.cnf

[server]

[mysqld]
bind-address = 10.10.10.31
max_connections = 1000
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/run/mariadb/mariadb.pid
max_allowed_packet = 500M
net_read_timeout = 120
net_write_timeout = 300
thread_pool_idle_timeout = 300

[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="mariadb_galera_cluster"

wsrep_cluster_address="gcomm://controller01,controller02,controller03"
wsrep_node_name="controller01"
wsrep_node_address="10.10.10.31"

binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_slave_threads=4
innodb_flush_log_at_trx_commit=2
innodb_buffer_pool_size=1024M
wsrep_sst_method=rsync

[embedded]

[mariadb]

[mariadb-10.3]

Build the cluster

Stop the mariadb service on all controller nodes; controller01 as the example

systemctl stop mariadb

Start the mariadb service on controller01 as follows

/usr/libexec/mysqld --wsrep-new-cluster --user=root &

Join the other controller nodes to the mariadb cluster

systemctl start mariadb.service
  • After starting, the node joins the cluster and syncs its data from controller01; watch the mariadb log /var/log/mariadb/mariadb.log

Go back to controller01 and reconfigure mariadb

# Restart controller01; before starting, remove controller01's old data
pkill -9 mysqld
rm -rf /var/lib/mysql/*

# Mind the ownership of the pid file when mariadb is started as a systemd unit
chown mysql:mysql /var/run/mariadb/mariadb.pid

## After starting, check the service status; controller01 should now sync its data from controller02
systemctl start mariadb.service
systemctl status mariadb.service

Check the cluster status

[root@controller01 ~]# mysql -uroot -p123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 13
Server version: 10.3.20-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show status like "wsrep_cluster_size";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.001 sec)

MariaDB [(none)]> show status LIKE 'wsrep_ready';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready   | ON    |
+---------------+-------+
1 row in set (0.000 sec)

MariaDB [(none)]> 

Create a database on controller01 and check whether it is replicated to the other two nodes

[root@controller01 ~]# mysql -uroot -p123456
MariaDB [(none)]> create database cluster_test charset utf8mb4;
Query OK, 1 row affected (0.005 sec)

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| cluster_test       |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+

Check on the other two nodes

[root@controller02 ~]# mysql -uroot -p123456 -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| cluster_test       |  √
| information_schema |
| mysql              |
| performance_schema |
+--------------------+

[root@controller03 ~]# mysql -uroot -p123456 -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| cluster_test       |  √
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
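To check all three members in one pass, a small loop over the SSH trust set up earlier works; a sketch, assuming the same root password on every node:

for node in controller01 controller02 controller03; do
  echo "== ${node} =="
  ssh ${node} "mysql -uroot -p123456 -e 'show status like \"wsrep_cluster_size\";'"
done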

Set up the clustercheck heartbeat

Download the clustercheck script on all controller nodes

wget -P /extend/shell/ https://raw.githubusercontent.com/olafz/percona-clustercheck/master/clustercheck

On any one controller node, create the clustercheck user in the database and grant it privileges; the other two nodes sync automatically

mysql -uroot -p123456
GRANT PROCESS ON *.* TO 'clustercheck'@'localhost' IDENTIFIED BY '123456';
flush privileges;
exit;
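To confirm the account works before wiring it into the script, a quick check using the grant above:

mysql -uclustercheck -p123456 -e "SHOW STATUS LIKE 'wsrep_local_state';"   # expect value 4 (Synced)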

Edit the clustercheck script on all controller nodes; the username/password must match the account created in the previous step

$ vi /extend/shell/clustercheck
MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD="123456"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"
...

# Add execute permission and copy the script to /usr/bin/
$ chmod +x /extend/shell/clustercheck
$ cp /extend/shell/clustercheck /usr/bin/
  • The latest clustercheck script apparently no longer needs the MYSQL_HOST and MYSQL_PORT settings
  • /usr/bin/clustercheck for reference
#!/bin/bash
#
# Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
#
# Author: Olaf van Zandwijk <olaf.vanzandwijk@nedap.com>
# Author: Raghavendra Prabhu <raghavendra.prabhu@percona.com>
#
# Documentation and download: https://github.com/olafz/percona-clustercheck
#
# Based on the original script from Unai Rodriguez
#

if [[ $1 == '-h' || $1 == '--help' ]];then
    echo "Usage: $0 <user> <pass> <available_when_donor=0|1> <log_file> <available_when_readonly=0|1> <defaults_extra_file>"
    exit
fi

# if the disabled file is present, return 503. This allows
# admins to manually remove a node from a cluster easily.
if [ -e "/var/tmp/clustercheck.disabled" ]; then
    # Shell return-code is 1
    echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 51\r\n"
    echo -en "\r\n"
    echo -en "Percona XtraDB Cluster Node is manually disabled.\r\n"
    sleep 0.1
    exit 1
fi

set -e

if [ -f /etc/sysconfig/clustercheck ]; then
        . /etc/sysconfig/clustercheck
fi

MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD="123456"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"

AVAILABLE_WHEN_DONOR=${AVAILABLE_WHEN_DONOR:-0}
ERR_FILE="${ERR_FILE:-/dev/null}"
AVAILABLE_WHEN_READONLY=${AVAILABLE_WHEN_READONLY:-1}
DEFAULTS_EXTRA_FILE=${DEFAULTS_EXTRA_FILE:-/etc/my.cnf}

#Timeout exists for instances where mysqld may be hung
TIMEOUT=10

EXTRA_ARGS=""
if [[ -n "$MYSQL_USERNAME" ]]; then
    EXTRA_ARGS="$EXTRA_ARGS --user=${MYSQL_USERNAME}"
fi
if [[ -n "$MYSQL_PASSWORD" ]]; then
    EXTRA_ARGS="$EXTRA_ARGS --password=${MYSQL_PASSWORD}"
fi
if [[ -r $DEFAULTS_EXTRA_FILE ]];then
    MYSQL_CMDLINE="mysql --defaults-extra-file=$DEFAULTS_EXTRA_FILE -nNE --connect-timeout=$TIMEOUT \
                    ${EXTRA_ARGS}"
else
    MYSQL_CMDLINE="mysql -nNE --connect-timeout=$TIMEOUT ${EXTRA_ARGS}"
fi
#
# Perform the query to check the wsrep_local_state
#
WSREP_STATUS=$($MYSQL_CMDLINE -e "SHOW STATUS LIKE 'wsrep_local_state';" \
    2>${ERR_FILE} | tail -1 2>>${ERR_FILE})

if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
then
    # Check only when set to 0 to avoid latency in response.
    if [[ $AVAILABLE_WHEN_READONLY -eq 0 ]];then
        READ_ONLY=$($MYSQL_CMDLINE -e "SHOW GLOBAL VARIABLES LIKE 'read_only';" \
                    2>${ERR_FILE} | tail -1 2>>${ERR_FILE})

        if [[ "${READ_ONLY}" == "ON" ]];then
            # Percona XtraDB Cluster node local state is 'Synced', but it is in
            # read-only mode. The variable AVAILABLE_WHEN_READONLY is set to 0.
            # => return HTTP 503
            # Shell return-code is 1
            echo -en "HTTP/1.1 503 Service Unavailable\r\n"
            echo -en "Content-Type: text/plain\r\n"
            echo -en "Connection: close\r\n"
            echo -en "Content-Length: 43\r\n"
            echo -en "\r\n"
            echo -en "Percona XtraDB Cluster Node is read-only.\r\n"
            sleep 0.1
            exit 1
        fi
    fi
    # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
    # Shell return-code is 0
    echo -en "HTTP/1.1 200 OK\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 40\r\n"
    echo -en "\r\n"
    echo -en "Percona XtraDB Cluster Node is synced.\r\n"
    sleep 0.1
    exit 0
else
    # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
    # Shell return-code is 1
    echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 44\r\n"
    echo -en "\r\n"
    echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
    sleep 0.1
    exit 1
fi

Create the heartbeat service

On all controller nodes, add the heartbeat service definition /etc/xinetd.d/galera-monitor; controller01 as the example

$ vi /etc/xinetd.d/galera-monitor
# default:on
# description: galera-monitor
service galera-monitor
{
port = 9200
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
group = root
groups = yes
server = /usr/bin/clustercheck
type = UNLISTED
per_source = UNLIMITED
log_on_success =
log_on_failure = HOST
flags = REUSE
}

Edit /etc/services

...
#wap-wsp        9200/tcp                # WAP connectionless session service
galera-monitor  9200/tcp                # galera-monitor
...

Start the xinetd service

# Start on all controller nodes
systemctl daemon-reload
systemctl enable xinetd
systemctl start xinetd
systemctl status xinetd

Test the heartbeat script

Verify on all controller nodes; controller01 as the example

$ /usr/bin/clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Percona XtraDB Cluster Node is synced.
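The same check can be made through xinetd on port 9200, which is what HAProxy would probe; a quick test assuming the galera-monitor service above is active:

curl -i http://controller01:9200
# expect "HTTP/1.1 200 OK" and "Percona XtraDB Cluster Node is synced."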

Recovery after abnormal shutdown or power loss

If power is lost suddenly, all galera hosts shut down uncleanly, and after power-up the galera cluster will not start normally. Handle it as follows:

Step 1: start the mariadb service on the galera cluster's bootstrap (primary) host.
Step 2: start the mariadb service on the galera member hosts.

Exception: what if the mysql service will not start on either the bootstrap host or the member hosts?

# Option 1:
Step 1: delete the /var/lib/mysql/grastate.dat state file on the galera bootstrap host,
then start the service with /bin/galera_new_cluster. It should come up; log in and check the wsrep status.

Step 2: delete /var/lib/mysql/grastate.dat on each galera member host,
then restart the service with systemctl restart mariadb. It should come up; log in and check the wsrep status.

# Option 2:
Step 1: in /var/lib/mysql/grastate.dat on the galera bootstrap host, change the 0 to 1,
then start the service with /bin/galera_new_cluster. It should come up; log in and check the wsrep status.

Step 2: in /var/lib/mysql/grastate.dat on each galera member host, change the 0 to 1,
then restart the service with systemctl restart mariadb. It should come up; log in and check the wsrep status.

In practice the following also works:
Step 1: in /var/lib/mysql/grastate.dat on the galera bootstrap host, change the 0 to 1,
then restart the service with systemctl restart mariadb.

Step 2: on the galera member hosts, simply restart the service with systemctl restart mariadb.
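The "change the 0 to 1" edit refers to the safe_to_bootstrap flag that recent Galera versions write into grastate.dat; as a sketch under that assumption, it can be done non-interactively on the node chosen to bootstrap:

sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/mysql/grastate.dat
/bin/galera_new_cluster    # or: systemctl restart mariadb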

RabbitMQ Cluster

RabbitMQ uses its native cluster mode with mirrored queues synchronized across all nodes. Of the three hosts, two RAM nodes mainly serve traffic and one disk node persists messages; clients configure primary/backup policies as needed.

In this test the three RabbitMQ nodes are deployed on the controller nodes.

(architecture diagram)

Install the packages (all controller nodes)

controller01 as the example. RabbitMQ is built on Erlang, so erlang is installed first, via yum

yum install erlang rabbitmq-server -y
systemctl enable rabbitmq-server.service

Build the rabbitmq cluster

Pick any one controller node and start the rabbitmq service there first

controller01 is used here

systemctl start rabbitmq-server.service
rabbitmqctl cluster_status

Distribute .erlang.cookie to the other controller nodes

scp -p /var/lib/rabbitmq/.erlang.cookie  controller02:/var/lib/rabbitmq/
scp -p /var/lib/rabbitmq/.erlang.cookie  controller03:/var/lib/rabbitmq/

Fix the owner/group of the .erlang.cookie file on controller02 and controller03

[root@controller02 ~]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie

[root@controller03 ~]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
  • Note: check the permissions of .erlang.cookie on all controller nodes; the default is 400 and can be left unchanged

Start the rabbitmq service on controller02 and controller03

[root@controller02 ~]# systemctl start rabbitmq-server

[root@controller03 ~]# systemctl start rabbitmq-server

Build the cluster: controller02 and controller03 join as RAM nodes

[root@controller02 ~]# rabbitmqctl stop_app
[root@controller02 ~]# rabbitmqctl join_cluster --ram rabbit@controller01
[root@controller02 ~]# rabbitmqctl start_app
[root@controller03 ~]# rabbitmqctl stop_app
[root@controller03 ~]# rabbitmqctl join_cluster --ram rabbit@controller01
[root@controller03 ~]# rabbitmqctl start_app

Check the RabbitMQ cluster status from any controller node

$ rabbitmqctl cluster_status
Cluster status of node rabbit@controller01
[{nodes,[{disc,[rabbit@controller01]},
         {ram,[rabbit@controller03,rabbit@controller02]}]},
 {running_nodes,[rabbit@controller03,rabbit@controller02,rabbit@controller01]},
 {cluster_name,<<"rabbit@controller01">>},
 {partitions,[]},
 {alarms,[{rabbit@controller03,[]},
          {rabbit@controller02,[]},
          {rabbit@controller01,[]}]}]

Create the rabbitmq administrator account

# Create the account and set its password on any node; controller01 as the example
[root@controller01 ~]# rabbitmqctl add_user openstack 123456
Creating user "openstack"

# Set the new account's tags
[root@controller01 ~]# rabbitmqctl set_user_tags openstack administrator
Setting tags for user "openstack" to [administrator]

# Set the new account's permissions
[root@controller01 ~]# rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"

# List accounts
[root@controller01 ~]# rabbitmqctl list_users 
Listing users
openstack       [administrator]
guest   [administrator]

Mirrored queue HA

Enable high availability for mirrored queues

[root@controller01 ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
Setting policy "ha-all" for pattern "^" to "{\"ha-mode\":\"all\"}" with priority "0"

Check the mirrored-queue policy from any controller node

[root@controller01 ~]# rabbitmqctl list_policies
Listing policies
/       ha-all  all     ^       {"ha-mode":"all"}       0

Install the web management plugin

Install the web management plugin on all controller nodes; controller01 as the example

[root@controller01 ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  amqp_client
  cowlib
  cowboy
  rabbitmq_web_dispatch
  rabbitmq_management_agent
  rabbitmq_management

Applying plugin configuration to rabbit@controller01... started 6 plugins.


[root@controller01 ~]# ss -ntlp|grep 5672
LISTEN     0      128          *:25672                    *:*                   users:(("beam",pid=2222,fd=42))
LISTEN     0      1024         *:15672                    *:*                   users:(("beam",pid=2222,fd=54))
LISTEN     0      128       [::]:5672                  [::]:*                   users:(("beam",pid=2222,fd=53))

(screenshots: RabbitMQ management UI)

Memcached Cluster

Memcached is stateless; each controller node runs its own instance, and the OpenStack services are simply configured with the list of memcached servers on all controller nodes.

Install the memcached packages

Install on all controller nodes

yum install memcached python-memcached -y

Configure memcached

On every node running memcached, change the listen address from localhost to all addresses

sed -i 's|127.0.0.1,::1|0.0.0.0|g' /etc/sysconfig/memcached

Start the service

systemctl enable memcached.service
systemctl start memcached.service
systemctl status memcached.service
ss -tnlp|grep memcached
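A quick connectivity check from any node; a sketch that assumes nc (nmap-ncat) is available:

echo stats | nc -w 2 controller01 11211 | head -n 5   # should print STAT lines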

High availability: HAProxy + Keepalived

The OpenStack documentation uses the open-source pacemaker cluster stack as the HA resource manager. I have not worked with it and did not want to dig into it, so I went with the familiar recipe: haproxy + keepalived.

VIP plan: 10.10.10.10

Install the packages

Run on both ha nodes

yum install haproxy keepalived -y

Configure keepalived

Edit the keepalived configuration /etc/keepalived/keepalived.conf on ha01:

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "/data/sh/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.10.10
    }
    track_script {
        chk_haproxy
    }
}
  • Watch the interface name and the VIP

Edit the keepalived configuration /etc/keepalived/keepalived.conf on ha02:

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "/data/sh/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.10.10.10
    }
    track_script {
        chk_haproxy
    }
}

Add the haproxy check script on ha01 and ha02:

$ mkdir -p /data/sh/
$ vi /data/sh/check_haproxy.sh
#!/bin/bash

#auto check haproxy process

haproxy_process_count=$(ps aux|grep haproxy|grep -v check_haproxy|grep -v grep|wc -l)

if [[ $haproxy_process_count == 0 ]];then
   systemctl stop keepalived
fi

$ chmod 755 /data/sh/check_haproxy.sh

Start haproxy and keepalived

systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy
systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived

Once both services are up, ha01 should have the VIP 10.10.10.10 assigned
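To confirm from the shell (the interface name comes from the keepalived configuration above):

ip -4 addr show ens33 | grep 10.10.10.10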

(screenshot: VIP present on ha01)

Test failover

Stop haproxy on ha01; the VIP should fail over to ha02

(screenshot: VIP moved to ha02)

After restarting haproxy and keepalived on ha01, the VIP moves back

(screenshot: VIP back on ha01)

Configure haproxy

Enabling haproxy logging is recommended; it makes later troubleshooting much easier

mkdir /var/log/haproxy
chmod a+w /var/log/haproxy

Change the following in the rsyslog configuration

# Uncomment the following lines
$ vi /etc/rsyslog.conf
 19 $ModLoad imudp
 20 $UDPServerRun 514
 
 24 $ModLoad imtcp
 25 $InputTCPServerRun 514

# Append the haproxy log rules at the end of the file
local0.=info    -/var/log/haproxy/haproxy-info.log
local0.=err     -/var/log/haproxy/haproxy-err.log
local0.notice;local0.!=err      -/var/log/haproxy/haproxy-notice.log

# Restart rsyslog
$ systemctl restart rsyslog

The haproxy configuration covers quite a few services; all of the OpenStack services involved are configured here in one go

Both ha nodes need this configuration; the file is /etc/haproxy/haproxy.cfg

global
  log      127.0.0.1     local0
  chroot   /var/lib/haproxy
  daemon
  group    haproxy
  user     haproxy
  maxconn  4000
  pidfile  /var/run/haproxy.pid
  stats    socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    maxconn                 4000    # maximum connections
    option                  httplog
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s


# haproxy stats page
listen stats
  bind 0.0.0.0:1080
  mode http
  stats enable
  stats uri /
  stats realm OpenStack\ Haproxy
  stats auth admin:123456
  stats  refresh 30s
  stats  show-node
  stats  show-legends
  stats  hide-version

# horizon service
 listen dashboard_cluster
  bind  10.10.10.10:80
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 10.10.10.31:80 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:80 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:80 check inter 2000 rise 2 fall 5

# mariadb service;
# controller01 is the master and controller02/03 are backups; a one-master, multiple-backup layout avoids data inconsistency.
# The official example checks port 9200 (the heartbeat). In testing, when the mariadb service was down the "/usr/bin/clustercheck" script could no longer reach it, yet the xinetd-controlled port 9200 stayed up, so haproxy kept forwarding requests to the dead mariadb node; for now the check is done against port 3306 instead.
listen galera_cluster
  bind 10.10.10.10:3306
  balance  source
  mode    tcp
  server controller01 10.10.10.31:3306 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:3306 backup check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:3306 backup check inter 2000 rise 2 fall 5

# Provide an HA cluster access port for rabbitmq, used by the OpenStack services;
# if the services connect to the rabbitmq cluster directly, this rabbitmq load balancing can be omitted
 listen rabbitmq_cluster
   bind 10.10.10.10:5672
   mode tcp
   option tcpka
   balance roundrobin
   timeout client  3h
   timeout server  3h
   option  clitcpka
   server controller01 10.10.10.31:5672 check inter 10s rise 2 fall 5
   server controller02 10.10.10.32:5672 check inter 10s rise 2 fall 5
   server controller03 10.10.10.33:5672 check inter 10s rise 2 fall 5

# glance_api service
 listen glance_api_cluster
  bind  10.10.10.10:9292
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  timeout client 3h 
  timeout server 3h
  server controller01 10.10.10.31:9292 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:9292 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:9292 check inter 2000 rise 2 fall 5

# keystone_public_api service
 listen keystone_public_cluster
  bind 10.10.10.10:5000
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 10.10.10.31:5000 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:5000 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:5000 check inter 2000 rise 2 fall 5

 listen nova_compute_api_cluster
  bind 10.10.10.10:8774
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 10.10.10.31:8774 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:8774 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:8774 check inter 2000 rise 2 fall 5

 listen nova_placement_cluster
  bind 10.10.10.10:8778
  balance  source
  option  tcpka
  option  tcplog
  server controller01 10.10.10.31:8778 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:8778 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:8778 check inter 2000 rise 2 fall 5

 listen nova_metadata_api_cluster
  bind 10.10.10.10:8775
  balance  source
  option  tcpka
  option  tcplog
  server controller01 10.10.10.31:8775 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:8775 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:8775 check inter 2000 rise 2 fall 5

 listen nova_vncproxy_cluster
  bind 10.10.10.10:6080
  balance  source
  option  tcpka
  option  tcplog
  server controller01 10.10.10.31:6080 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:6080 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:6080 check inter 2000 rise 2 fall 5

 listen neutron_api_cluster
  bind 10.10.10.10:9696
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 10.10.10.31:9696 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:9696 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:9696 check inter 2000 rise 2 fall 5

 listen cinder_api_cluster
  bind 10.10.10.10:8776
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 10.10.10.31:8776 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:8776 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:8776 check inter 2000 rise 2 fall 5
  • The bind IP is the VIP

Restart haproxy

systemctl restart haproxy
systemctl status haproxy

Open the built-in haproxy web management page:

http://10.10.10.10:1080/ or http://10.10.10.21:1080/ or http://10.10.10.22:1080/

Credentials: admin / 123456

The status of each backend is clearly shown; many entries are red, which is expected because those services are not installed yet
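The same information can also be pulled non-interactively from the stats endpoint in CSV form; a sketch relying on the stats uri / and the auth configured above (field 18 of HAProxy's CSV export is the backend status):

curl -s -u admin:123456 'http://10.10.10.10:1080/;csv' | cut -d, -f1,2,18 | head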

(screenshot: haproxy stats page)

At this point the basic supporting services for OpenStack are in place.

3. Keystone Cluster Deployment

The main functions of Keystone:

  • manage users and their permissions;
  • maintain the endpoints of the OpenStack services;
  • authentication and authorization.

Create the keystone database

Create the database on any controller node; it replicates automatically

$ mysql -uroot -p123456
create database keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
flush privileges;
exit;

Install keystone

Install keystone on all controller nodes

wget ftp://ftp.pbone.net/mirror/archive.fedoraproject.org/epel/testing/6.2019-05-29/x86_64/Packages/p/python2-qpid-proton-0.28.0-1.el7.x86_64.rpm
wget ftp://ftp.pbone.net/mirror/vault.centos.org/7.8.2003/messaging/x86_64/qpid-proton/Packages/q/qpid-proton-c-0.28.0-1.el7.x86_64.rpm
rpm -ivh qpid-proton-c-0.28.0-1.el7.x86_64.rpm
rpm -ivh python2-qpid-proton-0.28.0-1.el7.x86_64.rpm
# yum install openstack-keystone httpd python3-mod_wsgi mod_ssl -y # centos 8
yum install openstack-keystone httpd mod_wsgi mod_ssl -y

# Back up the keystone configuration file
cp /etc/keystone/keystone.conf{,.bak}
egrep -v '^$|^#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
  • If https access is wanted, mod_ssl must be installed

  • The stock python2-qpid-proton is version 0.26, which does not meet the version requirement, hence the upgrade above

Configure keystone

openstack-config --set /etc/keystone/keystone.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/keystone/keystone.conf cache enabled true
openstack-config --set /etc/keystone/keystone.conf cache memcache_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:123456@10.10.10.10/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
  • The configuration is identical on all three controller nodes

Initialize the keystone database

Run on any one controller node

# Sync the database as the keystone user
$ su -s /bin/sh -c "keystone-manage db_sync" keystone

# Verify the database
$ mysql -uroot -p123456 keystone -e "show tables";
+------------------------------------+
| Tables_in_keystone                 |
+------------------------------------+
| access_rule                        |
| access_token                       |
| application_credential             |
| application_credential_access_rule |
| application_credential_role        |
| assignment                         |
| config_register                    |
| consumer                           |
| credential                         |
| endpoint                           |
| endpoint_group                     |
| federated_user                     |
| federation_protocol                |
| group                              |
| id_mapping                         |
| identity_provider                  |
| idp_remote_ids                     |
| implied_role                       |
| limit                              |
| local_user                         |
| mapping                            |
| migrate_version                    |
| nonlocal_user                      |
| password                           |
| policy                             |
| policy_association                 |
| project                            |
| project_endpoint                   |
| project_endpoint_group             |
| project_option                     |
| project_tag                        |
| region                             |
| registered_limit                   |
| request_token                      |
| revocation_event                   |
| role                               |
| role_option                        |
| sensitive_config                   |
| service                            |
| service_provider                   |
| system_assignment                  |
| token                              |
| trust                              |
| trust_role                         |
| user                               |
| user_group_membership              |
| user_option                        |
| whitelisted_config                 |
+------------------------------------+

Initialize the Fernet key repositories; no output means success

# Generate the keys and directories under /etc/keystone/
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

# Copy the generated keys to the other controller nodes
scp -rp /etc/keystone/fernet-keys /etc/keystone/credential-keys controller02:/etc/keystone/
scp -rp /etc/keystone/fernet-keys /etc/keystone/credential-keys controller03:/etc/keystone/

# After syncing, fix the ownership on the other two controller nodes
chown -R keystone:keystone /etc/keystone/credential-keys/
chown -R keystone:keystone /etc/keystone/fernet-keys/
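To confirm the key repositories really match across nodes, comparing checksums is a quick check; a sketch using the SSH trust set up earlier:

for node in controller01 controller02 controller03; do
  echo "== ${node} =="
  ssh ${node} "md5sum /etc/keystone/fernet-keys/*"
done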

Bootstrap the identity service

Run on any controller node; this initializes the admin user and its password, the three API endpoints, the service entity, region, etc.

Note: the VIP is used here

keystone-manage bootstrap --bootstrap-password 123456 \
    --bootstrap-admin-url http://10.10.10.10:5000/v3/ \
    --bootstrap-internal-url http://10.10.10.10:5000/v3/ \
    --bootstrap-public-url http://10.10.10.10:5000/v3/ \
    --bootstrap-region-id RegionOne

Configure the HTTP server

Configure on all controller nodes; controller01 as the example

Configure httpd.conf

# Set ServerName to the host name
cp /etc/httpd/conf/httpd.conf{,.bak}
sed -i "s/#ServerName www.example.com:80/ServerName ${HOSTNAME}/" /etc/httpd/conf/httpd.conf

# Each node listens on its own IP address
# controller01
sed -i "s/Listen\ 80/Listen\ 10.10.10.31:80/g" /etc/httpd/conf/httpd.conf

# controller02
sed -i "s/Listen\ 80/Listen\ 10.10.10.32:80/g" /etc/httpd/conf/httpd.conf

# controller03
sed -i "s/Listen\ 80/Listen\ 10.10.10.33:80/g" /etc/httpd/conf/httpd.conf

Configure wsgi-keystone.conf

# Symlink the wsgi-keystone.conf file
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

# Each node listens on its own IP address
##controller01
sed -i "s/Listen\ 5000/Listen\ 10.10.10.31:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s#*:5000#10.10.10.31:5000#g" /etc/httpd/conf.d/wsgi-keystone.conf

##controller02
sed -i "s/Listen\ 5000/Listen\ 10.10.10.32:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s#*:5000#10.10.10.32:5000#g" /etc/httpd/conf.d/wsgi-keystone.conf

##controller03
sed -i "s/Listen\ 5000/Listen\ 10.10.10.33:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s#*:5000#10.10.10.33:5000#g" /etc/httpd/conf.d/wsgi-keystone.conf

Start the service

systemctl restart httpd.service
systemctl enable httpd.service
systemctl status httpd.service

Configure the admin credential script

The openstack client environment script defines the environment variables the client uses to call the OpenStack APIs, so they do not have to be passed on the command line each time.
The official documentation writes the admin and demo credentials into files in the home directory; different scripts are defined for different user roles.
The scripts are usually created in the user's home directory.

admin-openrc

$ cat >> ~/admin-openrc << EOF
# admin-openrc
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://10.10.10.10:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

$ source  ~/admin-openrc

# Verify
$ openstack domain list
+---------+---------+---------+--------------------+
| ID      | Name    | Enabled | Description        |
+---------+---------+---------+--------------------+
| default | Default | True    | The default domain |
+---------+---------+---------+--------------------+

# The following command also works
$ openstack token issue

# Copy to the other controller nodes
scp -rp ~/admin-openrc controller02:~/
scp -rp ~/admin-openrc controller03:~/

Create a new domain, project, user, and role

The identity service provides authentication for every OpenStack service, using a combination of domains, projects, users, and roles.

Run on any controller node

Create a domain

$ openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | An Example Domain                |
| enabled     | True                             |
| id          | 4a208138a0004bb1a05d6c61e14f47dc |
| name        | example                          |
| options     | {}                               |
| tags        | []                               |
+-------------+----------------------------------+

$ openstack domain list
+----------------------------------+---------+---------+--------------------+
| ID                               | Name    | Enabled | Description        |
+----------------------------------+---------+---------+--------------------+
| 4a208138a0004bb1a05d6c61e14f47dc | example | True    | An Example Domain  |
| default                          | Default | True    | The default domain |
+----------------------------------+---------+---------+--------------------+

Create the demo project

The admin project, role, and user already exist, so create a fresh demo project and role.

The demo project is created in the "default" domain, as an example

openstack project create --domain default --description "demo Project" demo

Create the demo user

The new user's password must be supplied

--password-prompt is interactive; --password <password> is non-interactive

openstack user create --domain default   --password 123456 demo

Create the user role

openstack role create user

List roles

openstack role list

Add the user role to the demo project and demo user

openstack role add --project demo --user  demo user

Configure the demo credential script

cat >> ~/demo-openrc << EOF
#demo-openrc
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_PROJECT_NAME=demo
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://10.10.10.10:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

source  ~/demo-openrc
openstack token issue 

# Copy to the other controller nodes
scp -rp ~/demo-openrc controller02:~/
scp -rp ~/demo-openrc controller03:~/

Verify keystone

On any controller node, request an authentication token as the admin user, using the admin credentials

$ source admin-openrc
$ openstack --os-auth-url http://10.10.10.10:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

On any controller node, request an authentication token as the demo user, using the demo credentials

$ source demo-openrc
$ openstack --os-auth-url http://10.10.10.10:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue

4. Glance Cluster Deployment

Glance provides the following functions:

  • a RESTful API that lets users query and retrieve image metadata and the images themselves;
  • multiple image storage backends, including plain filesystems, Swift, Ceph, and others;
  • snapshots of running instances to create new images.

Create the glance database

Create the database on any controller node; it replicates automatically. controller01 as the example

$ mysql -u root -p123456
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';
flush privileges;
exit;

Create the glance service credentials

Run on any controller node; controller01 as the example

source ~/admin-openrc
# Create the service project
openstack project create --domain default --description "Service Project" service

# Create the glance user
openstack user create --domain default --password 123456 glance

# Add the admin role to the glance user and the service project
openstack role add --project service --user glance admin

# Create the glance service entity
openstack service create --name glance --description "OpenStack Image" image

# Create the glance-api endpoints
openstack endpoint create --region RegionOne image public http://10.10.10.10:9292
openstack endpoint create --region RegionOne image internal http://10.10.10.10:9292
openstack endpoint create --region RegionOne image admin http://10.10.10.10:9292

# List the endpoints just created
openstack endpoint list

Deploy and configure glance

Install glance

Install glance on all controller nodes; controller01 as the example

yum install openstack-glance python-glance python-glanceclient -y

# Back up the glance configuration file
cp /etc/glance/glance-api.conf{,.bak}
egrep -v '^$|^#' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf

Configure glance-api.conf

Note the bind_host parameter, which changes per node; controller01 as the example

openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 10.10.10.31
openstack-config --set /etc/glance/glance-api.conf database connection  mysql+pymysql://glance:123456@10.10.10.10/glance
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri   http://10.10.10.10:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url  http://10.10.10.10:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password 123456
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

Create the image storage directory and set its permissions

/var/lib/glance/images is the default storage directory; create it on all controller nodes

mkdir /var/lib/glance/images/
chown glance:nobody /var/lib/glance/images

Initialize the glance database

Run on any controller node

su -s /bin/sh -c "glance-manage db_sync" glance

Verify that the glance tables were created

$ mysql -uglance -p123456 -e "use glance;show tables;"
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| alembic_version                  |
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| metadef_tags                     |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+

Start the service

All controller nodes

systemctl enable openstack-glance-api.service
systemctl restart openstack-glance-api.service
systemctl status openstack-glance-api.service
sleep 3s
lsof -i:9292

Download a cirros image to verify the glance service

On any controller node, download the cirros image; the format is qcow2 with a bare container, and the image is made public.

After the image is created, a file named after the image id appears in the configured storage directory

$ source ~/admin-openrc
$ wget -c http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img

$ openstack image create --file ~/cirros-0.5.2-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros-qcow2

$ openstack image list
+--------------------------------------+--------------+--------+
| ID                                   | Name         | Status |
+--------------------------------------+--------------+--------+
| 1c66cd7e-b6d9-4e70-a3d4-f73b27a84230 | cirros-qcow2 | active |
+--------------------------------------+--------------+--------+

Check the image files

[root@controller01 ~]# ls -l /var/lib/glance/images/
total 0

[root@controller02 ~]# ls -l /var/lib/glance/images/
total 15956
-rw-r----- 1 glance glance 16338944 Nov  4 17:25 1c66cd7e-b6d9-4e70-a3d4-f73b27a84230

[root@controller03 ~]# ls -l /var/lib/glance/images/
total 0

scp -pr /var/lib/glance/images/* controller01:/var/lib/glance/images/
scp -pr /var/lib/glance/images/* controller03:/var/lib/glance/images/
chown -R glance. /var/lib/glance/images/*

Only one glance node actually holds the image, so a request that lands on another node cannot find it. In production a shared backend is normally used instead: NFS, or Swift/Ceph; that is covered later.

5. Placement Service Deployment

Placement functions:

  • tracks and filters resources via HTTP requests
  • stores its data in a local database
  • provides rich resource management and filtering strategies

Create the placement database

Create the database on any controller node

$ mysql -u root -p123456

CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '123456';
flush privileges;
exit;

Create the placement-api service

Run on any controller node

Create the placement service user

openstack user create --domain default --password=123456 placement

Add the placement user to the service project with the admin role

openstack role add --project service --user placement admin

Create the placement API service entity

openstack service create --name placement --description "Placement API" placement

Create the placement API service endpoints

openstack endpoint create --region RegionOne placement public http://10.10.10.10:8778
openstack endpoint create --region RegionOne placement internal http://10.10.10.10:8778
openstack endpoint create --region RegionOne placement admin http://10.10.10.10:8778
  • The VIP is used

Install the placement packages

Run on all controller nodes

yum install openstack-placement-api -y

Edit the configuration file

Run on all controller nodes

# Back up the placement configuration
cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:123456@10.10.10.10/placement
openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url  http://10.10.10.10:5000/v3
openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
openstack-config --set /etc/placement/placement.conf keystone_authtoken password 123456

Initialize the placement database

Run on any controller node

su -s /bin/sh -c "placement-manage db sync" placement
mysql -uroot -p123456 placement -e " show tables;"

Configure 00-placement-api.conf

Edit placement's apache configuration file

Run on all controller nodes, controller01 as the example; change the listen address per node. The official documentation does not mention this, but without the change the compute service checks will fail later

# Back up the 00-placement-api configuration
# on controller01
cp /etc/httpd/conf.d/00-placement-api.conf{,.bak}
sed -i "s/Listen\ 8778/Listen\ 10.10.10.31:8778/g" /etc/httpd/conf.d/00-placement-api.conf
sed -i "s/*:8778/10.10.10.31:8778/g" /etc/httpd/conf.d/00-placement-api.conf

# on controller02
cp /etc/httpd/conf.d/00-placement-api.conf{,.bak}
sed -i "s/Listen\ 8778/Listen\ 10.10.10.32:8778/g" /etc/httpd/conf.d/00-placement-api.conf
sed -i "s/*:8778/10.10.10.32:8778/g" /etc/httpd/conf.d/00-placement-api.conf

# on controller03
cp /etc/httpd/conf.d/00-placement-api.conf{,.bak}
sed -i "s/Listen\ 8778/Listen\ 10.10.10.33:8778/g" /etc/httpd/conf.d/00-placement-api.conf
sed -i "s/*:8778/10.10.10.33:8778/g" /etc/httpd/conf.d/00-placement-api.conf

Enable placement API access

Run on all controller nodes

$ vi /etc/httpd/conf.d/00-placement-api.conf    (15gg, i.e. jump to around line 15)
...
  #SSLCertificateKeyFile
  #SSLCertificateKeyFile ...
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
...

Restart the apache service

Run on all controller nodes; this brings up the placement-api listener

systemctl restart httpd.service
ss -tnlp|grep 8778
lsof -i:8778
# curl the endpoint and check that JSON is returned
$ curl http://10.10.10.10:8778
{"versions": [{"id": "v1.0", "max_version": "1.36", "min_version": "1.0", "status": "CURRENT", "links": [{"rel": "self", "href": ""}]}]}

Verify the placement health status

$ placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+

6. Nova Controller Cluster Deployment

Nova provides the following functions:

  1. instance lifecycle management
  2. compute resource management
  3. network and authentication integration
  4. a REST-style API
  5. asynchronous, eventually consistent communication
  6. hypervisor agnostic: supports Xen, XenServer/XCP, KVM, UML, VMware vSphere and Hyper-V

Create the nova databases

Create the databases on any controller node

# Create and grant the nova_api, nova, and nova_cell0 databases
mysql -uroot -p123456
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';

flush privileges;
exit;

Create the nova service credentials

Run on any controller node

Create the nova user

source ~/admin-openrc
openstack user create --domain default --password 123456 nova

Grant the admin role to the nova user

openstack role add --project service --user nova admin

Create the nova service entity

openstack service create --name nova --description "OpenStack Compute" compute

Create the Compute API service endpoints

The API addresses all use the VIP; if public/internal/admin are designed to use different VIPs, adjust accordingly.

--region must match the region created when the admin user was bootstrapped.

openstack endpoint create --region RegionOne compute public http://10.10.10.10:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://10.10.10.10:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://10.10.10.10:8774/v2.1

Install the nova packages

Install the nova services on all controller nodes; controller01 as the example

  • nova-api (the main nova service)

  • nova-scheduler (the nova scheduling service)

  • nova-conductor (the nova database service, providing database access)

  • nova-novncproxy (the nova VNC service, providing instance consoles)

yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

Deploy and configure

Configure the nova services on all controller nodes; controller01 as the example

Note the my_ip parameter, which changes per node; also note the ownership of nova.conf: root:nova

# Back up the configuration file /etc/nova/nova.conf
cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip  10.10.10.31
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron  true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver

# Do not go through the haproxy rabbitmq frontend for now; connect to the rabbitmq cluster directly
#openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:123456@10.10.10.10:5672
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672

# Automatically discover new nova compute nodes
openstack-config --set  /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 600

openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen_port 8774
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen_port 8775
openstack-config --set /etc/nova/nova.conf DEFAULT metadata_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf DEFAULT osapi_compute_listen '$my_ip'

openstack-config --set /etc/nova/nova.conf api auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf api_database  connection  mysql+pymysql://nova:123456@10.10.10.10/nova_api

openstack-config --set /etc/nova/nova.conf cache backend oslo_cache.memcache_pool
openstack-config --set /etc/nova/nova.conf cache enabled True
openstack-config --set /etc/nova/nova.conf cache memcache_servers controller01:11211,controller02:11211,controller03:11211

openstack-config --set /etc/nova/nova.conf database connection  mysql+pymysql://nova:123456@10.10.10.10/nova

openstack-config --set /etc/nova/nova.conf keystone_authtoken www_authenticate_uri  http://10.10.10.10:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url  http://10.10.10.10:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type  password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name  Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name  Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name  service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username  nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password  123456

openstack-config --set /etc/nova/nova.conf vnc enabled  true
openstack-config --set /etc/nova/nova.conf vnc server_listen  '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_host '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_port  6080

openstack-config --set /etc/nova/nova.conf glance  api_servers  http://10.10.10.10:9292

openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp

openstack-config --set /etc/nova/nova.conf placement region_name  RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name  Default
openstack-config --set /etc/nova/nova.conf placement project_name  service
openstack-config --set /etc/nova/nova.conf placement auth_type  password
openstack-config --set /etc/nova/nova.conf placement user_domain_name  Default
openstack-config --set /etc/nova/nova.conf placement auth_url  http://10.10.10.10:5000/v3
openstack-config --set /etc/nova/nova.conf placement username  placement
openstack-config --set /etc/nova/nova.conf placement password  123456

Notes:

With haproxy in front, services connecting to rabbitmq can hit connection timeouts and reconnects, visible in the service and rabbitmq logs:

transport_url=rabbit://openstack:123456@10.10.10.10:5672

rabbitmq has its own clustering, and the official documentation recommends connecting to the rabbitmq cluster directly. With that approach services occasionally fail to start, for reasons unknown; if you do not see that problem, prefer connecting to the cluster directly rather than through the haproxy VIP and port

openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672

Initialize the nova databases and verify

Run on any controller node

# Sync the nova-api database (no output)
# Map the cell0 database (no output)
# Create the cell1 cell
# Sync the nova database
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly

$ su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+
|  Name |                 UUID                 |               Transport URL               |               Database Connection                | Disabled |
+-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |                   none:/                  | mysql+pymysql://nova:****@10.10.10.10/nova_cell0 |  False   |
| cell1 | 3e74f43a-74db-4eba-85ee-c8330f906b1b | rabbit://openstack:****@controller03:5672 |    mysql+pymysql://nova:****@10.10.10.10/nova    |  False   |
+-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+

Verify that the nova databases were populated

mysql -unova -p123456 -e "use nova_api;show tables;"
mysql -unova -p123456 -e "use nova;show tables;"
mysql -unova -p123456 -e "use nova_cell0;show tables;"

Start the nova services

Run on all controller nodes; controller01 as the example

systemctl enable openstack-nova-api.service 
systemctl enable openstack-nova-scheduler.service 
systemctl enable openstack-nova-conductor.service 
systemctl enable openstack-nova-novncproxy.service

systemctl restart openstack-nova-api.service 
systemctl restart openstack-nova-scheduler.service 
systemctl restart openstack-nova-conductor.service 
systemctl restart openstack-nova-novncproxy.service

systemctl status openstack-nova-api.service 
systemctl status openstack-nova-scheduler.service 
systemctl status openstack-nova-conductor.service 
systemctl status openstack-nova-novncproxy.service

ss -tlnp | egrep '8774|8775|8778|6080'
curl http://10.10.10.10:8774

Verify

List the service components and check their status

$ source ~/admin-openrc 
$ openstack compute service list
+----+----------------+--------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host         | Zone     | Status  | State | Updated At                 |
+----+----------------+--------------+----------+---------+-------+----------------------------+
| 21 | nova-scheduler | controller01 | internal | enabled | up    | 2021-11-04T10:24:01.000000 |
| 24 | nova-conductor | controller01 | internal | enabled | up    | 2021-11-04T10:24:05.000000 |
| 27 | nova-scheduler | controller02 | internal | enabled | up    | 2021-11-04T10:24:13.000000 |
| 30 | nova-scheduler | controller03 | internal | enabled | up    | 2021-11-04T10:24:05.000000 |
| 33 | nova-conductor | controller02 | internal | enabled | up    | 2021-11-04T10:24:07.000000 |
| 36 | nova-conductor | controller03 | internal | enabled | up    | 2021-11-04T10:24:10.000000 |
+----+----------------+--------------+----------+---------+-------+----------------------------+

List the API endpoints

$ openstack catalog list
+-----------+-----------+------------------------------------------+
| Name      | Type      | Endpoints                                |
+-----------+-----------+------------------------------------------+
| placement | placement | RegionOne                                |
|           |           |   admin: http://10.10.10.10:8778         |
|           |           | RegionOne                                |
|           |           |   internal: http://10.10.10.10:8778      |
|           |           | RegionOne                                |
|           |           |   public: http://10.10.10.10:8778        |
|           |           |                                          |
| glance    | image     | RegionOne                                |
|           |           |   public: http://10.10.10.10:9292        |
|           |           | RegionOne                                |
|           |           |   internal: http://10.10.10.10:9292      |
|           |           | RegionOne                                |
|           |           |   admin: http://10.10.10.10:9292         |
|           |           |                                          |
| keystone  | identity  | RegionOne                                |
|           |           |   internal: http://10.10.10.10:5000/v3/  |
|           |           | RegionOne                                |
|           |           |   admin: http://10.10.10.10:5000/v3/     |
|           |           | RegionOne                                |
|           |           |   public: http://10.10.10.10:5000/v3/    |
|           |           |                                          |
| nova      | compute   | RegionOne                                |
|           |           |   public: http://10.10.10.10:8774/v2.1   |
|           |           | RegionOne                                |
|           |           |   internal: http://10.10.10.10:8774/v2.1 |
|           |           | RegionOne                                |
|           |           |   admin: http://10.10.10.10:8774/v2.1    |
|           |           |                                          |
+-----------+-----------+------------------------------------------+

Check the cells and the placement API; every check should report Success

$ nova-status upgrade check
+--------------------------------------------------------------------+
| Upgrade Check Results                                              |
+--------------------------------------------------------------------+
| Check: Cells v2                                                    |
| Result: Success                                                    |
| Details: No host mappings or compute nodes were found. Remember to |
|   run command 'nova-manage cell_v2 discover_hosts' when new        |
|   compute hosts are deployed.                                      |
+--------------------------------------------------------------------+
| Check: Placement API                                               |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Ironic Flavor Migration                                     |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+
| Check: Cinder API                                                  |
| Result: Success                                                    |
| Details: None                                                      |
+--------------------------------------------------------------------+

七、Nova compute node cluster deployment

Install nova-compute

Install the nova-compute service on all compute nodes, using compute01 as the example

# The OpenStack repositories and required dependencies were already set up during the base configuration, so only the service packages themselves need to be installed
wget ftp://ftp.pbone.net/mirror/archive.fedoraproject.org/epel/testing/6.2019-05-29/x86_64/Packages/p/python2-qpid-proton-0.28.0-1.el7.x86_64.rpm
wget ftp://ftp.pbone.net/mirror/vault.centos.org/7.8.2003/messaging/x86_64/qpid-proton/Packages/q/qpid-proton-c-0.28.0-1.el7.x86_64.rpm
rpm -ivh qpid-proton-c-0.28.0-1.el7.x86_64.rpm
rpm -ivh python2-qpid-proton-0.28.0-1.el7.x86_64.rpm
yum install openstack-nova-compute -y

Deployment and configuration

Configure the nova-compute service on all compute nodes, using compute01 as the example

Note the my_ip parameter, which must be changed per node; also note that /etc/nova/nova.conf must be owned root:nova

# Back up the configuration file /etc/nova/nova.conf
cp /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

Determine whether the compute node supports hardware acceleration for virtual machines

$ egrep -c '(vmx|svm)' /proc/cpuinfo
4
# If the command returns a non-zero value, the compute node supports hardware acceleration and no extra configuration is needed.
# If it returns 0, the compute node does not support hardware acceleration and libvirt must be configured to use QEMU instead of KVM,
# by editing the [libvirt] section of /etc/nova/nova.conf. For VMware guests, hardware acceleration can be enabled with the setting shown below.

image-20211104183711075
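
The check above can also be scripted; a minimal sketch (assuming openstack-config is available, as used throughout this document) that picks virt_type automatically per node:

# Use kvm when the CPU exposes vmx/svm, otherwise fall back to qemu (software emulation)
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
else
    openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
fi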

Edit the configuration file nova.conf

openstack-config --set  /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set  /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:123456@10.10.10.10
openstack-config --set  /etc/nova/nova.conf DEFAULT my_ip 10.10.10.41
openstack-config --set  /etc/nova/nova.conf DEFAULT use_neutron  true
openstack-config --set  /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver

openstack-config --set  /etc/nova/nova.conf api auth_strategy  keystone

openstack-config --set /etc/nova/nova.conf  keystone_authtoken www_authenticate_uri  http://10.10.10.10:5000
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_url  http://10.10.10.10:5000
openstack-config --set  /etc/nova/nova.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_domain_name  Default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken user_domain_name  Default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_name  service
openstack-config --set  /etc/nova/nova.conf keystone_authtoken username  nova
openstack-config --set  /etc/nova/nova.conf keystone_authtoken password  123456

openstack-config --set /etc/nova/nova.conf libvirt virt_type  kvm

openstack-config --set  /etc/nova/nova.conf vnc enabled  true
openstack-config --set  /etc/nova/nova.conf vnc server_listen  0.0.0.0
openstack-config --set  /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
openstack-config --set  /etc/nova/nova.conf vnc novncproxy_base_url http://10.10.10.10:6080/vnc_auto.html

openstack-config --set  /etc/nova/nova.conf glance api_servers  http://10.10.10.10:9292

openstack-config --set  /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp

openstack-config --set  /etc/nova/nova.conf placement region_name  RegionOne
openstack-config --set  /etc/nova/nova.conf placement project_domain_name  Default
openstack-config --set  /etc/nova/nova.conf placement project_name  service
openstack-config --set  /etc/nova/nova.conf placement auth_type  password
openstack-config --set  /etc/nova/nova.conf placement user_domain_name  Default
openstack-config --set  /etc/nova/nova.conf placement auth_url  http://10.10.10.10:5000/v3
openstack-config --set  /etc/nova/nova.conf placement username  placement
openstack-config --set  /etc/nova/nova.conf placement password  123456

Start the nova services on the compute nodes

Run on all compute nodes

systemctl restart libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service

Add the compute nodes to the cell database

Run on any controller node; list the compute nodes

$ source ~/admin-openrc 
$ openstack compute service list --service nova-compute
+----+--------------+-----------+------+---------+-------+----------------------------+
| ID | Binary       | Host      | Zone | Status  | State | Updated At                 |
+----+--------------+-----------+------+---------+-------+----------------------------+
| 39 | nova-compute | compute01 | nova | enabled | up    | 2021-11-04T10:45:46.000000 |
| 42 | nova-compute | compute02 | nova | enabled | up    | 2021-11-04T10:45:48.000000 |
+----+--------------+-----------+------+---------+-------+----------------------------+

Discover the compute hosts from a controller node

This must be run on a controller node every time a new compute node is added

Discover compute nodes manually

$ su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 3e74f43a-74db-4eba-85ee-c8330f906b1b
Checking host mapping for compute host 'compute01': a476abf2-030f-4943-b8a7-167d4a65a393
Creating host mapping for compute host 'compute01': a476abf2-030f-4943-b8a7-167d4a65a393
Checking host mapping for compute host 'compute02': ed0a899f-d898-4a73-9100-a69a26edb932
Creating host mapping for compute host 'compute02': ed0a899f-d898-4a73-9100-a69a26edb932
Found 2 unmapped computes in cell: 3e74f43a-74db-4eba-85ee-c8330f906b1b

Discover compute nodes automatically

To avoid running nova-manage cell_v2 discover_hosts by hand whenever a compute node is added, the controller nodes can be set to discover hosts periodically; this is controlled by the [scheduler] section of nova.conf on the controllers.
Run on all controller nodes; the discovery interval is set to 10 minutes here and can be tuned to the environment

openstack-config --set  /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 600
systemctl restart openstack-nova-api.service

Verify

List the service components to confirm each process started and registered successfully

$ source ~/admin-openrc 
$ openstack compute service list
+----+----------------+--------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host         | Zone     | Status  | State | Updated At                 |
+----+----------------+--------------+----------+---------+-------+----------------------------+
| 21 | nova-scheduler | controller01 | internal | enabled | up    | 2021-11-04T10:49:48.000000 |
| 24 | nova-conductor | controller01 | internal | enabled | up    | 2021-11-04T10:49:42.000000 |
| 27 | nova-scheduler | controller02 | internal | enabled | up    | 2021-11-04T10:49:43.000000 |
| 30 | nova-scheduler | controller03 | internal | enabled | up    | 2021-11-04T10:49:45.000000 |
| 33 | nova-conductor | controller02 | internal | enabled | up    | 2021-11-04T10:49:47.000000 |
| 36 | nova-conductor | controller03 | internal | enabled | up    | 2021-11-04T10:49:50.000000 |
| 39 | nova-compute   | compute01    | nova     | enabled | up    | 2021-11-04T10:49:46.000000 |
| 42 | nova-compute   | compute02    | nova     | enabled | up    | 2021-11-04T10:49:48.000000 |
+----+----------------+--------------+----------+---------+-------+----------------------------+

List the API endpoints in the identity service to verify connectivity to keystone

$ openstack catalog list
+-----------+-----------+------------------------------------------+
| Name      | Type      | Endpoints                                |
+-----------+-----------+------------------------------------------+
| placement | placement | RegionOne                                |
|           |           |   admin: http://10.10.10.10:8778         |
|           |           | RegionOne                                |
|           |           |   internal: http://10.10.10.10:8778      |
|           |           | RegionOne                                |
|           |           |   public: http://10.10.10.10:8778        |
|           |           |                                          |
| glance    | image     | RegionOne                                |
|           |           |   public: http://10.10.10.10:9292        |
|           |           | RegionOne                                |
|           |           |   internal: http://10.10.10.10:9292      |
|           |           | RegionOne                                |
|           |           |   admin: http://10.10.10.10:9292         |
|           |           |                                          |
| keystone  | identity  | RegionOne                                |
|           |           |   internal: http://10.10.10.10:5000/v3/  |
|           |           | RegionOne                                |
|           |           |   admin: http://10.10.10.10:5000/v3/     |
|           |           | RegionOne                                |
|           |           |   public: http://10.10.10.10:5000/v3/    |
|           |           |                                          |
| nova      | compute   | RegionOne                                |
|           |           |   public: http://10.10.10.10:8774/v2.1   |
|           |           | RegionOne                                |
|           |           |   internal: http://10.10.10.10:8774/v2.1 |
|           |           | RegionOne                                |
|           |           |   admin: http://10.10.10.10:8774/v2.1    |
|           |           |                                          |
+-----------+-----------+------------------------------------------+

List the images in the image service and their status

$ openstack image list
+--------------------------------------+--------------+--------+
| ID                                   | Name         | Status |
+--------------------------------------+--------------+--------+
| 1c66cd7e-b6d9-4e70-a3d4-f73b27a84230 | cirros-qcow2 | active |
+--------------------------------------+--------------+--------+

Check that the cells and the placement API are working correctly

$ nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

Extended topic: how openstack (nova), kvm, qemu, and libvirtd relate to each other

一:QEMU

QEMU is an emulator. It uses dynamic binary translation to emulate the CPU and a range of other hardware, so the guest OS believes it is talking to real hardware when it is in fact talking to hardware emulated by QEMU. In this mode the guest OS can interact with the host's hardware, but every instruction has to be translated by QEMU, so performance is relatively poor.

二:KVM

KVM is the virtualization framework provided by the Linux kernel. It requires hardware-assisted virtualization support in the CPU, such as Intel VT or AMD-V.

KVM implements its core virtualization features through the kvm.ko kernel module plus a processor-specific module such as kvm-intel.ko or kvm-amd.ko. KVM itself does no device emulation; it only exposes the /dev/kvm interface, which userspace programs drive via ioctl calls to create vCPUs and allocate virtual memory address space.

With KVM, the guest OS's CPU instructions no longer need to be translated by QEMU, which greatly improves performance.

However, KVM only virtualizes the CPU and memory and cannot emulate other devices, which is where the combined technology below, qemu-kvm, comes in.

三:QEMU-KVM

qemu-kvm is a branch of QEMU dedicated to the KVM acceleration module.

QEMU integrates KVM by calling /dev/kvm via ioctl, handing the CPU-related instructions over to the kernel module. Since KVM only virtualizes the CPU and memory, QEMU still has to emulate the other devices (disks, NICs, and so on); QEMU plus KVM together form full server virtualization.

In summary, QEMU-KVM serves two main purposes:

  1. Virtualizing the CPU and memory (handled by KVM) and the I/O devices (handled by QEMU)
  2. Creating and managing the various virtual devices (handled by QEMU)

The qemu-kvm architecture is shown below:

四:libvirtd

Libvirt is currently the most widely used tool and API for managing KVM virtual machines. libvirtd is a daemon process that can be driven by a local virsh or by a remote virsh; libvirtd in turn drives qemu-kvm to control the virtual machines.

libvirt consists of several parts, including an application programming interface (API) library, a daemon (libvirtd), and a default command-line tool (virsh). The libvirtd daemon is responsible for managing the virtual machines, so make sure this process is running.
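
As a quick illustration on a compute node, the domains that libvirtd manages and the qemu-kvm processes backing them can be listed directly (read-only checks):

virsh list --all            # domains known to libvirtd on this host
ps -ef | grep [q]emu-kvm    # the qemu-kvm processes for the running domains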

五:How openstack (nova), kvm, qemu-kvm, and libvirtd fit together

KVM is the lowest-level VMM; it can virtualize the CPU and memory, but lacks support for networking, I/O, and peripheral devices, so it cannot be used on its own.

qemu-kvm is built on top of KVM and provides a complete virtualization solution.

The core job of openstack (nova) is to manage a large fleet of virtual machines. The VMs can be of many kinds (kvm, qemu, xen, vmware...), and the management backends can vary as well (libvirt, xenapi, vmwareapi...). The default API nova uses to manage virtual machines is libvirt.

Put simply, openstack does not drive qemu-kvm directly; it controls qemu-kvm indirectly through the libvirt library.

In addition, libvirt is cross-hypervisor: it can drive emulators other than QEMU, including vmware, virtualbox, xen, and so on. For that cross-hypervisor portability, openstack uses only libvirt rather than talking to qemu-kvm directly.

1364540142_7304

八、Neutron controller + network node cluster deployment (linuxbridge)

Neutron provides the following functions:

  • Neutron provides networking for the entire OpenStack environment, including L2 switching, L3 routing, load balancing, firewalling, VPN, and so on.
  • Neutron offers a flexible framework: through configuration, either open-source or commercial software can be used to implement these functions.

Create the neutron database (controller node)

Create the database on any controller node;

$ mysql -uroot -p123456
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
flush privileges;
exit;
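
Optionally, confirm that the neutron account can reach the database through the VIP (haproxy) before moving on; the '%' grant above should allow this from any host:

mysql -uneutron -p123456 -h 10.10.10.10 -e "SHOW DATABASES LIKE 'neutron';"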

Create the neutron service credentials

Run on any controller node;

Create the neutron user

source admin-openrc
openstack user create --domain default --password 123456 neutron

Grant the admin role to the neutron user

openstack role add --project service --user neutron admin

Create the neutron service entity

openstack service create --name neutron --description "OpenStack Networking" network

Create the neutron API service endpoints

The API addresses all use the VIP; if public/internal/admin are designed to use different VIPs, be careful to keep them distinct;

--region must match the region created when the admin user was initialized; the neutron-api service type is network;

openstack endpoint create --region RegionOne network public http://10.10.10.10:9696
openstack endpoint create --region RegionOne network internal http://10.10.10.10:9696
openstack endpoint create --region RegionOne network admin http://10.10.10.10:9696

Installation and configuration

  • openstack-neutron: the neutron-server package

  • openstack-neutron-ml2: the ML2 plugin package

  • openstack-neutron-linuxbridge: packages for the linux bridge network provider

  • ebtables: firewall-related package

  • conntrack-tools: enables stateful packet inspection for iptables

Install the packages

Install the neutron services on all controller nodes

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables conntrack-tools -y

Configure the neutron services on all controller nodes, using controller01 as the example;

Configure neutron.conf

Note the my_ip parameter, which must be changed per node; also note that /etc/neutron/neutron.conf must be owned root:neutron

Note the bind_host parameter, which must be changed per node;

# Back up the configuration file /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 10.10.10.31
openstack-config --set  /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set  /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
# Connect to the rabbitmq cluster directly
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy  keystone
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes  true
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes  true
# Enable the L3 HA feature
openstack-config --set  /etc/neutron/neutron.conf DEFAULT l3_ha True
# Maximum number of l3 agents an HA router is scheduled on
openstack-config --set  /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
# Minimum number of healthy l3 agents required to create an HA router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2
# DHCP high availability: spawn one DHCP server on each of the 3 network nodes
openstack-config --set  /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3
# RPC response timeout; the 60s default can cause timeout errors, so set it to 180s
openstack-config --set  /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180

openstack-config --set  /etc/neutron/neutron.conf database connection  mysql+pymysql://neutron:123456@10.10.10.10/neutron

openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url  http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name  default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name  default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name  service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username  neutron
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password  123456

openstack-config --set  /etc/neutron/neutron.conf nova  auth_url http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf nova  auth_type password
openstack-config --set  /etc/neutron/neutron.conf nova  project_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova  user_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova  region_name RegionOne
openstack-config --set  /etc/neutron/neutron.conf nova  project_name service
openstack-config --set  /etc/neutron/neutron.conf nova  username nova
openstack-config --set  /etc/neutron/neutron.conf nova  password 123456

openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path  /var/lib/neutron/tmp

Configure ml2_conf.ini

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan,vxlan
# Multiple tenant network types can be set at once; the first value is the default when a regular tenant creates a network, and is also the default network type used to carry the master router heartbeat
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan,vlan,flat
# ml2 mechanism_drivers list; l2population applies to gre/vxlan tenant networks
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
# Name the flat network type "external"; "*" means any network, and an empty value disables flat networks
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  external
# Name the vlan network type "vlan"; if no vlan id range is set, the range is unrestricted
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges  vlan:3001:3500
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 10001:20000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  true

Create the ml2 symlink: point the plugin config file at the ML2 plugin configuration

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
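
Optionally, read a couple of values back through the symlink to confirm the file and link are in place (assuming the openstack-utils build in use supports --get, which it normally does alongside --set):

ls -l /etc/neutron/plugin.ini
openstack-config --get /etc/neutron/plugin.ini ml2 mechanism_drivers
openstack-config --get /etc/neutron/plugin.ini ml2_type_vxlan vni_ranges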

Configure nova.conf

# Modify the configuration file /etc/nova/nova.conf
# Configure the nova service on all controller nodes to interact with the networking service
openstack-config --set  /etc/nova/nova.conf neutron url  http://10.10.10.10:9696
openstack-config --set  /etc/nova/nova.conf neutron auth_url  http://10.10.10.10:5000
openstack-config --set  /etc/nova/nova.conf neutron auth_type  password
openstack-config --set  /etc/nova/nova.conf neutron project_domain_name  default
openstack-config --set  /etc/nova/nova.conf neutron user_domain_name  default
openstack-config --set  /etc/nova/nova.conf neutron region_name  RegionOne
openstack-config --set  /etc/nova/nova.conf neutron project_name  service
openstack-config --set  /etc/nova/nova.conf neutron username  neutron
openstack-config --set  /etc/nova/nova.conf neutron password  123456
openstack-config --set  /etc/nova/nova.conf neutron service_metadata_proxy  true
openstack-config --set  /etc/nova/nova.conf neutron metadata_proxy_shared_secret  123456

Initialize the neutron database

Run on any controller node

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Verify that the neutron database was populated correctly

mysql -u neutron -p123456 -e "use neutron;show tables;"

Note: if the controller nodes run only neutron-server, the configuration above is all that needs changing; then start the service with: systemctl restart openstack-nova-api.service && systemctl enable neutron-server.service && systemctl restart neutron-server.service

Configure linuxbridge_agent.ini

  • The Linux bridge agent

  • The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups

  • The network type names map to physical NICs: here the provider network maps to the planned ens34 NIC and the vlan tenant network maps to the planned ens36 NIC; when creating networks later, the network name is used rather than the NIC name;

  • Note that the physical NIC mapping is local to each host and must be set to the NIC names that host actually uses;

  • openvswitch can be used instead

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
# The network type names map to physical NICs: here the flat external network maps to the planned ens34 and the vlan tenant network to the planned ens36; networks are created by network name, not NIC name;
# The physical NIC mapping is local to each host and must match the NIC names the host actually uses;
# There is also a "bridge_mappings" parameter for mapping to existing bridges
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  external:ens34,vlan:ens36
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
# VTEP endpoint for the tunnel (vxlan) tenant network; this is the planned ens35 address and must be adjusted per node
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.10.30.31
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
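
Since local_ip differs on every node, a small sketch like the following can set it from the node's own tunnel interface instead of editing the value by hand (this assumes the tunnel NIC is named ens35 on every node, as planned above):

# Derive the vxlan VTEP address from ens35 and write it into linuxbridge_agent.ini
LOCAL_IP=$(ip -4 addr show ens35 | awk '/inet /{print $2}' | cut -d/ -f1)
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip ${LOCAL_IP}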

Configure l3_agent.ini (self-service networking)

  • The L3 agent provides routing and NAT for tenant virtual networks

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge

Configure dhcp_agent.ini

  • The DHCP agent provides DHCP service to virtual networks;
  • dnsmasq is used to provide the DHCP service;

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

Configure metadata_agent.ini

  • The metadata agent provides configuration information, such as credentials, to instances
  • metadata_proxy_shared_secret must match the secret set in /etc/nova/nova.conf on the controller nodes;

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host 10.10.10.10
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret 123456
openstack-config --set /etc/neutron/metadata_agent.ini cache memcache_servers controller01:11211,controller02:11211,controller03:11211

Start the services

Run on all controller nodes;

# The nova configuration file changed, so restart the nova service first
systemctl restart openstack-nova-api.service && systemctl status openstack-nova-api.service

# Enable on boot
systemctl enable neutron-server.service \
 neutron-linuxbridge-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service
 
# Start
systemctl restart neutron-server.service \
 neutron-linuxbridge-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service
 
# Check status
systemctl status neutron-server.service \
 neutron-linuxbridge-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service

Verify

. admin-openrc 

# List the loaded extensions
openstack extension list --network

# List the agents
openstack network agent list

image-20211108201837117

  • The agents may take a little while to come up
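
Rather than re-running the command by hand, the agent list can simply be polled until every agent reports as alive (Ctrl+C to stop):

watch -n 5 "openstack network agent list"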

九、Neutron compute nodes (linuxbridge)

Install neutron-linuxbridge

Install the neutron-linuxbridge service on all compute nodes, using compute01 as the example

yum install openstack-neutron-linuxbridge ebtables ipset -y

Configure linuxbridge_agent.ini

Run on all compute nodes

# Back up the configuration file
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
# The network type names map to physical NICs: here the vlan tenant network maps to the planned ens36; networks are created by network name, not NIC name;
# The physical NIC mapping is local to each host and must match the NIC names the host actually uses;
# There is also a "bridge_mappings" parameter for mapping to existing bridges
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  vlan:ens36
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
# VTEP endpoint for the tunnel (vxlan) tenant network; this is the planned ens35 address and must be adjusted per node
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.10.30.41
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure neutron.conf

Run on all compute nodes; remember to adjust bind_host

# Back up the configuration file /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 10.10.10.41
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy keystone 
# RPC response timeout; the 60s default can cause timeout errors, so set it to 180s
openstack-config --set  /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180

openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password 123456

openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Configure nova.conf

Run on all compute nodes; only the [neutron] section of nova.conf is involved

openstack-config --set  /etc/nova/nova.conf neutron url http://10.10.10.10:9696
openstack-config --set  /etc/nova/nova.conf neutron auth_url http://10.10.10.10:5000
openstack-config --set  /etc/nova/nova.conf neutron auth_type password
openstack-config --set  /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set  /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set  /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set  /etc/nova/nova.conf neutron project_name service
openstack-config --set  /etc/nova/nova.conf neutron username neutron
openstack-config --set  /etc/nova/nova.conf neutron password 123456

Start the services

Run on all compute nodes;

# nova.conf has changed, so restart the nova service on all compute nodes first
systemctl restart openstack-nova-compute.service && systemctl status openstack-nova-compute.service

systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service

Verify

Run on any controller node;

$ . admin-openrc

# List the neutron agents;
# Or: openstack network agent list --agent-type linux-bridge
# Agent types: 'bgp', 'dhcp', 'open-vswitch', 'linux-bridge', 'ofa', 'l3', 'loadbalancer', 'metering', 'metadata', 'macvtap', 'nic'
$ openstack network agent list

image-20211109100112932

八、Neutron controller + network node cluster deployment (openvswitch)

Neutron provides the following functions:

  • Neutron provides networking for the entire OpenStack environment, including L2 switching, L3 routing, load balancing, firewalling, VPN, and so on.
  • Neutron offers a flexible framework: through configuration, either open-source or commercial software can be used to implement these functions.

Create the neutron database (controller node)

Create the database on any controller node;

$ mysql -uroot -p123456
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
flush privileges;
exit;

Create the neutron service credentials

Run on any controller node;

Create the neutron user

source ~/admin-openrc
openstack user create --domain default --password 123456 neutron

Grant the admin role to the neutron user

openstack role add --project service --user neutron admin

Create the neutron service entity

openstack service create --name neutron --description "OpenStack Networking" network

Create the neutron API service endpoints

The API addresses all use the VIP; if public/internal/admin are designed to use different VIPs, be careful to keep them distinct;

--region must match the region created when the admin user was initialized; the neutron-api service type is network;

openstack endpoint create --region RegionOne network public http://10.10.10.10:9696
openstack endpoint create --region RegionOne network internal http://10.10.10.10:9696
openstack endpoint create --region RegionOne network admin http://10.10.10.10:9696

Installation and configuration

  • openstack-neutron: the neutron-server package

  • openstack-neutron-ml2: the ML2 plugin package

  • openstack-neutron-openvswitch: openvswitch-related packages

  • ebtables: firewall-related package

  • conntrack-tools: enables stateful packet inspection for iptables

Install the packages

Install the neutron services on all controller nodes

yum install openstack-neutron openstack-neutron-ml2 ebtables conntrack-tools openstack-neutron-openvswitch libibverbs net-tools -y

Configure the neutron services on all controller nodes, using controller01 as the example;

Kernel configuration

Run on all controller nodes

echo '
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
' >> /etc/sysctl.conf
sysctl -p
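
Confirm the values took effect:

sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter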

Configure neutron.conf

Note the my_ip parameter, which must be changed per node; also note that /etc/neutron/neutron.conf must be owned root:neutron

Note the bind_host parameter, which must be changed per node;

# Back up the configuration file /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 10.10.10.31
openstack-config --set  /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set  /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
# Connect to the rabbitmq cluster directly
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy  keystone
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes  true
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes  true
# Enable the L3 HA feature
openstack-config --set  /etc/neutron/neutron.conf DEFAULT l3_ha True
# Maximum number of l3 agents an HA router is scheduled on
openstack-config --set  /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
# Minimum number of healthy l3 agents required to create an HA router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2
# DHCP high availability: spawn one DHCP server on each of the 3 network nodes
openstack-config --set  /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3
# RPC response timeout; the 60s default can cause timeout errors, so set it to 180s
openstack-config --set  /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180

openstack-config --set  /etc/neutron/neutron.conf database connection  mysql+pymysql://neutron:123456@10.10.10.10/neutron

openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url  http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers  controller01:11211,controller02:11211,controller03:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name  default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name  default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name  service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username  neutron
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password  123456

openstack-config --set  /etc/neutron/neutron.conf nova  auth_url http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf nova  auth_type password
openstack-config --set  /etc/neutron/neutron.conf nova  project_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova  user_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova  region_name RegionOne
openstack-config --set  /etc/neutron/neutron.conf nova  project_name service
openstack-config --set  /etc/neutron/neutron.conf nova  username nova
openstack-config --set  /etc/neutron/neutron.conf nova  password 123456

openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path  /var/lib/neutron/tmp

Configure ml2_conf.ini

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan,vxlan
# Multiple tenant network types can be set at once; the first value is the default when a regular tenant creates a network, and is also the default network type used to carry the master router heartbeat
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan,vlan,flat
# ml2 mechanism_drivers list; l2population applies to gre/vxlan tenant networks
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  openvswitch,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
# Name the flat network type "external"; "*" means any network, and an empty value disables flat networks
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  external
# Name the vlan network type "vlan"; if no vlan id range is set, the range is unrestricted
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vlan network_vlan_ranges  vlan:3001:3500
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 10001:20000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  true

Create the ml2 symlink: point the plugin config file at the ML2 plugin configuration

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Configure nova.conf

Run on all controller nodes, using controller01 as the example;

# Modify the configuration file /etc/nova/nova.conf
# Configure the nova service on all controller nodes to interact with the networking service
openstack-config --set  /etc/nova/nova.conf neutron url  http://10.10.10.10:9696
openstack-config --set  /etc/nova/nova.conf neutron auth_url  http://10.10.10.10:5000
openstack-config --set  /etc/nova/nova.conf neutron auth_type  password
openstack-config --set  /etc/nova/nova.conf neutron project_domain_name  default
openstack-config --set  /etc/nova/nova.conf neutron user_domain_name  default
openstack-config --set  /etc/nova/nova.conf neutron region_name  RegionOne
openstack-config --set  /etc/nova/nova.conf neutron project_name  service
openstack-config --set  /etc/nova/nova.conf neutron username  neutron
openstack-config --set  /etc/nova/nova.conf neutron password  123456
openstack-config --set  /etc/nova/nova.conf neutron service_metadata_proxy  true
openstack-config --set  /etc/nova/nova.conf neutron metadata_proxy_shared_secret  123456

openstack-config --set  /etc/nova/nova.conf DEFAULT linuxnet_interface_driver  nova.network.linux_net.LinuxOVSInterfaceDriver

Initialize the neutron database

Run on any controller node

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Verify that the neutron database was populated correctly

mysql -u neutron -p123456 -e "use neutron;show tables;"

Note: if the controller nodes run only neutron-server, the configuration above is all that needs changing; then start the service with: systemctl restart openstack-nova-api.service && systemctl enable neutron-server.service && systemctl restart neutron-server.service

Configure l3_agent.ini (self-service networking)

  • The L3 agent provides routing and NAT for tenant virtual networks

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver  neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge  br-ex

Configure dhcp_agent.ini

  • The DHCP agent provides DHCP service to virtual networks;
  • dnsmasq is used to provide the DHCP service;

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set  /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

Configure metadata_agent.ini

  • The metadata agent provides configuration information, such as credentials, to instances
  • metadata_proxy_shared_secret must match the secret set in /etc/nova/nova.conf on the controller nodes;

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host 10.10.10.10
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret 123456
openstack-config --set /etc/neutron/metadata_agent.ini cache memcache_servers controller01:11211,controller02:11211,controller03:11211

Configure openvswitch_agent.ini

Run on all controller nodes, using controller01 as the example;

# Back up the configuration file
cp -a /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak > /etc/neutron/plugins/ml2/openvswitch_agent.ini

Set local_ip to the current node's own address

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
# VTEP endpoint for the tunnel (vxlan) tenant network; this is the planned ens35 address and must be adjusted per node
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.10.30.31

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings  external:br-ex
#openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings

#openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan,gre
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population true
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent arp_responder true

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group true

#openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.openvswitch_firewall.OVSFirewallDriver

Start the openvswitch service

Run on all controller nodes, using controller01 as the example;

systemctl enable openvswitch.service
systemctl restart openvswitch.service
systemctl status openvswitch.service

Create the br-ex bridge

Run on all controller nodes, using controller01 as the example;

Move the external network IP onto the bridge, and add the commands to rc.local so they run at boot

Change the IP address to the current node's ens34 address; controller01 is shown here;

echo '#
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens34
ovs-vsctl show
ifconfig ens34 0.0.0.0 
ifconfig br-ex 10.10.20.31/24
#route add default gw 10.10.20.2 # optional: add a default route
#' >> /etc/rc.d/rc.local

Create and verify

[root@controller01 ~]# chmod +x /etc/rc.d/rc.local; tail -n 8 /etc/rc.d/rc.local | bash
ad5867f6-9ddd-4746-9de7-bc3c2b2e98f8
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "ens34"
            Interface "ens34"
    ovs_version: "2.12.0"
[root@controller01 ~]# 
[root@controller01 ~]# ifconfig br-ex
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.20.31  netmask 255.255.255.0  broadcast 10.10.20.255
        inet6 fe80::20c:29ff:fedd:69e5  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:dd:69:e5  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 586 (586.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@controller01 ~]# 
[root@controller01 ~]# ifconfig ens34
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5f52:19c8:6c65:c9f3  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:dd:69:e5  txqueuelen 1000  (Ethernet)
        RX packets 1860  bytes 249541 (243.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7014  bytes 511816 (499.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Disable the NIC from starting on boot

Run on all controller nodes; this is disabled so the interface created by OVS can be used safely

sed -i 's#ONBOOT=yes#ONBOOT=no#g' /etc/sysconfig/network-scripts/ifcfg-ens34
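
As an alternative to carrying the bridge setup in rc.local, the bridge and port can be made persistent with ifcfg files, assuming the installed openvswitch package provides the ifup-ovs/ifdown-ovs network-scripts helpers (check that /etc/sysconfig/network-scripts/ifup-ovs exists before relying on this); a sketch for controller01:

cat > /etc/sysconfig/network-scripts/ifcfg-br-ex <<'EOF'
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.10.20.31
NETMASK=255.255.255.0
EOF

cat > /etc/sysconfig/network-scripts/ifcfg-ens34 <<'EOF'
DEVICE=ens34
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none
EOF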

Start the services

Run on all controller nodes;

# The nova configuration file changed, so restart the nova service first
systemctl restart openstack-nova-api.service && systemctl status openstack-nova-api.service

# Enable on boot
systemctl enable neutron-server.service \
 neutron-openvswitch-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service
 
# Start
systemctl restart neutron-server.service \
 neutron-openvswitch-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service
 
# Check status
systemctl status neutron-server.service \
 neutron-openvswitch-agent.service \
 neutron-l3-agent.service \
 neutron-dhcp-agent.service \
 neutron-metadata-agent.service

Verify

. ~/admin-openrc 

# List the loaded extensions
openstack extension list --network

# List the agents
openstack network agent list

image-20211123131745807

  • The agents may take a little while to come up
[root@controller01 ~]# ovs-vsctl show
2b64473f-6320-411b-b8ce-d9d802c08cd0
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "ens34"
            Interface "ens34"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.12.0"
  • br-ex connects to the external network and carries traffic between VMs on different networks; br-int carries traffic between VMs on the same network on the same compute node; br-tun carries traffic between VMs on the same network across different compute nodes
  • br-ex has to be created manually, while br-int and br-tun are created automatically by neutron-openvswitch-agent
  • Compute nodes have no external network interface, so they have no br-ex bridge, as the compute node verification later will show

九、Neutron compute nodes (openvswitch)

Install neutron-openvswitch

Install the neutron-openvswitch service on all compute nodes, using compute01 as the example

yum install -y openstack-neutron-openvswitch libibverbs net-tools

Kernel configuration

Run on all compute nodes

echo '
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
' >> /etc/sysctl.conf
sysctl -p

Configure openvswitch_agent.ini

Run on all compute nodes

# Back up the configuration file
cp -a /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak > /etc/neutron/plugins/ml2/openvswitch_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
# VTEP endpoint for the tunnel (vxlan) tenant network; this is the planned ens35 address and must be adjusted per node
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.10.30.41

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings

#openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan,gre
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population true
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent arp_responder true

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group true

#openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.openvswitch_firewall.OVSFirewallDriver
  • Note that bridge_mappings is left empty, because compute nodes have no external network br-ex

Configure neutron.conf

Run on all compute nodes; remember to adjust bind_host

# Back up the configuration file /etc/neutron/neutron.conf
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 10.10.10.41
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy keystone 
# RPC response timeout; the 60s default can cause timeout errors, so set it to 180s
openstack-config --set  /etc/neutron/neutron.conf DEFAULT rpc_response_timeout 180

openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url http://10.10.10.10:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password 123456

openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Configure nova.conf

Run on all compute nodes;

openstack-config --set  /etc/nova/nova.conf neutron url http://10.10.10.10:9696
openstack-config --set  /etc/nova/nova.conf neutron auth_url http://10.10.10.10:5000
openstack-config --set  /etc/nova/nova.conf neutron auth_type password
openstack-config --set  /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set  /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set  /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set  /etc/nova/nova.conf neutron project_name service
openstack-config --set  /etc/nova/nova.conf neutron username neutron
openstack-config --set  /etc/nova/nova.conf neutron password 123456

openstack-config --set  /etc/nova/nova.conf DEFAULT linuxnet_interface_driver  nova.network.linux_net.LinuxOVSInterfaceDriver

Start the openvswitch service

Run on all compute nodes;

systemctl enable openvswitch.service
systemctl restart openvswitch.service
systemctl status openvswitch.service

Start the services

Run on all compute nodes;

# nova.conf has changed, so restart the nova service on all compute nodes first
systemctl restart openstack-nova-compute.service && systemctl status openstack-nova-compute.service

systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service
systemctl status neutron-openvswitch-agent.service

Verify

Run on any controller node;

$ . ~/admin-openrc

# List the neutron agents;
# Or: openstack network agent list --agent-type open-vswitch
# Agent types: 'bgp', 'dhcp', 'open-vswitch', 'linux-bridge', 'ofa', 'l3', 'loadbalancer', 'metering', 'metadata', 'macvtap', 'nic'
$ openstack network agent list

image-20211123132124154

[root@compute01 ~]# ovs-vsctl show
fa9b0958-41d1-42fb-8e93-874ca40ee2e1
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.12.0"
[root@compute01 ~]# 
  • Compute nodes have no external network NIC, hence no br-ex

十、Horizon dashboard cluster deployment

  • The OpenStack dashboard service is the Horizon project; the only service it requires is the identity service keystone, and it is built on Django, a Python web framework.

  • The dashboard makes web-based interaction with the OpenStack compute cloud controller possible through the OpenStack APIs.

  • Horizon allows the dashboard branding to be customized, and provides a set of core classes and reusable templates and tools.

Installing the Train release of Horizon has the following requirements

Python 2.7, 3.6, or 3.7
Django 1.11, 2.0, and 2.2
Django 2.0 and 2.2 support is experimental in the Train release
The Ussuri release (the release after Train) will use Django 2.2 as the primary Django version, and Django 2.0 support will be removed.

Install the dashboard

Install the dashboard service on all controller nodes, using controller01 as the example

yum install openstack-dashboard memcached python-memcached -y

Configure local_settings

Notes on the OpenStack Horizon settings

Run on all controller nodes;

# Back up the configuration file /etc/openstack-dashboard/local_settings
cp -a /etc/openstack-dashboard/local_settings{,.bak}
grep -Ev '^$|#' /etc/openstack-dashboard/local_settings.bak > /etc/openstack-dashboard/local_settings

Add or modify the following settings:

# Access location of the dashboard on the web server; default: "/"
WEBROOT = '/dashboard/'
# Configure the dashboard to use the OpenStack services on the controller nodes (via the VIP)
OPENSTACK_HOST = "10.10.10.10"

# Hosts allowed to access the dashboard; accepting all hosts is insecure and should not be used in production
ALLOWED_HOSTS = ['*', 'localhost']
#ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

# Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller01:11211,controller02:11211,controller03:11211',
    }
}

# Enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# Enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Configure the API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

# Use Default as the default domain for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

# Use user as the default role for users created via the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# If networking option 1 was chosen, disable support for layer-3 networking services; with option 2 they can be enabled
OPENSTACK_NEUTRON_NETWORK = {
    # auto-allocated networks
    'enable_auto_allocated_network': False,
    # Neutron distributed virtual router (DVR)
    'enable_distributed_router': False,
    # floating IP topology check
    'enable_fip_topology_check': False,
    # highly available router mode
    'enable_ha_router': True,
    # the next three are deprecated and are disabled in the official documentation
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    # IPv6 networks
    'enable_ipv6': True,
    # Neutron quota support
    'enable_quotas': True,
    # RBAC policies
    'enable_rbac_policy': True,
    # router menu and floating IP support; can be enabled when the Neutron deployment has L3 support
    'enable_router': True,
    # default DNS name servers
    'default_dns_nameservers': [],
    # provider types offered when creating networks; types in this list are selectable
    'supported_provider_types': ['*'],
    # provider network ID ranges; only applies to VLAN, GRE, and VXLAN network types
    'segmentation_id_range': {},
    # additional provider network types
    'extra_provider_types': {},
    # supported vnic types, used with the port-binding extension
    'supported_vnic_types': ['*'],
    # physical networks
    'physical_networks': [],
}

# Set the time zone to Asia/Shanghai
TIME_ZONE = "Asia/Shanghai"
  • It is best not to copy the inline comments above into the actual configuration file
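
Since the session engine above relies on memcached, a quick reachability check from each controller can save debugging later (assumes nc from nmap-ncat is installed; memcached closes the connection after "quit"):

for h in controller01 controller02 controller03; do
    printf 'stats\nquit\n' | nc $h 11211 | grep -m1 uptime && echo "$h memcached OK"
done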

A reference copy of the finished /etc/openstack-dashboard/local_settings:

import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
ALLOWED_HOSTS = ['*', 'localhost']
LOCAL_PATH = '/tmp'
WEBROOT = '/dashboard/'
SECRET_KEY='00be7c741571a0ea5a64'
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "10.10.10.10"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_NEUTRON_NETWORK = {
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': False,
    'enable_ha_router': True,
    'enable_ipv6': True,
    'enable_quotas': True,
    'enable_rbac_policy': True,
    'enable_router': True,
    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
TIME_ZONE = "Asia/Shanghai"
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'console': {
            'format': '%(levelname)s %(name)s %(message)s'
        },
        'operation': {
            'format': '%(message)s'
        },
    },
    'handlers': {
        'null': {
            'level': 'DEBUG',
            'class': 'logging.NullHandler',
        },
        'console': {
            'level': 'DEBUG' if DEBUG else 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'console',
        },
        'operation': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'operation',
        },
    },
    'loggers': {
        'horizon': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'horizon.operation_log': {
            'handlers': ['operation'],
            'level': 'INFO',
            'propagate': False,
        },
        'openstack_dashboard': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'novaclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'cinderclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneauth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'keystoneclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'glanceclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'neutronclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'swiftclient': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'oslo_policy': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'openstack_auth': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django': {
            'handlers': ['console'],
            'level': 'DEBUG',
            'propagate': False,
        },
        'django.db.backends': {
            'handlers': ['null'],
            'propagate': False,
        },
        'requests': {
            'handlers': ['null'],
            'propagate': False,
        },
        'urllib3': {
            'handlers': ['null'],
            'propagate': False,
        },
        'chardet.charsetprober': {
            'handlers': ['null'],
            'propagate': False,
        },
        'iso8601': {
            'handlers': ['null'],
            'propagate': False,
        },
        'scss': {
            'handlers': ['null'],
            'propagate': False,
        },
    },
}
SECURITY_GROUP_RULES = {
    'all_tcp': {
        'name': _('All TCP'),
        'ip_protocol': 'tcp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_udp': {
        'name': _('All UDP'),
        'ip_protocol': 'udp',
        'from_port': '1',
        'to_port': '65535',
    },
    'all_icmp': {
        'name': _('All ICMP'),
        'ip_protocol': 'icmp',
        'from_port': '-1',
        'to_port': '-1',
    },
    'ssh': {
        'name': 'SSH',
        'ip_protocol': 'tcp',
        'from_port': '22',
        'to_port': '22',
    },
    'smtp': {
        'name': 'SMTP',
        'ip_protocol': 'tcp',
        'from_port': '25',
        'to_port': '25',
    },
    'dns': {
        'name': 'DNS',
        'ip_protocol': 'tcp',
        'from_port': '53',
        'to_port': '53',
    },
    'http': {
        'name': 'HTTP',
        'ip_protocol': 'tcp',
        'from_port': '80',
        'to_port': '80',
    },
    'pop3': {
        'name': 'POP3',
        'ip_protocol': 'tcp',
        'from_port': '110',
        'to_port': '110',
    },
    'imap': {
        'name': 'IMAP',
        'ip_protocol': 'tcp',
        'from_port': '143',
        'to_port': '143',
    },
    'ldap': {
        'name': 'LDAP',
        'ip_protocol': 'tcp',
        'from_port': '389',
        'to_port': '389',
    },
    'https': {
        'name': 'HTTPS',
        'ip_protocol': 'tcp',
        'from_port': '443',
        'to_port': '443',
    },
    'smtps': {
        'name': 'SMTPS',
        'ip_protocol': 'tcp',
        'from_port': '465',
        'to_port': '465',
    },
    'imaps': {
        'name': 'IMAPS',
        'ip_protocol': 'tcp',
        'from_port': '993',
        'to_port': '993',
    },
    'pop3s': {
        'name': 'POP3S',
        'ip_protocol': 'tcp',
        'from_port': '995',
        'to_port': '995',
    },
    'ms_sql': {
        'name': 'MS SQL',
        'ip_protocol': 'tcp',
        'from_port': '1433',
        'to_port': '1433',
    },
    'mysql': {
        'name': 'MYSQL',
        'ip_protocol': 'tcp',
        'from_port': '3306',
        'to_port': '3306',
    },
    'rdp': {
        'name': 'RDP',
        'ip_protocol': 'tcp',
        'from_port': '3389',
        'to_port': '3389',
    },
}
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller01:11211,controller02:11211,controller03:11211',
    }
}
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

配置openstack-dashboard.conf

在全部控制節點操作;

cp /etc/httpd/conf.d/openstack-dashboard.conf{,.bak}

#建立策略文件(policy.json)的軟鏈接,否則登錄到dashboard將出現權限錯誤和顯示混亂
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf

#賦權,在第3行后新增 WSGIApplicationGroup %{GLOBAL}
sed -i '3a WSGIApplicationGroup\ %{GLOBAL}' /etc/httpd/conf.d/openstack-dashboard.conf

最終:/etc/httpd/conf.d/openstack-dashboard.conf

<VirtualHost *:80>

    ServerAdmin webmaster@openstack.org
    ServerName  openstack_dashboard

    DocumentRoot /usr/share/openstack-dashboard/

    LogLevel warn
    ErrorLog /var/log/httpd/openstack_dashboard-error.log
    CustomLog /var/log/httpd/openstack_dashboard-access.log combined

    WSGIScriptReloading On
    WSGIDaemonProcess openstack_dashboard_website processes=3
    WSGIProcessGroup openstack_dashboard_website
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On

    WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi

    <Location "/">
        Require all granted
    </Location>
    Alias /dashboard/static /usr/share/openstack-dashboard/static
    <Location "/static">
        SetHandler None
    </Location>
</Virtualhost>

重啟apache和memcache

在全部控制節點操作;

systemctl restart httpd.service memcached.service
systemctl enable httpd.service memcached.service
systemctl status httpd.service memcached.service

驗證訪問

地址:http://10.10.10.10/dashboard

域/賬號/密碼:default/admin/123456,或:default/demo/123456
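
Before opening a browser, a quick check from any node can confirm the dashboard is reachable through the VIP (a minimal sketch; it assumes the haproxy VIP 10.10.10.10 forwards port 80 to the controllers' httpd):

# an HTTP 200, or a 302 redirect to the login page, both indicate the dashboard is up
curl -I http://10.10.10.10/dashboard/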

image-20211109111725049

image-20211109111856008

十一、創建虛擬網絡並啟動實例操作

創建external外部網絡

只能admin管理員操作,可添加多個外部網絡;

為了驗證訪問外部網絡的功能,需要先在 vmware 中把VMnet2設置為NAT模式,前面已提到過

管理員---網絡---創建網絡

image-20211109112621868

  • 物理網絡對應前面配置:openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external
  • 需要勾選外部網絡

image-20211109112745902

  • Because of the earlier setting openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings external:ens34,vlan:ens36, the label external corresponds to the physical NIC ens34, so the real network address here is 10.10.20.0/24 (see the CLI sketch just below)
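
For reference, roughly the same external network and subnet can be created from the CLI. This is only a sketch: the names (ext-net, ext-subnet), gateway and allocation pool are assumptions to be adapted to the actual 10.10.20.0/24 environment.

source admin-openrc
openstack network create --external --share \
  --provider-network-type flat --provider-physical-network external ext-net
openstack subnet create --network ext-net --subnet-range 10.10.20.0/24 \
  --gateway 10.10.20.2 \
  --allocation-pool start=10.10.20.100,end=10.10.20.200 ext-subnet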

image-20211109112955005

image-20211109113021565

創建路由

image-20211109150220603

  • The external network can also be left unselected (such a router is then only used to interconnect different VPCs); a CLI sketch of creating the router follows below
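
A minimal CLI sketch (router and network names are examples; the gateway step assumes the external network created earlier):

openstack router create router01
# attaching the external network as the gateway enables SNAT for connected subnets
openstack router set --external-gateway ext-net router01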

image-20211109150324241

路由已連接的外部網絡

image-20211109150420354

創建安全組

image-20211109150534153

image-20211109150603513

  • Allow all egress traffic
  • Allow all ingress traffic from instances in the same security group
  • Allow external ingress for port 22 and ICMP (a CLI sketch of these rules follows below)
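
A rough CLI equivalent of these rules (the group name is an example; egress is already allowed by the default rules of a newly created group):

openstack security group create group01
openstack security group rule create --ingress --protocol icmp group01
openstack security group rule create --ingress --protocol tcp --dst-port 22 group01
openstack security group rule create --ingress --remote-group group01 group01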

創建實例類型

image-20211109113227671

  • Note that a swap disk always consumes local disk on the compute node, even after the Ceph backend is attached later, which is why most public cloud providers ship with swap disabled by default
  • Once Ceph is attached, the root disk size is determined by the volume size specified when the instance is created (a CLI sketch for creating a flavor follows below)
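
A flavor can also be created from the CLI; the values below are illustrative only:

# 1 vCPU, 512 MB RAM, 1 GB root disk, no swap
openstack flavor create --vcpus 1 --ram 512 --disk 1 --swap 0 m1.tiny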

image-20211109113247035

創建vpc網絡

image-20211109151201332

image-20211109151234550

  • 網絡地址自行規划,因為創建的是私網地址

image-20211109151319968

  • dns 服務器一般為外部網絡的網關即可

image-20211109151522617

把 vpc 網絡連接到路由器,以便訪問外部網絡

image-20211109151626939

  • Set the IP address to the gateway IP of the VPC subnet (a CLI sketch of these steps follows below)
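
A CLI sketch of the same steps, with example names and an example 192.168.1.0/24 subnet (the DNS server follows the earlier note and points at the external network gateway):

openstack network create vpc01
openstack subnet create --network vpc01 --subnet-range 192.168.1.0/24 \
  --dns-nameserver 10.10.20.2 vpc01-subnet
# attaching the subnet creates a router port on the subnet's gateway IP
openstack router add subnet router01 vpc01-subnet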

image-20211109151959542

創建實例

image-20211109152123549

image-20211109152141132

image-20211109152158133

image-20211109152215682

  • 選擇 vpc,這里其實可以直接選擇 EXT 外部網絡,讓每個實例分配外部網絡IP,直接訪問外部網絡,但是因為我們外部網絡只在3個控制節點存在,如果這里使用 EXT 網絡,會報錯:ERROR nova.compute.manager [instance: a8847440-8fb9-48ca-a530-4258bd623dce] PortBindingFailed: Binding failed for port f0687181-4539-4096-b09e-0cfdeab000af, please check neutron logs for more information.(計算節點日志 /var/log/nova/nova-compute.log)
  • When a VPC has two (or more) subnets, neutron picks an address from the default subnet (usually the most recently created one) when binding the port of a new instance. To give an instance an address from a specific subnet instead of the default one, create the port on that subnet explicitly, as in the sketch below
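
A sketch of that approach, reusing the example names from above: create a port on the desired subnet first, then boot the instance against that port.

openstack port create --network vpc01 --fixed-ip subnet=<desired-subnet> port01
PORT_ID=$(openstack port show port01 -f value -c id)
openstack server create --flavor m1.tiny --image cirros_raw \
  --security-group group01 --nic port-id="$PORT_ID" vm01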

image-20211109152926596

  • 使用新建的安全組 group01

其他選項卡可以不填

image-20211109153000484

image-20211109153124100

  • By default the instance will fail with: ImageUnacceptable: Image 1c66cd7e-b6d9-4e70-a3d4-f73b27a84230 is unacceptable: Image has no associated data. The cause is that glance currently stores images on the local filesystem, so the image file exists on only one of the three glance nodes. A temporary workaround is to copy everything under /var/lib/glance/images/ to all glance nodes and fix the directory ownership afterwards (see the sketch below); the real fix is to back glance with shared storage (nfs, etc.) or distributed storage (swift, ceph, etc.)
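
A one-off sync of the image store could look like this (a sketch only; run it from the controller that actually holds the files, and it assumes rsync over ssh is available):

rsync -av /var/lib/glance/images/ controller02:/var/lib/glance/images/
rsync -av /var/lib/glance/images/ controller03:/var/lib/glance/images/
ssh controller02 chown -R glance:glance /var/lib/glance/images/
ssh controller03 chown -R glance:glance /var/lib/glance/images/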

image-20211109154514811

  • 可以通過SNAT訪問外網

網絡拓撲如下:

image-20211109160511713

不同vpc內主機之間通信

不同vpc之間的主機如果需要通信的話只需要把兩個 vpc 都連接到相同路由器上面即可

image-20211109164209916

image-20211109164239077

image-20211109164412262

image-20211109164712087

image-20211109164527569

浮動IP

把 vpc 通過路由器和ext外部網絡連接后,雖然實例可以訪問外網了,但是外面無法訪問進來,這時候需要使用浮動IP功能了。
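
Both steps can also be done from the CLI (a sketch; EXT and vm01 stand for the external network and instance names used in this environment):

FIP=$(openstack floating ip create EXT -f value -c floating_ip_address)
openstack server add floating ip vm01 "$FIP"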

先申請浮動IP

image-20211109165145014

image-20211109165206741

綁定浮動ip到具體實例

image-20211109165250271

image-20211109165331372

image-20211109165351184

驗證

image-20211109165507037

image-20211109165857594

image-20211109170924261

十二、Ceph集群部署

基礎安裝見:ceph-deploy 安裝 ceph - leffss - 博客園 (cnblogs.com)

十三、Cinder部署

Cinder的核心功能是對卷的管理,允許對卷、卷的類型、卷的快照、卷備份進行處理。它為后端不同的存儲設備提供給了統一的接口,不同的塊設備服務廠商在Cinder中實現其驅動,可以被Openstack整合管理,nova與cinder的工作原理類似。支持多種 back-end(后端)存儲方式,包括 LVM,NFS,Ceph 和其他諸如 EMC、IBM 等商業存儲產品和方案。

Cinder各組件功能

  • Cinder-api 是 cinder 服務的 endpoint,提供 rest 接口,負責處理 client 請求,並將 RPC 請求發送至 cinder-scheduler 組件。
  • Cinder-scheduler 負責 cinder 請求調度,其核心部分就是 scheduler_driver, 作為 scheduler manager 的 driver,負責 cinder-volume 具體的調度處理,發送 cinder RPC 請求到選擇的 cinder-volume。
  • Cinder-volume 負責具體的 volume 請求處理,由不同后端存儲提供 volume 存儲空間。目前各大存儲廠商已經積極地將存儲產品的 driver 貢獻到 cinder 社區

img

其中Cinder-api與Cinder-scheduler構成控制節點,Cinder-volume 構成存儲節點;

在采用ceph或其他商業/非商業后端存儲時,建議將Cinder-api、Cinder-scheduler與Cinder-volume服務部署在控制節點;

In this environment, however, only the compute nodes can reach the ceph cluster over its client network (cinder-volume needs access to the ceph cluster, and so do the nova compute nodes), so cinder-api and cinder-scheduler are deployed on the 3 controller nodes while cinder-volume is deployed on the 2 compute nodes;

Cinder控制節點集群部署

創建cinder數據庫

在任意控制節點創建數據庫;

mysql -u root -p123456

create database cinder;
grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';
grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
flush privileges;

創建cinder相關服務憑證

在任意控制節點操作,以controller01節點為例;

創建cinder服務用戶

source admin-openrc 
openstack user create --domain default --password 123456 cinder

向cinder用戶賦予admin權限

openstack role add --project service --user cinder admin

創建cinderv2和cinderv3服務實體

# the cinder service entities use service types volumev2 / volumev3

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

創建塊存儲服務API端點

  • 塊存儲服務需要每個服務實體的端點
  • cinder-api后綴為用戶project-id,可通過openstack project list查看
# v2
openstack endpoint create --region RegionOne volumev2 public http://10.10.10.10:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://10.10.10.10:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://10.10.10.10:8776/v2/%\(project_id\)s
# v3
openstack endpoint create --region RegionOne volumev3 public http://10.10.10.10:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://10.10.10.10:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://10.10.10.10:8776/v3/%\(project_id\)s

部署與配置cinder

安裝cinder

在全部控制節點安裝cinder服務,以controller01節點為例

yum install openstack-cinder -y

配置cinder.conf

在全部控制節點操作,以controller01節點為例;注意my_ip參數,根據節點修改;

# 備份配置文件/etc/cinder/cinder.conf
cp -a /etc/cinder/cinder.conf{,.bak}
grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.10.10.31 
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.10.10.10:9292
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen '$my_ip'
openstack-config --set /etc/cinder/cinder.conf DEFAULT osapi_volume_listen_port 8776
openstack-config --set /etc/cinder/cinder.conf DEFAULT log_dir /var/log/cinder
#直接連接rabbitmq集群
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672

openstack-config --set /etc/cinder/cinder.conf  database connection mysql+pymysql://cinder:123456@10.10.10.10/cinder

openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  www_authenticate_uri  http://10.10.10.10:5000
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  auth_url  http://10.10.10.10:5000
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  memcached_servers  controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  username  cinder
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken  password 123456

openstack-config --set /etc/cinder/cinder.conf  oslo_concurrency  lock_path  /var/lib/cinder/tmp

配置nova.conf使用塊存儲

在全部控制節點操作,以controller01節點為例;

配置只涉及nova.conf的[cinder]字段;

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

初始化cinder數據庫

任意控制節點操作;

su -s /bin/sh -c "cinder-manage db sync" cinder

#驗證
mysql -ucinder -p123456 -e "use cinder;show tables;"

啟動服務

全部控制節點操作;修改了nova配置文件,首先需要重啟nova服務

systemctl restart openstack-nova-api.service && systemctl status openstack-nova-api.service

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

控制節點驗證

$ source admin-openrc
$ openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller01 | nova | enabled | up    | 2021-11-10T04:47:23.000000 |
| cinder-scheduler | controller02 | nova | enabled | up    | 2021-11-10T04:47:27.000000 |
| cinder-scheduler | controller03 | nova | enabled | up    | 2021-11-10T04:47:29.000000 |
+------------------+--------------+------+---------+-------+----------------------------+

image-20211110124902225

Cinder存儲節點集群部署

Openstack的存儲面臨的問題

https://docs.openstack.org/arch-design/

企業上線openstack,必須要思考和解決三方面的難題:

  1. 控制集群的高可用和負載均衡,保障集群沒有單點故障,持續可用,
  2. 網絡的規划和neutron L3的高可用和負載均衡,
  3. 存儲的高可用性和性能問題。

Storage is one of the pain points and difficulties of openstack, and an important item to consider and plan during rollout and day-to-day operation. Openstack supports many kinds of storage, including distributed file systems such as ceph, glusterfs and sheepdog, as well as commercial FC storage such as the professional arrays from IBM, EMC, NetApp and huawei, which lets enterprises reuse existing equipment and manage resources in a unified way.

Ceph概述
As the unified storage solution with the strongest momentum in recent years, ceph grew up to serve cloud environments; it has underpinned open-source platforms such as openstack and cloudstack, while the rapid growth of openstack has in turn drawn more and more people into ceph development. Community activity around ceph keeps increasing, and more and more enterprises use ceph as the backend for openstack's glance, nova and cinder.

ceph是一種統一的分布式文件系統;能夠支持三種常用的接口:

  1. 對象存儲接口,兼容於S3,用於存儲結構化的數據,如圖片,視頻,音頻等文件,其他對象存儲有:S3,Swift,FastDFS等;
  2. 文件系統接口,通過cephfs來完成,能夠實現類似於nfs的掛載文件系統,需要由MDS來完成,類似的文件系存儲有:nfs,samba,glusterfs等;
  3. 塊存儲,通過rbd實現,專門用於存儲雲環境下塊設備,如openstack的cinder卷存儲,這也是目前ceph應用最廣泛的地方。

安裝cinder

在全部計算點安裝;

yum install openstack-cinder targetcli python2-keystone -y

配置cinder.conf

在全部計算點配置;注意my_ip參數,根據節點修改;

# 備份配置文件/etc/cinder/cinder.conf
cp -a /etc/cinder/cinder.conf{,.bak}
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak>/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf  DEFAULT transport_url rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672
openstack-config --set /etc/cinder/cinder.conf  DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf  DEFAULT my_ip 10.10.10.41
openstack-config --set /etc/cinder/cinder.conf  DEFAULT glance_api_servers http://10.10.10.10:9292
#openstack-config --set /etc/cinder/cinder.conf  DEFAULT enabled_backends lvm
openstack-config --set /etc/cinder/cinder.conf  DEFAULT enabled_backends ceph

openstack-config --set /etc/cinder/cinder.conf  database connection mysql+pymysql://cinder:123456@10.10.10.10/cinder

openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken www_authenticate_uri http://10.10.10.10:5000
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken auth_url http://10.10.10.10:5000
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf  keystone_authtoken password 123456

openstack-config --set /etc/cinder/cinder.conf  oslo_concurrency lock_path /var/lib/cinder/tmp

啟動服務

全部計算節點操作;

systemctl restart openstack-cinder-volume.service target.service
systemctl enable openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service

在控制節點進行驗證

$ source admin-openrc 
$ openstack volume service list
+------------------+----------------+------+---------+-------+----------------------------+
| Binary           | Host           | Zone | Status  | State | Updated At                 |
+------------------+----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller01   | nova | enabled | up    | 2021-11-10T05:04:24.000000 |
| cinder-scheduler | controller02   | nova | enabled | up    | 2021-11-10T05:04:27.000000 |
| cinder-scheduler | controller03   | nova | enabled | up    | 2021-11-10T05:04:20.000000 |
| cinder-volume    | compute01@ceph | nova | enabled | down  | 2021-11-10T05:01:08.000000 |
| cinder-volume    | compute02@ceph | nova | enabled | down  | 2021-11-10T05:01:10.000000 |
+------------------+----------------+------+---------+-------+----------------------------+
  • 默認情況下 2 個cinder-volume的state狀態是down的,原因是此時后端存儲服務為ceph,但ceph相關服務尚未啟用並集成到cinder-volume

十四、對接Ceph存儲

OpenStack 使用 Ceph 作為后端存儲可以帶來以下好處:

  • 不需要購買昂貴的商業存儲設備,降低 OpenStack 的部署成本

  • Ceph 同時提供了塊存儲、文件系統和對象存儲,能夠完全滿足 OpenStack 的存儲類型需求

  • RBD COW 特性支持快速的並發啟動多個 OpenStack 實例

  • 為 OpenStack 實例默認的提供持久化卷

  • 為 OpenStack 卷提供快照、備份以及復制功能

  • 為 Swift 和 S3 對象存儲接口提供了兼容的 API 支持

在生產環境中,我們經常能夠看見將 Nova、Cinder、Glance 與 Ceph RBD 進行對接。除此之外,還可以將 Swift、Manila 分別對接到 Ceph RGW 與 CephFS。Ceph 作為統一存儲解決方案,有效降低了 OpenStack 雲環境的復雜性與運維成本。

Openstack環境中,數據存儲可分為臨時性存儲與永久性存儲。

臨時性存儲:主要由本地文件系統提供,並主要用於nova虛擬機的本地系統與臨時數據盤,以及存儲glance上傳的系統鏡像;

永久性存儲:主要由cinder提供的塊存儲與swift提供的對象存儲構成,以cinder提供的塊存儲應用最為廣泛,塊存儲通常以雲盤的形式掛載到虛擬機中使用。

Openstack中需要進行數據存儲的三大項目主要是nova項目(虛擬機鏡像文件),glance項目(共用模版鏡像)與cinder項目(塊存儲)。

下圖為cinder,glance與nova訪問ceph集群的邏輯圖:

ceph與openstack集成主要用到ceph的rbd服務,ceph底層為rados存儲集群,ceph通過librados庫實現對底層rados的訪問;

openstack各項目客戶端調用librbd,再由librbd調用librados訪問底層rados;
實際使用中,nova需要使用libvirtdriver驅動以通過libvirt與qemu調用librbd;cinder與glance可直接調用librbd;

Data written to the ceph cluster is striped into multiple objects; each object is mapped by a hash function to a pg (pgs make up the pool), and each pg is then mapped by the CRUSH algorithm, approximately uniformly, to the physical storage devices, the osds (an osd is a physical storage device backed by a file system such as xfs or ext4).

img

img

CEPH PG數量設置與詳細介紹

在創建池之前要設置一下每個OSD的最大PG 數量

PG PGP官方計算公式計算器

參數解釋:

  • Target PGs per OSD:預估每個OSD的PG數,一般取100計算。當預估以后集群OSD數不會增加時,取100計算;當預估以后集群OSD數會增加一倍時,取200計算。
  • OSD #:集群OSD數量。
  • %Data:預估該pool占該OSD集群總容量的近似百分比。
  • Size:該pool的副本數。

img

Compute the new number of PGs from these parameters with the formula:
Total PGs = ((total OSDs * 100) / max replica count) / number of pools
3 x 100 / 3 / 3 = 33.33; rounded down to the nearest power of 2: 32
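
The same arithmetic as a small shell sketch (integer bash arithmetic, rounding down to a power of 2 as above):

osd_total=3; replicas=3; pools=3
pg=$(( osd_total * 100 / replicas / pools ))              # 33
pow2=1; while [ $(( pow2 * 2 )) -le $pg ]; do pow2=$(( pow2 * 2 )); done
echo "raw=$pg  rounded_to_power_of_2=$pow2"               # raw=33 -> 32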

我們會將三個重要的OpenStack服務:Cinder(塊存儲)、Glance(鏡像)和Nova(虛擬機虛擬磁盤)與Ceph集成。

openstack 集群准備

The openstack cluster acts as a client of ceph; the ceph client environment therefore needs to be prepared on the openstack cluster as follows.

全部節點上添加ceph的yum源

rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
  • ha 節點可以不需要

安裝Ceph客戶端

openstack全部節點安裝ceph;已配置yum源,直接安裝即可;目的是可以在openstack集群使用ceph的命令

yum install ceph -y

glance服務所在節點安裝python2-rbd

glance-api服務運行在3個控制節點,因此3台控制節點都必須安裝

yum install python-rbd -y

cinder-volume與nova-compute服務所在節點安裝ceph-common

The cinder-volume and nova-compute services run on the 2 compute (storage) nodes, so ceph-common must be installed on both compute nodes

yum install ceph-common -y

需要有ceph的配置文件(ceph集群上操作)

將配置文件和密鑰復制到openstack集群各節點

The configuration file is the generated ceph.conf; the key is ceph.client.admin.keyring, the default key a ceph client needs when connecting to the ceph cluster. Here we copy both to every node with the following command:

ceph-deploy admin controller01 controller02 controller03 compute01 compute02
  • 復制后配置文件在 /etc/ceph 目錄下

ceph 集群准備

需求說明

※Glance, the image service of openstack, supports multiple backends: images can be stored on the local filesystem, an http server, the ceph distributed storage system, or open-source distributed file systems such as glusterfs and sheepdog. Glance currently uses local filesystem storage under the default path /var/lib/glance/images; once the backend is switched from the local filesystem to ceph, the images already in the system can no longer be used, so it is recommended to delete the current images and re-upload them to ceph after it has been deployed.

※Nova 負責虛擬機的生命周期管理,包括創建,刪除,重建,開機,關機,重啟,快照等,作為openstack的核心,nova負責IaaS中計算重要的職責,其中nova的存儲格外重要,默認情況下,nova將instance的數據存放在/var/lib/nova/instances/%UUID目錄下,使用本地的存儲空間。使用這種方式帶來的好處是:簡單,易實現,速度快,故障域在一個可控制的范圍內。然而,缺點也非常明顯:compute出故障,上面的虛擬機down機時間長,沒法快速恢復,此外,一些特性如熱遷移live-migration,虛擬機容災nova evacuate等高級特性,將無法使用,對於后期的雲平台建設,有明顯的缺陷。對接 Ceph 主要是希望將實例的系統磁盤文件儲存到 Ceph 集群中。與其說是對接 Nova,更准確來說是對接 QEMU-KVM/libvirt,因為 librbd 早已原生集成到其中。

※Cinder provides the volume service for OpenStack and supports a very wide range of backend storage types. After integrating with Ceph, a Volume created by Cinder is essentially a Ceph RBD block device; when the Volume is attached to a virtual machine, Libvirt accesses the disk over the rbd protocol. Besides cinder-volume, Cinder's Backup service can also use Ceph, uploading backup Images to the Ceph cluster as objects or block devices.

原理解析

使用ceph的rbd接口,需要通過libvirt,所以需要在客戶端機器上安裝libvirt和qemu,關於ceph和openstack結合的結構如下,同時,在openstack中,需要用到存儲的地方有三個:

  1. glance的鏡像,默認的本地存儲,路徑在/var/lib/glance/images目錄下,
  2. nova虛擬機存儲,默認本地,路徑位於/var/lib/nova/instances目錄下,
  3. cinder存儲,默認采用LVM的存儲方式。

創建pool池

為 Glance、Nova、Cinder 創建專用的RBD Pools池

需要配置hosts解析文件,這里最開始已經配置完成,如未添加hosts解析需要進行配置

在ceph01管理節點上操作;命名為:volumes,vms,images,分布給Glance,Nova,Cinder 使用

Compute the number of PGs from the formula; the cluster has 15 osds, 2 replicas (the default is 3, and 3 is also recommended in production) and 3 pools:
Total PGs = ((total OSDs * 100) / max replica count) / number of pools
15 x 100 / 2 / 3 = 250; rounded down to the nearest power of 2: 128

# pools that already exist in the cluster (here the default rgw pools; a fresh cluster may instead show a default pool named rbd)
[root@ceph01 ~]# ceph osd lspools
1 .rgw.root,2 default.rgw.control,3 default.rgw.meta,4 default.rgw.log,
-----------------------------------
    
# 為 Glance、Nova、Cinder 創建專用的 RBD Pools,並格式化
ceph osd pool create images 128 128
ceph osd pool create volumes 128 128
ceph osd pool create vms 128 128

rbd pool init images
rbd pool init volumes
rbd pool init vms

-----------------------------------
    
# 查看pool的pg_num和pgp_num大小
[root@ceph01 ~]# ceph osd pool get vms pg_num
pg_num: 128
[root@ceph01 ~]# ceph osd pool get vms pgp_num
pgp_num: 128

-----------------------------------
    
# 查看ceph中的pools;忽略之前創建的pool
[root@ceph01 ~]# ceph osd lspools
1 .rgw.root,2 default.rgw.control,3 default.rgw.meta,4 default.rgw.log,5 images,6 volumes,7 vms,

[root@ceph01 ~]# ceph osd pool stats
...
pool images id 5
  nothing is going on

pool volumes id 6
  nothing is going on

pool vms id 7
  nothing is going on

ceph授權認證

在ceph01管理節點上操作

通過ceph管理節點為Glance、cinder創建用戶

針對pool設置權限,pool名對應創建的pool

[root@ceph01 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQCyX4thKE90ERAAw7mrCSGzDip60gZQpoth7g==

[root@ceph01 ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
        key = AQC9X4thS8JUKxAApugrXmAkkgHt3NvW/v4lJg==

配置openstack節點與ceph的ssh免密連接

前面已配置,省略

推送client.glance和client.cinder秘鑰

#將創建client.glance用戶生成的秘鑰推送到運行glance-api服務的控制節點
ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
        key = AQCyX4thKE90ERAAw7mrCSGzDip60gZQpoth7g==

ceph auth get-or-create client.glance | ssh root@controller01 tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.glance | ssh root@controller02 tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.glance | ssh root@controller03 tee /etc/ceph/ceph.client.glance.keyring

#同時修改秘鑰文件的屬主與用戶組
ssh root@controller01 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller02 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller03 chown glance:glance /etc/ceph/ceph.client.glance.keyring

nova-compute與cinder-volume都部署在計算節點,不必重復操作,如果nova計算節點與cinder存儲節點分離則需要分別推送;

# 將創建client.cinder用戶生成的秘鑰推送到運行cinder-volume服務的節點
ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
        key = AQC9X4thS8JUKxAApugrXmAkkgHt3NvW/v4lJg==

ceph auth get-or-create client.cinder | ssh root@compute01 tee /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh root@compute02 tee /etc/ceph/ceph.client.cinder.keyring


# 同時修改秘鑰文件的屬主與用戶組
ssh root@compute01 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh root@compute02 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

授權添加到Libvirt守護進程

在ceph管理節點上為nova節點創建keyring文件

  • nova-compute所在節點需要將client.cinder用戶的秘鑰文件存儲到libvirt中;當基於ceph后端的cinder卷被attach到虛擬機實例時,libvirt需要用到該秘鑰以訪問ceph集群;

  • 在ceph管理節點向計算(存儲)節點推送client.cinder秘鑰文件,生成的文件是臨時性的,將秘鑰添加到libvirt后可刪除

ceph auth get-key client.cinder | ssh root@compute01 tee /etc/ceph/client.cinder.key
ceph auth get-key client.cinder | ssh root@compute02 tee /etc/ceph/client.cinder.key

在計算節點將秘鑰加入libvirt

全部計算節點配置;以compute01節點為例;

  • 生成隨機 UUID,作為Libvirt秘鑰的唯一標識,全部計算節點可共用此uuid;
  • 只需要生成一次,所有的cinder-volume、nova-compute都是用同一個UUID,請保持一致;
[root@compute01 ~]# uuidgen
bae8efd1-e319-48cc-8fd0-9213dd0e3497
[root@compute01 ~]# 
# 創建Libvirt秘鑰文件,修改為生成的uuid
[root@compute01 ~]# cat >> /etc/ceph/secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>bae8efd1-e319-48cc-8fd0-9213dd0e3497</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

scp -rp /etc/ceph/secret.xml compute02:/etc/ceph/

# 定義Libvirt秘鑰;全部計算節點執行
[root@compute01 ~]# virsh secret-define --file /etc/ceph/secret.xml
Secret bae8efd1-e319-48cc-8fd0-9213dd0e3497 created

[root@compute02 ~]# virsh secret-define --file /etc/ceph/secret.xml
Secret bae8efd1-e319-48cc-8fd0-9213dd0e3497 created

# Set the secret value to the client.cinder user's key; with this key libvirt can access the Ceph cluster as the cinder user
[root@compute01 ~]# virsh secret-set-value --secret bae8efd1-e319-48cc-8fd0-9213dd0e3497 --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set

[root@compute02 ~]# virsh secret-set-value --secret bae8efd1-e319-48cc-8fd0-9213dd0e3497 --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set

# 查看每台計算節點上的秘鑰清單
[root@compute01 ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 bae8efd1-e319-48cc-8fd0-9213dd0e3497  ceph client.cinder secret

[root@compute02 ~]# virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 bae8efd1-e319-48cc-8fd0-9213dd0e3497  ceph client.cinder secret

配置glance集成ceph

Glance 為 OpenStack 提供鏡像及其元數據注冊服務,Glance 支持對接多種后端存儲。與 Ceph 完成對接后,Glance 上傳的 Image 會作為塊設備儲存在 Ceph 集群中。新版本的 Glance 也開始支持 enabled_backends 了,可以同時對接多個存儲提供商。

寫時復制技術(copy-on-write):內核只為新生成的子進程創建虛擬空間結構,它們復制於父進程的虛擬空間結構,但是不為這些段分配物理內存,它們共享父進程的物理空間,當父子進程中有更改相應的段的行為發生時,再為子進程相應的段分配物理空間。寫時復制技術大大降低了進程對資源的浪費。

配置glance-api.conf

全部控制節點進行配置;以controller01節點為例;

只修改涉及glance集成ceph的相關配置

# 備份glance-api的配置文件;以便於恢復
cp /etc/glance/glance-api.conf{,.bak2}

# 刪除glance-api如下的默認配置
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
# 啟用映像的寫時復制
openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
# 變更默認使用的本地文件存儲為ceph rbd存儲
openstack-config --set /etc/glance/glance-api.conf glance_store stores rbd
openstack-config --set /etc/glance/glance-api.conf glance_store default_store rbd
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_chunk_size 8

變更配置文件,重啟服務

systemctl restart openstack-glance-api.service
lsof -i:9292

上傳鏡像測試

對接 Ceph 之后,通常會以 RAW 格式創建 Glance Image,而不再使用 QCOW2 格式,否則創建虛擬機時需要進行鏡像復制,沒有利用 Ceph RBD COW 的優秀特性。

[root@controller01 ~]# ll
total 16448
-rw-r--r-- 1 root root      277 Nov  4 16:11 admin-openrc
-rw-r--r-- 1 root root 16338944 Aug 17 14:31 cirros-0.5.1-x86_64-disk.img
-rw-r--r-- 1 root root      269 Nov  4 16:21 demo-openrc
-rw-r--r-- 1 root root   300067 Nov  4 15:43 python2-qpid-proton-0.28.0-1.el7.x86_64.rpm
-rw-r--r-- 1 root root   190368 Nov  4 15:43 qpid-proton-c-0.28.0-1.el7.x86_64.rpm
[root@controller01 ~]# 
[root@controller01 ~]# qemu-img info cirros-0.5.1-x86_64-disk.img
image: cirros-0.5.1-x86_64-disk.img
file format: qcow2
virtual size: 112M (117440512 bytes)
disk size: 16M
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
[root@controller01 ~]# 

# 將鏡像從qcow2格式轉換為raw格式
[root@controller01 ~]# qemu-img convert -f qcow2 -O raw cirros-0.5.1-x86_64-disk.img  cirros-0.5.1-x86_64-disk.raw
[root@controller01 ~]# ls -l
total 33504
-rw-r--r-- 1 root root       277 Nov  4 16:11 admin-openrc
-rw-r--r-- 1 root root  16338944 Aug 17 14:31 cirros-0.5.1-x86_64-disk.img
-rw-r--r-- 1 root root 117440512 Nov 10 19:11 cirros-0.5.1-x86_64-disk.raw
-rw-r--r-- 1 root root       269 Nov  4 16:21 demo-openrc
-rw-r--r-- 1 root root    300067 Nov  4 15:43 python2-qpid-proton-0.28.0-1.el7.x86_64.rpm
-rw-r--r-- 1 root root    190368 Nov  4 15:43 qpid-proton-c-0.28.0-1.el7.x86_64.rpm
[root@controller01 ~]# 
[root@controller01 ~]# qemu-img info cirros-0.5.1-x86_64-disk.raw 
image: cirros-0.5.1-x86_64-disk.raw
file format: raw
virtual size: 112M (117440512 bytes)
disk size: 17M
[root@controller01 ~]# 
[root@controller01 ~]# 

# 上傳鏡像;查看glance和ceph聯動情況
[root@controller01 ~]# openstack image create --container-format bare --disk-format raw --file cirros-0.5.1-x86_64-disk.raw --unprotected --public cirros_raw
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                                                                                                                                |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 01e7d1515ee776be3228673441d449e6                                                                                                                                                                                                                                                                     |
| container_format | bare                                                                                                                                                                                                                                                                                                 |
| created_at       | 2021-11-10T11:13:57Z                                                                                                                                                                                                                                                                                 |
| disk_format      | raw                                                                                                                                                                                                                                                                                                  |
| file             | /v2/images/1c72f484-f828-4a9d-9a4c-5d542acbd203/file                                                                                                                                                                                                                                                 |
| id               | 1c72f484-f828-4a9d-9a4c-5d542acbd203                                                                                                                                                                                                                                                                 |
| min_disk         | 0                                                                                                                                                                                                                                                                                                    |
| min_ram          | 0                                                                                                                                                                                                                                                                                                    |
| name             | cirros_raw                                                                                                                                                                                                                                                                                           |
| owner            | 60f490ceabcb493db09bdd4c1990655f                                                                                                                                                                                                                                                                     |
| properties       | direct_url='rbd://18bdcd50-2ea5-4130-b27b-d1b61d1460c7/images/1c72f484-f828-4a9d-9a4c-5d542acbd203/snap', os_hash_algo='sha512', os_hash_value='d663dc8d739adc772acee23be3931075ea82a14ba49748553ab05f0e191286a8fe937d00d9f685ac69fd817d867b50be965e82e46d8cf3e57df6f86a57fa3c36', os_hidden='False' |
| protected        | False                                                                                                                                                                                                                                                                                                |
| schema           | /v2/schemas/image                                                                                                                                                                                                                                                                                    |
| size             | 117440512                                                                                                                                                                                                                                                                                            |
| status           | active                                                                                                                                                                                                                                                                                               |
| tags             |                                                                                                                                                                                                                                                                                                      |
| updated_at       | 2021-11-10T11:14:01Z                                                                                                                                                                                                                                                                                 |
| virtual_size     | None                                                                                                                                                                                                                                                                                                 |
| visibility       | public                                                                                                                                                                                                                                                                                               |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller01 ~]# 

查看鏡像和glance池數據

  • 查看openstack鏡像列表
[root@controller01 ~]# openstack image list
+--------------------------------------+--------------+--------+
| ID                                   | Name         | Status |
+--------------------------------------+--------------+--------+
| 1c66cd7e-b6d9-4e70-a3d4-f73b27a84230 | cirros-qcow2 | active |
| 1c72f484-f828-4a9d-9a4c-5d542acbd203 | cirros_raw   | active |
+--------------------------------------+--------------+--------+
[root@controller01 ~]# 
  • 查看images池的數據
[root@controller01 ~]# rbd ls images
1c72f484-f828-4a9d-9a4c-5d542acbd203
[root@controller01 ~]# 
  • 查看上傳的鏡像詳細rbd信息
[root@controller01 ~]# rbd info images/1c72f484-f828-4a9d-9a4c-5d542acbd203
rbd image '1c72f484-f828-4a9d-9a4c-5d542acbd203':
        size 112 MiB in 14 objects
        order 23 (8 MiB objects)
        snapshot_count: 1
        id: 5e9ee3899042
        block_name_prefix: rbd_data.5e9ee3899042
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features: 
        flags: 
        create_timestamp: Wed Nov 10 19:14:01 2021
[root@controller01 ~]# 
[root@controller01 ~]# 
  • 查看上傳的鏡像的快照列表
[root@controller01 ~]# rbd snap ls images/1c72f484-f828-4a9d-9a4c-5d542acbd203
SNAPID NAME SIZE    PROTECTED TIMESTAMP                
     4 snap 112 MiB yes       Wed Nov 10 19:14:04 2021 
[root@controller01 ~]# 
  • glance中的數據存儲到了ceph塊設備中
[root@controller01 ~]# rados ls -p images
rbd_directory
rbd_data.5e9ee3899042.0000000000000008
rbd_info
rbd_data.5e9ee3899042.0000000000000002
rbd_data.5e9ee3899042.0000000000000006
rbd_object_map.5e9ee3899042.0000000000000004
rbd_data.5e9ee3899042.0000000000000003
rbd_data.5e9ee3899042.0000000000000005
rbd_data.5e9ee3899042.000000000000000b
rbd_data.5e9ee3899042.000000000000000d
rbd_data.5e9ee3899042.0000000000000007
rbd_data.5e9ee3899042.0000000000000000
rbd_data.5e9ee3899042.0000000000000009
rbd_data.5e9ee3899042.000000000000000a
rbd_data.5e9ee3899042.0000000000000004
rbd_object_map.5e9ee3899042
rbd_id.1c72f484-f828-4a9d-9a4c-5d542acbd203
rbd_data.5e9ee3899042.000000000000000c
rbd_header.5e9ee3899042
rbd_data.5e9ee3899042.0000000000000001
[root@controller01 ~]# 
  • 在dashboard界面查看鏡像列表

image-20211110192056586

  • 在ceph監控界面查看上傳的鏡像

image-20211110192238322

Ceph執行image鏡像的步驟過程詳解

創建raw格式的Image時;Ceph中執行了以下步驟:

  • A new {glance_image_uuid} block device is created in the Pool images with an Object Size of 8M; the 112M cirros raw image therefore ends up as 14 objects (matching the rbd info output above).

  • 對新建塊設備執行快照。

  • 對該快照執行保護。

rbd -p ${GLANCE_POOL} create --size ${SIZE} ${IMAGE_ID}
rbd -p ${GLANCE_POOL} snap create ${IMAGE_ID}@snap
rbd -p ${GLANCE_POOL} snap protect ${IMAGE_ID}@snap

刪除raw格式的Image時;Ceph中執行了以下步驟:

  • 先取消快照保護
  • 對快照執行刪除
  • 對鏡像執行刪除
rbd -p ${GLANCE_POOL} snap unprotect ${IMAGE_ID}@snap
rbd -p ${GLANCE_POOL} snap rm ${IMAGE_ID}@snap
rbd -p ${GLANCE_POOL} rm ${IMAGE_ID} 

總結

將openstack集群中的glance鏡像的數據存儲到ceph中是一種非常好的解決方案,既能夠保障鏡像數據的安全性,同時glance和nova在同個存儲池中,能夠基於copy-on-write(寫時復制)的方式快速創建虛擬機,能夠在秒級為單位實現vm的創建。

使用Ceph作為Cinder的后端存儲

配置cinder.conf

Configure on all compute nodes, using compute01 as the example; only the settings related to integrating cinder with ceph are changed

# 備份cinder.conf的配置文件;以便於恢復
cp /etc/cinder/cinder.conf{,.bak2}
# 后端使用ceph存儲已經在部署cinder服務時進行配置
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2

openstack-config --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
openstack-config --set /etc/cinder/cinder.conf ceph rbd_pool volumes
openstack-config --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf ceph rbd_flatten_volume_from_snapshot false
openstack-config --set /etc/cinder/cinder.conf ceph rbd_max_clone_depth 5
openstack-config --set /etc/cinder/cinder.conf ceph rbd_store_chunk_size 4
openstack-config --set /etc/cinder/cinder.conf ceph rados_connect_timeout -1
openstack-config --set /etc/cinder/cinder.conf ceph rbd_user cinder
# 注意替換cinder用戶訪問ceph集群使用的Secret UUID
openstack-config --set /etc/cinder/cinder.conf ceph rbd_secret_uuid bae8efd1-e319-48cc-8fd0-9213dd0e3497 
openstack-config --set /etc/cinder/cinder.conf ceph volume_backend_name ceph

重啟cinder-volume服務

全部計算節點重啟cinder-volume服務;

systemctl restart openstack-cinder-volume.service
systemctl status openstack-cinder-volume.service

驗證服務狀態

任意控制節點上查看;

[root@controller01 ~]# openstack volume service list
+------------------+----------------+------+---------+-------+----------------------------+
| Binary           | Host           | Zone | Status  | State | Updated At                 |
+------------------+----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller01   | nova | enabled | up    | 2021-11-10T11:45:19.000000 |
| cinder-scheduler | controller02   | nova | enabled | up    | 2021-11-10T11:45:11.000000 |
| cinder-scheduler | controller03   | nova | enabled | up    | 2021-11-10T11:45:14.000000 |
| cinder-volume    | compute01@ceph | nova | enabled | up    | 2021-11-10T11:45:17.000000 |
| cinder-volume    | compute02@ceph | nova | enabled | up    | 2021-11-10T11:45:21.000000 |
+------------------+----------------+------+---------+-------+----------------------------+

創建空白卷Volume測試

設置卷類型

在任意控制節點為cinder的ceph后端存儲創建對應的type,在配置多存儲后端時可區分類型;

[root@controller01 ~]# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 98718aae-b0e8-4a4e-8b94-de0ace67b392 | ceph | -           | True      |
+--------------------------------------+------+-------------+-----------+
[root@controller01 ~]# 
# 可通過 cinder type-list 或 openstack volume type list 查看

為ceph type設置擴展規格,鍵值volume_backend_name,value值ceph

[root@controller01 ~]# cinder type-key ceph set volume_backend_name=ceph
[root@controller01 ~]# 
[root@controller01 ~]# cinder extra-specs-list
+--------------------------------------+-------------+---------------------------------+
| ID                                   | Name        | extra_specs                     |
+--------------------------------------+-------------+---------------------------------+
| 1eae6f86-f6ae-4685-8f2b-0064dcb9d917 | __DEFAULT__ | {}                              |
| 98718aae-b0e8-4a4e-8b94-de0ace67b392 | ceph        | {'volume_backend_name': 'ceph'} |
+--------------------------------------+-------------+---------------------------------+
[root@controller01 ~]# 

創建一個volume卷

任意控制節點上創建一個1GB的卷;最后的數字1代表容量為1G

[root@controller01 ~]# openstack volume create --type ceph --size 1 ceph-volume
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2021-11-10T11:56:02.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 77d1dc9c-2826-45f2-b738-f3571f90ef87 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | ceph-volume                          |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | ceph                                 |
| updated_at          | None                                 |
| user_id             | 05a3ad27698e41b0a3e10a6daffbf64e     |
+---------------------+--------------------------------------+
[root@controller01 ~]# 

驗證

查看創建好的卷

[root@controller01 ~]# openstack volume list
+--------------------------------------+-------------+-----------+------+-----------------------------+
| ID                                   | Name        | Status    | Size | Attached to                 |
+--------------------------------------+-------------+-----------+------+-----------------------------+
| 77d1dc9c-2826-45f2-b738-f3571f90ef87 | ceph-volume | available |    1 |                             |
| 0092b891-a249-4c62-b06d-71f9a5e66e37 |             | in-use    |    1 | Attached to s6 on /dev/vda  |
+--------------------------------------+-------------+-----------+------+-----------------------------+
[root@controller01 ~]# 

# 檢查ceph集群的volumes池
[root@controller01 ~]# rbd ls volumes
volume-0092b891-a249-4c62-b06d-71f9a5e66e37
volume-77d1dc9c-2826-45f2-b738-f3571f90ef87
[root@controller01 ~]# 
[root@controller01 ~]# 
[root@controller01 ~]# rbd info volumes/volume-77d1dc9c-2826-45f2-b738-f3571f90ef87
rbd image 'volume-77d1dc9c-2826-45f2-b738-f3571f90ef87':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 60754c5853d9
        block_name_prefix: rbd_data.60754c5853d9
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features: 
        flags: 
        create_timestamp: Wed Nov 10 19:56:05 2021
[root@controller01 ~]# 
[root@controller01 ~]# rados ls -p volumes
rbd_data.5f93fbe03211.00000000000000c8
rbd_data.5f93fbe03211.0000000000000056
rbd_data.5f93fbe03211.00000000000000b8
rbd_data.5f93fbe03211.0000000000000011
rbd_data.5f93fbe03211.0000000000000026
rbd_data.5f93fbe03211.000000000000000e
rbd_data.5f93fbe03211.0000000000000080
rbd_data.5f93fbe03211.00000000000000ae
rbd_data.5f93fbe03211.00000000000000ac
rbd_data.5f93fbe03211.0000000000000003
rbd_data.5f93fbe03211.00000000000000ce
rbd_data.5f93fbe03211.0000000000000060
rbd_data.5f93fbe03211.0000000000000012
rbd_data.5f93fbe03211.00000000000000e4
rbd_data.5f93fbe03211.000000000000008c
rbd_data.5f93fbe03211.0000000000000042
rbd_data.5f93fbe03211.000000000000001c
rbd_data.5f93fbe03211.000000000000002c
rbd_data.5f93fbe03211.00000000000000cc
rbd_data.5f93fbe03211.0000000000000086
rbd_data.5f93fbe03211.0000000000000082
rbd_data.5f93fbe03211.000000000000006e
rbd_data.5f93fbe03211.00000000000000f4
rbd_data.5f93fbe03211.0000000000000094
rbd_data.5f93fbe03211.0000000000000008
rbd_data.5f93fbe03211.00000000000000d4
rbd_data.5f93fbe03211.0000000000000015
rbd_data.5f93fbe03211.00000000000000ca
rbd_header.60754c5853d9
rbd_data.5f93fbe03211.00000000000000da
rbd_data.5f93fbe03211.0000000000000084
rbd_data.5f93fbe03211.0000000000000009
rbd_directory
rbd_data.5f93fbe03211.00000000000000fa
rbd_data.5f93fbe03211.000000000000003a
rbd_data.5f93fbe03211.000000000000004c
rbd_object_map.60754c5853d9
rbd_data.5f93fbe03211.00000000000000e8
rbd_data.5f93fbe03211.000000000000003c
rbd_data.5f93fbe03211.00000000000000e6
rbd_data.5f93fbe03211.0000000000000054
rbd_data.5f93fbe03211.0000000000000006
rbd_data.5f93fbe03211.0000000000000032
rbd_data.5f93fbe03211.0000000000000046
rbd_data.5f93fbe03211.00000000000000f2
rbd_data.5f93fbe03211.0000000000000038
rbd_data.5f93fbe03211.0000000000000096
rbd_data.5f93fbe03211.0000000000000016
rbd_data.5f93fbe03211.000000000000004e
rbd_children
rbd_data.5f93fbe03211.00000000000000d6
rbd_data.5f93fbe03211.00000000000000aa
rbd_data.5f93fbe03211.000000000000006c
rbd_data.5f93fbe03211.0000000000000068
rbd_data.5f93fbe03211.0000000000000036
rbd_data.5f93fbe03211.0000000000000000
rbd_data.5f93fbe03211.0000000000000078
rbd_data.5f93fbe03211.00000000000000ba
rbd_data.5f93fbe03211.0000000000000004
rbd_data.5f93fbe03211.0000000000000014
rbd_data.5f93fbe03211.00000000000000c0
rbd_data.5f93fbe03211.000000000000009a
rbd_info
rbd_data.5f93fbe03211.000000000000007e
rbd_data.5f93fbe03211.000000000000000b
rbd_header.5f93fbe03211
rbd_data.5f93fbe03211.000000000000000c
rbd_data.5f93fbe03211.00000000000000b0
rbd_id.volume-0092b891-a249-4c62-b06d-71f9a5e66e37
rbd_data.5f93fbe03211.00000000000000b4
rbd_data.5f93fbe03211.000000000000005c
rbd_data.5f93fbe03211.0000000000000058
rbd_data.5f93fbe03211.0000000000000024
rbd_data.5f93fbe03211.00000000000000b6
rbd_data.5f93fbe03211.00000000000000a8
rbd_data.5f93fbe03211.0000000000000062
rbd_data.5f93fbe03211.0000000000000066
rbd_data.5f93fbe03211.00000000000000a0
rbd_id.volume-77d1dc9c-2826-45f2-b738-f3571f90ef87
rbd_data.5f93fbe03211.00000000000000d8
rbd_data.5f93fbe03211.0000000000000022
rbd_data.5f93fbe03211.000000000000007a
rbd_data.5f93fbe03211.000000000000003e
rbd_data.5f93fbe03211.00000000000000b2
rbd_data.5f93fbe03211.000000000000000d
rbd_data.5f93fbe03211.000000000000009c
rbd_data.5f93fbe03211.000000000000001e
rbd_data.5f93fbe03211.0000000000000020
rbd_data.5f93fbe03211.0000000000000076
rbd_data.5f93fbe03211.00000000000000a4
rbd_data.5f93fbe03211.00000000000000a6
rbd_data.5f93fbe03211.000000000000004a
rbd_data.5f93fbe03211.0000000000000010
rbd_data.5f93fbe03211.0000000000000030
rbd_data.5f93fbe03211.00000000000000d0
rbd_data.5f93fbe03211.0000000000000064
rbd_data.5f93fbe03211.000000000000000a
rbd_data.5f93fbe03211.000000000000001a
rbd_data.5f93fbe03211.000000000000007c
rbd_data.5f93fbe03211.00000000000000c4
rbd_data.5f93fbe03211.0000000000000005
rbd_data.5f93fbe03211.000000000000008a
rbd_data.5f93fbe03211.000000000000008e
rbd_data.5f93fbe03211.00000000000000ff
rbd_data.5f93fbe03211.0000000000000002
rbd_data.5f93fbe03211.00000000000000a2
rbd_data.5f93fbe03211.00000000000000e0
rbd_data.5f93fbe03211.0000000000000070
rbd_data.5f93fbe03211.00000000000000bc
rbd_data.5f93fbe03211.00000000000000fc
rbd_data.5f93fbe03211.0000000000000050
rbd_data.5f93fbe03211.00000000000000f0
rbd_data.5f93fbe03211.00000000000000dc
rbd_data.5f93fbe03211.0000000000000034
rbd_data.5f93fbe03211.000000000000002a
rbd_data.5f93fbe03211.00000000000000ec
rbd_data.5f93fbe03211.0000000000000052
rbd_data.5f93fbe03211.0000000000000074
rbd_data.5f93fbe03211.00000000000000d2
rbd_data.5f93fbe03211.000000000000006a
rbd_data.5f93fbe03211.00000000000000ee
rbd_data.5f93fbe03211.00000000000000c6
rbd_data.5f93fbe03211.00000000000000de
rbd_data.5f93fbe03211.00000000000000fe
rbd_data.5f93fbe03211.0000000000000088
rbd_data.5f93fbe03211.00000000000000e2
rbd_data.5f93fbe03211.0000000000000098
rbd_data.5f93fbe03211.00000000000000f6
rbd_data.5f93fbe03211.00000000000000c2
rbd_data.5f93fbe03211.0000000000000044
rbd_data.5f93fbe03211.000000000000002e
rbd_data.5f93fbe03211.000000000000005a
rbd_data.5f93fbe03211.0000000000000048
rbd_data.5f93fbe03211.000000000000009e
rbd_data.5f93fbe03211.0000000000000018
rbd_data.5f93fbe03211.0000000000000072
rbd_data.5f93fbe03211.0000000000000090
rbd_data.5f93fbe03211.00000000000000be
rbd_data.5f93fbe03211.00000000000000ea
rbd_data.5f93fbe03211.0000000000000028
rbd_data.5f93fbe03211.00000000000000f8
rbd_data.5f93fbe03211.0000000000000040
rbd_data.5f93fbe03211.000000000000005e
rbd_data.5f93fbe03211.0000000000000092
rbd_object_map.5f93fbe03211
[root@controller01 ~]# 

image-20211110200150645

image-20211110200214845

openstack創建一個空白 Volume,Ceph相當於執行了以下指令

rbd -p ${CINDER_POOL} create --new-format --size ${SIZE} volume-${VOLUME_ID}

卷可以連接到實例

微信截圖_20211110204617

image-20211110204724904

從鏡像創建Volume測試

從鏡像創建 Volume 的時候應用了 Ceph RBD COW Clone 功能,這是通過glance-api.conf [DEFAULT] show_image_direct_url = True 來開啟。這個配置項的作用是持久化 Image 的 location,此時 Glance RBD Driver 才可以通過 Image location 執行 Clone 操作。並且還會根據指定的 Volume Size 來調整 RBD Image 的 Size。

刪除僵屍鏡像的方法

[root@controller01 ~]# openstack image list
+--------------------------------------+--------------+--------+
| ID                                   | Name         | Status |
+--------------------------------------+--------------+--------+
| 1c66cd7e-b6d9-4e70-a3d4-f73b27a84230 | cirros-qcow2 | active |
| 1c72f484-f828-4a9d-9a4c-5d542acbd203 | cirros_raw   | active |
+--------------------------------------+--------------+--------+

The cirros-qcow2 image that is still listed was uploaded before the ceph integration and can no longer be used, so it is deleted here

# 把鏡像屬性變為非可用狀態,必須保證無實例正在使用,否則會報 HTTP 500 錯誤
$ openstack image set --deactivate 1c66cd7e-b6d9-4e70-a3d4-f73b27a84230
$ openstack image list
+--------------------------------------+--------------+-------------+
| ID                                   | Name         | Status      |
+--------------------------------------+--------------+-------------+
| 1c66cd7e-b6d9-4e70-a3d4-f73b27a84230 | cirros-qcow2 | deactivated |
| 1c72f484-f828-4a9d-9a4c-5d542acbd203 | cirros_raw   | active      |
+--------------------------------------+--------------+-------------+

# 進入數據庫
mysql -uroot -p123456
use glance;
select id, status, name from images where id='1c66cd7e-b6d9-4e70-a3d4-f73b27a84230';
update images set deleted=1 where id='1c66cd7e-b6d9-4e70-a3d4-f73b27a84230';

$ openstack image list
+--------------------------------------+------------+--------+
| ID                                   | Name       | Status |
+--------------------------------------+------------+--------+
| 1c72f484-f828-4a9d-9a4c-5d542acbd203 | cirros_raw | active |
+--------------------------------------+------------+--------+

為cirros_raw鏡像創建一個1G的卷

$ openstack volume create --image 1c72f484-f828-4a9d-9a4c-5d542acbd203 --type ceph --size 1 cirros_raw_image
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2021-11-10T12:14:29.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | f79c3af2-101e-4a76-9e88-37ceb51f622c |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | cirros_raw_image                     |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | ceph                                 |
| updated_at          | None                                 |
| user_id             | 05a3ad27698e41b0a3e10a6daffbf64e     |
+---------------------+--------------------------------------+

Or via the web UI

image-20211110201422261

image-20211110201512521

A volume created from an image can then be used when launching an instance.
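
As a minimal CLI sketch of booting from this volume (the flavor and network names are placeholders for whatever exists in your environment):

# Boot an instance directly from the image-backed volume
openstack server create --flavor <flavor> --network <network> \
  --volume cirros_raw_image test-from-volume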

微信截圖_20211110201638

image-20211110201844522

Check the object information in the volumes pool

[root@controller01 ~]# rbd ls volumes
volume-0092b891-a249-4c62-b06d-71f9a5e66e37
volume-77d1dc9c-2826-45f2-b738-f3571f90ef87
volume-f79c3af2-101e-4a76-9e88-37ceb51f622c
[root@controller01 ~]# 
[root@controller01 ~]# rbd info volumes/volume-f79c3af2-101e-4a76-9e88-37ceb51f622c
rbd image 'volume-f79c3af2-101e-4a76-9e88-37ceb51f622c':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 62606c5008db
        block_name_prefix: rbd_data.62606c5008db
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features: 
        flags: 
        create_timestamp: Wed Nov 10 20:14:29 2021
        parent: images/1c72f484-f828-4a9d-9a4c-5d542acbd203@snap
        overlap: 112 MiB
[root@controller01 ~]# 
[root@controller01 ~]# rados ls -p volumes|grep id
rbd_id.volume-0092b891-a249-4c62-b06d-71f9a5e66e37
rbd_id.volume-77d1dc9c-2826-45f2-b738-f3571f90ef87
rbd_id.volume-f79c3af2-101e-4a76-9e88-37ceb51f622c
[root@controller01 ~]# 

When OpenStack creates a volume from an image, Ceph effectively executes the following commands:

rbd clone ${GLANCE_POOL}/${IMAGE_ID}@snap ${CINDER_POOL}/volume-${VOLUME_ID}

if [[ -n "${SIZE}" ]]; then
    rbd resize --size ${SIZE} ${CINDER_POOL}/volume-${VOLUME_ID}
fi

Testing snapshot creation on the image-backed volume

Run on any controller node;

Create a snapshot of the cirros_raw_image volume

[root@controller01 ~]# openstack volume snapshot create --volume cirros_raw_image cirros_raw_image_snap01
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| created_at  | 2021-11-10T12:26:02.607424           |
| description | None                                 |
| id          | 6658c988-8dcd-411e-9882-a1ac357fbe93 |
| name        | cirros_raw_image_snap01              |
| properties  |                                      |
| size        | 1                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | f79c3af2-101e-4a76-9e88-37ceb51f622c |
+-------------+--------------------------------------+
[root@controller01 ~]# 

List the snapshots

[root@controller01 ~]# openstack volume snapshot list
+--------------------------------------+-------------------------+-------------+-----------+------+
| ID                                   | Name                    | Description | Status    | Size |
+--------------------------------------+-------------------------+-------------+-----------+------+
| 6658c988-8dcd-411e-9882-a1ac357fbe93 | cirros_raw_image_snap01 | None        | available |    1 |
+--------------------------------------+-------------------------+-------------+-----------+------+
[root@controller01 ~]# 

Or check in the web UI

image-20211110202724151

Check the created snapshot on the Ceph side

[root@controller01 ~]# openstack volume snapshot list
+--------------------------------------+-------------------------+-------------+-----------+------+
| ID                                   | Name                    | Description | Status    | Size |
+--------------------------------------+-------------------------+-------------+-----------+------+
| 6658c988-8dcd-411e-9882-a1ac357fbe93 | cirros_raw_image_snap01 | None        | available |    1 |
+--------------------------------------+-------------------------+-------------+-----------+------+
[root@controller01 ~]# 
[root@controller01 ~]# openstack volume list
+--------------------------------------+------------------+-----------+------+-----------------------------+
| ID                                   | Name             | Status    | Size | Attached to                 |
+--------------------------------------+------------------+-----------+------+-----------------------------+
| f79c3af2-101e-4a76-9e88-37ceb51f622c | cirros_raw_image | available |    1 |                             |
| 77d1dc9c-2826-45f2-b738-f3571f90ef87 | ceph-volume      | available |    1 |                             |
| 0092b891-a249-4c62-b06d-71f9a5e66e37 |                  | in-use    |    1 | Attached to s6 on /dev/vda  |
+--------------------------------------+------------------+-----------+------+-----------------------------+
[root@controller01 ~]# 
[root@controller01 ~]# 
[root@controller01 ~]# rbd snap ls volumes/volume-f79c3af2-101e-4a76-9e88-37ceb51f622c
SNAPID NAME                                          SIZE  PROTECTED TIMESTAMP                
     4 snapshot-6658c988-8dcd-411e-9882-a1ac357fbe93 1 GiB yes       Wed Nov 10 20:26:02 2021 
[root@controller01 ~]# 

View the snapshot details

[root@controller01 ~]# rbd info volumes/volume-f79c3af2-101e-4a76-9e88-37ceb51f622c
rbd image 'volume-f79c3af2-101e-4a76-9e88-37ceb51f622c':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 1
        id: 62606c5008db
        block_name_prefix: rbd_data.62606c5008db
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features: 
        flags: 
        create_timestamp: Wed Nov 10 20:14:29 2021
        parent: images/1c72f484-f828-4a9d-9a4c-5d542acbd203@snap
        overlap: 112 MiB
[root@controller01 ~]# 

When OpenStack creates a snapshot of the image-backed volume, Ceph effectively executes the following commands:

rbd -p ${CINDER_POOL} snap create volume-${VOLUME_ID}@snapshot-${SNAPSHOT_ID}
rbd -p ${CINDER_POOL} snap protect volume-${VOLUME_ID}@snapshot-${SNAPSHOT_ID} 

Once that is done, instances can be created from the snapshot.
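
A minimal sketch, assuming the snapshot created above: clone a bootable volume from the snapshot first, then launch an instance from that volume (flavor/network names are placeholders):

# Create a new volume from the snapshot, then boot an instance from it
openstack volume create --snapshot cirros_raw_image_snap01 --size 1 vol_from_snap01
openstack server create --flavor <flavor> --network <network> \
  --volume vol_from_snap01 s_from_snap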

微信截圖_20211110203938

Testing volume backup creation

If a snapshot is a time machine, then a backup is an off-site time machine: it implies disaster recovery. Generally, the Ceph backup pool should therefore sit in a different failure/disaster domain from the images, volumes and vms pools.

https://www.cnblogs.com/luohaixian/p/9344803.html

https://docs.openstack.org/zh_CN/user-guide/backup-db-incremental.html

In general, backups come in the following types (a command sketch follows the list):

  • Full backup
  • Incremental backup
  • Differential backup
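
A minimal command sketch, assuming a cinder-backup service is deployed and configured with a Ceph backup pool (not covered in this section):

# Full backup of a volume, then an incremental backup based on it
openstack volume backup create --name cirros_raw_image_bak_full cirros_raw_image
openstack volume backup create --name cirros_raw_image_bak_inc1 --incremental cirros_raw_image
openstack volume backup list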

Using Ceph as the storage backend for Nova instances

Nova is OpenStack's compute service. By default, Nova stores the virtual disk images associated with running instances under /var/lib/nova/instances/%UUID. Ceph is one of the storage backends that can be integrated directly with Nova.

Using local storage on compute nodes for virtual disk images has some drawbacks:

  • Images are stored on the root filesystem; large images can fill it up and crash the compute node.
  • A disk failure on the compute node can lose the virtual disks, making instance recovery impossible.


Nova provides compute services for OpenStack; integrating it with Ceph mainly means storing the instances' system disks in the Ceph cluster. Strictly speaking, the integration is with QEMU-KVM/libvirt rather than with Nova itself, since librbd has long been natively supported there.

Configure ceph.conf

  • To boot virtual machines from Ceph RBD, Ceph must be configured as Nova's ephemeral backend;

  • Enabling the rbd cache in the compute nodes' configuration is recommended;

  • For easier troubleshooting, configure the admin socket parameter so that every VM using Ceph RBD gets its own socket, which helps with performance analysis and debugging;

Configure the [client] and [client.cinder] sections of ceph.conf on all compute nodes; compute01 is used as the example;

# Create the socket and log directories specified in ceph.conf and change their ownership (required)
mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/

# Add the following configuration
[root@compute01 ~]# vim /etc/ceph/ceph.conf
[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = true
admin_socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log_file = /var/log/qemu/qemu-guest-$pid.log
rbd_concurrent_management_ops = 20

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring
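
Once an instance backed by Ceph RBD is running on the node, each client gets an admin socket under the directory created above; a short troubleshooting sketch (the socket file name is illustrative):

# List the per-client admin sockets created for running guests
ls /var/run/ceph/guests/
# Query a client through its socket, e.g. dump performance counters
ceph --admin-daemon /var/run/ceph/guests/ceph-client.cinder.3561.94105931388720.asok perf dump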

Configure nova.conf

On all compute nodes, configure Nova to use the Ceph cluster's vms pool as the backend; compute01 is used as the example;

# Back up nova.conf so it can be restored later
cp /etc/nova/nova.conf{,.bak2}
# Creating instances with large disks (e.g. 80 GB) can fail; adjust the vif plugging timeout parameters in nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_timeout 0
openstack-config --set /etc/nova/nova.conf DEFAULT vif_plugging_is_fatal False

# Hardware acceleration for virtual machines; already added earlier
#openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf libvirt images_type rbd
openstack-config --set /etc/nova/nova.conf libvirt images_rbd_pool vms
openstack-config --set /etc/nova/nova.conf libvirt images_rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/nova/nova.conf libvirt rbd_user cinder
openstack-config --set /etc/nova/nova.conf libvirt rbd_secret_uuid bae8efd1-e319-48cc-8fd0-9213dd0e3497

openstack-config --set /etc/nova/nova.conf libvirt disk_cachemodes \"network=writeback\"
openstack-config --set /etc/nova/nova.conf libvirt live_migration_flag \"VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED\"

# Disable file injection
openstack-config --set /etc/nova/nova.conf libvirt inject_password false
openstack-config --set /etc/nova/nova.conf libvirt inject_key false
openstack-config --set /etc/nova/nova.conf libvirt inject_partition -2

# Enable discard for the instance's ephemeral/root disk; with unmap, space is released immediately when a scsi-type disk is freed
openstack-config --set /etc/nova/nova.conf libvirt hw_disk_discard unmap

Restart the compute services

Run on all compute nodes;

systemctl restart libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service

Configure live migration

Modify /etc/libvirt/libvirtd.conf

Run on all compute nodes; compute01 is used as the example.
The grep -n line numbers below show where libvirtd.conf is modified.

#compute01
[root@compute01 ~]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf 
20:listen_tls = 0
34:listen_tcp = 1
52:tcp_port = "16509"
#uncomment and set the listen address
65:listen_addr = "10.10.10.41"
#uncomment and disable authentication
167:auth_tcp = "none"

#compute02
[root@compute02 ~]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf 
20:listen_tls = 0
34:listen_tcp = 1
52:tcp_port = "16509"
65:listen_addr = "10.10.10.42"
167:auth_tcp = "none"

Modify /etc/sysconfig/libvirtd

Run on all compute nodes; compute01 is used as the example. Make the libvirtd service listen on TCP.

[root@compute01 ~]# egrep -vn "^$|^#" /etc/sysconfig/libvirtd
12:LIBVIRTD_ARGS="--listen"

[root@compute02 ~]# egrep -vn "^$|^#" /etc/sysconfig/libvirtd
12:LIBVIRTD_ARGS="--listen"

Set up passwordless access between compute nodes

All compute nodes must set up passwordless SSH for the nova user with each other; this is required for migration, otherwise migration will fail.

# Run on all compute nodes
# Set a login shell
usermod  -s /bin/bash nova
# Set a password
passwd nova

# Run the following on compute01 only
su - nova
# Generate a key pair
ssh-keygen -t rsa -P ''

# Copy the public key to the local host
ssh-copy-id -i .ssh/id_rsa.pub nova@localhost

# Copy the whole .ssh directory to the other compute nodes
scp -rp .ssh/ nova@compute02:/var/lib/nova

# SSH login test
# Test from compute01 with the nova account
ssh nova@compute02

# Test from compute02 with the nova account
ssh nova@compute01

Configure iptables

iptables is disabled in this test environment, so no rules are needed for now; a production environment must configure them.

  • During live migration, the source compute node connects to TCP port 16509 on the destination node; connectivity can be tested with virsh -c qemu+tcp://{node_ip or node_name}/system;
  • Before and after migration, the migrated instance uses TCP ports 49152-49161 on the source and destination nodes for temporary communication;
  • Because instance-related iptables rules are already in place, do not casually restart the iptables service; add new rules by inserting them;
  • Also persist the rules by editing the configuration file; avoid using an iptables save command;

Run on all compute nodes;

iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 16509 -j ACCEPT
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49161 -j ACCEPT 

Restart the services

Run on all compute nodes;

systemctl mask libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
systemctl restart libvirtd.service
systemctl restart openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
ss -ntlp|grep libvirtd
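
Optionally, verify from one compute node that the other node's libvirtd now accepts unauthenticated TCP connections (hostnames follow this environment's planning):

# From compute01, connect to compute02's libvirtd over TCP (port 16509)
virsh -c qemu+tcp://compute02/system version
virsh -c qemu+tcp://compute02/system list --all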

Verify live migration

First, create an instance whose disk is stored in Ceph.

image-20211111103241520

  • The volume size is the size of the instance's root (/) filesystem

  • If the "Create New Volume" option is set to No, the instance is still stored in the compute node's local directory /var/lib/nova/instances/ and live migration is not possible

  • The other options are the same as for a normal instance and are not listed here

image-20211111103628096

Check which compute node s1 is on

[root@controller01 ~]# source admin-openrc 
[root@controller01 ~]# 
[root@controller01 ~]# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| d5a0812e-59ba-4508-ac35-636717e20f04 | s1   | ACTIVE | -          | Running     | vpc03=172.20.10.168 |
+--------------------------------------+------+--------+------------+-------------+---------------------+
[root@controller01 ~]# 
[root@controller01 ~]# nova show s1
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                                             |
| OS-EXT-AZ:availability_zone          | nova                                                                             |
| OS-EXT-SRV-ATTR:host                 | compute02                                                                        |
| OS-EXT-SRV-ATTR:hostname             | s1                                                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute02                                                                        |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000061                                                                |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                                  |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                                                |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                                  |
| OS-EXT-SRV-ATTR:reservation_id       | r-rny3edjk                                                                       |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                                         |
| OS-EXT-SRV-ATTR:user_data            | -                                                                                |
| OS-EXT-STS:power_state               | 1                                                                                |
| OS-EXT-STS:task_state                | -                                                                                |
| OS-EXT-STS:vm_state                  | active                                                                           |
| OS-SRV-USG:launched_at               | 2021-11-11T02:35:56.000000                                                       |
| OS-SRV-USG:terminated_at             | -                                                                                |
| accessIPv4                           |                                                                                  |
| accessIPv6                           |                                                                                  |
| config_drive                         |                                                                                  |
| created                              | 2021-11-11T02:35:48Z                                                             |
| description                          | -                                                                                |
| flavor:disk                          | 10                                                                               |
| flavor:ephemeral                     | 0                                                                                |
| flavor:extra_specs                   | {}                                                                               |
| flavor:original_name                 | instance1                                                                        |
| flavor:ram                           | 256                                                                              |
| flavor:swap                          | 128                                                                              |
| flavor:vcpus                         | 1                                                                                |
| hostId                               | 5822a85b8dd4e33ef68488497628775f8a77b492223e44535c31858d                         |
| host_status                          | UP                                                                               |
| id                                   | d5a0812e-59ba-4508-ac35-636717e20f04                                             |
| image                                | Attempt to boot from volume - no image supplied                                  |
| key_name                             | -                                                                                |
| locked                               | False                                                                            |
| locked_reason                        | -                                                                                |
| metadata                             | {}                                                                               |
| name                                 | s1                                                                               |
| os-extended-volumes:volumes_attached | [{"id": "03a348c2-f5cb-4258-ae7c-f4b4462c9856", "delete_on_termination": false}] |
| progress                             | 0                                                                                |
| security_groups                      | default                                                                          |
| server_groups                        | []                                                                               |
| status                               | ACTIVE                                                                           |
| tags                                 | []                                                                               |
| tenant_id                            | 60f490ceabcb493db09bdd4c1990655f                                                 |
| trusted_image_certificates           | -                                                                                |
| updated                              | 2021-11-11T02:35:57Z                                                             |
| user_id                              | 05a3ad27698e41b0a3e10a6daffbf64e                                                 |
| vpc03 network                        | 172.20.10.168                                                                    |
+--------------------------------------+----------------------------------------------------------------------------------+
[root@controller01 ~]# 
# Currently on compute02; migrate it to compute01
[root@controller01 ~]# nova live-migration s1 compute01
[root@controller01 ~]# 
[root@controller01 ~]# nova list
+--------------------------------------+------+-----------+------------+-------------+---------------------+
| ID                                   | Name | Status    | Task State | Power State | Networks            |
+--------------------------------------+------+-----------+------------+-------------+---------------------+
| d5a0812e-59ba-4508-ac35-636717e20f04 | s1   | MIGRATING | migrating  | Running     | vpc03=172.20.10.168 |
+--------------------------------------+------+-----------+------------+-------------+---------------------+
[root@controller01 ~]# 
[root@controller01 ~]# 
[root@controller01 ~]# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| d5a0812e-59ba-4508-ac35-636717e20f04 | s1   | ACTIVE | -          | Running     | vpc03=172.20.10.168 |
+--------------------------------------+------+--------+------------+-------------+---------------------+
[root@controller01 ~]# 
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                                             |
| OS-EXT-AZ:availability_zone          | nova                                                                             |
| OS-EXT-SRV-ATTR:host                 | compute01                                                                        |
| OS-EXT-SRV-ATTR:hostname             | s1                                                                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute01                                                                        |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000061                                                                |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                                  |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                                                |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                                  |
| OS-EXT-SRV-ATTR:reservation_id       | r-rny3edjk                                                                       |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                                         |
| OS-EXT-SRV-ATTR:user_data            | -                                                                                |
| OS-EXT-STS:power_state               | 1                                                                                |
| OS-EXT-STS:task_state                | -                                                                                |
| OS-EXT-STS:vm_state                  | active                                                                           |
| OS-SRV-USG:launched_at               | 2021-11-11T02:35:56.000000                                                       |
| OS-SRV-USG:terminated_at             | -                                                                                |
| accessIPv4                           |                                                                                  |
| accessIPv6                           |                                                                                  |
| config_drive                         |                                                                                  |
| created                              | 2021-11-11T02:35:48Z                                                             |
| description                          | -                                                                                |
| flavor:disk                          | 10                                                                               |
| flavor:ephemeral                     | 0                                                                                |
| flavor:extra_specs                   | {}                                                                               |
| flavor:original_name                 | instance1                                                                        |
| flavor:ram                           | 256                                                                              |
| flavor:swap                          | 128                                                                              |
| flavor:vcpus                         | 1                                                                                |
| hostId                               | bcb1cd1c0027f77a3e41d871686633a7e9dc272b27252fadf846e887                         |
| host_status                          | UP                                                                               |
| id                                   | d5a0812e-59ba-4508-ac35-636717e20f04                                             |
| image                                | Attempt to boot from volume - no image supplied                                  |
| key_name                             | -                                                                                |
| locked                               | False                                                                            |
| locked_reason                        | -                                                                                |
| metadata                             | {}                                                                               |
| name                                 | s1                                                                               |
| os-extended-volumes:volumes_attached | [{"id": "03a348c2-f5cb-4258-ae7c-f4b4462c9856", "delete_on_termination": false}] |
| progress                             | 0                                                                                |
| security_groups                      | default                                                                          |
| server_groups                        | []                                                                               |
| status                               | ACTIVE                                                                           |
| tags                                 | []                                                                               |
| tenant_id                            | 60f490ceabcb493db09bdd4c1990655f                                                 |
| trusted_image_certificates           | -                                                                                |
| updated                              | 2021-11-11T02:39:41Z                                                             |
| user_id                              | 05a3ad27698e41b0a3e10a6daffbf64e                                                 |
| vpc03 network                        | 172.20.10.168                                                                    |
+--------------------------------------+----------------------------------------------------------------------------------+
[root@controller01 ~]# 
# Check which node the instance is on
[root@controller01 ~]# nova hypervisor-servers compute01
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| ID                                   | Name              | Hypervisor ID                        | Hypervisor Hostname |
+--------------------------------------+-------------------+--------------------------------------+---------------------+
| d5a0812e-59ba-4508-ac35-636717e20f04  | instance-0000006d | ed0a899f-d898-4a73-9100-a69a26edb932 | compute01           |
+--------------------------------------+-------------------+--------------------------------------+---------------------+


# On the compute node, the instance's libvirt/qemu XML records the attached disks in the <disk type='network' device='disk'> sections
[root@compute01 ~]# cat /etc/libvirt/qemu/instance-0000006d.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit instance-0000006d
or other application using the libvirt API.
-->

<domain type='qemu'>
  <name>instance-0000006d</name>
  <uuid>e4bbad3e-499b-442b-9789-5fb386edfb3f</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="20.6.0-1.el7"/>
      <nova:name>s1</nova:name>
      <nova:creationTime>2021-11-11 05:34:19</nova:creationTime>
      <nova:flavor name="instance2">
        <nova:memory>2048</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>2</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="05a3ad27698e41b0a3e10a6daffbf64e">admin</nova:user>
        <nova:project uuid="60f490ceabcb493db09bdd4c1990655f">admin</nova:project>
      </nova:owner>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <shares>2048</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>RDO</entry>
      <entry name='product'>OpenStack Compute</entry>
      <entry name='version'>20.6.0-1.el7</entry>
      <entry name='serial'>e4bbad3e-499b-442b-9789-5fb386edfb3f</entry>
      <entry name='uuid'>e4bbad3e-499b-442b-9789-5fb386edfb3f</entry>
      <entry name='family'>Virtual Machine</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='1'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <auth username='cinder'>
        <secret type='ceph' uuid='bae8efd1-e319-48cc-8fd0-9213dd0e3497'/>
      </auth>
      <source protocol='rbd' name='volumes/volume-4106914a-7f5c-4723-b3f8-410cb955d6d3'>
        <host name='10.10.50.51' port='6789'/>
        <host name='10.10.50.52' port='6789'/>
        <host name='10.10.50.53' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
      <serial>4106914a-7f5c-4723-b3f8-410cb955d6d3</serial>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <auth username='cinder'>
        <secret type='ceph' uuid='bae8efd1-e319-48cc-8fd0-9213dd0e3497'/>
      </auth>
      <source protocol='rbd' name='volumes/volume-6c5c6991-ca91-455d-835b-c3e0e9651ef6'>
        <host name='10.10.50.51' port='6789'/>
        <host name='10.10.50.52' port='6789'/>
        <host name='10.10.50.53' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
      <serial>6c5c6991-ca91-455d-835b-c3e0e9651ef6</serial>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <interface type='bridge'>
      <mac address='fa:16:3e:66:ba:0c'/>
      <source bridge='brq7f327278-8f'/>
      <target dev='tapea75128c-89'/>
      <model type='virtio'/>
      <driver name='qemu'/>
      <mtu size='1450'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <log file='/var/lib/nova/instances/e4bbad3e-499b-442b-9789-5fb386edfb3f/console.log' append='off'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <log file='/var/lib/nova/instances/e4bbad3e-499b-442b-9789-5fb386edfb3f/console.log' append='off'/>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <stats period='10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
  • Migration also transfers the instance's entire memory state; watch the free resources on the destination node
  • The instance experiences a brief interruption during migration; its length depends on the instance's memory size
  • If migration fails, check the logs: /var/log/nova/nova-conductor.log on the controller nodes or /var/log/nova/nova-compute.log on the compute nodes

15. Replacing Linux bridge with Open vSwitch

Principles and common commands of Open vSwitch.
Comparing the pros and cons of the Linux bridge and Open vSwitch plugins in Neutron:

When virtual switches come up today, Open vSwitch is usually the first choice, because the individuals and companies backing it want an open model for integrating their services into OpenStack. The Open vSwitch community has done a great deal of work to establish it as the dominant virtual switch, expecting it to provide the best possible switching services as software-defined networking (SDN) takes over. However, the complexity of Open vSwitch leaves users longing for simpler networking solutions, and simple bridging technologies such as Linux bridge are needed to support cloud deployments.

Open vSwitch supporters, however, point out that Linux bridge lacks a scalable tunnelling model: it supported GRE tunnels but not the newer, more scalable VXLAN model. Network experts holding this view tend to be firmly convinced that a complex solution beats a simple one.

Of course, Linux bridge has evolved, which helps narrow the gap with Open vSwitch, including the addition of VXLAN tunnelling support. At larger network scales, however, the simplicity of Linux bridge may deliver even greater value.

As we know, before the Liberty release the official OpenStack installation guides used Open vSwitch in their examples, and OpenStack user surveys show far more deployments using Open vSwitch than Linux bridge.

Before Liberty, the official community documentation used neutron-plugin-openvswitch-agent; starting with Liberty it switched to neutron-plugin-linuxbridge-agent. The documentation left only one sentence about it, essentially saying that Linux bridge is simpler:

"In comparison to provider networks with Open vSwitch (OVS), this scenario relies completely on native Linux networking services which makes it the simplest of all scenarios in this guide."

The following compares the strengths and weaknesses of Open vSwitch and Linux bridge:
① Open vSwitch still has a number of stability issues, for example:
1. Kernel panics (1.10)
2. ovs-vswitchd segfaults (1.11)
3. Broadcast storms
4. Data corruption (2.01)

② Compared with Linux bridge, Open vSwitch offers:
1. QoS configuration, e.g. different speeds and bandwidth per VM
2. Traffic monitoring
3. Packet analysis
4. OpenFlow support in OVS, separating the control logic from the physical switching network

③ Why Linux bridge is still usable:
1. Stability and reliability requirements: Linux bridge has well over a decade of use and is very mature.
2. Easier troubleshooting
3. Community support
4. Overlay networks with VXLAN are still possible (requires Linux kernel 3.9 or later)

④ Limitations of Linux bridge:
1. Neutron DVR does not yet support Linux bridge
2. No GRE support
3. Some features provided by OVS are not supported by Neutron

⑤ In the long run, as its stability improves further, Open vSwitch will become mainstream in production environments.

From this we can see:
(1) OVS implements most features natively, which in the early stages inevitably brought potential stability and debuggability issues;
(2) Linux bridge relies on other long-established modules to implement its features, so it is comparatively stable;
(3) The two have no real gap in core functionality; Open vSwitch only adds some features and optimizations around centralized control and performance, and test results show no significant performance difference between them.

image-20211116110205045

In short, Open vSwitch and Linux bridge each have suitable scenarios, giving cloud users two solid networking options. Open vSwitch has the edge where SDN-style centralized control or newer networking features are required, while Linux bridge is the better choice for stability and large-scale deployments.

So far we have been using the linuxbridge + vxlan mode from the official guide;
here we convert it to the openvswitch + vxlan mode.

Current distribution of network agents in the cluster

[root@controller01 ~]# 
[root@controller01 ~]# openstack network agent list
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host         | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
| 009e7ba3-adb4-4940-9342-d38a73b3ad8f | L3 agent           | controller02 | nova              | :-)   | UP    | neutron-l3-agent          |
| 1ade0870-31e5-436b-a450-be7fcbeff5ff | Linux bridge agent | controller02 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 3e552a95-1e6e-48cc-8af0-a451fe5dd7f3 | Linux bridge agent | compute01    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 4c560347-6d6d-4ef5-a47c-95b72b2a8e97 | L3 agent           | controller03 | nova              | :-)   | UP    | neutron-l3-agent          |
| 591a9410-472e-4ac3-be07-7a64794fe0aa | DHCP agent         | controller03 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 5fbc8d24-28d7-470f-b692-f217a439b3cb | DHCP agent         | controller01 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 626f4429-51c1-46c5-bce0-6fd2005eb62e | Linux bridge agent | compute02    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 8324ca26-44ce-4ad4-9429-80dbf820d27d | Metadata agent     | controller01 | None              | :-)   | UP    | neutron-metadata-agent    |
| 86a4c4e7-7062-46b3-b48e-236f47d5bb8b | DHCP agent         | controller02 | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 97ad79b7-9c05-495d-a22e-f23c0b63eba6 | Linux bridge agent | controller01 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c8339cc2-d09b-464a-b331-e5f795a7ac90 | Metadata agent     | controller03 | None              | :-)   | UP    | neutron-metadata-agent    |
| c84422ef-576f-47e8-902c-10668c19f9c3 | Linux bridge agent | controller03 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| d2164950-54cd-46e8-a2af-e1909f7603d3 | Metadata agent     | controller02 | None              | :-)   | UP    | neutron-metadata-agent    |
| dfd5ddbb-4917-493c-9d3d-8048a55c2e46 | L3 agent           | controller01 | nova              | :-)   | UP    | neutron-l3-agent          |
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
[root@controller01 ~]# 

We only need to replace neutron-linuxbridge-agent with neutron-openvswitch-agent on the controller and compute nodes.

Preparation

Delete the existing linuxbridge networks

The existing linuxbridge networks can be deleted directly from the dashboard.
Delete in this order: release floating IP ports -> delete routers -> delete networks (a CLI sketch follows the verification command below).
Verify that no networks remain; the output should be empty:

openstack network list
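
For reference, a rough CLI sketch of the same cleanup in that order (the resource names/IDs are placeholders for whatever exists in your environment):

# Release floating IPs/ports, detach the router, then delete router and networks
openstack floating ip delete <floating-ip-id>
openstack router remove subnet <router> <subnet>
openstack router unset --external-gateway <router>
openstack router delete <router>
openstack network delete <network>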

Check which nodes run the linuxbridge agent

[root@controller01 ~]# openstack network agent list |grep linuxbridge
| 1ade0870-31e5-436b-a450-be7fcbeff5ff | Linux bridge agent | controller02 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 3e552a95-1e6e-48cc-8af0-a451fe5dd7f3 | Linux bridge agent | compute01    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 626f4429-51c1-46c5-bce0-6fd2005eb62e | Linux bridge agent | compute02    | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 97ad79b7-9c05-495d-a22e-f23c0b63eba6 | Linux bridge agent | controller01 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| c84422ef-576f-47e8-902c-10668c19f9c3 | Linux bridge agent | controller03 | None              | :-)   | UP    | neutron-linuxbridge-agent |
[root@controller01 ~]# 

Stop and uninstall neutron-linuxbridge-agent

On all nodes where linuxbridge is installed;

systemctl disable neutron-linuxbridge-agent.service
systemctl stop neutron-linuxbridge-agent.service 
yum remove -y openstack-neutron-linuxbridge

Install Open vSwitch

On all controller and compute nodes;

yum install -y openstack-neutron-openvswitch libibverbs net-tools

Kernel configuration

echo '
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
' >> /etc/sysctl.conf
sysctl -p

Controller node configuration changes

controller01 is used as the example;

Enable the router service plugin; it was already set earlier, so this step can be skipped

openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router

Modify nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver

Back up and modify the ml2 configuration

[root@controller01 ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.linuxbridge}

[root@controller01 ~]# cat /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan,vlan,flat
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = external

[ml2_type_vlan]
network_vlan_ranges = vlan:3001:3500

[ml2_type_vxlan]
vni_ranges = 10001:20000

[securitygroup]
enable_ipset = true
  • In fact only the mechanism_drivers option changed

Delete the bridge created for the external network

Run on all controller nodes;

[root@controller01 ~]# ip a
...
...
...
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master brqa5608ef7-90 state UP group default qlen 1000
    link/ether 00:0c:29:5e:76:2a brd ff:ff:ff:ff:ff:ff
...
...
...
25: brqa5608ef7-90: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:5e:76:2a brd ff:ff:ff:ff:ff:ff
    inet 10.10.20.32/24 brd 10.10.20.255 scope global brqa5608ef7-90
       valid_lft forever preferred_lft forever
    inet6 fe80::68c8:fdff:fe34:8df1/64 scope link 
       valid_lft forever preferred_lft forever
...
...
...
  • Note that the IP address of the ens34 external interface currently sits on the bridge
# Inspect the bridges
[root@controller01 ~]# yum install bridge-utils -y
[root@controller01 ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
brq26d46ca1-98          8000.52a94acf41ca       no              tapb2809284-84
                                                        vxlan-10001
brq7f327278-8f          8000.0abc052c5b02       no              tap47feef60-25
                                                        tapb7ed33a8-5b
                                                        vxlan-10002
brqa5608ef7-90          8000.000c29dd69e5       no              ens34
                                                        tap91e1f440-aa
                                                        tape82de1bc-c7
brqc7db4508-59          8000.36f9e8146c95       no              tap261f8904-b4
                                                        tapf76579c3-d1
                                                        vxlan-10003
[root@controller01 ~]# brctl show brqa5608ef7-90
bridge name     bridge id               STP enabled     interfaces
brqa5608ef7-90          8000.000c29dd69e5       no              ens34
                                                        tap91e1f440-aa
                                                        tape82de1bc-c7

# Remove the bridge ports
[root@controller01 ~]# brctl delif brqa5608ef7-90 tap91e1f440-aa
[root@controller01 ~]# 
[root@controller01 ~]# brctl delif brqa5608ef7-90 tape82de1bc-c7
[root@controller01 ~]# 
[root@controller01 ~]# brctl delif brqa5608ef7-90 ens34

# Bring the bridge down (requires ifconfig, not installed by default)
[root@controller01 ~]# yum install net-tools -y 
[root@controller01 ~]# ifconfig brqa5608ef7-90 down

# Delete the bridge
[root@controller01 ~]# brctl delbr brqa5608ef7-90
[root@controller01 ~]# brctl show 
bridge name     bridge id               STP enabled     interfaces
brq26d46ca1-98          8000.52a94acf41ca       no              tapb2809284-84
                                                        vxlan-10001
brq7f327278-8f          8000.0abc052c5b02       no              tap47feef60-25
                                                        tapb7ed33a8-5b
                                                        vxlan-10002
brqc7db4508-59          8000.36f9e8146c95       no              tap261f8904-b4
                                                        tapf76579c3-d1
                                                        vxlan-10003
# Restart the ens34 interface
[root@controller01 ~]# ifdown ens34
[root@controller01 ~]# ifup ens34
[root@controller01 ~]# ip a
...
...
...
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:dd:69:e5 brd ff:ff:ff:ff:ff:ff
    inet 10.10.20.31/24 brd 10.10.20.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::5f52:19c8:6c65:c9f3/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
...
...
...

Modify l3_agent.ini

Run on all controller nodes;

[root@controller01 ~]# cp /etc/neutron/l3_agent.ini{,.linuxbridge}

[root@controller01 ~]# cat /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex

Modify dhcp_agent.ini

Run on all controller nodes;

[root@controller01 ~]# cp /etc/neutron/dhcp_agent.ini{,.linuxbridge}

[root@controller01 ~] cat /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Configure openvswitch_agent.ini

# Back up the configuration file
cp -a /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak > /etc/neutron/plugins/ml2/openvswitch_agent.ini

Set local_ip to the current node's address

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
# VTEP endpoint for the tenant tunnel (vxlan) network; this corresponds to the planned ens35 address, adjust per node
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.10.30.31

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings  external:br-ex
#openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan,gre
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population true
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent arp_responder true

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Start the openvswitch service

systemctl enable openvswitch.service
systemctl restart openvswitch.service
systemctl status openvswitch.service

Create the br-ex bridge

Move the IP address to the bridge and add this to the boot sequence

Change the IP address to the current node's ens34 address; controller01 is used as the example;

[root@controller01 ~]# echo '#
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens34
ovs-vsctl show
ifconfig ens34 0.0.0.0 
ifconfig br-ex 10.10.20.31/24
#route add default gw 10.10.20.2 # optional: add a default route
#' >> /etc/rc.d/rc.local

Create and verify

[root@controller01 ~]# chmod +x /etc/rc.d/rc.local; tail -n 8 /etc/rc.d/rc.local | bash
ad5867f6-9ddd-4746-9de7-bc3c2b2e98f8
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "ens34"
            Interface "ens34"
    ovs_version: "2.12.0"
[root@controller01 ~]# 
[root@controller01 ~]# ifconfig br-ex
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.20.31  netmask 255.255.255.0  broadcast 10.10.20.255
        inet6 fe80::20c:29ff:fedd:69e5  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:dd:69:e5  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7  bytes 586 (586.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@controller01 ~]# 
[root@controller01 ~]# ifconfig ens34
ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::5f52:19c8:6c65:c9f3  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:dd:69:e5  txqueuelen 1000  (Ethernet)
        RX packets 1860  bytes 249541 (243.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7014  bytes 511816 (499.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Disable the NIC from starting at boot

On all controller nodes; this is done to ensure the interface created by OVS can be used safely

sed -i 's#ONBOOT=yes#ONBOOT=no#g' /etc/sysconfig/network-scripts/ifcfg-ens34

Restart the related services

Run on all controller nodes;

systemctl restart openstack-nova-api.service neutron-server.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service neutron-openvswitch-agent.service
systemctl status openstack-nova-api.service neutron-server.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service neutron-openvswitch-agent.service

Compute node configuration changes

compute01 is used as the example;

Modify nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver

Configure openvswitch_agent.ini

# Back up the configuration file
cp -a /etc/neutron/plugins/ml2/openvswitch_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/openvswitch_agent.ini.bak > /etc/neutron/plugins/ml2/openvswitch_agent.ini

Set local_ip to the current node's address

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs integration_bridge br-int
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs tunnel_bridge br-tun
# VTEP endpoint for the tenant tunnel (vxlan) network; this corresponds to the planned ens35 address, adjust per node
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.10.30.41

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent tunnel_types vxlan,gre
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent l2_population true
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini agent arp_responder true

openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup enable_security_group true
#openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini securitygroup firewall_driver neutron.agent.linux.openvswitch_firewall.OVSFirewallDriver

Start the openvswitch service

systemctl enable openvswitch.service
systemctl restart openvswitch.service
systemctl status openvswitch.service

Start the related services

systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-compute.service

systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service
systemctl status neutron-openvswitch-agent.service

Remove the linuxbridge agents

On a controller node, delete the decommissioned linuxbridge agents by their IDs

openstack network agent delete 1ade0870-31e5-436b-a450-be7fcbeff5ff
openstack network agent delete 3e552a95-1e6e-48cc-8af0-a451fe5dd7f3
openstack network agent delete 626f4429-51c1-46c5-bce0-6fd2005eb62e
openstack network agent delete 97ad79b7-9c05-495d-a22e-f23c0b63eba6
openstack network agent delete c84422ef-576f-47e8-902c-10668c19f9c3

Verification

[root@controller01 ~]# openstack network agent list --agent-type open-vswitch
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host         | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
| 007a4eb8-38f7-4baa-942e-2c9be2f23026 | Open vSwitch agent | controller03 | None              | :-)   | UP    | neutron-openvswitch-agent |
| 3cdf307d-0da9-4638-8215-860a7f8a949c | Open vSwitch agent | compute01    | None              | :-)   | UP    | neutron-openvswitch-agent |
| 8a17faff-c84e-46e3-9407-9b924d5f86d5 | Open vSwitch agent | controller02 | None              | :-)   | UP    | neutron-openvswitch-agent |
| 96cf5c94-d256-478e-87e8-fc22e50ea236 | Open vSwitch agent | compute02    | None              | :-)   | UP    | neutron-openvswitch-agent |
| ca779df0-516b-433b-8a2e-dfd73bd67110 | Open vSwitch agent | controller01 | None              | :-)   | UP    | neutron-openvswitch-agent |
+--------------------------------------+--------------------+--------------+-------------------+-------+-------+---------------------------+
[root@controller01 ~]# 
[root@controller01 ~]# ovs-vsctl show
6f229e8d-5890-4f34-943c-cfa388e8359d
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port "ens34"
            Interface "ens34"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port "vxlan-0a0a1e21"
            Interface "vxlan-0a0a1e21"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.10.30.31", out_key=flow, remote_ip="10.10.30.33"}
        Port "vxlan-0a0a1e29"
            Interface "vxlan-0a0a1e29"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.10.30.31", out_key=flow, remote_ip="10.10.30.41"}
        Port "vxlan-0a0a1e2a"
            Interface "vxlan-0a0a1e2a"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.10.30.31", out_key=flow, remote_ip="10.10.30.42"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a0a1e20"
            Interface "vxlan-0a0a1e20"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.10.30.31", out_key=flow, remote_ip="10.10.30.32"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port "tapc3a68efd-d4"
            tag: 3
            Interface "tapc3a68efd-d4"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-6dfa2fed-6d"
            tag: 5
            Interface "qr-6dfa2fed-6d"
                type: internal
        Port "tapa69307b5-77"
            tag: 6
            Interface "tapa69307b5-77"
                type: internal
        Port "qr-f46d6fd0-a4"
            tag: 6
            Interface "qr-f46d6fd0-a4"
                type: internal
        Port "qg-049e7fcb-cc"
            tag: 3
            Interface "qg-049e7fcb-cc"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "ha-d7a1ea8b-88"
            tag: 4
            Interface "ha-d7a1ea8b-88"
                type: internal
        Port "tape0e73419-6f"
            tag: 5
            Interface "tape0e73419-6f"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    ovs_version: "2.12.0"
[root@compute01 ~]# ovs-vsctl show
f17eae04-efeb-4bfe-adfb-05c024e26a48
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port "qvo0fcef193-78"
            tag: 1
            Interface "qvo0fcef193-78"
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: system
        Port "vxlan-0a0a1e20"
            Interface "vxlan-0a0a1e20"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.10.30.41", out_key=flow, remote_ip="10.10.30.32"}
        Port "vxlan-0a0a1e1f"
            Interface "vxlan-0a0a1e1f"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.10.30.41", out_key=flow, remote_ip="10.10.30.31"}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a0a1e21"
            Interface "vxlan-0a0a1e21"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.10.30.41", out_key=flow, remote_ip="10.10.30.33"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.12.0"
[root@compute01 ~]# 
  • br-ex connects to the external network and carries traffic between VMs on different networks; br-int carries traffic between VMs on the same network on the same compute node; br-tun carries traffic between VMs on the same network across compute nodes
  • br-ex must be created manually, while br-int and br-tun are created automatically by neutron-openvswitch-agent
  • Compute nodes have no external network interface, so they have no br-ex bridge

十六、Octavia Load Balancer Deployment

Update the haproxy configuration

Add the following to /etc/haproxy/haproxy.cfg on the ha (high-availability) nodes:

 listen octavia_api_cluster
  bind 10.10.10.10:9876
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller01 10.10.10.31:9876 check inter 2000 rise 2 fall 5
  server controller02 10.10.10.32:9876 check inter 2000 rise 2 fall 5
  server controller03 10.10.10.33:9876 check inter 2000 rise 2 fall 5
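
After adding this block, reload haproxy on both ha nodes so the new frontend takes effect (a minimal check, assuming haproxy is managed by systemd as in the earlier HA setup; the backends will stay DOWN until octavia-api is running):

systemctl reload haproxy
systemctl status haproxy

# on the node currently holding the VIP, port 9876 should now be listening
ss -tnl | grep 9876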

Create the database

Run on any one controller node:

mysql -uroot -p123456
CREATE DATABASE octavia;
GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'%' IDENTIFIED BY '123456';
flush privileges;
exit;

Create the keystone identities for octavia (user, role, endpoints)

Run on any one controller node:

. ~/admin-openrc
openstack user create --domain default --password 123456 octavia
#openstack role add --project admin --user octavia admin
openstack role add --project service --user octavia admin
openstack service create --name octavia --description "OpenStack Octavia" load-balancer
openstack endpoint create --region RegionOne load-balancer public http://10.10.10.10:9876
openstack endpoint create --region RegionOne load-balancer internal http://10.10.10.10:9876
openstack endpoint create --region RegionOne load-balancer admin http://10.10.10.10:9876
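
A quick sanity check that the three endpoints were registered (optional):

openstack endpoint list --service load-balancer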

Generate octavia-openrc; run on all controller nodes:

cat >> ~/octavia-openrc << EOF
# octavia-openrc
export OS_USERNAME=octavia
export OS_PASSWORD=123456
export OS_PROJECT_NAME=service
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://10.10.10.10:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

Install the packages

Run on all controller nodes:

yum install -y openstack-octavia-api.noarch openstack-octavia-common.noarch openstack-octavia-health-manager.noarch openstack-octavia-housekeeping.noarch openstack-octavia-worker.noarch openstack-octavia-diskimage-create.noarch python2-octaviaclient.noarch python-pip.noarch net-tools bridge-utils

Or, as a second method, install the octavia client from source:

git clone https://github.com/openstack/python-octaviaclient.git -b stable/train
cd python-octaviaclient
pip install -r requirements.txt -e .

Build the Amphora image

Official guide: Building Octavia Amphora Images — octavia 9.1.0.dev16 documentation (openstack.org)

Run on any one controller node.

Upgrade git

The newest git that yum provides on CentOS 7.9 is only 1.8.x, which lacks the -C option, so the image build fails:

# error during the image build if git is not upgraded
2021-11-18 02:52:31.402 | Unknown option: -C
2021-11-18 02:52:31.402 | usage: git [--version] [--help] [-c name=value]
2021-11-18 02:52:31.402 |            [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
2021-11-18 02:52:31.402 |            [-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
2021-11-18 02:52:31.402 |            [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
2021-11-18 02:52:31.402 |            <command> [<args>]

# upgrade git
yum install gcc openssl-devel libcurl-devel expat-devel zlib-devel perl cpio gettext-devel xmlto docbook2X autoconf xmlto -y

ln -s /usr/bin/db2x_docbook2texi /usr/bin/docbook2x-texi

# install asciidoc:
# download asciidoc from the official site
# http://www.methods.co.nz/asciidoc/index.html 
# http://sourceforge.net/projects/asciidoc/
# build and install
cp asciidoc-8.5.2.tar.gz /root/src
cd /root/src
tar xvfz asciidoc-8.5.2.tar.gz
cd asciidoc-8.5.2
./configure
make && make install

git clone https://github.com/git/git
cd git
make prefix=/usr/local install install-doc install-html install-info

cd /usr/bin
mv git{,.bak}
mv git-receive-pack{,.bak}
mv git-shell{,.bak}
mv git-upload-archive{,.bak}
mv git-upload-pack{,.bak}
ln -s /usr/local/bin/git* /usr/bin/

# log out and back in, then check
$ git --version
git version 2.34.0

Start the build

yum install python-pip -y
pip install pip==20.0.1
pip install virtualenv
yum install python3 -y
virtualenv -p /usr/bin/python3 octavia_disk_image_create

source octavia_disk_image_create/bin/activate

git config --global http.postBuffer 242800000

git clone https://github.com/openstack/octavia.git
cd octavia/diskimage-create/
pip install -r requirements.txt

yum install qemu-img git e2fsprogs policycoreutils-python-utils -y

#export DIB_REPOLOCATION_amphora_agent=/root/octavia
./diskimage-create.sh -i centos-minimal -t qcow2 -o amphora-x64-haproxy -s 5

$ ll
total 1463100
drwxr-xr-x 3 root root         27 Nov 18 11:31 amphora-x64-haproxy.d
-rw-r--r-- 1 root root  490209280 Nov 18 11:32 amphora-x64-haproxy.qcow2
  • -h shows the help, -i selects the base distribution, -r sets a root password (not recommended for production environments)
  • the -s size should match the disk size of the flavor created later
  • the build script pulls packages from overseas mirrors, so image creation may fail on a poor network connection
  • do not add -g stable/train here; images built that way are broken, and the resulting amphora instances fail with the error below:
# error from /var/log/amphora-agent.log inside the amphora instance
# this problem took several days to track down
[2021-11-19 06:09:16 +0000] [1086] [ERROR] Socket error processing request.
Traceback (most recent call last):
  File "/opt/amphora-agent-venv/lib64/python3.6/site-packages/gunicorn/workers/sync.py", line 133, in handle
    req = next(parser)
  File "/opt/amphora-agent-venv/lib64/python3.6/site-packages/gunicorn/http/parser.py", line 41, in __next__
    self.mesg = self.mesg_class(self.cfg, self.unreader, self.req_count)
  File "/opt/amphora-agent-venv/lib64/python3.6/site-packages/gunicorn/http/message.py", line 180, in __init__
    super().__init__(cfg, unreader)
  File "/opt/amphora-agent-venv/lib64/python3.6/site-packages/gunicorn/http/message.py", line 53, in __init__
    unused = self.parse(self.unreader)
  File "/opt/amphora-agent-venv/lib64/python3.6/site-packages/gunicorn/http/message.py", line 192, in parse
    self.get_data(unreader, buf, stop=True)
  File "/opt/amphora-agent-venv/lib64/python3.6/site-packages/gunicorn/http/message.py", line 183, in get_data
    data = unreader.read()
  File "/opt/amphora-agent-venv/lib64/python3.6/site-packages/gunicorn/http/unreader.py", line 37, in read
    d = self.chunk()
  File "/opt/amphora-agent-venv/lib64/python3.6/site-packages/gunicorn/http/unreader.py", line 64, in chunk
    return self.sock.recv(self.mxchunk)
  File "/usr/lib64/python3.6/ssl.py", line 956, in recv
    return self.read(buflen)
  File "/usr/lib64/python3.6/ssl.py", line 833, in read
    return self._sslobj.read(len, buffer)
  File "/usr/lib64/python3.6/ssl.py", line 592, in read
    v = self._sslobj.read(len)
OSError: [Errno 0] Error

Building from a manually downloaded CentOS image

In the approach above the script downloads the base image itself; you can also download the image in advance:

wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-2009.qcow2

yum install yum-utils -y

yum install python3 -y
pip3 install pyyaml
pip3 install diskimage-builder
cp /usr/bin/pip3 /usr/bin/pip
pip install --upgrade pip

yum -y install libvirt libguestfs-tools

systemctl start libvirtd

export LIBGUESTFS_BACKEND=direct

# this step takes a while
virt-customize -a CentOS-7-x86_64-GenericCloud-2009.qcow2  --selinux-relabel --run-command 'yum install -y centos-release-openstack-train'

git config --global http.postBuffer 242800000

export DIB_REPOLOCATION_amphora_agent=/root/octavia
export DIB_LOCAL_IMAGE=/root/octavia/diskimage-create/CentOS-7-x86_64-GenericCloud-2009.qcow2

./diskimage-create.sh -i centos-minimal -t qcow2 -o amphora-x64-haproxy -s 5
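
Whichever way the image was built, it can be sanity-checked before uploading (an optional step):

qemu-img info amphora-x64-haproxy.qcow2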

Import the image

Run on any one controller node:

$ . ~/octavia-openrc
$ openstack image create --disk-format qcow2 --container-format bare --private --tag amphora --file amphora-x64-haproxy.qcow2 amphora-x64-haproxy

$ openstack image list
+--------------------------------------+---------------------+--------+
| ID                                   | Name                | Status |
+--------------------------------------+---------------------+--------+
| 677c0174-dffc-47d8-828b-26aeb0ba44a5 | amphora-x64-haproxy | active |
| c1f2829f-1f52-48c9-8c23-689f1a745ebd | cirros-qcow2        | active |
| 4ea7b60f-b7ed-4b7e-8f37-a02c85099ec5 | cirros-qcow2-0.5.2  | active |
+--------------------------------------+---------------------+--------+
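
Optionally confirm the image carries the amphora tag that octavia filters on when booting amphorae (column names may vary slightly between client versions):

openstack image show amphora-x64-haproxy -c tags -c status -c visibility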

The image is registered on only one controller node, so it must be copied manually to the other controllers and the file ownership fixed, otherwise booting amphorae will fail; this step is unnecessary once glance uses ceph as its backend.

[root@controller03 ~]# ls -l /var/lib/glance/images/
total 510596
-rw-r----- 1 glance glance  16300544 Nov 16 19:49 4ea7b60f-b7ed-4b7e-8f37-a02c85099ec5
-rw-r----- 1 glance glance 490209280 Nov 18 12:20 677c0174-dffc-47d8-828b-26aeb0ba44a5
-rw-r----- 1 glance glance  16338944 Nov 16 15:42 c1f2829f-1f52-48c9-8c23-689f1a745ebd
[root@controller03 ~]# 
[root@controller03 ~]# cd /var/lib/glance/images/
[root@controller03 ~]# 
[root@controller03 images]# scp 677c0174-dffc-47d8-828b-26aeb0ba44a5 controller01:/var/lib/glance/images/
677c0174-dffc-47d8-828b-26aeb0ba44a5                                                           100%  468MB  94.4MB/s   00:04    
[root@controller03 images]# 
[root@controller03 images]# scp 677c0174-dffc-47d8-828b-26aeb0ba44a5 controller02:/var/lib/glance/images/
677c0174-dffc-47d8-828b-26aeb0ba44a5                                                           100%  468MB  97.4MB/s   00:04    

[root@controller01 ~]# chown -R glance:glance /var/lib/glance/images/*

[root@controller02 ~]# chown -R glance:glance /var/lib/glance/images/*

Create the instance flavor

Run on any one controller node:

# adjust the values as appropriate; the disk size must not be smaller than the -s value used when building the image
$ openstack flavor create --id 200 --vcpus 1 --ram 512 --disk 5 "amphora" --private
$ openstack flavor list --all
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| ID                                   | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| 0e97b5a0-8cca-4126-baeb-9d0194129985 | instance1 |  128 |    1 |         0 |     1 | True      |
| 200                                  | amphora   | 1024 |    5 |         0 |     1 | False     |
| 9b87cf02-6e3a-4b00-ac26-048d3b611d97 | instance2 | 2048 |   10 |         0 |     4 | True      |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+

Create the certificates

These certificates secure the communication between the octavia controllers and the amphorae, with mutual (two-way) TLS authentication.

Official guide: Octavia Certificate Configuration Guide

Run on any one controller node.

Create the directories

mkdir certs
chmod 700 certs
cd certs

Create the certificate configuration file, vi openssl.cnf:

# OpenSSL root CA configuration file.

[ ca ]
# `man ca`
default_ca = CA_default

[ CA_default ]
# Directory and file locations.
dir               = ./
certs             = $dir/certs
crl_dir           = $dir/crl
new_certs_dir     = $dir/newcerts
database          = $dir/index.txt
serial            = $dir/serial
RANDFILE          = $dir/private/.rand

# The root key and root certificate.
private_key       = $dir/private/ca.key.pem
certificate       = $dir/certs/ca.cert.pem

# For certificate revocation lists.
crlnumber         = $dir/crlnumber
crl               = $dir/crl/ca.crl.pem
crl_extensions    = crl_ext
default_crl_days  = 30

# SHA-1 is deprecated, so use SHA-2 instead.
default_md        = sha256

name_opt          = ca_default
cert_opt          = ca_default
default_days      = 3650
preserve          = no
policy            = policy_strict

[ policy_strict ]
# The root CA should only sign intermediate certificates that match.
# See the POLICY FORMAT section of `man ca`.
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

[ req ]
# Options for the `req` tool (`man req`).
default_bits        = 2048
distinguished_name  = req_distinguished_name
string_mask         = utf8only

# SHA-1 is deprecated, so use SHA-2 instead.
default_md          = sha256

# Extension to add when the -x509 option is used.
x509_extensions     = v3_ca

[ req_distinguished_name ]
# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
countryName                     = Country Name (2 letter code)
stateOrProvinceName             = State or Province Name
localityName                    = Locality Name
0.organizationName              = Organization Name
organizationalUnitName          = Organizational Unit Name
commonName                      = Common Name
emailAddress                    = Email Address

# Optionally, specify some defaults.
countryName_default             = US
stateOrProvinceName_default     = Oregon
localityName_default            =
0.organizationName_default      = OpenStack
organizationalUnitName_default  = Octavia
emailAddress_default            =
commonName_default              = example.org

[ v3_ca ]
# Extensions for a typical CA (`man x509v3_config`).
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = critical, CA:true
keyUsage = critical, digitalSignature, cRLSign, keyCertSign

[ usr_cert ]
# Extensions for client certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, email
nsComment = "OpenSSL Generated Client Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, emailProtection

[ server_cert ]
# Extensions for server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = server
nsComment = "OpenSSL Generated Server Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ crl_ext ]
# Extension for CRLs (`man x509v3_config`).
authorityKeyIdentifier=keyid:always
  • default certificate validity is 10 years and the default key length is 2048 bits

Prepare the server CA key (server certificate authority)

mkdir client_ca
mkdir server_ca
cd server_ca
mkdir certs crl newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial

# a passphrase is required; 123456 is used here
openssl genrsa -aes256 -out private/ca.key.pem 4096

chmod 400 private/ca.key.pem

Issue the server CA certificate

$ openssl req -config ../openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -sha256 -extensions v3_ca -out certs/ca.cert.pem

Enter pass phrase for private/ca.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:CN
State or Province Name [Oregon]:SICHUAN
Locality Name []:CHENGDU
Organization Name [OpenStack]:
Organizational Unit Name [Octavia]:
Common Name [example.org]:
Email Address []:
  • enter the server CA key passphrase, i.e. 123456
  • note: wherever Country Name and the other fields are requested later, keep the values consistent with the ones entered here
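
Optionally inspect the server CA certificate that was just issued (run from the server_ca directory):

openssl x509 -noout -subject -issuer -dates -in certs/ca.cert.pem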

Prepare the client CA key (client certificate authority)

cd ../client_ca
mkdir certs crl csr newcerts private
chmod 700 private
touch index.txt
echo 1000 > serial

# a passphrase is required; 123456 is used here
openssl genrsa -aes256 -out private/ca.key.pem 4096

chmod 400 private/ca.key.pem

Issue the client CA certificate

$ openssl req -config ../openssl.cnf -key private/ca.key.pem -new -x509 -days 7300 -sha256 -extensions v3_ca -out certs/ca.cert.pem

Enter pass phrase for private/ca.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:CN
State or Province Name [Oregon]:SICHUAN
Locality Name []:CHENGDU
Organization Name [OpenStack]:
Organizational Unit Name [Octavia]:
Common Name [example.org]:
Email Address []:
  • enter the client CA key passphrase, i.e. 123456

Create the client connection key (issued under the client certificate authority)

# a passphrase is required; 123456 is used here
openssl genrsa -aes256 -out private/client.key.pem 2048

Create the client certificate signing request

$ openssl req -config ../openssl.cnf -new -sha256 -key private/client.key.pem -out csr/client.csr.pem

Enter pass phrase for private/client.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [US]:CN
State or Province Name [Oregon]:SICHUAN
Locality Name []:CHENGDU
Organization Name [OpenStack]:
Organizational Unit Name [Octavia]:
Common Name [example.org]:
Email Address []:
  • enter the client connection key passphrase, i.e. 123456

Issue the client connection certificate

openssl ca -config ../openssl.cnf -extensions usr_cert -days 7300 -notext -md sha256 -in csr/client.csr.pem -out certs/client.cert.pem
  • enter the passphrase when prompted, i.e. 123456 (openssl ca reads the client CA key here)

Combine the client connection private key and certificate into a single file

# the client connection key passphrase is required
openssl rsa -in private/client.key.pem -out private/client.cert-and-key.pem
cat certs/client.cert.pem >> private/client.cert-and-key.pem
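
An optional check that the client certificate really chains to the client CA before copying files around (run from the client_ca directory):

openssl verify -CAfile certs/ca.cert.pem certs/client.cert.pem
openssl x509 -noout -subject -dates -in certs/client.cert.pem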

Copy the relevant certificates into the Octavia configuration directory

cd ..
mkdir -p /etc/octavia/certs
chmod 700 /etc/octavia/certs
cp server_ca/private/ca.key.pem /etc/octavia/certs/server_ca.key.pem
chmod 700 /etc/octavia/certs/server_ca.key.pem
cp server_ca/certs/ca.cert.pem /etc/octavia/certs/server_ca.cert.pem
cp client_ca/certs/ca.cert.pem /etc/octavia/certs/client_ca.cert.pem
cp client_ca/private/client.cert-and-key.pem /etc/octavia/certs/client.cert-and-key.pem
chmod 700 /etc/octavia/certs/client.cert-and-key.pem
chown -R octavia.octavia /etc/octavia/certs

Then copy /etc/octavia/certs to all the other controller nodes, taking care with permissions:

ssh controller02 'mkdir -p /etc/octavia/certs'
ssh controller03 'mkdir -p /etc/octavia/certs'

scp /etc/octavia/certs/* controller02:/etc/octavia/certs/
scp /etc/octavia/certs/* controller03:/etc/octavia/certs/

ssh controller02 'chmod 700 /etc/octavia/certs'
ssh controller03 'chmod 700 /etc/octavia/certs'

ssh controller02 'chmod 700 /etc/octavia/certs/server_ca.key.pem'
ssh controller03 'chmod 700 /etc/octavia/certs/server_ca.key.pem'

ssh controller02 'chmod 700 /etc/octavia/certs/client.cert-and-key.pem'
ssh controller03 'chmod 700 /etc/octavia/certs/client.cert-and-key.pem'

ssh controller02 'chown -R octavia. /etc/octavia/certs'
ssh controller03 'chown -R octavia. /etc/octavia/certs'

Create the security groups

Run on any one controller node:

. ~/octavia-openrc

# used by the amphora VMs: traffic between the LB management network and the amphorae
openstack security group create lb-mgmt-sec-grp
openstack security group rule create --protocol icmp lb-mgmt-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp

# used by the amphora VMs: traffic between the Health Manager and the amphorae
openstack security group create lb-health-mgr-sec-grp
openstack security group rule create --protocol icmp lb-health-mgr-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-health-mgr-sec-grp
openstack security group rule create --protocol tcp --dst-port 9443 lb-health-mgr-sec-grp
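
To review the rules that were created (optional):

openstack security group rule list lb-mgmt-sec-grp
openstack security group rule list lb-health-mgr-sec-grp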

Create the ssh key for logging in to amphora instances

Run on any one controller node:

. ~/octavia-openrc

mkdir -p /etc/octavia/.ssh

ssh-keygen -b 2048 -t rsa -N "" -f /etc/octavia/.ssh/octavia_ssh_key

# note the key name octavia_ssh_key; it is referenced later in the configuration file
# nova keypair-add --pub-key=/etc/octavia/.ssh/octavia_ssh_key.pub octavia_ssh_key
openstack keypair create --public-key /etc/octavia/.ssh/octavia_ssh_key.pub octavia_ssh_key

chown -R octavia. /etc/octavia/.ssh/

Then copy /etc/octavia/.ssh/ to all the other controller nodes, taking care with permissions:

ssh controller02 'mkdir -p /etc/octavia/.ssh/'
ssh controller03 'mkdir -p /etc/octavia/.ssh/'

scp /etc/octavia/.ssh/* controller02:/etc/octavia/.ssh/
scp /etc/octavia/.ssh/* controller03:/etc/octavia/.ssh/

ssh controller02 'chown -R octavia. /etc/octavia/.ssh/'
ssh controller03 'chown -R octavia. /etc/octavia/.ssh/'

Create the dhclient.conf configuration file

Run on all controller nodes:

cd ~
git clone https://github.com/openstack/octavia.git
mkdir -m755 -p /etc/dhcp/octavia
cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia

Create the management network

Run on controller01:

cd ~
. ~/octavia-openrc

# an openstack tenant tunnel network; the range can be chosen freely
OCTAVIA_MGMT_SUBNET=172.16.0.0/24
OCTAVIA_MGMT_SUBNET_START=172.16.0.100
OCTAVIA_MGMT_SUBNET_END=172.16.0.254
# 172.16.0.1 gateway, 172.16.0.2 controller01, 172.16.0.3 controller02, 172.16.0.4 controller03
OCTAVIA_MGMT_PORT_IP=172.16.0.2

openstack network create lb-mgmt-net
openstack subnet create --subnet-range $OCTAVIA_MGMT_SUBNET --allocation-pool \
  start=$OCTAVIA_MGMT_SUBNET_START,end=$OCTAVIA_MGMT_SUBNET_END \
  --network lb-mgmt-net lb-mgmt-subnet
  
# get the subnet id
SUBNET_ID=$(openstack subnet show lb-mgmt-subnet -f value -c id)
PORT_FIXED_IP="--fixed-ip subnet=$SUBNET_ID,ip-address=$OCTAVIA_MGMT_PORT_IP"

# create the port used by controller01
MGMT_PORT_ID=$(openstack port create --security-group \
  lb-health-mgr-sec-grp --device-owner Octavia:health-mgr \
  --host=$(hostname) -c id -f value --network lb-mgmt-net \
  $PORT_FIXED_IP octavia-health-manager-listen-port)

# the port's mac address
MGMT_PORT_MAC=$(openstack port show -c mac_address -f value $MGMT_PORT_ID)

echo "OCTAVIA_MGMT_PORT_IP: $OCTAVIA_MGMT_PORT_IP
SUBNET_ID: $SUBNET_ID
PORT_FIXED_IP: $PORT_FIXED_IP
MGMT_PORT_ID: $MGMT_PORT_ID
MGMT_PORT_MAC: $MGMT_PORT_MAC"
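
Before wiring the port into a bridge, it can help to confirm the port really received the expected fixed IP and MAC (an optional check):

openstack port show $MGMT_PORT_ID -c fixed_ips -c mac_address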

linuxbridge method

Create the management port

ip link add o-hm0 type veth peer name o-bhm0
NETID=$(openstack network show lb-mgmt-net -c id -f value)
BRNAME=brq$(echo $NETID|cut -c 1-11)
brctl addif $BRNAME o-bhm0
ip link set o-bhm0 up

ip link set dev o-hm0 address $MGMT_PORT_MAC
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

# obtain the ip assigned above via dhcp
$ dhclient -v o-hm0 -cf /etc/dhcp/octavia

Internet Systems Consortium DHCP Client 4.2.5
Copyright 2004-2013 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/o-hm0/fa:16:3e:3c:17:ee
Sending on   LPF/o-hm0/fa:16:3e:3c:17:ee
Sending on   Socket/fallback
DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 6 (xid=0x5cee3ddf)
DHCPREQUEST on o-hm0 to 255.255.255.255 port 67 (xid=0x5cee3ddf)
DHCPOFFER from 172.16.0.100
DHCPACK from 172.16.0.100 (xid=0x5cee3ddf)
bound to 172.16.0.2 -- renewal in 33571 seconds.

$ ip a s dev o-hm0
13: o-hm0@o-bhm0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:3c:17:ee brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.2/24 brd 172.16.0.255 scope global dynamic o-hm0
       valid_lft 86292sec preferred_lft 86292sec
    inet6 fe80::f816:3eff:fe3c:17ee/64 scope link 
       valid_lft forever preferred_lft forever

# after dhclient obtains an address it adds a default route: 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0
# this conflicts with the host's existing default gateway and can break the host's access to external networks, so it is best deleted
# dhclient also rewrites the host's dns servers to addresses in the 172.16.0.0/24 range, which needs to be changed back as well
[root@controller01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 o-hm0
0.0.0.0         10.10.10.2      0.0.0.0         UG    100    0        0 ens33
10.10.10.0      0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 brq5ba4cced-83
10.10.30.0      0.0.0.0         255.255.255.0   U     102    0        0 ens35
169.254.169.254 172.16.0.100    255.255.255.255 UGH   0      0        0 o-hm0
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 o-hm0
[root@controller01 ~]# route del default gw 172.16.0.1

[root@controller01 ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search openstacklocal
nameserver 172.16.0.101
nameserver 172.16.0.100
[root@controller01 ~]# echo 'nameserver 10.10.10.2' > /etc/resolv.conf

[root@controller01 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=128 time=40.5 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=128 time=42.3 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=128 time=40.8 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=4 ttl=128 time=40.8 ms
^C
--- www.a.shifen.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 40.586/41.609/43.340/1.098 ms

Set up start at boot

$ vi /opt/octavia-interface-start.sh
#!/bin/bash

set -x

#MAC=$MGMT_PORT_MAC
#BRNAME=$BRNAME

# MAC is $MGMT_PORT_MAC and BRNAME is $BRNAME; see above for how they were obtained
MAC="fa:16:3e:3c:17:ee"
BRNAME="brqbcfafa57-ff"

# the bridge created by linuxbridge may not exist yet right after boot, so sleep for a while
sleep 120s

ip link add o-hm0 type veth peer name o-bhm0
brctl addif $BRNAME o-bhm0
ip link set o-bhm0 up
ip link set o-hm0 up
ip link set dev o-hm0 address $MAC
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

dhclient -v o-hm0 -cf /etc/dhcp/octavia
route del default gw 172.16.0.1
echo 'nameserver 10.10.10.2' > /etc/resolv.conf

$ chmod +x /opt/octavia-interface-start.sh
$ echo 'nohup sh /opt/octavia-interface-start.sh > /var/log/octavia-interface-start.log 2>&1 &' >> /etc/rc.d/rc.local
$ chmod +x /etc/rc.d/rc.local

openvswitch method

Create the management port

ovs-vsctl --may-exist add-port br-int o-hm0 \
  -- set Interface o-hm0 type=internal \
  -- set Interface o-hm0 external-ids:iface-status=active \
  -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC \
  -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID

ip link set dev o-hm0 address $MGMT_PORT_MAC
ip link set dev o-hm0 up
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

# obtain the ip assigned above via dhcp
$ dhclient -v o-hm0 -cf /etc/dhcp/octavia

Internet Systems Consortium DHCP Client 4.2.5
Copyright 2004-2013 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/o-hm0/fa:16:3e:3c:17:ee
Sending on   LPF/o-hm0/fa:16:3e:3c:17:ee
Sending on   Socket/fallback
DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 6 (xid=0x5cee3ddf)
DHCPREQUEST on o-hm0 to 255.255.255.255 port 67 (xid=0x5cee3ddf)
DHCPOFFER from 172.16.0.100
DHCPACK from 172.16.0.100 (xid=0x5cee3ddf)
bound to 172.16.0.2 -- renewal in 33571 seconds.

$ ip a s dev o-hm0
13: o-hm0@o-bhm0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:3c:17:ee brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.2/24 brd 172.16.0.255 scope global dynamic o-hm0
       valid_lft 86292sec preferred_lft 86292sec
    inet6 fe80::f816:3eff:fe3c:17ee/64 scope link 
       valid_lft forever preferred_lft forever

# after dhclient obtains an address it adds a default route: 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0
# this conflicts with the host's existing default gateway and can break the host's access to external networks, so it is best deleted
# dhclient also rewrites the host's dns servers to addresses in the 172.16.0.0/24 range, which needs to be changed back as well
[root@controller01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 o-hm0
0.0.0.0         10.10.10.2      0.0.0.0         UG    100    0        0 ens33
10.10.10.0      0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 brq5ba4cced-83
10.10.30.0      0.0.0.0         255.255.255.0   U     102    0        0 ens35
169.254.169.254 172.16.0.100    255.255.255.255 UGH   0      0        0 o-hm0
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 o-hm0
[root@controller01 ~]# route del default gw 172.16.0.1

[root@controller01 ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search openstacklocal
nameserver 172.16.0.101
nameserver 172.16.0.100
[root@controller01 ~]# echo 'nameserver 10.10.10.2' > /etc/resolv.conf

[root@controller01 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=128 time=40.5 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=128 time=42.3 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=128 time=40.8 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=4 ttl=128 time=40.8 ms
^C
--- www.a.shifen.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 40.586/41.609/43.340/1.098 ms

Set up start at boot

$ vi /opt/octavia-interface-start.sh
#!/bin/bash

set -x

#MAC=$MGMT_PORT_MAC
#PORT_ID=$MGMT_PORT_ID

# MAC is $MGMT_PORT_MAC and PORT_ID is $MGMT_PORT_ID; see above for how they were obtained
MAC="fa:16:3e:3c:17:ee"
PORT_ID="6d83909a-33cd-43aa-8d3f-baaa3bf87daf"

sleep 120s

ovs-vsctl --may-exist add-port br-int o-hm0 \
  -- set Interface o-hm0 type=internal \
  -- set Interface o-hm0 external-ids:iface-status=active \
  -- set Interface o-hm0 external-ids:attached-mac=$MAC \
  -- set Interface o-hm0 external-ids:iface-id=$PORT_ID

ip link set dev o-hm0 address $MAC
ip link set dev o-hm0 up
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

dhclient -v o-hm0 -cf /etc/dhcp/octavia
route del default gw 172.16.0.1
echo 'nameserver 10.10.10.2' > /etc/resolv.conf

$ chmod +x /opt/octavia-interface-start.sh
$ echo 'nohup sh /opt/octavia-interface-start.sh > /var/log/octavia-interface-start.log 2>&1 &' >> /etc/rc.d/rc.local
$ chmod +x /etc/rc.d/rc.local

Run on controller02:

cd ~
. ~/octavia-openrc

# an openstack tenant tunnel network; the range can be chosen freely
OCTAVIA_MGMT_SUBNET=172.16.0.0/24
OCTAVIA_MGMT_SUBNET_START=172.16.0.100
OCTAVIA_MGMT_SUBNET_END=172.16.0.254
# 172.16.0.1 gateway, 172.16.0.2 controller01, 172.16.0.3 controller02, 172.16.0.4 controller03
OCTAVIA_MGMT_PORT_IP=172.16.0.3
  
# get the subnet id
SUBNET_ID=$(openstack subnet show lb-mgmt-subnet -f value -c id)
PORT_FIXED_IP="--fixed-ip subnet=$SUBNET_ID,ip-address=$OCTAVIA_MGMT_PORT_IP"

# create the port used by controller02
MGMT_PORT_ID=$(openstack port create --security-group \
  lb-health-mgr-sec-grp --device-owner Octavia:health-mgr \
  --host=$(hostname) -c id -f value --network lb-mgmt-net \
  $PORT_FIXED_IP octavia-health-manager-listen-port)

# the port's mac address
MGMT_PORT_MAC=$(openstack port show -c mac_address -f value $MGMT_PORT_ID)

echo "OCTAVIA_MGMT_PORT_IP: $OCTAVIA_MGMT_PORT_IP
SUBNET_ID: $SUBNET_ID
PORT_FIXED_IP: $PORT_FIXED_IP
MGMT_PORT_ID: $MGMT_PORT_ID
MGMT_PORT_MAC: $MGMT_PORT_MAC"

linuxbridge method

Create the management port

ip link add o-hm0 type veth peer name o-bhm0
NETID=$(openstack network show lb-mgmt-net -c id -f value)
BRNAME=brq$(echo $NETID|cut -c 1-11)
brctl addif $BRNAME o-bhm0
ip link set o-bhm0 up

ip link set dev o-hm0 address $MGMT_PORT_MAC
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

# obtain the ip assigned above via dhcp
$ dhclient -v o-hm0 -cf /etc/dhcp/octavia

Internet Systems Consortium DHCP Client 4.2.5
Copyright 2004-2013 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/o-hm0/fa:16:3e:7c:57:19
Sending on   LPF/o-hm0/fa:16:3e:7c:57:19
Sending on   Socket/fallback
DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 5 (xid=0x669eb967)
DHCPREQUEST on o-hm0 to 255.255.255.255 port 67 (xid=0x669eb967)
DHCPOFFER from 172.16.0.102
DHCPACK from 172.16.0.102 (xid=0x669eb967)
bound to 172.16.0.3 -- renewal in 34807 seconds.

$ ip a s dev o-hm0
13: o-hm0@o-bhm0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:7c:57:19 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.3/24 brd 172.16.0.255 scope global dynamic o-hm0
       valid_lft 86380sec preferred_lft 86380sec
    inet6 fe80::f816:3eff:fe7c:5719/64 scope link 
       valid_lft forever preferred_lft forever

# after dhclient obtains an address it adds a default route: 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0
# this conflicts with the host's existing default gateway and can break the host's access to external networks, so it is best deleted
# dhclient also rewrites the host's dns servers to addresses in the 172.16.0.0/24 range, which needs to be changed back as well
[root@controller01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 o-hm0
0.0.0.0         10.10.10.2      0.0.0.0         UG    100    0        0 ens33
10.10.10.0      0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 brq5ba4cced-83
10.10.30.0      0.0.0.0         255.255.255.0   U     102    0        0 ens35
169.254.169.254 172.16.0.100    255.255.255.255 UGH   0      0        0 o-hm0
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 o-hm0
[root@controller01 ~]# route del default gw 172.16.0.1

[root@controller01 ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search openstacklocal
nameserver 172.16.0.101
nameserver 172.16.0.100
[root@controller01 ~]# echo 'nameserver 10.10.10.2' > /etc/resolv.conf

[root@controller01 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=128 time=40.5 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=128 time=42.3 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=128 time=40.8 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=4 ttl=128 time=40.8 ms
^C
--- www.a.shifen.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 40.586/41.609/43.340/1.098 ms

Set up start at boot

$ vi /opt/octavia-interface-start.sh
#!/bin/bash

set -x

#MAC=$MGMT_PORT_MAC
#BRNAME=$BRNAME

# MAC is $MGMT_PORT_MAC and BRNAME is $BRNAME; see above for how they were obtained
MAC="fa:16:3e:7c:57:19"
BRNAME="brqbcfafa57-ff"

# the bridge created by linuxbridge may not exist yet right after boot, so sleep for a while
sleep 120s

ip link add o-hm0 type veth peer name o-bhm0
brctl addif $BRNAME o-bhm0
ip link set o-bhm0 up
ip link set o-hm0 up
ip link set dev o-hm0 address $MAC
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

dhclient -v o-hm0 -cf /etc/dhcp/octavia
route del default gw 172.16.0.1
echo 'nameserver 10.10.10.2' > /etc/resolv.conf

$ chmod +x /opt/octavia-interface-start.sh
$ echo 'nohup sh /opt/octavia-interface-start.sh > /var/log/octavia-interface-start.log 2>&1 &' >> /etc/rc.d/rc.local
$ chmod +x /etc/rc.d/rc.local

openvswitch method

Create the management port

ovs-vsctl --may-exist add-port br-int o-hm0 \
  -- set Interface o-hm0 type=internal \
  -- set Interface o-hm0 external-ids:iface-status=active \
  -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC \
  -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID

ip link set dev o-hm0 address $MGMT_PORT_MAC
ip link set dev o-hm0 up
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

# obtain the ip assigned above via dhcp
$ dhclient -v o-hm0 -cf /etc/dhcp/octavia

Internet Systems Consortium DHCP Client 4.2.5
Copyright 2004-2013 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/o-hm0/fa:16:3e:3c:17:ee
Sending on   LPF/o-hm0/fa:16:3e:3c:17:ee
Sending on   Socket/fallback
DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 6 (xid=0x5cee3ddf)
DHCPREQUEST on o-hm0 to 255.255.255.255 port 67 (xid=0x5cee3ddf)
DHCPOFFER from 172.16.0.100
DHCPACK from 172.16.0.100 (xid=0x5cee3ddf)
bound to 172.16.0.3 -- renewal in 33571 seconds.

$ ip a s dev o-hm0
13: o-hm0@o-bhm0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:3c:17:ee brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.3/24 brd 172.16.0.255 scope global dynamic o-hm0
       valid_lft 86292sec preferred_lft 86292sec
    inet6 fe80::f816:3eff:fe3c:17ee/64 scope link 
       valid_lft forever preferred_lft forever

# after dhclient obtains an address it adds a default route: 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0
# this conflicts with the host's existing default gateway and can break the host's access to external networks, so it is best deleted
# dhclient also rewrites the host's dns servers to addresses in the 172.16.0.0/24 range, which needs to be changed back as well
[root@controller01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 o-hm0
0.0.0.0         10.10.10.2      0.0.0.0         UG    100    0        0 ens33
10.10.10.0      0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 brq5ba4cced-83
10.10.30.0      0.0.0.0         255.255.255.0   U     102    0        0 ens35
169.254.169.254 172.16.0.100    255.255.255.255 UGH   0      0        0 o-hm0
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 o-hm0
[root@controller01 ~]# route del default gw 172.16.0.1

[root@controller01 ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search openstacklocal
nameserver 172.16.0.101
nameserver 172.16.0.100
[root@controller01 ~]# echo 'nameserver 10.10.10.2' > /etc/resolv.conf

[root@controller01 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=128 time=40.5 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=128 time=42.3 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=128 time=40.8 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=4 ttl=128 time=40.8 ms
^C
--- www.a.shifen.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 40.586/41.609/43.340/1.098 ms

Set up start at boot

$ vi /opt/octavia-interface-start.sh
#!/bin/bash

set -x

#MAC=$MGMT_PORT_MAC
#PORT_ID=$MGMT_PORT_ID

# MAC is $MGMT_PORT_MAC and PORT_ID is $MGMT_PORT_ID; see above for how they were obtained
MAC="fa:16:3e:7c:57:19"
PORT_ID="19964a42-8e06-4d87-9408-ce2348cfdf43"

sleep 120s

ovs-vsctl --may-exist add-port br-int o-hm0 \
  -- set Interface o-hm0 type=internal \
  -- set Interface o-hm0 external-ids:iface-status=active \
  -- set Interface o-hm0 external-ids:attached-mac=$MAC \
  -- set Interface o-hm0 external-ids:iface-id=$PORT_ID

ip link set dev o-hm0 address $MAC
ip link set dev o-hm0 up
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

dhclient -v o-hm0 -cf /etc/dhcp/octavia
route del default gw 172.16.0.1
echo 'nameserver 10.10.10.2' > /etc/resolv.conf

$ chmod +x /opt/octavia-interface-start.sh
$ echo 'nohup sh /opt/octavia-interface-start.sh > /var/log/octavia-interface-start.log 2>&1 &' >> /etc/rc.d/rc.local
$ chmod +x /etc/rc.d/rc.local

Run on controller03:

cd ~
. ~/octavia-openrc

# an openstack tenant tunnel network; the range can be chosen freely
OCTAVIA_MGMT_SUBNET=172.16.0.0/24
OCTAVIA_MGMT_SUBNET_START=172.16.0.100
OCTAVIA_MGMT_SUBNET_END=172.16.0.254
# 172.16.0.1 gateway, 172.16.0.2 controller01, 172.16.0.3 controller02, 172.16.0.4 controller03
OCTAVIA_MGMT_PORT_IP=172.16.0.4
  
# get the subnet id
SUBNET_ID=$(openstack subnet show lb-mgmt-subnet -f value -c id)
PORT_FIXED_IP="--fixed-ip subnet=$SUBNET_ID,ip-address=$OCTAVIA_MGMT_PORT_IP"

# create the port used by controller03
MGMT_PORT_ID=$(openstack port create --security-group \
  lb-health-mgr-sec-grp --device-owner Octavia:health-mgr \
  --host=$(hostname) -c id -f value --network lb-mgmt-net \
  $PORT_FIXED_IP octavia-health-manager-listen-port)

# the port's mac address
MGMT_PORT_MAC=$(openstack port show -c mac_address -f value $MGMT_PORT_ID)

echo "OCTAVIA_MGMT_PORT_IP: $OCTAVIA_MGMT_PORT_IP
SUBNET_ID: $SUBNET_ID
PORT_FIXED_IP: $PORT_FIXED_IP
MGMT_PORT_ID: $MGMT_PORT_ID
MGMT_PORT_MAC: $MGMT_PORT_MAC"

linuxbridge method

Create the management port

ip link add o-hm0 type veth peer name o-bhm0
NETID=$(openstack network show lb-mgmt-net -c id -f value)
BRNAME=brq$(echo $NETID|cut -c 1-11)
brctl addif $BRNAME o-bhm0
ip link set o-bhm0 up

ip link set dev o-hm0 address $MGMT_PORT_MAC
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

# obtain the ip assigned above via dhcp
$ dhclient -v o-hm0 -cf /etc/dhcp/octavia

Internet Systems Consortium DHCP Client 4.2.5
Copyright 2004-2013 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/o-hm0/fa:16:3e:c5:d8:78
Sending on   LPF/o-hm0/fa:16:3e:c5:d8:78
Sending on   Socket/fallback
DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 7 (xid=0x6279ad33)
DHCPREQUEST on o-hm0 to 255.255.255.255 port 67 (xid=0x6279ad33)
DHCPOFFER from 172.16.0.101
DHCPACK from 172.16.0.101 (xid=0x6279ad33)
bound to 172.16.0.4 -- renewal in 36511 seconds.

$ ip a s dev o-hm0
13: o-hm0@o-bhm0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:c5:d8:78 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.4/24 brd 172.16.0.255 scope global dynamic o-hm0
       valid_lft 86387sec preferred_lft 86387sec
    inet6 fe80::f816:3eff:fec5:d878/64 scope link 
       valid_lft forever preferred_lft forever

# after dhclient obtains an address it adds a default route: 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0
# this conflicts with the host's existing default gateway and can break the host's access to external networks, so it is best deleted
# dhclient also rewrites the host's dns servers to addresses in the 172.16.0.0/24 range, which needs to be changed back as well
[root@controller01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 o-hm0
0.0.0.0         10.10.10.2      0.0.0.0         UG    100    0        0 ens33
10.10.10.0      0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 brq5ba4cced-83
10.10.30.0      0.0.0.0         255.255.255.0   U     102    0        0 ens35
169.254.169.254 172.16.0.100    255.255.255.255 UGH   0      0        0 o-hm0
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 o-hm0
[root@controller01 ~]# route del default gw 172.16.0.1

[root@controller01 ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search openstacklocal
nameserver 172.16.0.101
nameserver 172.16.0.100
[root@controller01 ~]# echo 'nameserver 10.10.10.2' > /etc/resolv.conf

[root@controller01 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=128 time=40.5 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=128 time=42.3 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=128 time=40.8 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=4 ttl=128 time=40.8 ms
^C
--- www.a.shifen.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 40.586/41.609/43.340/1.098 ms

Set up start at boot

$ vi /opt/octavia-interface-start.sh
#!/bin/bash

set -x

#MAC=$MGMT_PORT_MAC
#BRNAME=$BRNAME

# MAC is $MGMT_PORT_MAC and BRNAME is $BRNAME; see above for how they were obtained
MAC="fa:16:3e:c5:d8:78"
BRNAME="brqbcfafa57-ff"

# the bridge created by linuxbridge may not exist yet right after boot, so sleep for a while
sleep 120s

ip link add o-hm0 type veth peer name o-bhm0
brctl addif $BRNAME o-bhm0
ip link set o-bhm0 up
ip link set o-hm0 up
ip link set dev o-hm0 address $MAC
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

dhclient -v o-hm0 -cf /etc/dhcp/octavia
route del default gw 172.16.0.1
echo 'nameserver 10.10.10.2' > /etc/resolv.conf

$ chmod +x /opt/octavia-interface-start.sh
$ echo 'nohup sh /opt/octavia-interface-start.sh > /var/log/octavia-interface-start.log 2>&1 &' >> /etc/rc.d/rc.local
$ chmod +x /etc/rc.d/rc.local

openvswitch method

Create the management port

ovs-vsctl --may-exist add-port br-int o-hm0 \
  -- set Interface o-hm0 type=internal \
  -- set Interface o-hm0 external-ids:iface-status=active \
  -- set Interface o-hm0 external-ids:attached-mac=$MGMT_PORT_MAC \
  -- set Interface o-hm0 external-ids:iface-id=$MGMT_PORT_ID

ip link set dev o-hm0 address $MGMT_PORT_MAC
ip link set dev o-hm0 up
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

# obtain the ip assigned above via dhcp
$ dhclient -v o-hm0 -cf /etc/dhcp/octavia

Internet Systems Consortium DHCP Client 4.2.5
Copyright 2004-2013 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/o-hm0/fa:16:3e:3c:17:ee
Sending on   LPF/o-hm0/fa:16:3e:3c:17:ee
Sending on   Socket/fallback
DHCPDISCOVER on o-hm0 to 255.255.255.255 port 67 interval 6 (xid=0x5cee3ddf)
DHCPREQUEST on o-hm0 to 255.255.255.255 port 67 (xid=0x5cee3ddf)
DHCPOFFER from 172.16.0.100
DHCPACK from 172.16.0.100 (xid=0x5cee3ddf)
bound to 172.16.0.2 -- renewal in 33571 seconds.

$ ip a s dev o-hm0
13: o-hm0@o-bhm0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:3c:17:ee brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.2/24 brd 172.16.0.255 scope global dynamic o-hm0
       valid_lft 86292sec preferred_lft 86292sec
    inet6 fe80::f816:3eff:fe3c:17ee/64 scope link 
       valid_lft forever preferred_lft forever

# after dhclient obtains an address it adds a default route: 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0
# this conflicts with the host's existing default gateway and can break the host's access to external networks, so it is best deleted
# dhclient also rewrites the host's dns servers to addresses in the 172.16.0.0/24 range, which needs to be changed back as well
[root@controller01 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.0.1      0.0.0.0         UG    0      0        0 o-hm0
0.0.0.0         10.10.10.2      0.0.0.0         UG    100    0        0 ens33
10.10.10.0      0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 brq5ba4cced-83
10.10.30.0      0.0.0.0         255.255.255.0   U     102    0        0 ens35
169.254.169.254 172.16.0.100    255.255.255.255 UGH   0      0        0 o-hm0
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 o-hm0
[root@controller01 ~]# route del default gw 172.16.0.1

[root@controller01 ~]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search openstacklocal
nameserver 172.16.0.101
nameserver 172.16.0.100
[root@controller01 ~]# echo 'nameserver 10.10.10.2' > /etc/resolv.conf

[root@controller01 ~]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=128 time=40.5 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=128 time=42.3 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=128 time=40.8 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=4 ttl=128 time=40.8 ms
^C
--- www.a.shifen.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 40.586/41.609/43.340/1.098 ms

Set up start at boot

$ vi /opt/octavia-interface-start.sh
#!/bin/bash

set -x

#MAC=$MGMT_PORT_MAC
#PORT_ID=$MGMT_PORT_ID

# MAC is $MGMT_PORT_MAC and PORT_ID is $MGMT_PORT_ID; see above for how they were obtained
MAC="fa:16:3e:c5:d8:78"
PORT_ID="c7112971-1252-4bc5-8ae6-67bda4153376"

sleep 120s

ovs-vsctl --may-exist add-port br-int o-hm0 \
  -- set Interface o-hm0 type=internal \
  -- set Interface o-hm0 external-ids:iface-status=active \
  -- set Interface o-hm0 external-ids:attached-mac=$MAC \
  -- set Interface o-hm0 external-ids:iface-id=$PORT_ID

ip link set dev o-hm0 address $MAC
ip link set dev o-hm0 up
iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT

dhclient -v o-hm0 -cf /etc/dhcp/octavia
route del default gw 172.16.0.1
echo 'nameserver 10.10.10.2' > /etc/resolv.conf

$ chmod +x /opt/octavia-interface-start.sh
$ echo 'nohup sh /opt/octavia-interface-start.sh > /var/log/octavia-interface-start.log 2>&1 &' >> /etc/rc.d/rc.local
$ chmod +x /etc/rc.d/rc.local

Modify the configuration file

Run on all controller nodes; note that bind_host must be set to each node's own management IP.

# back up the /etc/octavia/octavia.conf configuration file
cp /etc/octavia/octavia.conf{,.bak}
egrep -v '^$|^#' /etc/octavia/octavia.conf.bak > /etc/octavia/octavia.conf
openstack-config --set /etc/octavia/octavia.conf DEFAULT transport_url  rabbit://openstack:123456@controller01:5672,openstack:123456@controller02:5672,openstack:123456@controller03:5672
openstack-config --set /etc/octavia/octavia.conf database connection  mysql+pymysql://octavia:123456@10.10.10.10/octavia

openstack-config --set /etc/octavia/octavia.conf api_settings bind_host 10.10.10.31
openstack-config --set /etc/octavia/octavia.conf api_settings bind_port 9876
openstack-config --set /etc/octavia/octavia.conf api_settings auth_strategy keystone

openstack-config --set /etc/octavia/octavia.conf health_manager bind_port 5555
openstack-config --set /etc/octavia/octavia.conf health_manager bind_ip $OCTAVIA_MGMT_PORT_IP
# in an HA environment list all health-manager endpoints
openstack-config --set /etc/octavia/octavia.conf health_manager controller_ip_port_list 172.16.0.2:5555,172.16.0.3:5555,172.16.0.4:5555

openstack-config --set /etc/octavia/octavia.conf keystone_authtoken www_authenticate_uri http://10.10.10.10:5000
openstack-config --set /etc/octavia/octavia.conf keystone_authtoken auth_url  http://10.10.10.10:5000
openstack-config --set /etc/octavia/octavia.conf keystone_authtoken memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/octavia/octavia.conf keystone_authtoken auth_type password
openstack-config --set /etc/octavia/octavia.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/octavia/octavia.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/octavia/octavia.conf keystone_authtoken project_name service
openstack-config --set /etc/octavia/octavia.conf keystone_authtoken username octavia
openstack-config --set /etc/octavia/octavia.conf keystone_authtoken password 123456

openstack-config --set /etc/octavia/octavia.conf certificates cert_generator local_cert_generator
openstack-config --set /etc/octavia/octavia.conf certificates ca_private_key_passphrase 123456
openstack-config --set /etc/octavia/octavia.conf certificates ca_private_key /etc/octavia/certs/server_ca.key.pem
openstack-config --set /etc/octavia/octavia.conf certificates ca_certificate /etc/octavia/certs/server_ca.cert.pem

openstack-config --set /etc/octavia/octavia.conf haproxy_amphora client_cert /etc/octavia/certs/client.cert-and-key.pem
openstack-config --set /etc/octavia/octavia.conf haproxy_amphora server_ca /etc/octavia/certs/server_ca.cert.pem
openstack-config --set /etc/octavia/octavia.conf haproxy_amphora key_path  /etc/octavia/.ssh/octavia_ssh_key
openstack-config --set /etc/octavia/octavia.conf haproxy_amphora base_path  /var/lib/octavia
openstack-config --set /etc/octavia/octavia.conf haproxy_amphora base_cert_dir  /var/lib/octavia/certs
openstack-config --set /etc/octavia/octavia.conf haproxy_amphora connection_max_retries  5500
openstack-config --set /etc/octavia/octavia.conf haproxy_amphora connection_retry_interval  5
openstack-config --set /etc/octavia/octavia.conf haproxy_amphora rest_request_conn_timeout  10
openstack-config --set /etc/octavia/octavia.conf haproxy_amphora rest_request_read_timeout  120

openstack-config --set /etc/octavia/octavia.conf oslo_messaging topic octavia_prov
openstack-config --set /etc/octavia/octavia.conf oslo_messaging rpc_thread_pool_size 2

openstack-config --set /etc/octavia/octavia.conf house_keeping load_balancer_expiry_age 3600
openstack-config --set /etc/octavia/octavia.conf house_keeping amphora_expiry_age 3600

openstack-config --set /etc/octavia/octavia.conf service_auth auth_url http://10.10.10.10:5000
openstack-config --set /etc/octavia/octavia.conf service_auth memcached_servers controller01:11211,controller02:11211,controller03:11211
openstack-config --set /etc/octavia/octavia.conf service_auth auth_type password
openstack-config --set /etc/octavia/octavia.conf service_auth project_domain_name default
openstack-config --set /etc/octavia/octavia.conf service_auth user_domain_name default
openstack-config --set /etc/octavia/octavia.conf service_auth project_name service
openstack-config --set /etc/octavia/octavia.conf service_auth username octavia
openstack-config --set /etc/octavia/octavia.conf service_auth password 123456

AMP_IMAGE_OWNER_ID=$(openstack project show service -c id -f value)
AMP_SECGROUP_LIST=$(openstack security group show lb-mgmt-sec-grp -c id -f value)
AMP_BOOT_NETWORK_LIST=$(openstack network show lb-mgmt-net -c id -f value)

openstack-config --set /etc/octavia/octavia.conf controller_worker client_ca /etc/octavia/certs/client_ca.cert.pem
openstack-config --set /etc/octavia/octavia.conf controller_worker amp_image_tag amphora
openstack-config --set /etc/octavia/octavia.conf controller_worker amp_flavor_id 200
openstack-config --set /etc/octavia/octavia.conf controller_worker amp_image_owner_id $AMP_IMAGE_OWNER_ID
openstack-config --set /etc/octavia/octavia.conf controller_worker amp_secgroup_list $AMP_SECGROUP_LIST
openstack-config --set /etc/octavia/octavia.conf controller_worker amp_boot_network_list $AMP_BOOT_NETWORK_LIST
openstack-config --set /etc/octavia/octavia.conf controller_worker amp_ssh_key_name octavia_ssh_key
openstack-config --set /etc/octavia/octavia.conf controller_worker network_driver allowed_address_pairs_driver
openstack-config --set /etc/octavia/octavia.conf controller_worker compute_driver compute_nova_driver
openstack-config --set /etc/octavia/octavia.conf controller_worker amphora_driver amphora_haproxy_rest_driver
openstack-config --set /etc/octavia/octavia.conf controller_worker workers 2
openstack-config --set /etc/octavia/octavia.conf controller_worker loadbalancer_topology ACTIVE_STANDBY
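
A quick way to spot-check that the values landed in octavia.conf (openstack-config also supports --get):

openstack-config --get /etc/octavia/octavia.conf api_settings bind_host
openstack-config --get /etc/octavia/octavia.conf health_manager controller_ip_port_list
openstack-config --get /etc/octavia/octavia.conf controller_worker amp_boot_network_list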

Initialize the database

Run on any one controller node:

octavia-db-manage --config-file /etc/octavia/octavia.conf upgrade head
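
octavia-db-manage wraps alembic, so the migration state can be checked afterwards (assuming the current subcommand is available in this release):

octavia-db-manage --config-file /etc/octavia/octavia.conf current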

Start the services

Run on all controller nodes:

systemctl enable octavia-api.service octavia-health-manager.service octavia-housekeeping.service octavia-worker.service
systemctl restart octavia-api.service octavia-health-manager.service octavia-housekeeping.service octavia-worker.service
systemctl status octavia-api.service octavia-health-manager.service octavia-housekeeping.service octavia-worker.service
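
Once the services are up on all three controllers, a minimal end-to-end check through the VIP (the load balancer list should simply come back empty at this point):

ss -tnlp | grep 9876
. ~/octavia-openrc
openstack loadbalancer list
openstack loadbalancer amphora list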

Enable the Octavia dashboard

Run on all controller nodes:

git clone https://github.com/openstack/octavia-dashboard.git -b stable/train
cd /root/octavia-dashboard
python setup.py install
cd /root/octavia-dashboard/octavia_dashboard/enabled
cp _1482_project_load_balancer_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/
cd /usr/share/openstack-dashboard
echo yes|./manage.py collectstatic
./manage.py compress
systemctl restart httpd
systemctl status httpd

Create a load balancer

image-20211122204630997

image-20211122204734065

image-20211122204754298

image-20211122204826071

image-20211122204848243

image-20211122204911400

After a short wait the system automatically creates two amphora-x64-haproxy instances and assigns them addresses from the backend servers' networks; if the backend servers span multiple VPCs, addresses from each of those VPCs are assigned.

image-20211122205037844

image-20211122210517854

  • to ssh into an amphora instance, run the following from any controller node:
[root@controller01 ~]# ssh -i /etc/octavia/.ssh/octavia_ssh_key cloud-user@172.16.0.162
The authenticity of host '172.16.0.162 (172.16.0.162)' can't be established.
ECDSA key fingerprint is SHA256:kAbm5G1FbZPZEmWGNbzvcYYubxeKlr6l456XEVr886o.
ECDSA key fingerprint is MD5:3c:87:63:e3:cc:e9:90:f6:33:5a:06:73:1e:6d:b7:82.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.16.0.162' (ECDSA) to the list of known hosts.
[cloud-user@amphora-65ef42f2-ef2d-4e7d-bd48-a0de84a12e3a ~]$ 
[cloud-user@amphora-65ef42f2-ef2d-4e7d-bd48-a0de84a12e3a ~]$ sudo su -
[root@amphora-65ef42f2-ef2d-4e7d-bd48-a0de84a12e3a ~]# 
  • testing also showed that if an administrator deletes an amphora instance manually, it is not recreated automatically

image-20211122205202340

Bind a floating IP and test whether ssh to it works

image-20211122205428673

image-20211122205533628

The detailed configuration of other layer-7 proxying is not covered here and is left for further study; a minimal CLI sketch is shown below.
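
For reference, the same kind of load balancer can also be created from the CLI instead of the dashboard. A minimal sketch, assuming a tenant subnet and a backend web server already exist; the names and the <tenant-subnet-id>/<backend-ip> placeholders below are illustrative only:

. ~/admin-openrc

# create the load balancer and wait until provisioning_status becomes ACTIVE
openstack loadbalancer create --name lb-demo --vip-subnet-id <tenant-subnet-id>
openstack loadbalancer show lb-demo -c provisioning_status -c vip_address

# HTTP listener, round-robin pool, health monitor and one backend member
openstack loadbalancer listener create --name listener-demo --protocol HTTP --protocol-port 80 lb-demo
openstack loadbalancer pool create --name pool-demo --lb-algorithm ROUND_ROBIN --listener listener-demo --protocol HTTP
openstack loadbalancer healthmonitor create --name hm-demo --delay 5 --timeout 3 --max-retries 3 --type HTTP pool-demo
openstack loadbalancer member create --subnet-id <tenant-subnet-id> --address <backend-ip> --protocol-port 80 pool-demo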

Other components

Other components such as Ceilometer, Heat, Trove, VPNaaS and FWaaS will be explored later when time permits.

