[ OpenStack ] OpenStack-Mitaka High Availability: Networking Service (Neutron)


 Contents

    OpenStack-Mitaka High Availability: Overview
    OpenStack-Mitaka High Availability: Environment Initialization
    OpenStack-Mitaka High Availability: MariaDB-Galera Cluster Deployment
    OpenStack-Mitaka High Availability: RabbitMQ Server Cluster Deployment
    OpenStack-Mitaka High Availability: Memcached
    OpenStack-Mitaka High Availability: Pacemaker + Corosync + pcs HA Cluster
    OpenStack-Mitaka High Availability: Identity Service (Keystone)
    OpenStack-Mitaka High Availability: Image Service (Glance)
    OpenStack-Mitaka High Availability: Compute Service (Nova)
    OpenStack-Mitaka High Availability: Networking Service (Neutron)
    OpenStack-Mitaka High Availability: Dashboard
    OpenStack-Mitaka High Availability: Launching an Instance
    OpenStack-Mitaka High Availability: Testing

 

 OpenStack Neutron overview

OpenStack Networking (Neutron) lets you create and attach interface devices that are managed by other OpenStack services.

Neutron consists of the following components:
    (1) neutron-server
        Accepts API requests and routes them to the appropriate OpenStack networking plug-in to carry them out.
    (2) OpenStack networking plug-ins and agents
        Plug and unplug ports, create networks and subnets, and provide IP addressing. The plug-ins and agents differ by vendor and technology, for example Linux Bridge or Open vSwitch.
    (3) Message queue
        Used by most OpenStack Networking installations to route information between neutron-server and the various agents. It also acts as a database for some plug-ins to store networking state.

    OpenStack Networking mainly interacts with OpenStack Compute to provide network connectivity to its instances.

 

 Network modes and concepts

   Virtualized networking:

            [ KVM ] Four simple network models
            [ KVM network virtualization ] Open vSwitch

These two earlier articles describe the principles and implementation of virtualized networking in detail.

 

 Installing and configuring the controller nodes

Perform the following on a controller node.

First, create the neutron database:
[root@controller1 ~]# mysql -ugalera -pgalera -h 192.168.0.10
MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.09 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'    IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%'    IDENTIFIED BY 'neutron';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> exit
Bye
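
Before moving on, it can be worth confirming that the new grant works through the Galera VIP (an optional sanity check, not part of the original walkthrough; 192.168.0.10 is the VIP used throughout this series):

[root@controller1 ~]# mysql -uneutron -pneutron -h 192.168.0.10 -e "SHOW DATABASES;" | grep neutron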

 

Create the service credentials:

[root@controller1 ~]# . admin-openrc
[root@controller1 ~]# openstack user create --domain default --password-prompt neutron        # password: neutron
[root@controller1 ~]# openstack role add --project service --user neutron admin
[root@controller1 ~]# openstack service create --name neutron   --description "OpenStack Networking" network

Create the Networking service API endpoints:

[root@controller1 ~]# openstack endpoint create --region RegionOne   network public http://controller:9696
[root@controller1 ~]# openstack endpoint create --region RegionOne   network internal http://controller:9696
[root@controller1 ~]# openstack endpoint create --region RegionOne   network admin http://controller:9696
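
To double-check the result, listing the endpoints should show the three network entries pointing at http://controller:9696 (an optional check, assuming admin-openrc is still sourced):

[root@controller1 ~]# openstack endpoint list | grep network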

 

Configure a networking option:
Two networking options are available:
    (1) Option 1 only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only the admin or other privileged users can manage provider networks.
    (2) Option 2 augments option 1 with layer-3 services that support attaching instances to self-service (private) networks. The demo or other unprivileged users can manage their own private networks, including routers that connect them to provider networks. In addition, floating IP addresses let instances on private networks reach external networks such as the Internet.


This guide uses option 2 to build the OpenStack network; once option 2 is working, option 1 is covered as well.

 

 OpenStack self-service (private) network configuration

Install the components:
All three controller nodes need the packages installed:

# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Configure the server component:
The Networking server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-in.

The main files to edit are the following:

[root@controller1 ~]# cd /etc/neutron/
[root@controller1 neutron]# ll neutron.conf plugins/ml2/ml2_conf.ini plugins/ml2/linuxbridge_agent.ini l3_agent.ini dhcp_agent.ini metadata_agent.ini 
-rw-r----- 1 root neutron  8741 Dec  3 19:26 dhcp_agent.ini
-rw-r----- 1 root neutron 64171 Dec  3 19:23 neutron.conf
-rw-r----- 1 root neutron  8490 Dec  3 19:25 plugins/ml2/linuxbridge_agent.ini
-rw-r----- 1 root neutron  8886 Dec  3 19:25 plugins/ml2/ml2_conf.ini
-rw-r----- 1 root neutron 11215 Dec  3 19:42 l3_agent.ini
-rw-r----- 1 root neutron 7089 Dec  3 19:42 metadata_agent.ini
neutron.conf is configured as follows:
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:openstack@controller1,openstack:openstack@controller2,openstack:openstack@controller3
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
bind_host = 192.168.0.11
# The options below enable DHCP and router (L3) high availability
dhcp_agents_per_network = 3
l3_ha = true
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2
l3_ha_net_cidr = 169.254.192.0/18
…
[database]
connection = mysql+pymysql://neutron:neutron@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller1:11211,controller2:11211,controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
…
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp


 plugins/ml2/ml2_conf.ini is configured as follows:
…
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
…
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True


plugins/ml2/linuxbridge_agent.ini is configured as follows:
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eno33554992        # name of the external (provider) network interface
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = True
local_ip = 192.168.0.11        # management address used as the VXLAN tunnel endpoint; each node uses its own local management address
l2_population = True

l3_agent.ini is configured as follows:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
# The options below enable VRRP-based failover for virtual routers
ha_confs_path = $state_path/ha_confs
ha_vrrp_auth_type = PASS
ha_vrrp_auth_password = 
ha_vrrp_advert_int = 2
[AGENT]

dhcp_agent.ini is configured as follows:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
[AGENT]

metadata_agent.ini is configured as follows:
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
[AGENT]

In the configuration above, pay particular attention to the two commented values in plugins/ml2/linuxbridge_agent.ini:
physical_interface_mappings = provider:eno33554992        # the interface used to reach the external network; check it on each controller node

local_ip = 192.168.0.11        # the management address; each controller node uses its own management IP

Copy the above six files to the other controller nodes. Be careful not to miss any file, and adjust the following three values per node (a sketch for doing this follows the list):

neutron.conf
bind_host = 192.168.0.xx

plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:eno33554992
local_ip = 192.168.0.11
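
One possible way to adjust the node-specific values after copying, sketched here for controller2 (assuming its management IP is 192.168.0.12, as in the haproxy configuration later in this article; verify the external interface name separately with ip addr, since it can also differ per node):

[root@controller2 ~]# sed -i 's/^bind_host = 192.168.0.11/bind_host = 192.168.0.12/' /etc/neutron/neutron.conf
[root@controller2 ~]# sed -i 's/^local_ip = 192.168.0.11/local_ip = 192.168.0.12/' /etc/neutron/plugins/ml2/linuxbridge_agent.ini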

 

The metadata_agent.ini change below is best made by hand on each node, one at a time, to avoid confusion:

[root@controller1 neutron]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller1
metadata_proxy_shared_secret = METADATA_SECRET
[AGENT]
[cache]

Make the same change on controller2 and controller3, setting nova_metadata_ip to the corresponding hostname.

Configure the Compute service to use Networking:

Again, edit the nodes one at a time to keep the configuration files from getting mixed up:

[root@controller1 ~]# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
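
One thing worth checking here: the upstream Mitaka install guide also enables the metadata proxy in this same [neutron] section on the controller nodes, so that the secret matches metadata_proxy_shared_secret in metadata_agent.ini. If your deployment follows that guide, the section would additionally contain the following two lines (shown as a reference, not as part of the original configuration):

service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET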

 

The network initialization scripts expect a symbolic link, created as follows:
Run on every controller node:

# ln -vs /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
‘/etc/neutron/plugin.ini’ -> ‘/etc/neutron/plugins/ml2/ml2_conf.ini’

The database synchronization only needs to be run on one controller node:

[root@controller1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
The output should contain only INFO-level messages.
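
A quick way to confirm that the migration actually created the schema (optional, reusing the Galera VIP from earlier):

[root@controller1 ~]# mysql -uneutron -pneutron -h 192.168.0.10 neutron -e "SHOW TABLES;" | wc -l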

Restart the Compute API service on every controller node:

# systemctl restart openstack-nova-api.service

Once that is done, confirm again that the Nova services are healthy.
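
The check can be made with the service list (assuming admin-openrc is sourced):

[root@controller1 ~]# openstack compute service list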

All services report as healthy.

Start the Neutron services. It is best to start them one at a time and watch the logs for errors:

 

[root@controller1 ~]# systemctl start neutron-server.service
[root@controller1 ~]# systemctl start neutron-linuxbridge-agent.service
[root@controller1 ~]# systemctl start neutron-dhcp-agent.service
[root@controller1 ~]# systemctl start neutron-metadata-agent.service
[root@controller1 ~]# systemctl start neutron-l3-agent.service

All of them start successfully and the logs show only INFO messages. Enable them at boot, as was done for the earlier services:
[root@controller1 ~]# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service neutron-l3-agent.service

Of the Neutron services, only neutron-server listens on a port (9696), so it is the only one that needs to be added to haproxy.
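
A quick way to confirm that neutron-server is actually listening on its management address before putting it behind haproxy (optional check):

[root@controller1 ~]# ss -tnlp | grep 9696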

 

The haproxy listen configuration now looks like this:

listen galera_cluster
    mode tcp
    bind 192.168.0.10:3306
    balance source
    option mysql-check user haproxy
    server controller1 192.168.0.11:3306 check inter 2000 rise 3 fall 3 backup
    server controller2 192.168.0.12:3306 check inter 2000 rise 3 fall 3 
    server controller3 192.168.0.13:3306 check inter 2000 rise 3 fall 3 backup

listen memcache_cluster
    mode tcp
    bind 192.168.0.10:11211
    balance source
    server controller1 192.168.0.11:11211 check inter 2000 rise 3 fall 3 backup
    server controller2 192.168.0.12:11211 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:11211 check inter 2000 rise 3 fall 3 backup

listen dashboard_cluster
    mode tcp
    bind 192.168.0.10:80
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:80 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:80 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:80 check inter 2000 rise 3 fall 3
    
listen keystone_admin_cluster
    mode tcp
    bind 192.168.0.10:35357
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:35357 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:35357 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:35357 check inter 2000 rise 3 fall 3
listen keystone_public_internal_cluster
    mode tcp
    bind 192.168.0.10:5000
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:5000 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:5000 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:5000 check inter 2000 rise 3 fall 3

listen glance_api_cluster
    mode tcp
    bind 192.168.0.10:9292
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:9292 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:9292 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:9292 check inter 2000 rise 3 fall 3
listen glance_registry_cluster
    mode tcp
    bind 192.168.0.10:9191
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:9191 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:9191 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:9191 check inter 2000 rise 3 fall 3

listen nova_compute_api_cluster
    mode tcp
    bind 192.168.0.10:8774
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:8774 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:8774 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:8774 check inter 2000 rise 3 fall 3
listen nova_metadata_api_cluster
    mode tcp
    bind 192.168.0.10:8775
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:8775 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:8775 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:8775 check inter 2000 rise 3 fall 3
listen nova_vncproxy_cluster
    mode tcp
    bind 192.168.0.10:6080
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:6080 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:6080 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:6080 check inter 2000 rise 3 fall 3

listen neutron_api_cluster
    mode tcp
    bind 192.168.0.10:9696
    balance source
    option tcplog
    option httplog
    server controller1 192.168.0.11:9696 check inter 2000 rise 3 fall 3
    server controller2 192.168.0.12:9696 check inter 2000 rise 3 fall 3
    server controller3 192.168.0.13:9696 check inter 2000 rise 3 fall 3
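
Before restarting haproxy, it is worth validating the new configuration file (optional check):

[root@controller1 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg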

After haproxy starts successfully, copy the configuration to the other controller nodes:

[root@controller1 ~]# scp /etc/haproxy/haproxy.cfg controller2:/etc/haproxy/
haproxy.cfg                                                                                                                100% 6607     6.5KB/s   00:00    
[root@controller1 ~]# scp /etc/haproxy/haproxy.cfg controller3:/etc/haproxy/
haproxy.cfg                                                                                                                100% 6607     6.5KB/s   00:00
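
After copying, haproxy has to pick up the new neutron_api_cluster listener on every node. A minimal sketch, assuming haproxy runs directly under systemd here; if it is managed as a pacemaker resource in your cluster, restart it through pcs instead:

[root@controller1 ~]# for node in controller2 controller3; do ssh $node systemctl restart haproxy; done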

 

 Installing and configuring the compute node

Perform the following on the compute node:

[root@compute1 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

Configure the common component:

The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.

[root@compute1 ~]# vim /etc/neutron/neutron.conf 
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller1,openstack:openstack@controller2,openstack:openstack@controller3
auth_strategy = keystone
…
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller1:11211,controller2:11211,controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
…
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp


Configure the self-service (private) network option:
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eno33554992
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = True
local_ip = 192.168.0.31
l2_population = True

Configure the Compute service to use Networking:
[root@compute1 ~]# vim /etc/nova/nova.conf
…
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
…

With the configuration complete, restart the Compute service:

[root@compute1 ~]# systemctl restart openstack-nova-compute.service

Start the Linux bridge agent and enable it at boot:
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service
[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
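
To make sure the agent came up cleanly, a quick look at its log (the path below is the usual RDO packaging default and may differ on other distributions):

[root@compute1 ~]# grep -i error /var/log/neutron/linuxbridge-agent.log | tail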

 

 Verifying the Networking service

Run the verification on any controller node:
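
A typical check is the agent list, which should show the DHCP, L3, metadata, and Linux bridge agents on all three controllers plus the Linux bridge agent on compute1, all alive (assuming admin-openrc is sourced; in Mitaka the neutron CLI is the usual tool for this):

[root@controller1 ~]# neutron agent-list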

 

 

 The Networking service is healthy. Confirm the Compute services once more.

 

The Compute services are healthy. The Neutron configuration is complete.

