OpenStack (6): Installing and Configuring Neutron on the Controller and Compute Nodes


Introduction to Neutron


 

Early OpenStack releases had no Neutron; networking was handled by nova-network. Over several releases it became a standalone project named Quantum, which was then renamed Neutron because the name conflicted with another company's trademark.

 

OpenStack Networking

Network: in a physical environment, we connect multiple computers with switches or hubs to form a network. In the Neutron world, a network likewise connects multiple cloud instances together.
Subnet: in a physical environment, a network can be divided into multiple logical subnets. In Neutron, a subnet likewise belongs to a network.
Port: in a physical environment, every network has many ports, such as switch ports that computers plug into. In Neutron, a port belongs to a subnet, and an instance's NIC is bound to a port.
Router: in a physical environment, traffic between different networks or logical subnets must pass through a router. In Neutron, a router serves the same purpose: it connects different networks or subnets.
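These four objects map directly onto OpenStack CLI commands. A hedged sketch of the correspondence follows (the names net1, subnet1, port1, and router1 are made up, and exact flags vary by client version):

```shell
# Network: a layer-2 broadcast domain, analogous to a virtual switch
openstack network create net1

# Subnet: an IP range that belongs to the network
openstack subnet create --network net1 --subnet-range 10.0.0.0/24 subnet1

# Port: a virtual "switch port" that an instance NIC attaches to
openstack port create --network net1 port1

# Router: connects different networks or subnets at layer 3
openstack router create router1
openstack router add subnet router1 subnet1
```

These commands require a running cloud and credentials, so they are only meant to show how the four objects relate to each other.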

 

Neutron Components

Whether you use Linux bridge or Open vSwitch (OVS), each must connect to the database, and the database-access code is the same in both cases. That is why ML2 (Modular Layer 2) was introduced: Linux bridge and OVS sit beneath ML2 as mechanism drivers. Both are open source, and ML2 also supports other commercial plug-ins.
The DHCP agent allocates IP addresses.
The L3 agent provides layer-3 networking, i.e. routing.
LBaaS provides load balancing.

 

 

When the host and the virtual machines share a single network, it is called a single flat network; the official documentation calls this a provider network, as shown in the figure below.

Drawbacks of a single flat network:
It is a single network bottleneck and lacks scalability.
It lacks proper multi-tenant isolation.

 

 

Network Overview

Configure the networking option: deployments choose between two architectures, provider (public) networks and self-service (private) networks.
Provider networks: the simplest possible architecture, supporting only instances attached to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses.
Only the admin or other privileged users can manage provider networks.
Self-service networks: adds layer-3 services on top of the provider architecture, so instances can also attach to self-service (private) networks.

This lab uses the provider network option.

 

Installing and Configuring Neutron on the Controller Node


 

1. Install packages on the controller node

[root@linux-node1 ~]# yum install openstack-neutron openstack-neutron-ml2   openstack-neutron-linuxbridge ebtables
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * epel: mirror01.idc.hinet.net
 * extras: mirrors.163.com
 * updates: mirrors.163.com
Package 1:openstack-neutron-8.3.0-1.el7.noarch already installed and latest version
Package 1:openstack-neutron-ml2-8.3.0-1.el7.noarch already installed and latest version
Package 1:openstack-neutron-linuxbridge-8.3.0-1.el7.noarch already installed and latest version
Package ebtables-2.0.10-15.el7.x86_64 already installed and latest version
Nothing to do
[root@linux-node1 ~]# 

  

 

2. Controller configuration: database

Edit /etc/neutron/neutron.conf and complete the following.
In the [database] section, configure database access:

[database]
...
connection = mysql+pymysql://neutron:neutron@192.168.56.11/neutron

 

 

After changing Neutron's database connection, there is no need to sync the database yet; more configuration comes first.

 

3. Controller configuration: Keystone

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone

 

In the [keystone_authtoken] section, add the following parameters:
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

 

 

4. Controller configuration: RabbitMQ

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure the RabbitMQ message queue connection:

[DEFAULT]
...
rpc_backend = rabbit

 

 

In the [oslo_messaging_rabbit] section (the controller's IP is used directly as the host here, matching the verification output below):
[oslo_messaging_rabbit]
...
rabbit_host = 192.168.56.11
rabbit_userid = openstack
rabbit_password = openstack

 

 

 

 

 

5. Controller configuration: Neutron core

In the [DEFAULT] section, enable the ML2 plug-in and disable additional plug-ins. Leaving nothing after the equals sign means the extra service plug-ins are disabled:

[DEFAULT]
...
core_plugin = ml2
service_plugins =

 

6. Controller configuration: Nova integration

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes.
Uncomment these two lines.
They mean: when a port's status or data changes, notify Nova:

[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

 

 

Configure the [nova] section (the Neutron configuration file has its own [nova] section):

Neutron needs these credentials because it uses the nova user to authenticate against Keystone when sending these notifications:

auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

 

  

7. Controller configuration: lock path

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp

 

 

8. Check the main configuration file on the controller
The controller's main Neutron configuration is now complete:
[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/neutron/neutron.conf
27:auth_strategy = keystone
30:core_plugin = ml2
33:service_plugins =
137:notify_nova_on_port_status_changes = true
141:notify_nova_on_port_data_changes = true
511:rpc_backend = rabbit
684:connection = mysql+pymysql://neutron:neutron@192.168.56.11/neutron
762:auth_uri = http://192.168.56.11:5000
763:auth_url = http://192.168.56.11:35357
764:memcached_servers = 192.168.56.11:11211
765:auth_type = password
766:project_domain_name = default
767:user_domain_name = default
768:project_name = service
769:username = neutron
770:password = neutron
939:auth_url = http://192.168.56.11:35357
940:auth_type = password
941:project_domain_name = default
942:user_domain_name = default
943:region_name = RegionOne
944:project_name = service
945:username = nova
946:password = nova
1059:lock_path = /var/lib/neutron/tmp
1210:rabbit_host = 192.168.56.11
1216:rabbit_port = 5672
1228:rabbit_userid = openstack
1232:rabbit_password = openstack
[root@linux-node1 ~]# 

  

 

9. Configure the Modular Layer 2 (ML2) plug-in on the controller

ML2 handles layer-2 configuration; the ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.
Edit /etc/neutron/plugins/ml2/ml2_conf.ini and complete the following.
In the [ml2] section, enable flat and VLAN networks:
vim /etc/neutron/plugins/ml2/ml2_conf.ini

Uncomment type_drivers and remove local from the list.
This enables flat and VLAN networks; the remaining type drivers can stay even though they are not used.

 

In the [ml2] section, disable self-service (private) networks; an empty value means they are not used:

[ml2]
...
tenant_network_types =

 

In the [ml2] section, enable the Linux bridge mechanism.
This tells Neutron which mechanism drivers to use when creating networks; here it is linuxbridge:

[ml2]
...
mechanism_drivers = linuxbridge

It is a list, so you can specify several drivers, for example adding openvswitch.

 

In the [ml2] section, enable the port security extension driver:

[ml2]
...
extension_drivers = port_security

In the [ml2_type_flat] section, configure the public virtual network as a flat network. The official documentation uses flat_networks = provider; here we use public as the label instead:

[ml2_type_flat]
...
flat_networks = public

 

In the [securitygroup] section, enable ipset to make security group rules more efficient:

[securitygroup]
...
enable_ipset = True

 

10. Check the ML2 configuration file on the controller
The ML2 configuration on the controller is now complete:
[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/neutron/plugins/ml2/ml2_conf.ini
107:type_drivers = flat,vlan,gre,vxlan,geneve
112:tenant_network_types = 
116:mechanism_drivers = linuxbridge,openvswitch
121:extension_drivers = port_security
153:flat_networks = public
230:enable_ipset = true
[root@linux-node1 ~]# 

 

11. Configure the Linux bridge agent on the controller

The Linux bridge agent builds layer-2 virtual networking for instances and handles security group rules.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following.
In the [linux_bridge] section, map the public virtual network to the public physical network interface.
Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical network interface (eth0 here):

[linux_bridge]
physical_interface_mappings = public:PROVIDER_INTERFACE_NAME

 

In the [vxlan] section, disable VXLAN overlay networking:

[vxlan]
enable_vxlan = False

 

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Check which settings were changed:

[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
138:physical_interface_mappings = public:eth0
151:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
156:enable_security_group = true
171:enable_vxlan = False
[root@linux-node1 ~]# 

  

 

12. Configure the DHCP agent on the controller

Edit /etc/neutron/dhcp_agent.ini and complete the following.
In the [DEFAULT] section, configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach the metadata service over the network:

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

Check which settings were changed:
The first line configures the underlying interface driver.
The second line uses dnsmasq, a small open-source DHCP server.
The third line enables isolated metadata, letting instances reach the metadata service.
[root@linux-node1 ~]# grep -n '^[a-Z]' /etc/neutron/dhcp_agent.ini
23:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
39:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
48:enable_isolated_metadata = True
[root@linux-node1 ~]# 

  

13. Configure the metadata agent on the controller

Edit /etc/neutron/metadata_agent.ini and complete the following.
In the [DEFAULT] section, configure the metadata host and the shared secret:

[DEFAULT]
...
nova_metadata_ip = 192.168.56.11
metadata_proxy_shared_secret = METADATA_SECRET

Replace METADATA_SECRET with the secret you chose for the metadata proxy; below, zyx is used as a custom shared secret.
The same shared secret must also be configured on the Nova side, and the two must match.

 

Check which settings were changed:

[root@linux-node1 ~]# grep -n '^[a-Z]'  /etc/neutron/metadata_agent.ini
22:nova_metadata_ip = 192.168.56.11
34:metadata_proxy_shared_secret = zyx
[root@linux-node1 ~]# 
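The shared secret works by request signing: the metadata agent computes an HMAC-SHA256 of the instance UUID using metadata_proxy_shared_secret and attaches it as the X-Instance-ID-Signature header; nova-api recomputes the signature with its own copy of the secret and rejects the request on mismatch. A minimal sketch of the signing step (the instance UUID below is made up):

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    # HMAC-SHA256 of the instance UUID, keyed with the shared secret,
    # hex-encoded -- the form carried in X-Instance-ID-Signature.
    return hmac.new(shared_secret.encode("utf-8"),
                    instance_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# nova-api recomputes the signature with its own copy of the secret;
# only identical secrets produce identical signatures.
instance_id = "5d137d17-7e0c-4ddd-8e64-b7a3b8e75b6a"  # made-up UUID
assert sign_instance_id("zyx", instance_id) == sign_instance_id("zyx", instance_id)
assert sign_instance_id("zyx", instance_id) != sign_instance_id("wrong", instance_id)
```

This is why the secret configured here and the one configured in nova.conf must be identical.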

  

14. Configure Neutron in Nova on the controller node

The settings below point Nova at Neutron's Keystone authentication endpoint; 9696 is the neutron-server port.
Edit /etc/nova/nova.conf and complete the following.
In the [neutron] section, configure access parameters, enable the metadata proxy, and set the secret:

[neutron]
url = http://192.168.56.11:9696
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Then uncomment the following and configure it as shown:
service_metadata_proxy = True
metadata_proxy_shared_secret = zyx

 

15. Create the plug-in symbolic link on the controller

The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If the link does not exist, create it with the following command:

[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@linux-node1 ~]# 

  

16. Sync the database on the controller
Run the database sync:
 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

The sync process is shown below; note that it uses both configuration files:
[root@linux-node1 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
No handlers could be found for logger "oslo_config.cfg"
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam
INFO  [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes
INFO  [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework
INFO  [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac
INFO  [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage
INFO  [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash
INFO  [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers
INFO  [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool
INFO  [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
INFO  [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
INFO  [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
INFO  [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, Add availability zone
INFO  [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, add is_default to subnetpool
INFO  [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, Add standard attribute table
INFO  [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, Add network availability zone
INFO  [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, Add router availability zone
INFO  [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, Add ip_version to AddressScope
INFO  [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664, Add tables and attributes to support external DNS integration
INFO  [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5, add_unique_ha_router_agent_port_bindings
INFO  [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f, Auto Allocated Topology - aka Get-Me-A-Network
INFO  [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821, add dynamic routing model data
INFO  [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4, add_bgp_dragent_model_data
INFO  [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81, rbac_qos_policy
INFO  [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6, Add resource_versions row to agent table
INFO  [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532, tag support
INFO  [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f, add_timestamp_to_base_resources
INFO  [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a, Add desc to standard attr table
INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule.
INFO  [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac
INFO  [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables
INFO  [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal
INFO  [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys
INFO  [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver
INFO  [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables
INFO  [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table
INFO  [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39, standardattributes migration
INFO  [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b, DVR sheduling refactoring
INFO  [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050, Drop NEC plugin tables
INFO  [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9, rbac_qos_policy
INFO  [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada, network_rbac_external
INFO  [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc, standard_desc
  OK
[root@linux-node1 ~]# 

  

17. Restart Nova and start the Neutron services on the controller

Restart the nova-api service, on the controller node:

[root@linux-node1 ~]# systemctl restart openstack-nova-api.service
[root@linux-node1 ~]# 

Start the following Neutron services and enable them to start at boot:

 systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service


 systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

The execution looks like this:

[root@linux-node1 ~]# systemctl enable neutron-server.service  neutron-linuxbridge-agent.service neutron-dhcp-agent.service  neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
[root@linux-node1 ~]#


[root@linux-node1 ~]# systemctl start neutron-server.service  neutron-linuxbridge-agent.service neutron-dhcp-agent.service   neutron-metadata-agent.service
[root@linux-node1 ~]# 

  

The official documentation also includes the following for networking option 2; since this deployment uses option 1, these commands are not needed and are not executed here:

For networking option 2, also enable the layer-3 service and set it to start at boot:

# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service

  

Check the listening ports; port 9696 is now present:
[root@linux-node1 ~]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      21243/python2       
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      21243/python2       
tcp        0      0 0.0.0.0:9191            0.0.0.0:*               LISTEN      1157/python2        
tcp        0      0 0.0.0.0:25672           0.0.0.0:*               LISTEN      1154/beam.smp       
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      1175/httpd          
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1613/mysqld         
tcp        0      0 0.0.0.0:11211           0.0.0.0:*               LISTEN      1162/memcached      
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      1158/python2        
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1175/httpd          
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1757/dnsmasq        
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1159/sshd           
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      1154/beam.smp       
tcp        0      0 0.0.0.0:35357           0.0.0.0:*               LISTEN      1175/httpd          
tcp        0      0 0.0.0.0:9696            0.0.0.0:*               LISTEN      21324/python2       
tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      13967/python2       
tcp6       0      0 :::5672                 :::*                    LISTEN      1154/beam.smp       
tcp6       0      0 :::22                   :::*                    LISTEN      1159/sshd           
[root@linux-node1 ~]# 

  

18. Create the service entity and register endpoints on the controller

Create the service and register its endpoints in Keystone.

Create the neutron service entity:

[root@linux-node1 ~]# source admin-openstack.sh 
[root@linux-node1 ~]# openstack service create --name neutron  --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | bc2a5e5fbfed4d6f8d36c972438bd6d8 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@linux-node1 ~]# 

  

Create the network service API endpoints.

Create the public endpoint:
[root@linux-node1 ~]# openstack endpoint create --region RegionOne network public http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | fed066647ffa40d1879bf438215cf0c2 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | bc2a5e5fbfed4d6f8d36c972438bd6d8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# 

 

Create the internal endpoint:

[root@linux-node1 ~]# openstack endpoint create --region RegionOne  network internal http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1ee57aced55540929045be8bef12617b |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | bc2a5e5fbfed4d6f8d36c972438bd6d8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# 

  

Create the admin endpoint:

[root@linux-node1 ~]# openstack endpoint create --region RegionOne  network admin http://192.168.56.11:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 37686201b1ec4dbd894ab4229f7aa202 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | bc2a5e5fbfed4d6f8d36c972438bd6d8 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.56.11:9696        |
+--------------+----------------------------------+
[root@linux-node1 ~]# 

  

Verify: seeing the three agents below, each with a smiley (:-)) in the alive column, means everything is working:
[root@linux-node1 ~]# neutron agent-list
+-----------------------+--------------------+----------------------+-------------------+-------+----------------+------------------------+
| id                    | agent_type         | host                 | availability_zone | alive | admin_state_up | binary                 |
+-----------------------+--------------------+----------------------+-------------------+-------+----------------+------------------------+
| 6da34bbb-edc8-40e5-a8 | DHCP agent         | linux-node1.nmap.com | nova              | :-)   | True           | neutron-dhcp-agent     |
| 85-61c1d80937b7       |                    |                      |                   |       |                |                        |
| 92d765f2-3519-458f-93 | Linux bridge agent | linux-node1.nmap.com |                   | :-)   | True           | neutron-linuxbridge-   |
| 82-148287317e16       |                    |                      |                   |       |                | agent                  |
| bede206f-aabf-4265    | Metadata agent     | linux-node1.nmap.com |                   | :-)   | True           | neutron-metadata-agent |
| -8e4f-bb1ad53af906    |                    |                      |                   |       |                |                        |
+-----------------------+--------------------+----------------------+-------------------+-------+----------------+------------------------+
[root@linux-node1 ~]# 

  

 

 

Installing and Configuring Neutron on the Compute Node
In early versions, nova-compute connected to the database directly, which meant that if any compute node was compromised, the entire database was at risk. nova-conductor was later introduced as an intermediary for that database access.

1. Install packages

[root@linux-node2 ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: mirrors.tuna.tsinghua.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Package 1:openstack-neutron-linuxbridge-8.3.0-1.el7.noarch already installed and latest version
Package ebtables-2.0.10-15.el7.x86_64 already installed and latest version
Package ipset-6.19-6.el7.x86_64 already installed and latest version
Nothing to do
[root@linux-node2 ~]# 

 

Two configuration files must be changed on the compute node: the common components and the networking option.

Common components
The common component configuration covers the authentication mechanism, message queue, and plug-in:
/etc/neutron/neutron.conf

Networking option
Configure the Linux bridge agent:
/etc/neutron/plugins/ml2/linuxbridge_agent.ini

For reference, see the documentation:
https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-compute-install-option1.html

Because the compute node's Neutron configuration is similar to the controller's, start from the controller's file and adjust it:

[root@linux-node1 ~]# rsync -avz /etc/neutron/neutron.conf root@192.168.56.12:/etc/neutron/
root@192.168.56.12's password: 
sending incremental file list
neutron.conf

sent 86 bytes  received 487 bytes  163.71 bytes/sec
total size is 53140  speedup is 92.74
[root@linux-node1 ~]# 

  

2. Edit the configuration on the compute node

In the copied neutron.conf, comment out the [database] connection line, remove the settings under the [nova] section, and comment out the two notify_nova_on_port_* lines; the compute node does not access the database or notify Nova directly.

3. Check the resulting configuration

[root@linux-node2 ~]# vim /etc/neutron/neutron.conf 
[root@linux-node2 ~]# grep -n '^[a-Z]' /etc/neutron/neutron.conf
27:auth_strategy = keystone
511:rpc_backend = rabbit
762:auth_uri = http://192.168.56.11:5000
763:auth_url = http://192.168.56.11:35357
764:memcached_servers = 192.168.56.11:11211
765:auth_type = password
766:project_domain_name = default
767:user_domain_name = default
768:project_name = service
769:username = neutron
770:password = neutron
1051:lock_path = /var/lib/neutron/tmp
1202:rabbit_host = 192.168.56.11
1208:rabbit_port = 5672
1220:rabbit_userid = openstack
1224:rabbit_password = openstack
[root@linux-node2 ~]# 

  

4. Edit the Nova main configuration on the compute node

Edit /etc/nova/nova.conf and complete the following.
In the [neutron] section, configure access parameters:

[neutron]
...
url = http://192.168.56.11:9696
auth_url = http://192.168.56.11:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
These settings were also made on the controller node, so they can be copied from the controller's file into the compute node's file:
[root@linux-node2 ~]# vim /etc/nova/nova.conf
[root@linux-node2 ~]#

 

 

5. Configure the Linux bridge agent on the compute node

(1) The Linux bridge agent builds layer-2 virtual networking for instances and handles security group rules.
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini and complete the following.
In the [linux_bridge] section, map the public virtual network to the public physical network interface:

[linux_bridge]
physical_interface_mappings = public:PROVIDER_INTERFACE_NAME

Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical network interface (eth0 here).

 

(2) In the [vxlan] section, disable VXLAN overlay networking:

[vxlan]
enable_vxlan = False

  

(3) In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
 
 
Since the three settings above are identical to the controller's, simply copy the controller's file over to replace it:
[root@linux-node1 ~]# rsync -avz /etc/neutron/plugins/ml2/linuxbridge_agent.ini root@192.168.56.12:/etc/neutron/plugins/ml2/
root@192.168.56.12's password: 
sending incremental file list

sent 56 bytes  received 12 bytes  27.20 bytes/sec
total size is 7924  speedup is 116.53
[root@linux-node1 ~]# 

  

6. Check the linuxbridge_agent configuration file on the compute node
[root@linux-node2 ~]# grep -n '^[a-Z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
138:physical_interface_mappings = public:eth0
151:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
156:enable_security_group = true
171:enable_vxlan = False
[root@linux-node2 ~]# 

  

7. Restart Nova and start the Neutron agent

Because the Nova main configuration file changed, the Nova service must be restarted.

Also start the Neutron Linux bridge agent and enable it at boot:

[root@linux-node2 ~]# systemctl restart openstack-nova-compute.service
[root@linux-node2 ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@linux-node2 ~]# systemctl start neutron-linuxbridge-agent.service
[root@linux-node2 ~]# 

 

8. Verify from the controller node

A Linux bridge agent for the compute node now appears in the list:

[root@linux-node1 ~]# source admin-openstack.sh 
[root@linux-node1 ~]# neutron agent-list
+-----------------------+--------------------+----------------------+-------------------+-------+----------------+------------------------+
| id                    | agent_type         | host                 | availability_zone | alive | admin_state_up | binary                 |
+-----------------------+--------------------+----------------------+-------------------+-------+----------------+------------------------+
| 1451fc85-d7db-4034-8f | Linux bridge agent | linux-node2.nmap.com |                   | :-)   | True           | neutron-linuxbridge-   |
| c5-38375b442826       |                    |                      |                   |       |                | agent                  |
| 6da34bbb-edc8-40e5-a8 | DHCP agent         | linux-node1.nmap.com | nova              | :-)   | True           | neutron-dhcp-agent     |
| 85-61c1d80937b7       |                    |                      |                   |       |                |                        |
| 92d765f2-3519-458f-93 | Linux bridge agent | linux-node1.nmap.com |                   | :-)   | True           | neutron-linuxbridge-   |
| 82-148287317e16       |                    |                      |                   |       |                | agent                  |
| bede206f-aabf-4265    | Metadata agent     | linux-node1.nmap.com |                   | :-)   | True           | neutron-metadata-agent |
| -8e4f-bb1ad53af906    |                    |                      |                   |       |                |                        |
+-----------------------+--------------------+----------------------+-------------------+-------+----------------+------------------------+
[root@linux-node1 ~]# 

  

The mapping below can be understood as giving the NIC an alias to distinguish its purpose.
The physical NIC name must match the configuration file (eth0 here); otherwise, adjust the configuration to the actual NIC name:

[root@linux-node2 ~]# grep physical_interface_mappings /etc/neutron/plugins/ml2/linuxbridge_agent.ini 
physical_interface_mappings = public:eth0
[root@linux-node2 ~]# 
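The option value is a comma-separated list of label:interface pairs: the label (public here) is what flat_networks and network definitions refer to, and the interface is the real NIC. A small illustrative parser for the value's format (this is not Neutron's own code, just a sketch of how the value is interpreted):

```python
def parse_interface_mappings(value: str) -> dict:
    # Split "label:interface,label:interface" into a dict, mirroring
    # how the Linux bridge agent reads physical_interface_mappings.
    mappings = {}
    for pair in value.split(","):
        label, _, interface = pair.strip().partition(":")
        mappings[label] = interface
    return mappings

print(parse_interface_mappings("public:eth0"))  # {'public': 'eth0'}
```

With this reading, flat_networks = public and physical_interface_mappings = public:eth0 agree on the same label, which is why the two files must use consistent names.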

  

 

