Changing the OpenStack servers' IP addresses, redeploying the environment, and setting access and security rules


1. Change the NIC IP addresses

eno3       10.0.101.23
eno4       10.0.102.23
enp68s0f0  10.0.0.23
enp68s0f1  10.10.0.23
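On RHEL/CentOS 7 these addresses live in per-interface ifcfg files. A minimal sketch of the edit, demonstrated on a scratch copy (the file layout and the old address are assumptions, not taken from these notes):

```shell
# Sketch: update IPADDR for eno3. On the real host the file is
# /etc/sysconfig/network-scripts/ifcfg-eno3 (RHEL/CentOS 7 layout assumed)
# and applying the change needs `systemctl restart network` as root.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=eno3
BOOTPROTO=static
IPADDR=10.0.101.99
PREFIX=24
EOF
# Point the interface at its new address.
sed -i 's/^IPADDR=.*/IPADDR=10.0.101.23/' "$cfg"
grep '^IPADDR=' "$cfg"   # IPADDR=10.0.101.23
```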

 

2. Start the Ceph storage cluster

# Start the mon daemons
#controller1
ceph-mon --id=mon1

#controller2
ceph-mon --id=mon2

#controller3
ceph-mon --id=mon3

# Start the osd daemons

#controller1
ceph-osd --id=0
ceph-osd --id=1
ceph-osd --id=2

#controller2
ceph-osd --id=3
ceph-osd --id=4
ceph-osd --id=5

#controller3
ceph-osd --id=6
ceph-osd --id=7
ceph-osd --id=8
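The per-node starts above follow a simple layout (three consecutive OSD ids per controller), so they can be driven by one loop. The commands are echoed here so the sketch is safe to run anywhere; drop the `echo` on the actual node:

```shell
# OSD ids per node, taken from the list above.
node=controller1
case "$node" in
  controller1) osds="0 1 2" ;;
  controller2) osds="3 4 5" ;;
  controller3) osds="6 7 8" ;;
esac
for id in $osds; do
  echo ceph-osd --id=$id   # drop `echo` to actually start the daemon (needs root)
done
```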

 

3. Start the MariaDB (Galera) cluster

[root@controller1 haproxy]# galera_new_cluster
Job for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
[root@controller1 haproxy]# tail /var/log/mariadb/mariadb.log
2018-03-21 12:16:18 140168333977920 [Note] WSREP: GCache history reset: f84f94a1-2c38-11e8-8ede-96f87262fb85:0 -> f84f94a1-2c38-11e8-8ede-96f87262fb85:-1
2018-03-21 12:16:18 140168333977920 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2018-03-21 12:16:18 140168333977920 [Note] WSREP: wsrep_sst_grab()
2018-03-21 12:16:18 140168333977920 [Note] WSREP: Start replication
2018-03-21 12:16:18 140168333977920 [Note] WSREP: 'wsrep-new-cluster' option used, bootstrapping the cluster
2018-03-21 12:16:18 140168333977920 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2018-03-21 12:16:18 140168333977920 [ERROR] WSREP: It may not be safe to bootstrap the cluster from this node. It was not the last one to leave the cluster and may not contain all the updates. To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1 .
2018-03-21 12:16:18 140168333977920 [ERROR] WSREP: wsrep::connect(gcomm://controller1,controller2,controller3) failed: 7
2018-03-21 12:16:18 140168333977920 [ERROR] Aborting

# Fix
[root@controller1 haproxy]# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: d6aea58b-2cbe-11e8-9c9d-b72d8fdd0931
seqno: -1
safe_to_bootstrap: 0

Change safe_to_bootstrap: 0 to safe_to_bootstrap: 1

# Then bootstrap the cluster again
[root@controller1 haproxy]# galera_new_cluster
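The manual grastate.dat edit above can also be scripted. A sketch against a scratch copy (on the node the real file is /var/lib/mysql/grastate.dat, owned by the mysql user); note the Galera warning still applies: only do this on the node with the most recent data:

```shell
state=$(mktemp)
cat > "$state" <<'EOF'
# GALERA saved state
version: 2.1
uuid: d6aea58b-2cbe-11e8-9c9d-b72d8fdd0931
seqno: -1
safe_to_bootstrap: 0
EOF
# Mark this node as safe to bootstrap the cluster from.
sed -i 's/^safe_to_bootstrap: 0/safe_to_bootstrap: 1/' "$state"
grep '^safe_to_bootstrap' "$state"   # safe_to_bootstrap: 1
```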

 

2. Check the configuration file

cat /var/lib/mysql/grastate.dat

3. Add firewall rules

iptables -A INPUT -p tcp -i feth1 --dport 3306 -j ACCEPT   # MySQL client port; -i must name the NIC carrying cluster traffic

iptables -A INPUT -p tcp -i feth1 --dport 4567 -j ACCEPT   # Galera replication port

iptables-save > /etc/sysconfig/iptables   # persist on RHEL/CentOS; plain iptables-save only prints the rules to stdout

 

4. Update the httpd and nova configuration files

vi /etc/httpd/conf/httpd.conf

Listen 10.0.102.21:80  # change to the external IP address on the eno4 NIC


vi /etc/nova/nova.conf

[vnc]
server_listen=10.0.102.21

 # After changing the nova configuration file, the services must be restarted:

[root@controller1 nova]# systemctl restart libvirtd openstack-nova-compute neutron-linuxbridge-agent httpd
[root@controller1 nova]# systemctl status libvirtd openstack-nova-compute neutron-linuxbridge-agent httpd

 

5. Start HA: haproxy and pacemaker

systemctl restart haproxy.service
systemctl restart pacemaker

crm_mon

 

6. Start the remaining services

systemctl restart httpd memcached xinetd rabbitmq-server
systemctl restart openstack-glance-api openstack-glance-registry
rabbitmqctl cluster_status
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service openstack-nova-spicehtml5proxy.service libvirtd
systemctl restart openstack-nova-compute
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl restart pacemaker
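After the restarts, a quick sanity loop reports each unit's state. The service names below are a subset taken from the commands above; extend the list as needed:

```shell
# Report the state of each restarted unit ("unknown" if systemctl is
# unavailable or the unit does not exist on this node).
for svc in httpd memcached rabbitmq-server openstack-glance-api \
           openstack-nova-api neutron-server; do
  state=$(systemctl is-active "$svc" 2>/dev/null) || true
  echo "$svc: ${state:-unknown}"
done
```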

 

7. Change the URL used to reach the dashboard

# On controller1, controller2, and controller3

#controller1
[root@controller1 ~]# vi  /etc/httpd/conf/httpd.conf
Listen 10.10.0.21:80
Listen 192.168.141.21:80   # add the IP address of controller1's third (external) NIC

#controller2
[root@controller2 ~]# vi  /etc/httpd/conf/httpd.conf
Listen 10.10.0.22:80
Listen 192.168.141.22:80   # add the IP address of controller2's third (external) NIC

#controller3
[root@controller3 ~]# vi  /etc/httpd/conf/httpd.conf
Listen 10.10.0.23:80
Listen 192.168.141.23:80   # add the IP address of controller3's third (external) NIC

# Restart the httpd service

[root@controller2 ~]# systemctl stop haproxy
[root@controller2 ~]# systemctl restart httpd
[root@controller2 ~]# systemctl restart pacemaker
[root@controller2 ~]# crm_mon

Stack: corosync
Current DC: controller1 (version 1.1.16-1.fc25-94ff4df) - partition with quorum
Last updated: Thu May 31 16:01:31 2018
Last change: Thu May 31 08:36:31 2018 by root via cibadmin on controller1

3 nodes configured
5 resources configured

Online: [ controller1 controller2 controller3 ]

Active resources:

 vip    (ocf::heartbeat:IPaddr2):       Started controller1
 Clone Set: lb-haproxy-clone [lb-haproxy]
     Started: [ controller1 ]
 volume (systemd:openstack-cinder-volume):      Started controller1

 

8. Set access and security rules
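The notes leave this section empty. As an assumed starting point (not from the original), a typical minimal ruleset opens SSH and ICMP on the default security group via the openstack CLI. The commands are echoed so the sketch is safe to run without a cloud session; drop the `echo` (and source your credentials file) to apply them:

```shell
sg=default   # assumed security group name
for rule in "--protocol tcp --dst-port 22" "--protocol icmp"; do
  echo openstack security group rule create $rule "$sg"   # drop `echo` to apply
done
```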

 

 

9. Change the default values

 

