Changing an OpenStack server's IP addresses: redeploying the environment and setting access and security rules


1. Change the NIC IP addresses

eno3        10.0.101.23
eno4        10.0.102.23
enp68s0f0   10.0.0.23
enp68s0f1   10.10.0.23
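On CentOS/RHEL these addresses live in the per-interface ifcfg files; a minimal sketch of the change, assuming the standard file locations and that each file already contains an IPADDR line:

```shell
# Sketch, assuming CentOS-style ifcfg files; IPADDR values from the table above
sed -i 's/^IPADDR=.*/IPADDR=10.0.101.23/' /etc/sysconfig/network-scripts/ifcfg-eno3
sed -i 's/^IPADDR=.*/IPADDR=10.0.102.23/' /etc/sysconfig/network-scripts/ifcfg-eno4
sed -i 's/^IPADDR=.*/IPADDR=10.0.0.23/'   /etc/sysconfig/network-scripts/ifcfg-enp68s0f0
sed -i 's/^IPADDR=.*/IPADDR=10.10.0.23/'  /etc/sysconfig/network-scripts/ifcfg-enp68s0f1
systemctl restart network   # re-read the ifcfg files
ip addr show                # verify the new addresses took effect
```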

 

2. Start Ceph storage

# Start the monitors
#controller1
ceph-mon --id=mon1

#controller2
ceph-mon --id=mon2

#controller3
ceph-mon --id=mon3

# Start the OSDs

#controller1
ceph-osd --id=0
ceph-osd --id=1
ceph-osd --id=2

#controller2
ceph-osd --id=3
ceph-osd --id=4
ceph-osd --id=5

#controller3
ceph-osd --id=6
ceph-osd --id=7
ceph-osd --id=8
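Once the daemons are back, it is worth confirming the cluster has recovered before moving on; these are standard ceph CLI checks:

```shell
ceph -s           # overall status; wait for HEALTH_OK (or at least all PGs active+clean)
ceph osd tree     # all nine OSDs (0-8) should show "up"
ceph quorum_status --format json-pretty   # mon1, mon2, mon3 should all be in quorum
```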

 

3. Start the MariaDB (Galera) cluster

[root@controller1 haproxy]# galera_new_cluster
Job for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
[root@controller1 haproxy]# tail /var/log/mariadb/mariadb.log
2018-03-21 12:16:18 140168333977920 [Note] WSREP: GCache history reset: f84f94a1-2c38-11e8-8ede-96f87262fb85:0 -> f84f94a1-2c38-11e8-8ede-96f87262fb85:-1
2018-03-21 12:16:18 140168333977920 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
2018-03-21 12:16:18 140168333977920 [Note] WSREP: wsrep_sst_grab()
2018-03-21 12:16:18 140168333977920 [Note] WSREP: Start replication
2018-03-21 12:16:18 140168333977920 [Note] WSREP: 'wsrep-new-cluster' option used, bootstrapping the cluster
2018-03-21 12:16:18 140168333977920 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
2018-03-21 12:16:18 140168333977920 [ERROR] WSREP: It may not be safe to bootstrap the cluster from this node. It was not the last one to leave the cluster and may not contain all the updates. To force cluster bootstrap with this node, edit the grastate.dat file manually and set safe_to_bootstrap to 1 .
2018-03-21 12:16:18 140168333977920 [ERROR] WSREP: wsrep::connect(gcomm://controller1,controller2,controller3) failed: 7
2018-03-21 12:16:18 140168333977920 [ERROR] Aborting

# Fix
[root@controller1 haproxy]# cat /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid: d6aea58b-2cbe-11e8-9c9d-b72d8fdd0931
seqno: -1
safe_to_bootstrap: 0

In /var/lib/mysql/grastate.dat, change safe_to_bootstrap: 0 to safe_to_bootstrap: 1.
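The same edit can be scripted with sed. The sketch below rehearses it on a scratch copy; on the real node, stop mariadb first and apply the identical sed to /var/lib/mysql/grastate.dat:

```shell
# Rehearsal on a scratch copy (contents taken from the grastate.dat shown above)
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid: d6aea58b-2cbe-11e8-9c9d-b72d8fdd0931
seqno: -1
safe_to_bootstrap: 0
EOF
sed -i 's/^safe_to_bootstrap: 0$/safe_to_bootstrap: 1/' /tmp/grastate.dat
grep safe_to_bootstrap /tmp/grastate.dat   # → safe_to_bootstrap: 1
```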

# Bootstrap the cluster again
[root@controller1 haproxy]# galera_new_cluster

 

2. Verify the configuration file

cat /var/lib/mysql/grastate.dat

3. Add firewall rules

iptables -A INPUT -p tcp -i feth1 --dport 3306 -j ACCEPT

iptables -A INPUT -p tcp -i feth1 --dport 4567 -j ACCEPT

iptables-save
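Note that `iptables-save` on its own only prints the ruleset to stdout. To make the rules survive a reboot on CentOS 7, write them to the file the iptables service loads (this assumes the iptables-services package is installed):

```shell
iptables-save > /etc/sysconfig/iptables   # persist the running ruleset
# equivalently, with the iptables-services package:
service iptables save
```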

 

4. Modify the httpd and nova configuration files

vi /etc/httpd/conf/httpd.conf

Listen 10.0.102.21:80  # change to the external IP on the eno4 NIC


vi /etc/nova/nova.conf

server_listen=10.0.102.21

# After editing nova.conf, restart the affected services:

[root@controller1 nova]# systemctl restart libvirtd openstack-nova-compute neutron-linuxbridge-agent httpd
[root@controller1 nova]# systemctl status libvirtd openstack-nova-compute neutron-linuxbridge-agent httpd
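After the restart it helps to confirm that nova.conf carries the new address and that the services re-registered; the openrc path below is an assumption:

```shell
grep -n 'server_listen' /etc/nova/nova.conf   # should show the new eno4 IP
source ~/admin-openrc                         # assumed admin credentials file
openstack compute service list                # nova-compute should report "up"
```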

 

5. Start HAProxy and Pacemaker

systemctl restart haproxy.service
systemctl restart pacemaker

crm_mon

 

6. Start the remaining services

systemctl restart httpd memcached xinetd rabbitmq-server
systemctl restart openstack-glance-api openstack-glance-registry
rabbitmqctl cluster_status
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service openstack-nova-spicehtml5proxy.service libvirtd
systemctl restart openstack-nova-compute
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl restart pacemaker
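A quick loop over the restarted units catches anything that silently failed to come back:

```shell
# Print the active-state of each key unit; anything not "active" needs attention
for s in httpd memcached rabbitmq-server openstack-glance-api \
         openstack-nova-api openstack-nova-compute neutron-server pacemaker; do
    printf '%-28s %s\n' "$s" "$(systemctl is-active "$s")"
done
```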

 

7. Change the URL used to reach the dashboard

#controller1,controller2,controller3

#controller1
[root@controller1 ~]# vi  /etc/httpd/conf/httpd.conf
Listen 10.10.0.21:80
Listen 192.168.141.21:80   # add the external IP of controller1's third NIC

#controller2
[root@controller2 ~]# vi  /etc/httpd/conf/httpd.conf
Listen 10.10.0.22:80
Listen 192.168.141.22:80   # add the external IP of controller2's third NIC

#controller3
[root@controller3 ~]# vi  /etc/httpd/conf/httpd.conf
Listen 10.10.0.23:80
Listen 192.168.141.23:80   # add the external IP of controller3's third NIC

# Restart the httpd service

[root@controller2 ~]# systemctl stop haproxy
[root@controller2 ~]# systemctl restart httpd
[root@controller2 ~]# systemctl restart pacemaker
[root@controller2 ~]# crm_mon

Stack: corosync
Current DC: controller1 (version 1.1.16-1.fc25-94ff4df) - partition with quorum
Last updated: Thu May 31 16:01:31 2018
Last change: Thu May 31 08:36:31 2018 by root via cibadmin on controller1

3 nodes configured
5 resources configured

Online: [ controller1 controller2 controller3 ]

Active resources:

vip (ocf::heartbeat:IPaddr2): Started controller1
Clone Set: lb-haproxy-clone [lb-haproxy]
Started: [ controller1 ]
volume (systemd:openstack-cinder-volume): Started controller1

 

8. Set access and security rules

 

 

9. Change the default values

 

