Three-node OpenStack Mitaka deployment


Preface:

  Cloud platforms such as Alibaba Cloud and Baidu Cloud are already very popular and very stable. In this post we build a cloud platform of our own based on OpenStack.

Note:

  This is a three-node deployment (there is no separate storage node; all storage is local to each node).

 

I: Networks

  1. Management network: 192.168.1.0/24

  2. Data network: 1.1.1.0/24

 

II: Operating system

  CentOS Linux release 7.3.1611 (Core) 

III: Kernel

  3.10.0-514.el7.x86_64

 

IV: Version information:

  OpenStack Mitaka

Note:

  When editing configuration files, never append a comment to the end of a configuration line; put comments on their own line above or below it.

  Add new options right under the relevant section header instead of modifying the existing lines in place.

 

Deployment diagram:

This post only covers setting up the base environment; the follow-up operations will be covered in later posts.

 

Prepare the environment:

  On all three hosts, add entries to the hosts file, set the hostname, disable firewalld and SELinux, and configure a static IP (a sketch of these steps follows below).

  Add two NICs to the compute node and two NICs to the network node.
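A minimal sketch of these preparation steps, using the host names and management IPs that appear later in this post (adjust to your own environment).

Append to /etc/hosts on every node:

192.168.1.142 controller01
192.168.1.141 compute01
192.168.1.140 network01

Then, on each node:

hostnamectl set-hostname controller01    # compute01 / network01 on the other nodes
systemctl stop firewalld && systemctl disable firewalld
setenforce 0    # and set SELINUX=disabled in /etc/selinux/config so it stays off after a reboot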

  

  Custom yum repository

    Run on all nodes:

    yum makecache && yum install vim net-tools -y && yum update -y

    Disable automatic yum updates:

    Edit /etc/yum/yum-cron.conf and change download_updates = yes to download_updates = no.
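    For example, the change can be applied with sed (assuming the yum-cron package is installed; skip this otherwise):

    sed -i 's/^download_updates = yes/download_updates = no/' /etc/yum/yum-cron.conf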

  

  Alternatively, use the public network yum repository:

  Install the repository package (on all nodes):

    yum install centos-release-openstack-mitaka -y

    Then rebuild the yum cache and update the system (the same yum makecache and yum update commands as above).

 

  Install the prerequisite packages (on all nodes):

    yum install python-openstackclient -y

    yum install openstack-selinux -y

 

  Deploy the time service (on all nodes):

    yum install chrony -y 

 

Controller node:

    Edit the configuration file:

    /etc/chrony.conf

    server ntp.staging.kycloud.lan iburst

    allow 192.168.1.0/24

 

Enable and start the service:

    systemctl enable chronyd.service

    systemctl start chronyd.service

 

Remaining nodes:

    Edit the configuration file:

    /etc/chrony.conf

    server 192.168.1.142 iburst

 

Enable and start the service:

    systemctl enable chronyd.service

    systemctl start chronyd.service

 

Verification:

    Run on every node:

    chronyc sources

    A * in the S column means that source is synchronized (this may take a few minutes; make sure the clocks are in sync before continuing).

 

Controller node setup

 

Install the database

  yum install mariadb mariadb-server python2-PyMySQL -y

Edit /etc/my.cnf.d/openstack.cnf:

[mysqld]
# controller node management IP
bind-address = 192.168.1.142
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Enable and start the service:
systemctl enable mariadb.service
systemctl start mariadb.service
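Optionally, secure the database after the first start as the upstream Mitaka guide does (the later mysql -u root -p steps assume a root password is set):

mysql_secure_installation
# set a root password (this post uses lhc001 everywhere) and answer the remaining prompts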

  

Install MongoDB

  yum install mongodb-server mongodb -y

 

 

Edit /etc/mongod.conf:

bind_ip = 192.168.1.142
smallfiles = true

 

Save and exit, then enable and start the service:
systemctl enable mongod.service
systemctl start mongod.service

 

Deploy the message queue
  yum install rabbitmq-server -y

 

Enable and start the service:

systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service

Create a RabbitMQ user and password (every password in this post is lhc001):

rabbitmqctl add_user openstack lhc001

Grant the new openstack user full permissions:

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

 

Install memcached to cache tokens
  yum install memcached python-memcached -y

Enable and start the service:
systemctl enable memcached.service
systemctl start memcached.service

 

Deploy the Keystone identity service

Database operations:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'lhc001';
# the grant for 'keystone'@'controller01' is needed for remote logins, otherwise later steps fail
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

Install Keystone
  yum install openstack-keystone httpd mod_wsgi -y

 

Edit /etc/keystone/keystone.conf:

[DEFAULT]
admin_token = lhc001

[database]
connection = mysql+pymysql://keystone:lhc001@controller01/keystone

[token]
provider = fernet

 

Populate the Keystone database:
  su -s /bin/sh -c "keystone-manage db_sync" keystone

 

Initialize the Fernet keys:
  keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

 

Configure the Apache service

Edit /etc/httpd/conf/httpd.conf:

ServerName controller01

 

Create a symbolic link from /usr/share/keystone/wsgi-keystone.conf into /etc/httpd/conf.d/:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/wsgi-keystone.conf

 

Restart httpd:

systemctl restart httpd

 

 

Create the service entity and API endpoints

Set the administrator environment variables used to authorize the steps that follow:

export OS_TOKEN=lhc001
export OS_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3

 

With the token authorization above, create the identity service entity (the catalog entry):

openstack service create --name keystone --description "OpenStack Identity" identity

 

Using the service entity created above, create its three API endpoints:

openstack endpoint create --region RegionOne identity public http://controller01:5000/v3
openstack endpoint create --region RegionOne identity internal http://controller01:5000/v3
openstack endpoint create --region RegionOne identity admin http://controller01:35357/v3
    

 

 

 

Create a domain, projects (tenants), users, and roles, and associate them

Create a common domain:

openstack domain create --description "Default Domain" default

Administrator (admin):

openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin

Regular user (demo):

openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user

Create a common service project for the services that follow:

openstack project create --domain default --description "Service Project" service

 

 

Verify the associations

Verification steps:

Edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
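To verify, request a token without relying on the admin token, for example (following the upstream guide; it prompts for the admin password created above):

unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller01:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue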

 

Create client environment scripts

Administrator: admin-openrc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=lhc001
export OS_AUTH_URL=http://controller01:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

Regular user: demo-openrc

export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=lhc001
export OS_AUTH_URL=http://controller01:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

 

 

Log out of the console, log back in, and verify with the admin credentials:
source admin-openrc
openstack token issue
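The demo user can be checked the same way, for example:

source demo-openrc
openstack token issue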

 

Deploy the image service (Glance)

Database operations:

mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'lhc001';
# same pattern as the keystone database operations above
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

Keystone operations:

As mentioned above, every subsequent service lives in the shared service project; for each service we create a user, grant it the admin role, and associate the two.

openstack user create --domain default --password-prompt glance

 

 

Grant the role:

openstack role add --project service --user glance admin

Create the service entity:

openstack service create --name glance --description "OpenStack Image" image

Create the endpoints:

openstack endpoint create --region RegionOne image public http://controller01:9292
openstack endpoint create --region RegionOne image internal http://controller01:9292
openstack endpoint create --region RegionOne image admin http://controller01:9292

 

 

Install the Glance package:
  yum install openstack-glance -y

This environment uses local storage throughout. Whatever the backend, the image directory must be created before Glance is started; otherwise Glance cannot find it, and although startup does not fail, later operations will. To avoid that, create it up front:

mkdir /var/lib/glance/images/
chown glance. /var/lib/glance/images/

 

Edit /etc/glance/glance-api.conf:

[database]
connection = mysql+pymysql://glance:lhc001@controller01/glance

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = lhc001

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

 

Edit /etc/glance/glance-registry.conf

The database configured here is what the registry uses to look up image metadata:

[database]
connection = mysql+pymysql://glance:lhc001@controller01/glance

In earlier releases glance-registry was configured the same way as glance-api. That is no longer necessary; the registry only needs this one database connection and nothing else.

 

 

Populate the database (the output is informational, not an error):

su -s /bin/sh -c "glance-manage db_sync" glance

 

Enable and start the services:

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service

 

 

Verification

Run openstack image list; the output should be empty. Then upload an image:

openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
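If cirros-0.3.4-x86_64-disk.img is not already in the current directory, it can be fetched first, for example:

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Afterwards, openstack image list should show the cirros image.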

 

Deploy the Compute service (Nova)

When deploying each component there is a corresponding user added to Keystone; all components share the same database service.

Nova database operations:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller01' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

Keystone operations:

Create the nova user (enter its password when prompted):
openstack user create --domain default --password-prompt nova

Associate the user, role, and project:
openstack role add --project service --user nova admin

Create the service entity:
openstack service create --name nova --description "OpenStack Compute" compute

Create the three endpoints:

openstack endpoint create --region RegionOne compute public http://controller01:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller01:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller01:8774/v2.1/%\(tenant_id\)s

 

 

Install the packages:

yum install openstack-nova-api openstack-nova-conductor \

  openstack-nova-console openstack-nova-novncproxy \

  openstack-nova-scheduler -y

Edit /etc/nova/nova.conf:

[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
# controller node management IP
my_ip = 192.168.1.142
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:lhc001@controller01/nova_api

[database]
connection = mysql+pymysql://nova:lhc001@controller01/nova

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = lhc001

[vnc]
# controller node management IP
vncserver_listen = 192.168.1.142
# controller node management IP
vncserver_proxyclient_address = 192.168.1.142

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

 

 

Populate the databases (the output is not an error):

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

 

 

Enable and start the services:

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Next we configure the compute node; the controller node is set aside for now.

 

Configure the compute node

Install the packages:
yum install openstack-nova-compute libvirt-daemon-lxc -y

Edit /etc/nova/nova.conf:

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
# compute node management-network IP
my_ip = 192.168.1.141
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[vnc]
enabled = True
vncserver_listen = 0.0.0.0
# compute node management-network IP
vncserver_proxyclient_address = 192.168.1.141
# controller node management-network IP
novncproxy_base_url = http://192.168.1.142:6080/vnc_auto.html

[glance]
api_servers = http://controller01:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Enable and start the services:

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

 

 

Verification:

Test from the controller node (a typical check is sketched below).
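For example, from the controller node:

source admin-openrc
openstack compute service list

The nova-compute service on compute01 should be listed with state up.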

 

Deploy the networking service (Neutron)

Database operations, again on the controller node:

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'lhc001';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller01' IDENTIFIED BY 'lhc001';
flush privileges;

 

 

Keystone operations for Neutron

Create the user and grant the role:

openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin

 

Create the service entity and its three endpoints:

openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller01:9696
openstack endpoint create --region RegionOne network internal http://controller01:9696
openstack endpoint create --region RegionOne network admin http://controller01:9696

 

Install the Neutron components:

yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y

 

Configure the server component
Edit /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = ml2
service_plugins = router
# enable overlapping IP addresses
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[database]
connection = mysql+pymysql://neutron:lhc001@controller01/neutron

[keystone_authtoken]
auth_url = http://controller01:5000
memcached_servers = controller01:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = lhc001

[nova]
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

Edit /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,vxlan,gre
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True

 

Edit /etc/nova/nova.conf (still on the controller):

[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = lhc001
service_metadata_proxy = True

 

 

Create the plugin symlink:
  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database (there will be output; it is not an error):

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
 --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

 

Restart the Nova API:

   systemctl restart openstack-nova-api.service

Enable and start neutron-server:
  systemctl enable neutron-server.service
  systemctl start neutron-server.service

 

Configure the network node

Edit /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

 

 

Apply the settings immediately:
  sysctl -p

 

Install the packages:
  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

Configure the components
Edit /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
# data-network IP of the network node
local_ip = 1.1.1.119
bridge_mappings = external:br-ex

[agent]
tunnel_types = gre,vxlan
#l2_population = True
prevent_arp_spoofing = True

 

Configure the L3 agent. Edit /etc/neutron/l3_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge = br-ex

 

Configure the DHCP agent. Edit /etc/neutron/dhcp_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

 

Configure the metadata agent. Edit /etc/neutron/metadata_agent.ini:

[DEFAULT]
nova_metadata_ip = controller01
metadata_proxy_shared_secret = lhc001

 

Start the services
On the network node:

systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service

Note: when you check the status of the services above, neutron-openvswitch-agent will report a failure. Don't worry: the data-network IP and the bridge referenced in the configuration have not been created yet, so its log complains that they cannot be found. They are created below.

 

Create the data-network IP on the network node:

[root@network01 network-scripts]# cat ifcfg-ens37
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=1.1.1.119
NETMASK=255.255.255.0
NAME="ens37"
DEVICE="ens37"
ONBOOT="yes"

 

Do the same on the compute node, which also needs a data-network IP:

cd /etc/sysconfig/network-scripts
cp ifcfg-ens33 ifcfg-ens37

[root@compute01 network-scripts]# cat ifcfg-ens37
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=1.1.1.117
NETMASK=255.255.255.0
NAME="ens37"
DEVICE="ens37"
ONBOOT="yes"

 

Restart the network service on both nodes:

  systemctl restart network

Ping each node from the other over the data network to confirm connectivity (for example, as below).
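From the compute node:

ping -c 3 1.1.1.119

and from the network node:

ping -c 3 1.1.1.117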

 

Back on the network node, create the bridge

Create the br-ex bridge and bind it to the external NIC. Three NICs were planned for this experiment, but only one IP on this machine can reach the outside, so only two networks are actually usable here (one data, one management); in a real environment three NICs should definitely be used.

cp ifcfg-ens33 ifcfg-br-ex

[root@network01 network-scripts]# cat ifcfg-br-ex
TYPE="Ethernet"
BOOTPROTO="static"
IPADDR=192.168.1.140
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
NAME="br-ex"
DEVICE="br-ex"
ONBOOT="yes"
# this must be added, otherwise the bridge will fail to come up
NM_CONTROLLED=no

 

Likewise, now that the bridge holds the address, the IP must be removed from ens33:

[root@network01 network-scripts]# cat ifcfg-ens33
TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="e82bc9e4-d28e-4363-aab4-89fda28da938"
DEVICE="ens33"
ONBOOT="yes"
# required on the physical NIC as well
NM_CONTROLLED=no
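If the br-ex OVS bridge does not exist after restarting the network service, it can be created explicitly and the external NIC attached to it (assuming openvswitch is already running), for example:

ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex ens33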

 

Restart the network service on the network node:

  systemctl restart network

Now check the openvswitch agent again (systemctl status neutron-openvswitch-agent.service); this time the service should be running.

 

That concludes the network node for now.

 

Configure the compute node (networking)

Edit /etc/sysctl.conf:

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply immediately:

sysctl -p

 

 

Install OVS and the other components on the compute node:
  yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch -y

 

Edit /etc/neutron/neutron.conf:

[DEFAULT]
rpc_backend = rabbit
#auth_strategy = keystone

[oslo_messaging_rabbit]
rabbit_host = controller01
rabbit_userid = openstack
rabbit_password = lhc001

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

 

 

Edit /etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
# data-network IP of the compute node
local_ip = 1.1.1.117

[agent]
tunnel_types = gre,vxlan
l2_population = True
arp_responder = True
prevent_arp_spoofing = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

 

 

Edit /etc/nova/nova.conf:

[neutron]
url = http://controller01:9696
auth_url = http://controller01:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = lhc001

 

Start the agent and restart nova-compute:

systemctl enable neutron-openvswitch-agent.service
systemctl start neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
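To confirm that all agents registered, they can be listed from the controller, for example:

source admin-openrc
neutron agent-list

The Open vSwitch, L3, DHCP, and metadata agents should all be reported as alive.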

 

 

OK, the rest is much simpler: deploy the dashboard service on the controller node.

 

Controller node operations

Install the package:
  yum install openstack-dashboard -y

 

Configure /etc/openstack-dashboard/local_settings:

OPENSTACK_HOST = "controller01"
ALLOWED_HOSTS = ['*', ]
# SESSION_ENGINE is not present in the file by default; append it at the end.
# If logging in fails with the cache backend, switch to the file backend commented out below and the problem goes away.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
#SESSION_ENGINE = 'django.contrib.sessions.backends.file'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller01:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "data-processing": 1.1,
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "UTC"

 

Restart the services:
  systemctl enable httpd.service memcached.service
  systemctl restart httpd.service memcached.service

 

Test in a browser:

  http://192.168.1.142/dashboard/

 

 

