Deploying Multi-Node OpenStack Pike with kolla-ansible


1. Preparation:

OS: CentOS 7.5 on all nodes

The lab environment here is a single laptop running VMware Workstation.

It is recommended to store each VM's virtual disk as a single file: when uploading and using Windows images, an oversized disk file can easily cause the VM to hang and the operation to fail.

Architecture:

3 control nodes, 2 network nodes, 2 compute nodes, 1 storage node, 1 monitoring node, and 1 deploy node. Each node has 2 cores, 4 GB RAM, and one 100 GB disk; the storage node has an additional 600 GB disk.

 

Network:

Each machine has three NICs:

  NIC 1: NAT mode, used for downloading packages; configure an IP so the node can reach the Internet.

  NIC 2: host-only mode, used for the API network and the VM (tenant) network. VMnet1 (host-only) is chosen so the laptop can conveniently reach the Horizon UI; an IP must be configured.

  NIC 3: NAT mode, used as the external network so instances can reach outside networks; no IP needs to be configured on it.

 

control01
    ens33: 192.168.163.21/24  gw: 192.168.163.2
    ens37: 192.168.41.21/24
    ens38

control02
    ens33: 192.168.163.22/24  gw: 192.168.163.2
    ens37: 192.168.41.22/24
    ens38

control03
    ens33: 192.168.163.30/24  gw: 192.168.163.2
    ens37: 192.168.41.30/24
    ens38

network01
    ens33: 192.168.163.23/24  gw: 192.168.163.2
    ens37: 192.168.41.23/24
    ens38

network02
    ens33: 192.168.163.27/24  gw: 192.168.163.2
    ens37: 192.168.41.27/24
    ens38

compute01
    ens33: 192.168.163.24/24  gw: 192.168.163.2
    ens37: 192.168.41.24/24
    ens38

compute02
    ens33: 192.168.163.28/24  gw: 192.168.163.2
    ens37: 192.168.41.28/24
    ens38

monitoring01
    ens33: 192.168.163.26/24  gw: 192.168.163.2
    ens37: 192.168.41.26/24
    ens38

storage01
    ens33: 192.168.163.25/24  gw: 192.168.163.2
    ens37: 192.168.41.25/24
    ens38

deploy
    ens33: 192.168.163.29/24  gw: 192.168.163.2
    ens37: 192.168.41.29/24
    ens38

 

Add hostname bindings to /etc/hosts on every machine:

192.168.41.21 control01
192.168.41.22 control02
192.168.41.30 control03
192.168.41.23 network01
192.168.41.27 network02
192.168.41.24 compute01
192.168.41.28 compute02
192.168.41.25 storage01
192.168.41.26 monitoring01
192.168.41.29 deploy
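
To apply these on each node, a heredoc appended to /etc/hosts works; a minimal sketch, assuming the entries are not already present:

cat >> /etc/hosts << 'EOF'
192.168.41.21 control01
192.168.41.22 control02
192.168.41.30 control03
192.168.41.23 network01
192.168.41.27 network02
192.168.41.24 compute01
192.168.41.28 compute02
192.168.41.25 storage01
192.168.41.26 monitoring01
192.168.41.29 deploy
EOF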



Storage node:

To enable the Cinder storage service, first add a new disk, then create the PV and VG:
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb     # the VG is named cinder-volumes; it must match the VG name in the kolla configuration file
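
To confirm the PV and VG were created correctly, check with the standard LVM reporting tools:

pvs /dev/sdb          # should show the new physical volume
vgs cinder-volumes    # should show the volume group with roughly 600G free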


Only VM instances should access the block storage volumes, so set an LVM device filter to guard against scanning anomalies:


vi /etc/lvm/lvm.conf
In the devices section, edit the filter to list every disk present; if you do not want LVM to use the system disk, omit "a|sda|":

devices {
    ...
    filter = [ "a|sda|", "a|sdb|", "r|.*|" ]
}

Restart the LVM service:
systemctl restart lvm2-lvmetad.service

 

All nodes:

Configure a domestic (China) PyPI mirror:

mkdir ~/.pip

cat << EOF > ~/.pip/pip.conf

[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple/
[install]
trusted-host=pypi.tuna.tsinghua.edu.cn

EOF

 

Install pip:

yum -y install epel-release

yum -y install python-pip

If pip install runs into problems, try pinning setuptools:

pip install setuptools==33.1.1

 

2. Install Docker on all nodes

Be sure to enable the EPEL repo first:
yum -y install python-devel libffi-devel gcc openssl-devel git python-pip qemu-kvm qemu-img

Install Docker:
1) Download the RPM packages.
2) Install Docker 1.12.6 (this version is considered relatively stable):
yum localinstall -y docker-engine-1.12.6-1.el7.centos.x86_64.rpm docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm

Alternatively, install by following the official documentation:

https://docs.docker.com/engine/installation/linux/centos/

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum install docker-ce    # note: the docker-ce repo starts at 17.03; for 1.12.6 use the docker-engine RPMs above

 

Configure Docker shared mounts:
mkdir /etc/systemd/system/docker.service.d 
tee /etc/systemd/system/docker.service.d/kolla.conf << 'EOF' 
[Service] 
MountFlags=shared 
EOF

Configure access to the private Docker registry.

The public registry is https://hub.docker.com/u/kolla/, but downloads from it are slow.
Edit /usr/lib/systemd/system/docker.service and change the ExecStart line:

ExecStart=/usr/bin/dockerd --insecure-registry 192.168.41.29:4000
Restart the service:
systemctl daemon-reload && systemctl enable docker && systemctl restart docker
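
After the restart, it is worth confirming that the insecure-registry setting took effect:

docker info | grep -A1 "Insecure Registries"
# expected to list 192.168.41.29:4000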

 

3. On the deploy node: set up the private image registry, install kolla, and deploy OpenStack

Download the images provided by the kolla project.

The original official tarball URL, http://tarballs.openstack.org/kolla/images/centos-source-registry-pike.tar.gz, now redirects to https://hub.docker.com/u/kolla/; the archive is about 4 GB.

Baidu cloud mirror: https://pan.baidu.com/s/4oAdAJjk

mkdir -p /data/registry
tar -zxvf  centos-source-registry-pike-5.0.1.tar.gz -C /data/registry
This places the kolla Docker image files on the registry server.

 

Registry server:
Docker's registry listens on port 5000 by default, which conflicts with an OpenStack port (Keystone also uses 5000), so map it to 4000 instead:
docker run -d -v /data/registry:/var/lib/registry -p 4000:5000 --restart=always --name registry registry



Test that the registry works:

# curl -k localhost:4000/v2/_catalog
# curl -k localhost:4000/v2/lokolla/centos-source-fluentd/tags/list
{"name":"lokolla/centos-source-fluentd","tags":["5.0.1"]}

Ansible
Kolla's Mitaka release requires an Ansible version below 2.0; Newton and later releases support only 2.x and above.
yum install -y ansible

 

Set up passwordless SSH login:

ssh-keygen

ssh-copy-id control01

ssh-copy-id control02

ssh-copy-id control03

ssh-copy-id network01

...
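
With ten nodes, a loop saves typing; this prompts for each node's root password in turn:

for host in control01 control02 control03 network01 network02 compute01 compute02 storage01 monitoring01 deploy; do
  ssh-copy-id root@$host
done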

 

Install kolla

 

Upgrade pip:
pip install -U pip -i https://pypi.tuna.tsinghua.edu.cn/simple
Install the docker Python module: pip install docker

 

Install kolla-ansible:
cd /home
git clone -b stable/pike https://github.com/openstack/kolla-ansible.git
cd kolla-ansible
pip install . -i https://pypi.tuna.tsinghua.edu.cn/simple


Copy the relevant files:
cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla/ 
cp /usr/share/kolla-ansible/ansible/inventory/* /home/

If you are running VMs inside VMs (nested virtualization), set virt_type=qemu; the default is kvm:
mkdir -p /etc/kolla/config/nova 
cat << EOF > /etc/kolla/config/nova/nova-compute.conf 
[libvirt] 
virt_type=qemu 
cpu_mode = none 
EOF
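
To check whether the guest actually exposes hardware virtualization (if the count below is 0, qemu is the right choice):

egrep -c '(vmx|svm)' /proc/cpuinfo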

Generate the passwords file:
kolla-genpwd
Edit vim /etc/kolla/passwords.yml and set:
keystone_admin_password: admin123
This is the password the admin user uses to log in to the dashboard; change it as needed.

 

Edit the /etc/kolla/globals.yml file:

grep -Ev "^$|^[#;]" /etc/kolla/globals.yml
---
kolla_install_type: "source"
openstack_release: "5.0.1"
kolla_internal_vip_address: "192.168.41.20"
docker_registry: "192.168.41.29:4000"
docker_namespace: "lokolla"
network_interface: "ens37"
api_interface: "{{ network_interface }}"
neutron_external_interface: "ens38"
enable_cinder: "yes"
enable_cinder_backend_iscsi: "yes"
enable_cinder_backend_lvm: "yes"
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:

Because there are multiple control nodes, HAProxy must be enabled; enable_haproxy: "yes" and enable_heat: "yes" are already the defaults.

 

Edit the /home/multinode inventory file:

[control]
# These hostnames must be resolvable from your deployment host
control01
control02
control03

# The above can also be specified as follows:
#control[01:03] ansible_user=kolla

# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
network01
network02

[compute]
compute01
compute02

[monitoring]
monitoring01

# When compute nodes and control nodes use different interfaces,
# you can specify "api_interface" and other interfaces like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1

[storage]
storage01

[deployment]
localhost ansible_connection=local
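
Before deploying, verify that Ansible can reach every host in the inventory:

ansible -i /home/multinode all -m ping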

 

Gateway configuration:

The ML2 plugin settings are rendered from the template /home/kolla-ansible/ansible/roles/neutron/templates/ml2_conf.ini.j2.
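
Rather than editing the template in place, kolla-ansible also merges per-service overrides from /etc/kolla/config; a minimal sketch of an ml2_conf.ini override (the physnet1 name and VLAN range here are illustrative assumptions, adjust to your environment):

mkdir -p /etc/kolla/config/neutron
cat << EOF > /etc/kolla/config/neutron/ml2_conf.ini
[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
EOF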

 

Deploy OpenStack

Pull the images in advance:

kolla-ansible -i ./multinode pull
Precheck: kolla-ansible prechecks -i /home/multinode
Deploy:   kolla-ansible deploy -i /home/multinode

Verify the deployment:
kolla-ansible post-deploy
This creates the /etc/kolla/admin-openrc.sh file:

. /etc/kolla/admin-openrc.sh


Install the OpenStack client:
pip install --ignore-installed python-openstackclient
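
With the client installed, source the credentials and run a quick smoke test; the cloned repo also ships a demo bootstrap script (tools/init-runonce) that creates example networks, flavors, and a CirrOS image:

. /etc/kolla/admin-openrc.sh
openstack service list
openstack compute service list
# optional: create demo resources
/home/kolla-ansible/tools/init-runonce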

 

 

Summary:

In this multi-node OpenStack cluster:

the haproxy containers run on the network nodes;

the private network gateways, external network gateways, and routers also run on the network nodes.

Containers running on each node:

control01:

horizon
heat_engine
heat_api_cfn
heat_api
neutron_server
nova_novncproxy
nova_consoleauth
nova_conductor
nova_scheduler
nova_api
placement_api
cinder_scheduler
cinder_api
glance_registry
glance_api
keystone
rabbitmq
mariadb
cron
kolla_toolbox
fluentd
memcached

 

control02:

horizon
heat_engine
heat_api_cfn
heat_api
neutron_server
nova_novncproxy
nova_consoleauth
nova_conductor
nova_scheduler
nova_api
placement_api
cinder_scheduler
cinder_api
glance_registry
keystone
rabbitmq
mariadb
cron
kolla_toolbox
fluentd
memcached

 

network01:

neutron_metadata_agent
neutron_l3_agent
neutron_dhcp_agent
neutron_openvswitch_agent
openvswitch_vswitchd
openvswitch_db
keepalived
haproxy
cron
kolla_toolbox
fluentd

 

network02:

neutron_metadata_agent
neutron_l3_agent
neutron_dhcp_agent
neutron_openvswitch_agent
openvswitch_vswitchd
openvswitch_db
keepalived
haproxy
cron
kolla_toolbox
fluentd

 

compute01 and compute02:

neutron_openvswitch_agent
openvswitch_vswitchd
openvswitch_db
nova_compute
nova_libvirt
nova_ssh
iscsid
cron
kolla_toolbox
fluentd

 

storage01:

cinder_backup
cinder_volume
tgtd
iscsid
cron
kolla_toolbox
fluentd

 

Private network gateways, external network gateways, and routers:

First install the bridge inspection tool: yum install bridge-utils -y
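
The gateways and routers themselves live in network namespaces created by the neutron agents on the network nodes; to inspect them (the UUIDs below are placeholders):

ip netns list
# qrouter-<uuid> namespaces hold virtual routers, qdhcp-<uuid> hold DHCP servers
ip netns exec qrouter-<uuid> ip addr
brctl show    # list Linux bridges created on the host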

Common service directories on each node:

/etc/kolla — service configuration

/var/lib/docker/volumes/kolla_logs/_data — service logs

/var/lib/docker/volumes — directories mapped to service data volumes
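
For example, to follow a service log on a control node (path assumed from the kolla_logs layout above):

tail -f /var/lib/docker/volumes/kolla_logs/_data/nova/nova-api.log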

 

HAProxy configuration:

grep -v "^$" /etc/kolla/haproxy/haproxy.cfg 
global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  daemon
  log 192.168.41.23:5140 local1
  maxconn 4000
  stats socket /var/lib/kolla/haproxy/haproxy.sock
defaults
  log global
  mode http
  option redispatch
  option httplog
  option forwardfor
  retries 3
  timeout http-request 10s
  timeout queue 1m
  timeout connect 10s
  timeout client 1m
  timeout server 1m
  timeout check 10s
listen stats
   bind 192.168.41.23:1984
   mode http
   stats enable
   stats uri /
   stats refresh 15s
   stats realm Haproxy\ Stats
   stats auth openstack:oa8hvXNwWT3h33auKwn2RcMdt0Q0IWxljLgz97i1
listen rabbitmq_management
  bind 192.168.41.20:15672
  server control01 192.168.41.21:15672 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:15672 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:15672 check inter 2000 rise 2 fall 5
listen keystone_internal
  bind 192.168.41.20:5000
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  server control01 192.168.41.21:5000 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:5000 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:5000 check inter 2000 rise 2 fall 5
listen keystone_admin
  bind 192.168.41.20:35357
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  server control01 192.168.41.21:35357 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:35357 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:35357 check inter 2000 rise 2 fall 5
listen glance_registry
  bind 192.168.41.20:9191
  server control01 192.168.41.21:9191 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:9191 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:9191 check inter 2000 rise 2 fall 5
listen glance_api
  bind 192.168.41.20:9292
  timeout client 6h
  timeout server 6h
  server control01 192.168.41.21:9292 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:9292 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:9292 check inter 2000 rise 2 fall 5
listen nova_api
  bind 192.168.41.20:8774
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  server control01 192.168.41.21:8774 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:8774 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:8774 check inter 2000 rise 2 fall 5
listen nova_metadata
  bind 192.168.41.20:8775
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  server control01 192.168.41.21:8775 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:8775 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:8775 check inter 2000 rise 2 fall 5
listen placement_api
  bind 192.168.41.20:8780
  http-request del-header X-Forwarded-Proto
  server control01 192.168.41.21:8780 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:8780 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:8780 check inter 2000 rise 2 fall 5
listen nova_novncproxy
  bind 192.168.41.20:6080
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  server control01 192.168.41.21:6080 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:6080 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:6080 check inter 2000 rise 2 fall 5
listen neutron_server
  bind 192.168.41.20:9696
  server control01 192.168.41.21:9696 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:9696 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:9696 check inter 2000 rise 2 fall 5
listen horizon
  bind 192.168.41.20:80
  balance source
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  server control01 192.168.41.21:80 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:80 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:80 check inter 2000 rise 2 fall 5
listen cinder_api
  bind 192.168.41.20:8776
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  server control01 192.168.41.21:8776 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:8776 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:8776 check inter 2000 rise 2 fall 5
listen heat_api
  bind 192.168.41.20:8004
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  server control01 192.168.41.21:8004 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:8004 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:8004 check inter 2000 rise 2 fall 5
listen heat_api_cfn
  bind 192.168.41.20:8000
  http-request del-header X-Forwarded-Proto if { ssl_fc }
  server control01 192.168.41.21:8000 check inter 2000 rise 2 fall 5
  server control02 192.168.41.22:8000 check inter 2000 rise 2 fall 5
  server control03 192.168.41.30:8000 check inter 2000 rise 2 fall 5
# (NOTE): This defaults section deletes forwardfor as recommended by:
#         https://marc.info/?l=haproxy&m=141684110710132&w=1
defaults
  log global
  mode http
  option redispatch
  option httplog
  retries 3
  timeout http-request 10s
  timeout queue 1m
  timeout connect 10s
  timeout client 1m
  timeout server 1m
  timeout check 10s
listen mariadb
  mode tcp
  timeout client 3600s
  timeout server 3600s
  option tcplog
  option tcpka
  option mysql-check user haproxy post-41
  bind 192.168.41.20:3306
  server control01 192.168.41.21:3306 check inter 2000 rise 2 fall 5 
  server control02 192.168.41.22:3306 check inter 2000 rise 2 fall 5 backup
  server control03 192.168.41.30:3306 check inter 2000 rise 2 fall 5 backup

 

cat /etc/kolla/haproxy/config.json 
{
    "command": "/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/haproxy.cfg",
            "dest": "/etc/haproxy/haproxy.cfg",
            "owner": "root",
            "perm": "0600"
        },
        {
            "source": "/var/lib/kolla/config_files/haproxy.pem",
            "dest": "/etc/haproxy/haproxy.pem",
            "owner": "root",
            "perm": "0600",
            "optional": true
        }
    ]
}

 

[root@network01 ~]# cat /etc/kolla/keepalived/keepalived.conf 
vrrp_script check_alive {
    script "/check_alive.sh"
    interval 2
    fall 2
    rise 10
}

vrrp_instance kolla_internal_vip_51 {
    state BACKUP
    nopreempt
    interface ens37
    virtual_router_id 51
    priority 1
    advert_int 1
    virtual_ipaddress {
        192.168.41.20 dev ens37
    }
    authentication {
        auth_type PASS
        auth_pass jPYBCht9ne37XTvQdeUxgh5xAdpG9vQp0gsB0jTk
    }
    track_script {
        check_alive
    }
}

 

[root@network02 ~]# cat /etc/kolla/keepalived/keepalived.conf 
vrrp_script check_alive {
    script "/check_alive.sh"
    interval 2
    fall 2
    rise 10
}

vrrp_instance kolla_internal_vip_51 {
    state BACKUP
    nopreempt
    interface ens37
    virtual_router_id 51
    priority 2
    advert_int 1
    virtual_ipaddress {
        192.168.41.20 dev ens37
    }
    authentication {
        auth_type PASS
        auth_pass jPYBCht9ne37XTvQdeUxgh5xAdpG9vQp0gsB0jTk
    }
    track_script {
        check_alive
    }
}

 

cat /check_alive.sh 
#!/bin/bash

# This will return 0 when it successfully talks to the haproxy daemon via the socket
# Failures return 1

echo "show info" | socat unix-connect:/var/lib/kolla/haproxy/haproxy.sock stdio > /dev/null

 

