OpenStack + Ceph Hands-On Practice


1 Environment Information

Hostname      IP Address      Disks                    Role                    OS Version
openstack     10.166.43.99    system disk only         all-in-one OpenStack    CentOS 7.4
ceph-node01   10.166.43.100   3 (incl. system disk)    monitor, mgr, osd       CentOS 7.4
ceph-node02   10.166.43.101   3 (incl. system disk)    monitor, mgr, osd       CentOS 7.4
ceph-node03   10.166.43.102   3 (incl. system disk)    monitor, mgr, osd       CentOS 7.4

Notes:

  (1) All hosts in this lab need Internet access (e.g., to the Alibaba Cloud mirrors);

  (2) This setup must not be used in a production environment;

  (3) The lab runs entirely on virtual machines;

  (4) The OpenStack release is Train;

  (5) The Ceph release is Nautilus.

 

2 Initialize the Virtual Machines

An Ansible playbook was prepared in advance on another node that has Ansible installed, so these steps can be applied to all hosts in a single batch. The playbook mainly does the following:

  (1) Point the yum repositories at the Alibaba Cloud mirror

  (2) Add the EPEL repository

  (3) Disable the firewall and SELinux

  (4) Configure the DNS resolver

  (5) Install a few other optional utilities

The playbook content is as follows:

---
- hosts: all
  remote_user: root

  tasks:
   - name: make backup dir for CentOS's yum config
     file: name=/etc/yum.repos.d/bak state=directory
   - name: move config to bak
     shell: mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak/
   - name: config dns
     shell: echo "nameserver 114.114.114.114" > /etc/resolv.conf
   - name: disable firewalld
     service: name=firewalld state=stopped enabled=false
   - name: disable selinux
     shell: setenforce 0;sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
   - name: download ali repo
     shell: curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
   - name: generate cache
     shell: yum makecache
   - name: deploy vim
     yum: name=vim state=present
   - name: deploy net tools
     yum: name=net-tools.x86_64 state=present
   - name: deploy bash-completion
     yum: name=bash-completion state=present
   - name: deploy wget
     yum: name=wget state=present
   - name: deploy lrzsz
     yum: name=lrzsz state=present 
   - name: download epel
     shell: wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
   - name: install telnet
     yum: name=telnet state=present
   - name: install tree
     yum: name=tree state=present
   - name: install nmap
     yum: name=nmap state=present
   - name: install sysstat
     yum: name=sysstat state=present
   - name: install dos2unix
     yum: name=dos2unix state=present
   - name: install bind-utils
     yum: name=bind-utils state=present
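
For reference, a minimal sketch of how the playbook might be applied from the Ansible control node. The inventory path, group name, and playbook filename (init.yml) are assumptions rather than part of the original setup; the host list matches the table in section 1:

# assumed inventory, e.g. /etc/ansible/hosts
[lab]
10.166.43.99
10.166.43.100
10.166.43.101
10.166.43.102

# apply the playbook (hosts: all targets every host in the inventory)
ansible-playbook -i /etc/ansible/hosts init.yml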

3 Deploy Ceph

The cluster is deployed with the ceph-deploy tool.

3.1 Pre-deployment Preparation

Pick one Ceph node as the initial node (ceph-node01) and set up passwordless SSH login, because the ceph-deploy tool logs in to the other nodes remotely to perform its operations.

[root@ceph-node01 ~]# ssh-keygen
[root@ceph-node01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub ceph-node02
[root@ceph-node01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub ceph-node03

Configure /etc/hosts locally so that hostnames resolve to IP addresses.

[root@ceph-node01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.166.43.100 ceph-node01
10.166.43.101 ceph-node02
10.166.43.102 ceph-node03

 

Add a new yum repository on every Ceph node and rebuild the cache (yum makecache) after configuring it. Without this repository, ceph-deploy pulls packages from the default upstream site abroad, which may fail.

[root@ceph-node01 ~]# cat /etc/yum.repos.d/ceph.repo 
[noarch]
name=noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0

[x86_64]
name=x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
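
A sketch of pushing this repo file from ceph-node01 to the other two nodes and rebuilding the yum cache everywhere, relying on the passwordless SSH configured above (the loop itself is not part of the original procedure):

for node in ceph-node02 ceph-node03; do
  scp /etc/yum.repos.d/ceph.repo ${node}:/etc/yum.repos.d/
  ssh ${node} "yum makecache"
done
yum makecache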

Configure time synchronization; this lab uses the chronyd service.

ceph-node01 acts as the server; the other nodes act as clients.

Server-side configuration: find the allow line in /etc/chrony.conf, uncomment it, and change the CIDR to the network your servers are on.

[root@ceph-node01 ceph-deploy]# cat /etc/chrony.conf | grep allow 
allow 10.166.43.0/24

Client-side configuration: comment out the existing server entries and add the following:

[root@openstack ~]# cat /etc/chrony.conf | grep ^server 
server 10.166.43.100 iburst

Restart the chronyd service on each node and enable it at boot:

[root@openstack ~]# systemctl restart chronyd.service  && systemctl enable chronyd

Check the synchronization status:

[root@openstack ~]# chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 10.166.43.100                 3   6    17    24    -22us[  -95us] +/-   15ms

 

 

Note: chronyd conflicts with the NTP service. If you had been using NTP, stop the ntpd service and disable it at boot, since the two services cannot run at the same time.
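
If ntpd happens to be installed, a one-liner for that (a sketch, only needed when the ntp package is present):

systemctl stop ntpd && systemctl disable ntpd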

3.2 Install ceph-deploy

Install ceph-deploy on ceph-node01; ceph-deploy depends on python-setuptools.

[root@ceph-node01 ~]# yum install python-setuptools ceph-deploy -y 

Create a working directory; most of the subsequent operations must be run from within it.

[root@ceph-node01 ~]# mkdir ceph-deploy && cd ceph-deploy

Run ceph-deploy new for the first node; this must be executed inside the directory created in the previous step.

[root@ceph-node01 ceph-deploy]# ceph-deploy new ceph-node01

Help is available via the -h option. If your servers have two NICs, you can pass --cluster-network and --public-network here (see the sketch after the help output below). If --public-network is omitted, an error may show up later when adding monitors; that error can still be fixed afterwards.

[root@ceph-node01 ceph-deploy]# ceph-deploy new -h 
usage: ceph-deploy new [-h] [--no-ssh-copykey] [--fsid FSID]
                       [--cluster-network CLUSTER_NETWORK]
                       [--public-network PUBLIC_NETWORK]
                       MON [MON ...]

Start deploying a new cluster, and write a CLUSTER.conf and keyring for it.

positional arguments:
  MON                   initial monitor hostname, fqdn, or hostname:fqdn pair

optional arguments:
  -h, --help            show this help message and exit
  --no-ssh-copykey      do not attempt to copy SSH keys
  --fsid FSID           provide an alternate FSID for ceph.conf generation
  --cluster-network CLUSTER_NETWORK
                        specify the (internal) cluster network
  --public-network PUBLIC_NETWORK
                        specify the public network for a cluster
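
For example, on a dual-NIC host the command could look like the following. The public network matches this lab (10.166.43.0/24); the cluster network CIDR is purely an assumed example, since the lab machines have a single NIC:

[root@ceph-node01 ceph-deploy]# ceph-deploy new --public-network 10.166.43.0/24 --cluster-network 10.166.44.0/24 ceph-node01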

3.3 Manually Install the Ceph Packages

This needs to be run on every ceph-node host.

yum install ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds -y

Using ceph-deploy install $hostname directly may fail because of network issues, so manually installing the components above is recommended; a sketch of doing this from ceph-node01 follows.
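
A sketch of installing the packages on all three nodes from ceph-node01, again relying on the passwordless SSH set up earlier (the loop is illustrative, not part of the original procedure):

yum install -y ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds
for node in ceph-node02 ceph-node03; do
  ssh ${node} "yum install -y ceph ceph-mon ceph-mgr ceph-radosgw ceph-mds"
done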

3.4 Deploy the Monitor

Initialize the monitor node and gather the keys:

[root@ceph-node01 ceph-deploy]# ceph-deploy mon create-initial

Push the gathered key files to each node:

[root@ceph-node01 ceph-deploy]# ceph-deploy admin ceph-node01 ceph-node02 ceph-node03

Check the Ceph status; one mon has successfully joined the cluster.

[root@ceph-node01 ceph-deploy]# ceph -s 
  cluster:
    id:     aeb862ef-d415-40d9-a82e-dc923a49913f
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-node01 (age 81s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

3.5 Deploy mgr

Initialize the mgr node:

[root@ceph-node01 ceph-deploy]# ceph-deploy mgr create ceph-node01
[root@ceph-node01 ceph-deploy]# ceph -s 
  cluster:
    id:     aeb862ef-d415-40d9-a82e-dc923a49913f
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-node01 (age 10m)
    mgr: ceph-node01(active, since 2s)
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:  

3.6 Add OSD Nodes

The device names need to be adjusted for your own environment; check them with lsblk.

[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node01 --data /dev/vdb
[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node02 --data /dev/vdb
[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node03 --data /dev/vdb

[root@ceph-node01 ceph-deploy]# ceph -s 
  cluster:
    id:     aeb862ef-d415-40d9-a82e-dc923a49913f
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-node01 (age 15h)
    mgr: ceph-node01(active, since 15h)
    osd: 3 osds: 3 up (since 12s), 3 in (since 12s)
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:  

3.7 Expand the Monitors and Managers

Expand the monitors:

[root@ceph-node01 ceph-deploy]# ceph-deploy mon add ceph-node02

The following errors may appear:

[ceph-node02][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph-node02][WARNIN] ceph-node02 is not defined in `mon initial members`
[ceph-node02][WARNIN] monitor ceph-node02 does not exist in monmap
[ceph-node02][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
[ceph-node02][WARNIN] monitors may not be able to form quorum
[ceph-node02][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-node02.asok mon_status
[ceph-node02][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph-node02][WARNIN] monitor: mon.ceph-node02, might not be running yet

This happens because public_network was not defined when the first node was created with ceph-deploy new. Append the following to the end of ceph.conf, adjusting the CIDR to your own environment:

public_network=10.166.43.0/24

Push the modified configuration file to every node in the cluster:

[root@ceph-node01 ceph-deploy]# ceph-deploy --overwrite-conf config push ceph-node01 ceph-node02 ceph-node03

Re-run the monitor add commands:

[root@ceph-node01 ceph-deploy]# ceph-deploy mon add ceph-node02
[root@ceph-node01 ceph-deploy]# ceph-deploy mon add ceph-node03

The monitor status can be viewed with ceph quorum_status -f json-pretty, where -f sets the output format.

[root@ceph-node01 ceph-deploy]# ceph mon stat
e3: 3 mons at {ceph-node01=[v2:10.166.43.100:3300/0,v1:10.166.43.100:6789/0],ceph-node02=[v2:10.166.43.101:3300/0,v1:10.166.43.101:6789/0],ceph-node03=[v2:10.166.43.102:3300/0,v1:10.166.43.102:6789/0]}, election epoch 16, leader 0 ceph-node01, quorum 0,1,2 ceph-node01,ceph-node02,ceph-node03
[root@ceph-node01 ceph-deploy]# ceph mon dump 
dumped monmap epoch 3
epoch 3
fsid aeb862ef-d415-40d9-a82e-dc923a49913f
last_changed 2021-01-14 09:38:05.080218
created 2021-01-13 17:52:35.839767
min_mon_release 14 (nautilus)
0: [v2:10.166.43.100:3300/0,v1:10.166.43.100:6789/0] mon.ceph-node01
1: [v2:10.166.43.101:3300/0,v1:10.166.43.101:6789/0] mon.ceph-node02
2: [v2:10.166.43.102:3300/0,v1:10.166.43.102:6789/0] mon.ceph-node03
[root@ceph-node01 ceph-deploy]# ceph quorum_status 
{"election_epoch":16,"quorum":[0,1,2],"quorum_names":["ceph-node01","ceph-node02","ceph-node03"],"quorum_leader_name":"ceph-node01","quorum_age":95417,"monmap":{"epoch":3,"fsid":"aeb862ef-d415-40d9-a82e-dc923a49913f","modified":"2021-01-14 09:38:05.080218","created":"2021-01-13 17:52:35.839767","min_mon_release":14,"min_mon_release_name":"nautilus","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus"],"optional":[]},"mons":[{"rank":0,"name":"ceph-node01","public_addrs":{"addrvec":[{"type":"v2","addr":"10.166.43.100:3300","nonce":0},{"type":"v1","addr":"10.166.43.100:6789","nonce":0}]},"addr":"10.166.43.100:6789/0","public_addr":"10.166.43.100:6789/0"},{"rank":1,"name":"ceph-node02","public_addrs":{"addrvec":[{"type":"v2","addr":"10.166.43.101:3300","nonce":0},{"type":"v1","addr":"10.166.43.101:6789","nonce":0}]},"addr":"10.166.43.101:6789/0","public_addr":"10.166.43.101:6789/0"},{"rank":2,"name":"ceph-node03","public_addrs":{"addrvec":[{"type":"v2","addr":"10.166.43.102:3300","nonce":0},{"type":"v1","addr":"10.166.43.102:6789","nonce":0}]},"addr":"10.166.43.102:6789/0","public_addr":"10.166.43.102:6789/0"}]}}

Expand the mgr nodes:

[root@ceph-node01 ceph-deploy]# ceph-deploy mgr create ceph-node02 ceph-node03

3.8 Add More OSDs

[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node01 --data /dev/vdc
[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node02 --data /dev/vdc
[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node03 --data /dev/vdc
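
At this point each node contributes two data disks, so ceph -s should report 6 OSDs up and in; ceph osd tree shows how they are placed per host (output omitted here):

[root@ceph-node01 ceph-deploy]# ceph -s
[root@ceph-node01 ceph-deploy]# ceph osd tree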

4 Deploy OpenStack

Since this is a test environment, the simplest tool, Packstack, is used for deployment; the OpenStack version installed is Train.

4.1 Install Packstack

The Alibaba Cloud repositories and the other basic settings were already configured during initialization (section 2).

Install the RDO repository:

yum update -y
reboot
yum install -y https://rdoproject.org/repos/rdo-release.rpm  
yum clean all && yum makecache fast  
yum install -y centos-release-openstack-train  
yum install -y openstack-packstack   

4.2 Generate the Answer File

packstack --gen-answer-file=file 

Edit the answer file and select which services to install. In this lab only the following are installed: nova, neutron, cinder, mariadb, glance, and horizon.

[root@openstack ~]# egrep -v "^#|^$" file 
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_SERVICE_WORKERS=%{::processorcount}
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_AODH_INSTALL=n
CONFIG_PANKO_INSTALL=n
CONFIG_SAHARA_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_MAGNUM_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=10.166.43.99
CONFIG_COMPUTE_HOSTS=10.166.43.99
CONFIG_NETWORK_HOSTS=10.166.43.99
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAMES=
CONFIG_STORAGE_HOST=10.166.43.99
CONFIG_SAHARA_HOST=10.166.43.99
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_SAT6_SERVER=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_SAT6_ORG=
CONFIG_RH_SAT6_KEY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SSL_CERT_SUBJECT_C=--
CONFIG_SSL_CERT_SUBJECT_ST=State
CONFIG_SSL_CERT_SUBJECT_L=City
CONFIG_SSL_CERT_SUBJECT_O=openstack
CONFIG_SSL_CERT_SUBJECT_OU=packstack
CONFIG_SSL_CERT_SUBJECT_CN=openstack
CONFIG_SSL_CERT_SUBJECT_MAIL=admin@openstack
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=10.166.43.99
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=10.166.43.99
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=2d57a8cd15da4f3b
CONFIG_KEYSTONE_DB_PW=c1a74b44bf52404e
CONFIG_KEYSTONE_FERNET_TOKEN_ROTATE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=46a3e26d9bce4e328fcb227460368f6a
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=71b37ae40f1b4c31
CONFIG_KEYSTONE_DEMO_PW=5521cf025cec48f2
CONFIG_KEYSTONE_API_VERSION=v3
CONFIG_KEYSTONE_TOKEN_FORMAT=FERNET
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://10.166.43.99
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=a369ddebc3d64cf5
CONFIG_GLANCE_KS_PW=d285bef1841c4c3b
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=cebe926473604b76
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=beab8539632b49be
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUME_NAME=cinder-volumes
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_CINDER_SOLIDFIRE_LOGIN=
CONFIG_CINDER_SOLIDFIRE_PASSWORD=
CONFIG_CINDER_SOLIDFIRE_HOSTNAME=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=7633b505df1547d6
CONFIG_NOVA_KS_PW=b330549051944483
CONFIG_NOVA_MANAGE_FLAVORS=y
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=ssh
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_PCI_ALIAS=
CONFIG_NOVA_PCI_PASSTHROUGH_WHITELIST=
CONFIG_NOVA_LIBVIRT_VIRT_TYPE=%{::default_hypervisor}
CONFIG_NEUTRON_KS_PW=62cdc7adfae94233
CONFIG_NEUTRON_DB_PW=2aa4f80cacda41c8
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=eaed8f08e9bf48c4
CONFIG_NEUTRON_METERING_AGENT_INSTALL=y
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=geneve,flat
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=geneve
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_ML2_SRIOV_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=
CONFIG_NEUTRON_OVS_EXTERNAL_PHYSNET=extnet
CONFIG_NEUTRON_OVS_TUNNEL_IF=
CONFIG_NEUTRON_OVS_TUNNEL_SUBNETS=
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_NEUTRON_OVN_BRIDGE_MAPPINGS=extnet:br-ex
CONFIG_NEUTRON_OVN_BRIDGE_IFACES=
CONFIG_NEUTRON_OVN_BRIDGES_COMPUTE=
CONFIG_NEUTRON_OVN_EXTERNAL_PHYSNET=extnet
CONFIG_NEUTRON_OVN_TUNNEL_IF=
CONFIG_NEUTRON_OVN_TUNNEL_SUBNETS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=a520e0307fe743b7b3a9fe77bebc3563
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=PW_PLACEHOLDER
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=0f1a7637df8d43cc
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=a98d3b3d64f3439d
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CFN_INSTALL=y
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.0/24
CONFIG_PROVISION_DEMO_ALLOCATION_POOLS=[]
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_PROPERTIES=
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_UEC_IMAGE_NAME=cirros-uec
CONFIG_PROVISION_UEC_IMAGE_KERNEL_URL=https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-kernel
CONFIG_PROVISION_UEC_IMAGE_RAMDISK_URL=https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-initramfs
CONFIG_PROVISION_UEC_IMAGE_DISK_URL=https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
CONFIG_TEMPEST_HOST=
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.0/24
CONFIG_PROVISION_TEMPEST_FLAVOR_NAME=m1.nano
CONFIG_PROVISION_TEMPEST_FLAVOR_DISK=1
CONFIG_PROVISION_TEMPEST_FLAVOR_RAM=128
CONFIG_PROVISION_TEMPEST_FLAVOR_VCPUS=1
CONFIG_PROVISION_TEMPEST_FLAVOR_ALT_NAME=m1.micro
CONFIG_PROVISION_TEMPEST_FLAVOR_ALT_DISK=1
CONFIG_PROVISION_TEMPEST_FLAVOR_ALT_RAM=128
CONFIG_PROVISION_TEMPEST_FLAVOR_ALT_VCPUS=1
CONFIG_RUN_TEMPEST=n
CONFIG_RUN_TEMPEST_TESTS=smoke
CONFIG_PROVISION_OVS_BRIDGE=y
CONFIG_GNOCCHI_DB_PW=PW_PLACEHOLDER
CONFIG_GNOCCHI_KS_PW=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=765959d893764756
CONFIG_CEILOMETER_KS_PW=PW_PLACEHOLDER
CONFIG_CEILOMETER_SERVICE_NAME=httpd
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_ENABLE_CEILOMETER_MIDDLEWARE=n
CONFIG_REDIS_HOST=10.166.43.99
CONFIG_REDIS_PORT=6379
CONFIG_AODH_KS_PW=PW_PLACEHOLDER
CONFIG_AODH_DB_PW=PW_PLACEHOLDER
CONFIG_PANKO_DB_PW=PW_PLACEHOLDER
CONFIG_PANKO_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_MAGNUM_DB_PW=PW_PLACEHOLDER
CONFIG_MAGNUM_KS_PW=PW_PLACEHOLDER

Change the network mechanism driver to openvswitch. The default is OVN, which does not support some services such as LBaaS.

[root@openstack ~]# egrep -v "^#|^$" file | grep openvs
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_L2_AGENT=openvswitch

4.3 Run the Installation

Depending on network conditions, the installation takes anywhere from 20 to 120 minutes.

packstack --answer-file=file
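
After the run completes, a quick sanity check (a sketch; Packstack normally writes the admin credentials to /root/keystonerc_admin on the controller):

source /root/keystonerc_admin
openstack service list
openstack compute service list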

5 Integrate OpenStack with Ceph

Reference: https://docs.ceph.com/en/latest/rbd/rbd-openstack/

Install the Ceph-related packages on the OpenStack node:

yum install python-rbd ceph-common -y

Log in to ceph-node01 and change into the ceph-deploy directory.

Create the storage pools for OpenStack; all of them back block storage services. The pg_num for each pool needs to be worked out; see https://www.cnblogs.com/dengchj/p/10003534.html#3-%E4%BF%AE%E6%94%B9pg%E5%92%8Cpgp for the calculation. (As a rough guide, with 6 OSDs and 3 replicas the cluster wants about 6 x 100 / 3 = 200 PGs in total; splitting that across four pools and rounding to a power of two gives 64 per pool.)

ceph osd pool create volumes 64 64 
ceph osd pool create images 64 64 
ceph osd pool create backups 64 64
ceph osd pool create ceph-pool 64 64


 rbd pool init volumes
 rbd pool init images
 rbd pool init backups
 rbd pool init ceph-pool

 

Copy ceph.conf to the OpenStack node:

ssh root@10.166.43.99 sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf

Create new Ceph users for Nova/Cinder and Glance:

[root@ceph-node01 ceph-deploy]# ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' mgr 'profile rbd pool=images'
[client.glance]
    key = AQDn7f9fdjhDCxAAIYF8/a0XcqLIimkNieLqdg==

[root@ceph-node01 ceph-deploy]# ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=ceph-pool, profile rbd-read-only pool=images' mgr 'profile rbd pool=volumes, profile rbd pool=ceph-pool'
[client.cinder]
    key = AQBj7v9fHt+1JhAAZ4OgwECUktD1Cu4RkdHXoQ==

[root@ceph-node01 ceph-deploy]# ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups' mgr 'profile rbd pool=backups'
[client.cinder-backup]
    key = AQCA7v9f1Uq7ChAArZ8XTxi+Uk+vdlLDxctKzA==

Copy the generated keyrings to the corresponding locations on the OpenStack node and adjust their ownership:

ceph auth get-or-create client.glance | ssh root@10.166.43.99 sudo tee /etc/ceph/ceph.client.glance.keyring
ssh root@10.166.43.99 sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring


ceph auth get-or-create client.cinder | ssh root@10.166.43.99 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh root@10.166.43.99 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring


ceph auth get-or-create client.cinder-backup | ssh root@10.166.43.99 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh root@10.166.43.99 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

If nova-compute is deployed on a separate node, the following operations are also required; otherwise they can be skipped.

ceph auth get-or-create client.cinder | ssh root@10.166.43.99 sudo tee /etc/ceph/ceph.client.cinder.keyring

ceph auth get-key client.cinder | ssh root@10.166.43.99 tee client.cinder.key

Run the following on the node where nova-compute is running.

Generate a temporary key file:

ceph auth get-key client.cinder | ssh root@10.166.43.99 tee client.cinder.key

Add the key to libvirt, then delete the temporary copy of the key.

[root@openstack ~]# uuidgen 
5beffde4-1ebb-4891-9557-89997415a545

[root@openstack ~]# cat > secret.xml <<EOF
> <secret ephemeral='no' private='no'>
>   <uuid>5beffde4-1ebb-4891-9557-89997415a545</uuid>
>   <usage type='ceph'>
>     <name>client.cinder secret</name>
>   </usage>
> </secret>
> EOF
[root@openstack ~]# virsh secret-define --file secret.xml
Secret 5beffde4-1ebb-4891-9557-89997415a545 created

[root@openstack ~]# virsh secret-set-value --secret 5beffde4-1ebb-4891-9557-89997415a545 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Secret value set
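
To confirm the secret was stored, you can list the libvirt secrets and read the value back (a sketch; output omitted):

[root@openstack ~]# virsh secret-list
[root@openstack ~]# virsh secret-get-value 5beffde4-1ebb-4891-9557-89997415a545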

5.1 Integrate Glance

Note: some of these options already exist in the Glance configuration. Find and modify them in place, or comment out the original entries before adding new ones; otherwise the glance-api service may end up in a broken state.

vim /etc/glance/glance-api.conf

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
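
The Ceph guide referenced in section 5 additionally recommends enabling copy-on-write cloning of images in glance-api.conf. This is optional for this lab, and note that it exposes backend image locations through the Glance API:

[DEFAULT]
show_image_direct_url = True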

5.2 Integrate Cinder

/etc/cinder/cinder.conf

[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

Append the following section at the very end of the file:

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder

 

5.3 Backup Not Configured

If needed, cinder-backup can be integrated by following the official Ceph documentation.

5.4 Integrate Nova

/etc/nova/nova.conf
[libvirt]
rbd_user = cinder
rbd_secret_uuid = 5beffde4-1ebb-4891-9557-89997415a545

5.5 Restart the Services

service openstack-glance-api restart
service openstack-nova-compute restart
service openstack-cinder-volume restart
#service openstack-cinder-backup restart
service openstack-glance-api status
service openstack-nova-compute status 
service openstack-cinder-volume status
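
Optionally, a volume type can be bound to the new backend so volumes are scheduled to Ceph explicitly (a sketch; the type name "ceph" is an assumption, and the property value matches volume_backend_name in cinder.conf above):

source /root/keystonerc_admin
openstack volume type create ceph
openstack volume type set --property volume_backend_name=ceph ceph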

 

6 Verification

Create a volume and an image in the OpenStack dashboard, then check whether they appear in Ceph.

Verify on a Ceph node:

[root@ceph-node01 ceph-deploy]# rbd ls volumes
volume-4823bd6b-15e6-4d1b-a1f0-41b0799ef583
volume-b24ea214-2515-4c0c-b556-3fcd459281ee

[root@ceph-node01 ceph-deploy]# rbd ls images
a9314ba2-955f-481f-842a-9920bc3ceb62

 

 

