OpenStack Lab Notes

Author: 全心全意

OpenStack provides a reliable cloud deployment solution with good scalability.
Simply put, OpenStack is a cloud operating system, or a cloud management platform; it does not provide cloud services itself, but only a platform for deploying and managing them.
Architecture diagram:
http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Mf6rnJXoRGXLpebCzPTUfETy68mVidyW.VTA2AbQxE0!/b/dDUBAAAAAAAA&bo=swFuAQAAAAARB.0!&rf=viewer_4
	
	Keystone is a core module of OpenStack; it provides authentication for Nova (compute), Glance (images), Swift (object storage), Cinder (block storage), Neutron (networking), and Horizon (the Dashboard).
	Glance: the OpenStack image service component. It provides storage, query, and retrieval of virtual machine image files; by offering a virtual disk image catalog and repository, it supplies images for Nova's virtual machines. There are currently two API versions, v1 and v2.

Physical hardware requirements (minimum)
	Control node:
		1-2 CPUs
		8 GB RAM
		2 NICs
	Compute node:
		2-4 CPUs
		8 GB RAM
		2 NICs
	Block storage node:
		1-2 CPUs
		4 GB RAM
		1 NIC
		at least 2 disks
	Object storage node:
		1-2 CPUs
		4 GB RAM
		1 NIC
		at least 2 disks

Network topology diagram (in this lab the management, storage, and local networks are merged):
http://m.qpic.cn/psb?/V12uCjhD3ATBKt/r30ELjijnHAaYX*RMZe4vhwVNcix4zUb2pNnovlYZ7E!/b/dL8AAAAAAAAA&bo=xgKqAQAAAAADB00!&rf=viewer_4


Installation
Control node: quan		172.16.1.211	172.16.1.221
Compute node: quan1		172.16.1.212	172.16.1.222
Storage node: storage	172.16.1.213	172.16.1.223
Object storage node 1: object01	172.16.1.214	172.16.1.224
Object storage node 2: object02	172.16.1.215	172.16.1.225



Preparation:
	Disable the firewall
	Disable SELinux
	Disable NetworkManager

Install the NTP service (chrony):
	yum -y install chrony	#on all hosts
	Edit the configuration file to allow hosts on the subnet to query this server:
	allow 172.16.1.0/24
	
	systemctl enable chronyd.service 
	systemctl start chronyd.service
	
	On the other nodes:
	vi /etc/chrony.conf
	server quan iburst
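	
	A quick check that time synchronization works (an assumed verification step, not in the original notes):
	chronyc sources	#on the other nodes, quan should be listed as a source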
	
	#Note: use the original CentOS repositories
	yum install epel-release
	yum install centos-release-openstack-queens
	
	yum install openstack-selinux
	yum install python-openstackclient

Install the database
	Install MariaDB on the control node (quan)
	yum install -y mariadb mariadb-server python2-PyMySQL
	vi /etc/my.cnf.d/openstack.cnf
	[mysqld]
	bind-address=172.16.1.211
	default-storage-engine=innodb
	innodb_file_per_table=on
	max_connections=4096
	collation-server=utf8_general_ci
	character-set-server=utf8

	Start the database and enable it at boot
	systemctl enable mariadb.service && systemctl start mariadb.service

	Initialize and secure the database
	mysql_secure_installation


Install the message queue on the control node (quan) (port 5672)
	yum install rabbitmq-server -y

	Start the service and enable it at boot
	systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service

	Add the openstack user
	rabbitmqctl add_user openstack openstack

	Grant the openstack user configure, read, and write permissions
	rabbitmqctl set_permissions openstack ".*" ".*" ".*"
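
	To double-check the user and its permissions (an assumed verification step, not in the original notes):
	rabbitmqctl list_users
	rabbitmqctl list_permissions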


Install the memcached cache on the control node (quan) (port 11211)
	yum -y install memcached python-memcached

	vi /etc/sysconfig/memcached
	OPTIONS="-l 127.0.0.1,::1,quan"

	Start the service and enable it at boot
	systemctl enable memcached.service && systemctl start memcached.service
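
	A simple connectivity check (an assumed step; it requires nc/ncat to be installed):
	echo stats | nc quan 11211 | head	#should print STAT lines if memcached is reachable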


Install the etcd service (a key-value store) on the control node (quan)
	yum -y install etcd

	vi /etc/etcd/etcd.conf
	#[Member]
	ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
	ETCD_LISTEN_PEER_URLS="http://quan:2380"
	ETCD_LISTEN_CLIENT_URLS="http://quan:2379"
	ETCD_NAME="quan"
	#[Clustering]
	ETCD_INITIAL_ADVERTISE_PEER_URLS="http://quan:2380"
	ETCD_ADVERTISE_CLIENT_URLS="http://quan:2379"
	ETCD_INITIAL_CLUSTER="quan=http://quan:2380"
	ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
	ETCD_INITIAL_CLUSTER_STATE="new"

	Start the service and enable it at boot
	systemctl enable etcd.service && systemctl start etcd.service
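
	A quick health check (an assumed step; the etcdctl syntax differs between the v2 and v3 APIs):
	etcdctl --endpoints=http://quan:2379 cluster-health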

Keystone component
Keystone is a core module of OpenStack; it provides authentication for Nova (compute), Glance (images), Swift (object storage), Cinder (block storage), Neutron (networking), and Horizon (the Dashboard).
Basic concepts:
	User: a person or program that can access the system through Keystone. A user is authenticated with credentials (such as a password or API keys).
	Tenant: a collection of accessible resources within each service. For example, in Nova a tenant can be a set of machines, in Swift and Glance a tenant can be some image storage, and in Neutron a tenant can be a set of network resources. A user is always bound to one or more tenants by default.
	Role: a role represents a set of resource permissions a user may exercise, for example on VMs in Nova or images in Glance. Users can be added to any global or tenant-scoped role. With a global role, the user's permissions apply to all tenants, i.e. the user can perform the role's operations in every tenant; with a tenant-scoped role, the user can only perform the role's operations within that tenant.
	Service: a service such as Nova, Glance, or Swift. Based on the User, Tenant, and Role concepts, a service can determine whether the current user may access its resources. When a user tries to access a service within their tenant, they also need to know whether the service exists and how to reach it; different names are usually used to distinguish the different services.
	Endpoint: an access point exposed by a service.
	Token: the key used to access resources. It is the value returned after Keystone authentication; subsequent interactions with other services only need to carry the Token, and every Token has an expiry time.
	
	Relationships between these concepts
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/PJAecZuZ1C44VKDjcsKLYotu5KOz3RNZwumR07nBIug!/b/dDUBAAAAAAAA&bo=BAIsAQAAAAADBwk!&rf=viewer_4
	1. A tenant manages a group of users (people or programs).
	2. Each user has their own credentials: username + password, username + API key, or some other credential.
	3. Before accessing other resources (compute, storage), a user presents their credentials to the Keystone service and receives authentication information (mainly a Token) and service information (the service catalog and its endpoints).
	4. With the Token, the user can then access the resources (a minimal token request is sketched below).
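	
	A minimal sketch of step 3 using the Keystone v3 REST API directly (quan and the openstack password are the values used elsewhere in these notes; the token is returned in the X-Subject-Token response header):
	curl -si http://quan:5000/v3/auth/tokens -H "Content-Type: application/json" -d '
	{"auth": {
	  "identity": {"methods": ["password"],
	    "password": {"user": {"name": "admin", "domain": {"name": "Default"}, "password": "openstack"}}},
	  "scope": {"project": {"name": "admin", "domain": {"name": "Default"}}}}}' | grep -i x-subject-token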
	
Keystone's workflow within OpenStack (diagram):
http://m.qpic.cn/psb?/V12uCjhD3ATBKt/ptROtuhyzh7Mq3vSVz3Ut1TtGDXuBbYf*WbN8UZdWDE!/b/dLgAAAAAAAAA&bo=igIRAgAAAAADB7k!&rf=viewer_4
	
Set up Keystone
	Create the database
	mysql -uroot -popenstack
	create database keystone;
	grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'openstack';
	grant all privileges on keystone.* to 'keystone'@'%' identified by 'openstack';
	
	Install
	yum -y install openstack-keystone httpd mod_wsgi
	vi /etc/keystone/keystone.conf
	[database]
	connection = mysql+pymysql://keystone:openstack@quan/keystone	#database connection: user:password@host/database
	[token]
	provider=fernet
	
	Initialize the Keystone database
	su -s /bin/sh -c "keystone-manage db_sync" keystone
	
	Initialize the Fernet key repositories
	keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
	keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
	
	Bootstrap the Keystone service endpoints (this populates the endpoint data)
	keystone-manage bootstrap --bootstrap-password openstack --bootstrap-admin-url http://quan:35357/v3/ --bootstrap-internal-url http://quan:5000/v3/ --bootstrap-public-url http://quan:5000/v3/ --bootstrap-region-id RegionOne
	
	Configure the httpd service
	vi /etc/httpd/conf/httpd.conf
	ServerName quan
	
	Create a symlink
	ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
	
	Start the service and enable it at boot
	systemctl enable httpd.service && systemctl start httpd.service
	
	Create the admin credentials file
	vim admin-openrc
	export OS_USERNAME=admin
	export OS_PASSWORD=openstack
	export OS_PROJECT_NAME=admin
	export OS_USER_DOMAIN_NAME=Default
	export OS_PROJECT_DOMAIN_NAME=Default
	export OS_AUTH_URL=http://quan:35357/v3
	export OS_IDENTITY_API_VERSION=3
	
	Load the admin credentials
	source admin-openrc
	
	Create domains, projects, users, and roles
	Create projects
	openstack project create --domain default --description "Service Project" service
	openstack project create --domain default --description "Demo Project" demo
	
	Create the demo user and set its password
	openstack user create --domain default --password-prompt demo
	
	Create the user role
	openstack role create user
	
	Add the demo user to the user role in the demo project
	openstack role add --project demo --user demo user
	
	
	Verification
	Unset the environment variables set earlier
	unset OS_AUTH_URL OS_PASSWORD
	
	Run the following command and enter the admin password when prompted
	openstack --os-auth-url http://quan:35357/v3 \
	--os-project-domain-name Default \
	--os-user-domain-name Default \
	--os-project-name admin \
	--os-username admin token issue
	
	Run the following command and enter the demo user's password when prompted
	openstack --os-auth-url http://quan:5000/v3 \
	--os-project-domain-name Default \
	--os-user-domain-name Default \
	--os-project-name demo \
	--os-username demo token issue
	
	Create OpenStack client environment scripts
	Admin credentials
	vim admin-openrc
	export OS_USERNAME=admin		
	export OS_PASSWORD=openstack
	export OS_PROJECT_NAME=admin	
	export OS_USER_DOMAIN_NAME=Default
	export OS_PROJECT_DOMAIN_NAME=Default
	export OS_AUTH_URL=http://quan:35357/v3
	export OS_IDENTITY_API_VERSION=3	#identity API version
	export OS_IMAGE_API_VERSION=2	#image API version
	
	Demo user credentials
	vim demo-openrc
	export OS_USERNAME=demo		
	export OS_PASSWORD=openstack
	export OS_PROJECT_NAME=demo
	export OS_USER_DOMAIN_NAME=Default
	export OS_PROJECT_DOMAIN_NAME=Default
	export OS_AUTH_URL=http://quan:35357/v3
	export OS_IDENTITY_API_VERSION=3	#identity API version
	export OS_IMAGE_API_VERSION=2	#image API version
	
	Load the admin credentials
	source admin-openrc
	Verify the admin account
	openstack token issue
	
	Load the demo user credentials
	source demo-openrc
	Verify the demo user
	openstack token issue
	

glance component
	Glance: the OpenStack image service component. It provides storage, query, and retrieval of virtual machine image files; by offering a virtual disk image catalog and repository, it supplies images for Nova's virtual machines. There are currently two API versions, v1 and v2.
	
	Glance architecture diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/mkXPMrNM9RL.NizLwc22Vm*FHkAc2NWh9668JHk4zS0!/b/dLYAAAAAAAAA&bo=RQHZAAAAAAADB78!&rf=viewer_4
	
	Image service components
	glance-api: the external API endpoint that accepts image API requests. The default port is 9292 (a quick curl sketch against this API follows below).
	glance-registry: stores, processes, and retrieves image metadata. The default port is 9191.
	glance-db: backed by MySQL in OpenStack, used to store image metadata; glance-registry saves the metadata into the MySQL database.
	Image Store: stores the image files themselves. It is connected to glance-api through the Store Backend interface; through this interface, Glance can fetch image files from the Image Store and hand them to Nova for creating virtual machines.
	
	Through the Store Adapter, Glance supports multiple Image Store backends, including Swift, file system, S3, Sheepdog, RBD, Cinder, and others.
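	
	A quick illustration of the glance-api v2 REST interface on port 9292 (a minimal sketch; it assumes admin-openrc has been sourced so that a token can be issued):
	TOKEN=$(openstack token issue -f value -c id)
	curl -s -H "X-Auth-Token: $TOKEN" http://quan:9292/v2/images | python -m json.tool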
	
	Image formats supported by Glance
	raw: an unstructured image format
	vhd: a common virtual machine disk format, usable with VMware, Xen, VirtualBox, etc.
	vmdk: VMware's virtual machine disk format
	vdi: a virtual machine disk format supported by VirtualBox, QEMU, etc.
	qcow2: a QEMU-supported disk format that can grow dynamically (used by default)
	aki: Amazon Kernel Image
	ari: Amazon Ramdisk Image
	ami: Amazon Machine Image
	
	Glance access permissions (see the CLI sketch below)
	Public: can be used by all tenants
	Private: owned by a project; can only be used by the tenant the image owner belongs to
	Shared: a non-public image can be shared with specific tenants via the member-* operations
	Protected: a protected image cannot be deleted
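	
	A sketch of how these visibility levels map to CLI operations (assuming an image named myimage and a project named demo; the exact flags can vary by client version):
	openstack image set --public myimage		#make the image public
	openstack image set --private myimage		#restrict it to its owner project
	openstack image set --protected myimage		#protect it from deletion
	openstack image add project myimage demo	#share a non-public image with another project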
	
	Image status types
	Queued: the image data has not been uploaded yet; only the image metadata exists
	Saving: the image data is being uploaded
	Active: normal state
	Deleted/Pending_delete: the image has been deleted / is waiting to be deleted
	Killed: the image metadata is incorrect and the image is waiting to be deleted
	
Set up Glance
	Create the database
	mysql -uroot -popenstack
	create database glance;
	grant all privileges on glance.* to 'glance'@'localhost' identified by 'openstack';
	grant all privileges on glance.* to 'glance'@'%' identified by 'openstack';
	
	Create the glance user and give it the admin role in the service project
	source admin-openrc
	openstack user create --domain default --password-prompt glance  #enter its password when prompted
	
	openstack role add --project service --user glance admin
	
	openstack user list  #list the users that have been created
	
	Create the glance service and endpoints
	openstack service create --name glance --description "OpenStack Image" image
	
	openstack endpoint create --region RegionOne image public http://quan:9292
	openstack endpoint create --region RegionOne image internal http://quan:9292
	openstack endpoint create --region RegionOne image admin http://quan:9292
	
	Install the packages and configure them
	yum -y install openstack-glance
	
	vi /etc/glance/glance-api.conf
	[database]
	connection = mysql+pymysql://glance:openstack@quan/glance
	
	[keystone_authtoken]
	auth_uri=http://quan:5000
	auth_url=http://quan:35357
	memcached_servers=quan:11211
	auth_type=password
	project_domain_name=default
	user_domain_name=default
	project_name=service
	username = glance
	password = openstack
	
	[paste_deploy]
	flavor = keystone
	
	[glance_store]
	stores = file,http
	default_store = file
	filesystem_store_datadir = /var/lib/glance/images/
	
	
	vi /etc/glance/glance-registry.conf
	[database]
	connection = mysql+pymysql://glance:openstack@quan/glance
	[keystone_authtoken]
	auth_uri=http://quan:5000
	auth_url=http://quan:35357
	memcached_servers=quan:11211
	auth_type=password
	project_domain_name=default
	user_domain_name=default
	project_name=service
	username = glance
	password = openstack
	
	[paste_deploy]
	flavor = keystone
	
	Initialize the database
	su -s /bin/sh -c "glance-manage db_sync" glance
	
	Start the services and enable them at boot
	systemctl enable openstack-glance-api.service openstack-glance-registry.service && systemctl start openstack-glance-api.service openstack-glance-registry.service
	
	Verification
	source admin-openrc 
	
	Download a test image
	wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
	Create the image:
	openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
	
	List the existing images
	openstack image list
	
	Show detailed information about an image
	openstack image show <image id>
	

Nova component
	Nova: the most central component of OpenStack. Ultimately the other OpenStack components exist to serve Nova, which manages compute resources for VMs according to user requests.
	Nova architecture diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/bKTJmZis5k..ds6fjUYXv8KDu9EzeaB4WYyV883uAq8!/b/dL8AAAAAAAAA&bo=*QE1AQAAAAADB.o!&rf=viewer_4
	
	Nova currently consists of four core services: API, Compute, Conductor, and Scheduler, which communicate over AMQP. API is the HTTP entry point into Nova. Compute talks to the VMM (virtual machine monitor) to run virtual machines and manage their life cycle (usually one Compute service per host). Scheduler selects the most suitable node from the available pool on which to create a VM instance. Conductor mainly handles interaction with the database.
	
	Nova logical modules
	Nova API: an HTTP service that receives and handles HTTP requests from clients.
	Nova Cell: the Cell service exists to make horizontal scaling and large deployments easier without increasing the complexity of the database and the RPC message broker. It adds a layer of region scheduling on top of the host scheduling done by Nova Scheduler.
	Nova Cert: manages certificates, for compatibility with AWS. AWS provides a full set of infrastructure and application services so that almost any application can run in the cloud.
	Nova Compute: the most central service in Nova, implementing virtual machine management: creating, starting, pausing, stopping, and deleting VMs on compute nodes, migrating VMs between compute nodes, VM security controls, and managing VM disk images and snapshots.
	Nova Conductor: an RPC service that mainly provides database access. In earlier OpenStack releases, many database queries were defined inside the Nova Compute service; but since Nova Compute has to run on every compute node, a compromised compute node would gain full access to the database. With the Nova Conductor service in between, database access permissions can be controlled in one place.
	Nova Scheduler: the Nova scheduling service. When a client asks Nova to create a virtual machine, it decides on which node the VM will be created.
	
	Nova Console, Nova Consoleauth, Nova VNCProxy: the Nova console services. They allow clients to reach a VM instance's console remotely through a proxy server.
	
	Diagram of the process Nova uses to boot a VM:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/iy2efxOLLowl3RvoIcZ6d7KNZ3jcdOI7zY5XroEBPVM!/b/dDQBAAAAAAAA&bo=xQJnAgAAAAADJ6A!&rf=viewer_4
	
	
	Nova Scheduler filter types
	There are several ways to choose on which host a VM will run; Nova mainly supports the following three (the scheduler driver is selected in nova.conf, as sketched below):
	ChanceScheduler (random scheduler): picks a node at random from the nodes whose nova-compute service is running normally
	FilterScheduler (filter scheduler): picks the best node according to the specified filter conditions and weights
	CachingScheduler: a variant of FilterScheduler that additionally caches host resource information in local memory and refreshes it from the database with a periodic background task
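	
	A minimal nova.conf sketch for choosing the scheduler driver on the control node (an assumed illustration; filter_scheduler is already the default in Queens):
	[scheduler]
	driver = filter_scheduler	#chance_scheduler and caching_scheduler are the other (deprecated) built-in choices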
	
	Nova Scheduler workflow diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/LpB5fYBuLUgMASXWrH*Emw5qwkWHKM7slpof.lF21DY!/b/dEYBAAAAAAAA&bo=OQODAQAAAAADB5o!&rf=viewer_4
	FilterScheduler first applies the specified filters to get the hosts that meet the conditions (for example, memory usage below 50%), then recomputes weights for those hosts, ranks them, and picks the best one. The available filters include the following (a nova.conf sketch for enabling them follows this list):
	1) RetryFilter: retry filter. Suppose Host1, Host2, and Host3 pass the filters and Host1, having the highest weight, is chosen, but for some reason the VM fails to land on Host1; nova-scheduler then picks a new host, and Host1 is excluded because of the failure. The number of retries can be set with scheduler_max_attempts=3.
	2) AvailabilityZoneFilter: availability-zone filter, which provides fault isolation and grouping. Compute nodes can be placed into a pre-created AZ, and an AZ can be specified when creating a VM so that the VM lands on a host in that AZ.
	3) RamFilter: memory filter. When creating a VM you choose a flavor; hosts that cannot satisfy the flavor's memory requirement are filtered out. Overcommit setting: ram_allocation_ratio=3 (if a compute node has 16 GB of RAM, OpenStack will treat it as having 48 GB).
	4) CoreFilter: CPU core filter. Hosts that cannot satisfy the flavor's vCPU requirement are filtered out. CPU overcommit setting: cpu_allocation_ratio=16.0 (a compute node with 24 cores will be treated as having 384 cores).
	5) DiskFilter: disk capacity filter. Hosts that cannot satisfy the flavor's disk requirement are filtered out. Disk overcommit setting: disk_allocation_ratio=1.0 (increasing the disk ratio is not recommended).
	6) ComputeFilter: nova-compute service filter. When creating a VM, hosts whose nova-compute service is not healthy are filtered out.
	7) ComputeCapabilitiesFilter: filters by the capabilities of the compute node, for example x86_64.
	8) ImagePropertiesFilter: matches compute nodes against the properties of the chosen image; for example, if an image should only run on a KVM hypervisor, this can be specified with the "Hypervisor Type" property.
	9) ServerGroupAntiAffinityFilter: tries to place instances on different nodes. For example, with vm1, vm2, vm3 and compute nodes Host1, Host2, Host3:
		create a server group "group-1" with the anti-affinity policy
		nova server-group-create group-1 anti-affinity
		nova boot --image IMAGE_ID --flavor 1 --hint group=<group-1 uuid> vm1
		nova boot --image IMAGE_ID --flavor 1 --hint group=<group-1 uuid> vm2
		nova boot --image IMAGE_ID --flavor 1 --hint group=<group-1 uuid> vm3
	10) ServerGroupAffinityFilter: tries to place instances on the same node. For example, with vm1, vm2, vm3 and compute nodes Host1, Host2, Host3:
		create a server group "group-2" with the affinity policy
		nova server-group-create group-2 affinity
		nova boot --image IMAGE_ID --flavor 1 --hint group=<group-2 uuid> vm1
		nova boot --image IMAGE_ID --flavor 1 --hint group=<group-2 uuid> vm2
		nova boot --image IMAGE_ID --flavor 1 --hint group=<group-2 uuid> vm3
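	
	A sketch of how the filters and overcommit ratios mentioned above can be set in nova.conf on the control node (an assumed example; the values are the ones discussed in the list):
	[DEFAULT]
	ram_allocation_ratio = 3.0
	cpu_allocation_ratio = 16.0
	disk_allocation_ratio = 1.0
	
	[filter_scheduler]
	enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter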
		
Set up the Nova component
	Set up the Nova control node
	Database operations
	mysql -uroot -popenstack
	create database nova_api;
	create database nova;
	create database nova_cell0;
	grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'openstack';
	grant all privileges on nova_api.* to 'nova'@'%' identified by 'openstack';
	grant all privileges on nova.* to 'nova'@'localhost' identified by 'openstack';
	grant all privileges on nova.* to 'nova'@'%' identified by 'openstack';
	grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'openstack';
	grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'openstack';
	
	Create the nova user and give it the admin role in the service project
	source admin-openrc
	
	openstack user create --domain default --password-prompt nova	#create the nova user
	
	openstack role add --project service --user nova admin	#give the nova user the admin role in the service project
	
	Create the compute service and endpoints
	openstack service create --name nova --description "OpenStack Compute" compute
	
	openstack endpoint create --region RegionOne compute public http://quan:8774/v2.1
	openstack endpoint create --region RegionOne compute internal http://quan:8774/v2.1
	openstack endpoint create --region RegionOne compute admin http://quan:8774/v2.1
	
	Create the placement user and give it the admin role in the service project
	source admin-openrc
	
	openstack user create --domain default --password-prompt placement	#create the placement user
	
	openstack role add --project service --user placement admin	#give the placement user the admin role in the service project

	Create the placement service and endpoints
	openstack service create --name placement --description "Placement API" placement
	
	openstack endpoint create --region RegionOne placement public http://quan:8778
	openstack endpoint create --region RegionOne placement internal http://quan:8778
	openstack endpoint create --region RegionOne placement admin http://quan:8778

	
	How to delete an endpoint:
	List the endpoints:
		openstack endpoint list | grep placement
	Delete an endpoint by its id:
		openstack endpoint delete <endpoint id>
	
	
	Install the packages and configure them
	yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
	
	vi /etc/nova/nova.conf
	[DEFAULT]
	enabled_apis = osapi_compute,metadata
	transport_url = rabbit://openstack:openstack@quan
	my_ip = 172.16.1.221
	use_neutron = True
	firewall_driver = nova.virt.firewall.NoopFirewallDriver
	
	[api_database]
	connection = mysql+pymysql://nova:openstack@quan/nova_api
	
	[database]
	connection = mysql+pymysql://nova:openstack@quan/nova

	[api]
	auth_strategy = keystone
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = nova
	password = openstack
	
	[vnc]
	enabled = true
	vncserver_listen = 172.16.1.221
	vncserver_proxyclient_address = 172.16.1.221
	
	[glance]
	api_servers = http://quan:9292
	
	[oslo_concurrency]
	lock_path = /var/lib/nova/tmp
	
	[placement]
	os_region_name = RegionOne
	project_domain_name = Default
	project_name = service
	auth_type = password
	user_domain_name = Default
	auth_url = http://quan:35357/v3
	username = placement
	password = openstack
	
	
	vim /etc/httpd/conf.d/00-nova-placement-api.conf		#append the following to the end of the file
	<Directory /usr/bin>
		<IfVersion >= 2.4>
			Require all granted
		</IfVersion>
		<IfVersion < 2.4>
			Order allow,deny
			Allow from all
		</IfVersion>
	</Directory>
	
	Restart the httpd service
	systemctl restart httpd
	
	Edit this file to work around a bug when initializing the nova_api database schema
	vi /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py
	add "use_tpool" on line 175
	
	Initialize the nova_api database schema
	su -s /bin/sh -c "nova-manage api_db sync" nova
	
	Register the cell0 database
	su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
	
	Create cell1
	su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
	
	Initialize the nova database
	su -s /bin/sh -c "nova-manage db sync" nova
	
	Verify that cell0 and cell1 are registered
	nova-manage cell_v2 list_cells
	
	Start the services and enable them at boot
	systemctl enable openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
	systemctl start openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
	
	Verification
	openstack compute service list
	
	
	Set up the Nova compute node
	Install the packages and configure them
	yum -y install openstack-nova-compute
	
	vim /etc/nova/nova.conf
	[DEFAULT]
	enabled_apis = osapi_compute,metadata
	transport_url = rabbit://openstack:openstack@quan
	my_ip = 172.16.1.222
	use_neutron = True
	firewall_driver = nova.virt.firewall.NoopFirewallDriver
	
	[api]
	auth_strategy = keystone
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = nova
	password = openstack
	
	[vnc]
	enabled = True
	vncserver_listen = 0.0.0.0
	vncserver_proxyclient_address = 172.16.1.222
	novncproxy_base_url = http://172.16.1.221:6080/vnc_auto.html
	
	[glance]
	api_servers = http://quan:9292
	
	[oslo_concurrency]
	lock_path = /var/lib/nova/tmp
	
	[placement]
	os_region_name = RegionOne
	project_domain_name = Default
	project_name = service
	auth_type = password
	user_domain_name = Default
	auth_url = http://quan:35357/v3
	username = placement
	password = openstack
	
	Check whether the machine supports hardware virtualization
	egrep -c '(vmx|svm)' /proc/cpuinfo
	If this returns 0, edit /etc/nova/nova.conf as follows
	vi /etc/nova/nova.conf
	[libvirt]
	virt_type = qemu
	
	Start the services and enable them at boot
	systemctl enable libvirtd openstack-nova-compute && systemctl start libvirtd openstack-nova-compute
	
	
	Add the compute node to the cell database (run on the control node)
	source admin-openrc
	openstack compute service list --service nova-compute
	su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
	
	vi /etc/nova/nova.conf
	[scheduler]
	discover_hosts_in_cells_interval = 300
	
	Verification
	source admin-openrc
	openstack compute service list
	openstack catalog list
	openstack image list
	nova-status upgrade check
	
	
neutron component
	Neutron is the OpenStack project that provides network services between interface devices, and it is driven by other OpenStack services such as Nova. Neutron gives an OpenStack cloud a more flexible way to partition the physical network and provides each tenant with an isolated network environment in a multi-tenant setting. Neutron exposes an API to achieve this. A "network" in Neutron is an object that users can create; mapped to the physical world, it is like a huge switch with an unlimited number of virtual ports that can be created and destroyed dynamically.
	
	The network virtualization capabilities Neutron provides are:
	(1) virtualization from layer 2 up to layer 7: L2 (virtual switch), L3 (virtual router and LB), L4-L7 (virtual firewall), etc.
	(2) network connectivity: layer-2 and layer-3 networks
	(3) tenant isolation
	(4) network security
	(5) network scalability
	(6) a REST API
	(7) higher-level services such as LBaaS
	
	Neutron architecture diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Ei6CaKeBs.55JXz9GIW8xuGBeMGe*rVaB*3D3cGQDsY!/b/dFIBAAAAAAAA&bo=vQLoAQAAAAADB3Q!&rf=viewer_4
	In general, creating a Neutron network works as follows:
	1. The administrator obtains a block of Internet-routable IP addresses and creates an external network and subnet.
	2. A tenant creates a network and subnet.
	3. The tenant creates a router and connects the tenant subnet to the external network.
	4. The tenant creates virtual machines.
	Neutron concepts
	network: a network is an isolated layer-2 broadcast domain. Neutron supports several network types, including local, flat, VLAN, VxLAN, and GRE.
		local: a local network is isolated from other networks and nodes. Instances on a local network can only communicate with instances on the same network on the same node; local networks are mainly used for single-node testing.
		flat: a flat network has no VLAN tagging. Instances on a flat network can communicate with instances on the same network and can span multiple nodes.
		vlan: a vlan network uses 802.1q tagging. A VLAN is a layer-2 broadcast domain; instances in the same VLAN can communicate with each other, while different VLANs can only communicate through a router. VLAN networks can span nodes and are the most widely used network type.
		vxlan: vxlan is an overlay network based on tunneling. A vxlan network is distinguished from other vxlan networks by a unique segmentation ID (also called the VNI). Packets in a vxlan are encapsulated with the VNI into UDP packets for transport. Because layer-2 frames are encapsulated and carried over layer 3, vxlan overcomes the limitations of VLANs and the physical network infrastructure.
		gre: gre is an overlay network similar to vxlan; the main difference is that it encapsulates in IP packets rather than UDP. Different networks are isolated from each other at layer 2.
		
		A network must belong to a Project (Tenant), and a Project can have multiple networks; the relationship between Project and network is one-to-many.
	subnet: a subnet is an IPv4 or IPv6 address block. Instance IPs are allocated from a subnet. Each subnet defines an IP address range and a netmask.
		The relationship between network and subnet is one-to-many. A subnet belongs to exactly one network; a network can have multiple subnets, which can be different IP ranges but must not overlap.
			Example: a valid configuration
				network A
					subnet A-a: 10.10.1.0/24 {"start":"10.10.1.1","end":"10.10.1.50"}
					subnet A-b: 10.10.2.0/24 {"start":"10.10.2.1","end":"10.10.2.50"}
				An invalid configuration (the subnets overlap)
				network A
					subnet A-a: 10.10.1.0/24 {"start":"10.10.1.1","end":"10.10.1.50"}
					subnet A-b: 10.10.1.0/24 {"start":"10.10.1.51","end":"10.10.1.100"}
			Note: what matters here is not whether the allocation ranges overlap, but whether the subnets themselves overlap (both are 10.10.1.0/24).
	port: a port can be thought of as a port on a virtual switch. A port defines a MAC address and an IP address; when an instance's virtual NIC (VIF, Virtual Interface) is bound to the port, the port assigns its MAC and IP to the VIF. The relationship between subnet and port is one-to-many: a port belongs to exactly one subnet, and a subnet can have many ports. A short CLI sketch of these three objects follows below.
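	
	A minimal CLI sketch tying network, subnet, and port together (the names net1, subnet-a, and port1 are placeholders for illustration):
	openstack network create net1
	openstack subnet create --network net1 --subnet-range 10.10.1.0/24 \
	--allocation-pool start=10.10.1.10,end=10.10.1.50 subnet-a
	openstack port create --network net1 --fixed-ip subnet=subnet-a,ip-address=10.10.1.10 port1
	openstack port show port1	#the MAC and fixed IP shown here are what a VM's VIF receives when bound to this port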
			
	
	Plugins and agents in Neutron
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/Gm3J*.Vh27nLny6oXfuZlh.yXNYx.YE3I*Mwoea.MH4!/b/dL4AAAAAAAAA&bo=pAKJAQAAAAADBww!&rf=viewer_4
	
Set up Neutron

	linuxbridge + vxlan mode
	
	Control node:
	Database operations
	mysql -uroot -popenstack
	create database neutron;
	grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'openstack';
	grant all privileges on neutron.* to 'neutron'@'%' identified by 'openstack';
	
	Create the neutron user and give it the admin role in the service project
	source admin-openrc
	
	openstack user create --domain default --password-prompt neutron
	
	openstack role add --project service --user neutron admin
	
	Create the network service and endpoints
	openstack service create --name neutron --description "Openstack Networking" network
	
	openstack endpoint create --region RegionOne network public http://quan:9696
	openstack endpoint create --region RegionOne network internal http://quan:9696
	openstack endpoint create --region RegionOne network admin http://quan:9696
	
	Install the packages and configure them
	yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
	
	vi /etc/neutron/neutron.conf
	[database]
	connection = mysql+pymysql://neutron:openstack@quan/neutron
	
	[DEFAULT]
	core_plugin=ml2
	service_plugins = router
	allow_overlapping_ips = true
	
	transport_url = rabbit://openstack:openstack@quan
	
	auth_strategy = keystone
	
	notify_nova_on_port_status_changes = true
	notify_nova_on_port_data_changes = true
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = neutron
	password = openstack
	
	[nova]
	auth_url = http://quan:35357
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = nova
	password = openstack
	
	[oslo_concurrency]
	lock_path = /var/lib/neutron/tmp
	
	
	vi /etc/neutron/plugins/ml2/ml2_conf.ini
	[ml2]
	type_drivers = flat,vlan,vxlan
	
	tenant_network_types = vxlan
	
	mechanism_drivers = linuxbridge,l2population
	
	extension_drivers = port_security
	
	[ml2_type_flat]
	flat_networks = provider
	
	[ml2_type_vxlan]
	vni_ranges = 1:1000
	
	[securitygroup]
	enable_ipset = true
	
	vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
	[linux_bridge]
	physical_interface_mappings = provider:ens34			#the external (provider) network interface
	
	[vxlan]
	enable_vxlan = true
	local_ip = 172.16.1.221
	l2_population = true
	
	[securitygroup]
	enable_security_group = true
	firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
	
	Make sure the kernel supports bridge filtering
	echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
	echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
	sysctl -p  #if this reports "No such file or directory", do the following
		modinfo br_netfilter  #show the kernel module information
		modprobe br_netfilter	#load the kernel module
		then run sysctl -p again
	
	vi /etc/neutron/l3_agent.ini
	[DEFAULT]
	interface_driver = linuxbridge
	
	
	vi /etc/neutron/dhcp_agent.ini
	[DEFAULT]
	interface_driver = linuxbridge
	dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
	enable_isolated_metadata = true
	
	
	vi /etc/neutron/metadata_agent.ini
	[DEFAULT]
	nova_metadata_host = 172.16.1.221
	metadata_proxy_shared_secret = openstack
	
	
	vi /etc/nova/nova.conf
	[neutron]
	url = http://quan:9696
	auth_url = http://quan:35357
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = neutron
	password = openstack
	service_metadata_proxy = true
	metadata_proxy_shared_secret = openstack
	
	
	ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini	#the neutron-server service expects /etc/neutron/plugin.ini
	
	Initialize the neutron database
	su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
	
	Restart the nova service
	systemctl restart openstack-nova-api
	
	Start the services and enable them at boot
	systemctl enable neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
	systemctl start neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
	
	systemctl enable neutron-l3-agent && systemctl start neutron-l3-agent
	
	
	Compute node:
	Install the packages and configure them
	yum -y install openstack-neutron-linuxbridge ebtables ipset
	
	vi /etc/neutron/neutron.conf
	[DEFAULT]
	transport_url = rabbit://openstack:openstack@quan
	
	auth_strategy = keystone
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = neutron
	password = openstack
	
	[oslo_concurrency]
	lock_path = /var/lib/neutron/tmp
	
	
	vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
	[linux_bridge]
	physical_interface_mappings = provider:ens34
	
	[vxlan]
	enable_vxlan = true
	local_ip = 172.16.1.222
	l2_population = true
	
	[securitygroup]
	enable_security_group = true
	firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
	
	
	vi /etc/nova/nova.conf
	[neutron]
	url = http://quan:9696
	auth_url = http://quan:35357
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	region_name = RegionOne
	project_name = service
	username = neutron
	password = openstack
	
	Restart the nova-compute service
	systemctl restart openstack-nova-compute
	
	Start the service and enable it at boot
	systemctl enable neutron-linuxbridge-agent && systemctl start neutron-linuxbridge-agent
	
	Verification (on the control node)
	source admin-openrc
	openstack extension list --network
	openstack network agent list
	

horizon component
	Horizon: the UI (Dashboard). The web management portal for the various OpenStack services, used to simplify operating them.

Set up Horizon
	Install the packages and configure them
	yum -y install openstack-dashboard
	
	vim /etc/openstack-dashboard/local_settings
	OPENSTACK_HOST = "quan"
	ALLOWED_HOSTS = ['*']
	
	SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
	
	CACHES = {
		'default':{
			'BACKEND':'django.core.cache.backends.memcached.MemcachedCache',
			'LOCATION':'quan:11211',
		}
	}
	#comment out any other CACHES definitions
	
	OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" %OPENSTACK_HOST
	OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
	
	OPENSTACK_API_VERSIONS = {
		"identity":3,
		"image":2,
		"volume":2,	
	}
	OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
	OPENSTACK_KEYSTONE_DEFAULT_ROLE= 'user'
	
	OPENSTACK_NEUTRON_NETWORK = {
		...
		'enable_quotas':True,
		'enable_distributed_router':True,
		'enable_ha_router':True,
		'enable_lb':True,
		'enable_firewall':True,
		'enable_vpn':False,
		'enable_fip_topology_check':True,
	}
	
	TIME_ZONE = "Asia/Chongqing"
	
	
	vi /etc/httpd/conf.d/openstack-dashboard.conf
	WSGIApplicationGroup %{GLOBAL}
	
	Restart the related services
	systemctl restart httpd.service memcached.service
	
	Dashboard URL: http://172.16.1.221/dashboard/
	
	Disable the multi-domain login prompt (optional)
	vi /etc/openstack-dashboard/local_settings
	#OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True		#comment out this line
	Restart the related services
	systemctl restart httpd.service memcached.service
	
	Username: admin  Password: openstack
	
	
Create a virtual machine instance from the command line
	Create the provider (external) network
	source admin-openrc
	
	openstack network create --share --external \
	--provider-physical-network provider \
	--provider-network-type flat provider
	
	openstack subnet create --network provider \				#create the external subnet (on the same network as the physical network)
	--allocation-pool start=172.16.1.231,end=172.16.1.240 \
	--dns-nameserver 8.8.4.4 --gateway 172.16.1.1 \
	--subnet-range 172.16.1.0/24 provider
	
	Create the private self-service network
	source demo-openrc
	
	openstack network create selfservice					#create the private network
	
	openstack subnet create --network selfservice \			#create the private network's subnet
	--dns-nameserver 8.8.4.4 --gateway 192.168.0.1 \
	--subnet-range 192.168.0.0/24 selfservice
	
	openstack router create router		#create a virtual router
	
	openstack router add subnet router selfservice		#attach the subnet to the router
	
	openstack router set router --external-gateway provider		#set the router's external gateway
	
	
	Verification
	source admin-openrc
	ip netns
	openstack port list --router router
	
	ping -c 4 <gateway IP>
	
	Create a flavor (the template used to boot a VM: number of vCPUs, amount of RAM, etc.)
	openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
	
	List the created flavors
	source demo-openrc
	openstack flavor list
	
	Generate a key pair
	source demo-openrc
	ssh-keygen -q -N ""
	openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
	openstack keypair list
	
	Add security group rules
	openstack security group rule create --proto icmp default  #allow ICMP (ping)
	openstack security group rule create --proto tcp --dst-port 22 default		#allow TCP port 22 (SSH)
	
	Check and verify
	source demo-openrc
	openstack flavor list
	openstack image list
	openstack network list
	openstack security group list
	openstack security group rule list
	
	
	Launch an instance
	Create a virtual machine
	openstack server create --flavor m1.nano --image cirros \		#the image can be given as an ID or a name
	--nic net-id=SELFSERVICE_NET_ID --security-group default \
	--key-name mykey selfservice-instance		#selfservice-instance is the VM name
	
	Check the virtual machine
	openstack server list	#list your virtual machines
	openstack server show <VM id>	#show detailed information about a VM
	
	Bind a floating IP through the dashboard
	
	View the VM's console log
	openstack console log show <VM id>
	
	
cinder component
	Cinder: exposes a REST API so users can query and manage volumes, volume snapshots, and volume types;
			provides a scheduler that dispatches volume-creation requests and optimizes the allocation of storage resources;
			supports multiple back-end storage types through its driver architecture, including LVM, NFS, Ceph, and commercial storage products such as EMC and IBM.
	
	Cinder architecture diagram:
	http://m.qpic.cn/psb?/V12uCjhD3ATBKt/FpuhoZP0gP2rwhfFn*1Q1BXUZlHCtEvh7xmNRgJYqiw!/b/dL8AAAAAAAAA&bo=CQIYAQAAAAARByI!&rf=viewer_4
	
	Components of Cinder:
		cinder-api: receives API requests and calls cinder-volume to carry them out
		cinder-volume: the service that manages volumes; it works with the volume provider to manage the volume life cycle. The node running cinder-volume is called the storage node.
		cinder-scheduler: the scheduler picks the most suitable storage node to create a volume, based on its scheduling algorithm
		volume provider: the storage device that provides the physical space for volumes. cinder-volume supports multiple volume providers, each of which integrates with cinder-volume through its own driver.
		Message Queue: the Cinder sub-services communicate and cooperate through the message queue. The queue decouples the sub-services; this loose structure is an important characteristic of a distributed system.
		Database: Cinder needs to store some data in a database, usually MySQL, which is installed on the control node.
	
Set up the Cinder component
	Control node
	Database operations
	mysql -uroot -popenstack
	create database cinder;
	grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'openstack';
	grant all privileges on cinder.* to 'cinder'@'%' identified by 'openstack';
	
	Create the cinder user and give it the admin role in the service project
	source admin-openrc
	openstack user create --domain default --password-prompt cinder
	
	openstack role add --project service --user cinder admin
	
	Create the cinder services and endpoints
	openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
	
	openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
	
	openstack endpoint create --region RegionOne volumev2 public http://quan:8776/v2/%\{project_id\}s
	openstack endpoint create --region RegionOne volumev2 internal http://quan:8776/v2/%\{project_id\}s
	openstack endpoint create --region RegionOne volumev2 admin http://quan:8776/v2/%\{project_id\}s
	
	openstack endpoint create --region RegionOne volumev3 public http://quan:8776/v3/%\{project_id\}s
	openstack endpoint create --region RegionOne volumev3 internal http://quan:8776/v3/%\{project_id\}s
	openstack endpoint create --region RegionOne volumev3 admin http://quan:8776/v3/%\{project_id\}s
	
	Install the packages and configure them
	yum -y install openstack-cinder
	
	vim /etc/cinder/cinder.conf
	[database]
	connection = mysql+pymysql://cinder:openstack@quan/cinder
	
	[DEFAULT]
	transport_url = rabbit://openstack:openstack@quan
	
	auth_strategy = keystone
	
	my_ip = 172.16.1.221
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = cinder
	password = openstack
	
	[oslo_concurrency]
	lock_path = /var/lib/cinder/tmp
	
	Initialize the database
	su -s /bin/sh -c "cinder-manage db sync" cinder

	Configure the compute service to use Cinder
	vi /etc/nova/nova.conf
	[cinder]
	os_region_name = RegionOne
	
	Restart the compute service
	systemctl restart openstack-nova-api
	
	Start the services and enable them at boot
	systemctl enable openstack-cinder-api openstack-cinder-scheduler && systemctl start openstack-cinder-api openstack-cinder-scheduler

	Verification
	openstack volume service list	#the services have started successfully when their State shows "up"
	
	
	Storage node (needs at least one disk in addition to the system disk)
	Install the packages and configure them
	yum -y install lvm2 device-mapper-persistent-data
	
	systemctl enable lvm2-lvmetad && systemctl start lvm2-lvmetad

	pvcreate /dev/sdb		#create the physical volume
	vgcreate cinder-volumes /dev/sdb		#create the volume group
	
	vi /etc/lvm/lvm.conf
	devices {
		filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
	}
	#"a" means accept, "r" means reject
	
	Run lsblk to check whether the system disk uses LVM; if sda does not use LVM, "a/sda/" can be omitted.
	
	yum -y install openstack-cinder targetcli python-keystone
	
	vi /etc/cinder/cinder.conf
	[database]
	connection = mysql+pymysql://cinder:openstack@quan/cinder
	
	[DEFAULT]
	transport_url = rabbit://openstack:openstack@quan
	
	auth_strategy = keystone
	
	my_ip = 172.16.1.223
	
	enabled_backends = lvm
	
	glance_api_servers = http://quan:9292
	
	[keystone_authtoken]
	auth_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_name = default
	user_domain_name = default
	project_name = service
	username = cinder
	password = openstack
	
	[lvm]
	volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
	volume_group = cinder-volumes		#the name of the volume group created above
	iscsi_protocol = iscsi
	iscsi_helper = lioadm
	
	[oslo_concurrency]
	lock_path = /var/lib/cinder/tmp
	
	Start the services and enable them at boot
	systemctl enable openstack-cinder-volume target && systemctl start openstack-cinder-volume target
	
	
	Verification
	source admin-openrc
	openstack volume service list
	
	
Attach a virtual disk to a VM
	Commands:
		source demo-openrc
		openstack volume create --size 2 volume2	#--size specifies the volume size in GB (2 GB here)
		openstack volume list	#status "available" means the volume is ready
		
		openstack server add volume selfservice-instance volume2	#attach the volume to the VM
		openstack volume list	#the status changes to "in-use"
		
		Log in to the VM and run fdisk -l to see the attached disk.
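		
		To undo the attachment, a possible cleanup sketch (assumed commands, not part of the original notes):
		openstack server remove volume selfservice-instance volume2	#detach the volume from the VM
		openstack volume delete volume2		#delete the volume once it is detached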
		
		
		
Swift component
	Swift: known as object storage; it provides strong scalability, redundancy, and durability. Object storage is used for long-term storage of permanent, static data.
	
	
Set up the Swift component
	Control node
	Create the swift user and give it the admin role in the service project
	source admin-openrc
	openstack user create --domain default --password-prompt swift
	
	openstack role add --project service --user swift admin
	
	Create the swift service and endpoints
	openstack service create --name swift --description "OpenStack Object Storage" object-store
	
	openstack endpoint create --region RegionOne object-store public http://quan:8080/v1/AUTH_%\{project_id\}s
	openstack endpoint create --region RegionOne object-store internal http://quan:8080/v1/AUTH_%\{project_id\}s
	openstack endpoint create --region RegionOne object-store admin http://quan:8080/v1

	Install the packages
	yum -y install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached
	
	Download the sample proxy-server.conf configuration file and edit it
	curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/queens
	
	vi /etc/swift/proxy-server.conf
	[DEFAULT]
	bind_port = 8080
	swift_dir = /etc/swift
	user = swift
	
	[pipeline:main]
	pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

	[app:proxy-server]
	use = egg:swift#proxy
	
	account_autocreate = True
	
	[filter:keystoneauth]
	use = egg:swift#keystoneauth
	
	operator_roles = admin,user
	
	[filter:authtoken]
	paste.filter_factory = keystonemiddleware.auth_token:filter_factory
	
	www_authenticate_uri = http://quan:5000
	auth_url = http://quan:35357
	memcached_servers = quan:11211
	auth_type = password
	project_domain_id = default
	user_domain_id = default
	project_name = service
	username = swift
	password = openstack
	delay_auth_decision = True
	
	[filter:cache]
	memcache_servers = quan:11211
	
	
	Storage nodes (all of them)
	Install the packages
	yum install xfsprogs rsync
	
	Format the disks
	mkfs.xfs /dev/sdb
	mkfs.xfs /dev/sdc
	
	mkdir -p /srv/node/sdb
	mkdir -p /srv/node/sdc
	
	Configure automatic mounting
	vi /etc/fstab
	/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
	/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
	
	mount /srv/node/sdb
	mount /srv/node/sdc
	or
	mount -a
	
	vi /etc/rsyncd.conf
	uid = swift
	gid = swift
	log_file = /var/log/rsyncd.log
	
	pid_file = /var/run/rsyncd.pid
	address =  172.16.1.224      #adjust for each node
	
	[account]
	max_connections = 2
	path = /srv/node/
	read only = False
	lock file = /var/lock/account.lock
	
	[container]
	max_connections = 2
	path = /srv/node/
	read only = False
	lock file = /var/lock/container.lock
	
	[object]
	max_connections = 2
	path = /srv/node/
	read only = False
	lock file = /var/lock/object.lock
	
	Start the service and enable it at boot
	systemctl enable rsyncd && systemctl start rsyncd
	
	Install the packages
	yum -y install openstack-swift-account openstack-swift-container openstack-swift-object
	
	Download the sample configuration files and edit them
	curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/queens
	curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/queens
	curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/queens
	
	vi /etc/swift/account-server.conf
	[DEFAULT]
	bind_ip = 172.16.1.224
	bind_port = 6202
	user = swift
	swift_dir = /etc/swift
	devices = /srv/node
	mount_check = True
	
	[pipeline:main]
	pipeline = healthcheck recon account-server
	
	[filter:recon]
	recon_cache_path = /var/cache/swift
	
	
	vi /etc/swift/container-server.conf
	[DEFAULT]
	bind_ip = 172.16.1.224
	bind_port = 6201
	user = swift
	swift_dir = /etc/swift
	devices = /srv/node
	mount_check = True
	
	[pipeline:main]
	pipeline = healthcheck recon container-server
	
	[filter:recon]
	recon_cache_path = /var/cache/swift
	
	
	vi /etc/swift/object-server.conf
	[DEFAULT]
	bind_ip = 172.16.1.224
	bind_port = 6200
	user = swift
	swift_dir = /etc/swift
	devices = /srv/node
	mount_check = True
	
	[pipeline:main]
	pipeline = healthcheck recon object-server
	
	[filter:recon]
	recon_cache_path = /var/cache/swift
	recon_lock_path = /var/lock
	
	Fix the file ownership and permissions
	chown -R swift:swift /srv/node
	mkdir -p /var/cache/swift
	chown -R root:swift /var/cache/swift
	chmod -R 755 /var/cache/swift
	
	This ends the storage node steps; all of the above must be done on every storage node.
	On the control node:
	cd /etc/swift
	swift-ring-builder account.builder create 10 3 1
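	#arguments: 10 = partition power (2^10 partitions), 3 = number of replicas, 1 = min_part_hours (minimum hours before a partition can be moved again)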
	
	Add the first storage node
	swift-ring-builder account.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6202 --device sdb --weight 100
	
	swift-ring-builder account.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6202 --device sdc --weight 100
	
	Add the second storage node
	swift-ring-builder account.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6202 --device sdb --weight 100
	
	swift-ring-builder account.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6202 --device sdc --weight 100
	
	swift-ring-builder account.builder
	swift-ring-builder account.builder rebalance
	
	
	
	swift-ring-builder container.builder create 10 3 1
	
	Add the first storage node
	swift-ring-builder container.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6201 --device sdb --weight 100
	
	swift-ring-builder container.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6201 --device sdc --weight 100
	
	Add the second storage node
	swift-ring-builder container.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6201 --device sdb --weight 100
	
	swift-ring-builder container.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6201 --device sdc --weight 100
	
	swift-ring-builder container.builder
	swift-ring-builder container.builder rebalance
	
	
	swift-ring-builder object.builder create 10 3 1
	
	Add the first storage node
	swift-ring-builder object.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6200 --device sdb --weight 100
	
	swift-ring-builder object.builder add \
	--region 1 --zone 1 --ip 172.16.1.224 --port 6200 --device sdc --weight 100
	
	Add the second storage node
	swift-ring-builder object.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6200 --device sdb --weight 100
	
	swift-ring-builder object.builder add \
	--region 1 --zone 2 --ip 172.16.1.225 --port 6200 --device sdc --weight 100
	
	swift-ring-builder object.builder
	swift-ring-builder object.builder rebalance
	
	Copy the generated ring files to the object storage nodes
	scp account.ring.gz container.ring.gz object.ring.gz object01:/etc/swift/
	scp account.ring.gz container.ring.gz object.ring.gz object02:/etc/swift/
	
	Fetch the sample swift.conf configuration file
	curl -o /etc/swift/swift.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/queens
	
	vi /etc/swift/swift.conf
	[swift-hash]
	swift_hash_path_suffix = HASH_PATH_SUFFIX		#replace with your own unique, secret value
	swift_hash_path_prefix = HASH_PATH_PREFIX		#replace with your own unique, secret value
	
	[storage-policy:0]
	name = Policy-0
	default = yes
	
	
	Distribute swift.conf to the object storage nodes
	scp /etc/swift/swift.conf object01:/etc/swift/
	scp /etc/swift/swift.conf object02:/etc/swift/
	
	Run on the control node and on all object storage nodes
	chown -R root:swift /etc/swift
	
	Control node
	systemctl enable openstack-swift-proxy memcached && systemctl start openstack-swift-proxy memcached
	
	Object storage nodes (all of them)
	systemctl enable openstack-swift-account openstack-swift-account-auditor openstack-swift-account-reaper openstack-swift-account-replicator
	systemctl start openstack-swift-account openstack-swift-account-auditor openstack-swift-account-reaper openstack-swift-account-replicator
	systemctl enable openstack-swift-container openstack-swift-container-auditor openstack-swift-container-replicator openstack-swift-container-updater
	systemctl start openstack-swift-container openstack-swift-container-auditor openstack-swift-container-replicator openstack-swift-container-updater
	systemctl enable openstack-swift-object openstack-swift-object-auditor openstack-swift-object-replicator openstack-swift-object-updater
	systemctl start openstack-swift-object openstack-swift-object-auditor openstack-swift-object-replicator openstack-swift-object-updater
	
	Verification (control node)
	Note: first check /var/log/audit/audit.log; if it contains SELinux denials that prevent the swift processes from accessing the data directories, fix the SELinux context as follows:
	chcon -R system_u:object_r:swift_data_t:s0 /srv/node
	
	source demo-openrc
	swift stat  #check the Swift status
	
	openstack container create container1
	openstack object create container1 FILE	#upload a file into the container
	openstack container list	#list all containers
	openstack object list container1	#list the objects in container1
	openstack object save container1 FILE	#download a file from the container
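	
	A possible cleanup sketch (assumed commands, not in the original notes):
	openstack object delete container1 FILE	#delete the object from the container
	openstack container delete container1	#delete the (now empty) container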
	
	

  

