1 Cinder Architecture Diagram

Cinder introduces a layer of abstraction, the "logical storage volume", between virtual machines and the underlying storage devices. Cinder itself is not a storage technology; it only provides this intermediate abstraction layer. By calling the driver interfaces of different storage backend types, Cinder manages the corresponding backend storage and exposes a unified, volume-oriented storage interface to users.
As the figure above shows, the Cinder component currently consists mainly of the cinder-api, cinder-scheduler, cinder-volume, and cinder-backup services, which communicate with one another through a message queue.
2 Cinder Source Code Structure
As with Glance, start with the console_scripts entries under [entry_points] in the setup.cfg file, which spell out the entry points of each Cinder service:
```ini
console_scripts =
    cinder-api = cinder.cmd.api:main
    cinder-backup = cinder.cmd.backup:main
    cinder-manage = cinder.cmd.manage:main
    cinder-rootwrap = oslo_rootwrap.cmd:main
    cinder-rtstool = cinder.cmd.rtstool:main
    cinder-scheduler = cinder.cmd.scheduler:main
    cinder-volume = cinder.cmd.volume:main
    cinder-volume-usage-audit = cinder.cmd.volume_usage_audit:main
```
| Service | Description |
| --- | --- |
| cinder-api | The HTTP entry point into Cinder. |
| cinder-backup | Provides volume backup, supporting backups of block storage volumes to OpenStack backup backends such as Swift and Ceph. |
| cinder-manage | Command-line interface for Cinder administration. |
| cinder-rtstool | Tool added along with LIO (Linux-IO Target) support. |
| cinder-scheduler | Selects a suitable cinder-volume node to handle a user request according to the configured policies. |
| cinder-volume | Interacts directly with the block storage backends through the corresponding drivers. |
| cinder-volume-usage-audit | Generates volume usage statistics. |
3 Overview of the Cinder Services
As seen above, Cinder consists mainly of the cinder-api, cinder-scheduler, cinder-volume, and cinder-backup services, and the services communicate with each other over an advanced message queue (AMQP). Each service is summarized below.
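In a running deployment, a quick way to confirm which of these services are registered and alive is the volume service list command; the output below is only illustrative, and the host and zone values will differ by environment:

```console
$ openstack volume service list
+------------------+--------------+------+---------+-------+
| Binary           | Host         | Zone | Status  | State |
+------------------+--------------+------+---------+-------+
| cinder-scheduler | controller   | nova | enabled | up    |
| cinder-volume    | storage@ceph | nova | enabled | up    |
| cinder-backup    | controller   | nova | enabled | up    |
+------------------+--------------+------+---------+-------+
```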
3.1 cinder-api
cinder-api mainly exposes a RESTful interface to users and receives client requests. Within this service the user's permissions and the submitted parameters are checked up front; only after the checks pass is the request information handed to the message queue, where the other services pick it up for further processing.
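As a rough sketch of what that RESTful interface looks like on the wire, a volume-create request is ultimately an HTTP POST against the volumes resource of the v2 API; the endpoint, project ID and token below are placeholders rather than values from a real deployment:

```bash
# Hypothetical endpoint, project ID and token; the body format follows the Cinder v2 API.
curl -s -X POST "http://controller:8776/v2/${PROJECT_ID}/volumes" \
     -H "X-Auth-Token: ${TOKEN}" \
     -H "Content-Type: application/json" \
     -d '{"volume": {"name": "oop2", "size": 1}}'
```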
3.2 cinder-scheduler
cinder-scheduler is the scheduler: it selects a suitable storage node using filter algorithms and weighing algorithms. Cinder enables three filters by default:
(1) AvailabilityZoneFilter: checks whether a cinder host's availability zone matches the requested zone and filters out hosts in other zones;
(2) CapacityFilter: checks whether the host's free capacity is at least the size of the volume to be allocated, otherwise the host is filtered out;
(3) CapabilitiesFilter: checks whether the host's capabilities match the extra specs of the volume type; hosts that do not match are filtered out.
The filters may leave more than one candidate host, so the weighing algorithms are then used to compute a weight for each node; the node with the highest weight is considered the best one, and cinder-scheduler dispatches the request to it through an RPC call over the message queue. The filters and weighers in use can be changed in the configuration, as sketched after this list. A few of the weighers are:
(1) AllocatedCapacityWeigher: the node with the least allocated capacity wins;
(2) CapacityWeigher: the node with the most free capacity wins;
(3) ChanceWeigher: a node is chosen at random.
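A minimal cinder.conf sketch of these options, assuming the stock FilterScheduler; the values shown are the usual defaults:

```ini
[DEFAULT]
# Filters applied in order to rule out unsuitable backends
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
# Weighers used to rank the backends that pass the filters
scheduler_default_weighers = CapacityWeigher
```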
3.3 cinder-volume
cinder-volume is the service deployed on the storage nodes. Its main job is to wrap the backend storage in a layer of abstraction and present a unified interface to users; it performs the actual storage operations by calling the backend storage driver's API. The operations carried out by cinder-volume are listed in the following table:
| Operation category | Operation |
| --- | --- |
| Volume operations | Create a volume |
| | Clone a volume |
| | Extend a volume |
| | Delete a volume |
| Volume-VM operations | Attach a volume to a virtual machine |
| | Detach a volume from a virtual machine |
| Volume-snapshot operations | Create a snapshot of a volume |
| | Create a volume from an existing snapshot |
| | Delete a snapshot |
| Volume-image operations | Create a volume from an image |
| | Create an image from a volume |
3.4 cinder-backup
cinder-backup backs a volume up to a separate storage device, from which it can later be recovered with a restore operation.
A backup is quite different from a snapshot of a volume; the main differences are:
(1) A snapshot depends on its source volume: if the source volume is deleted, the snapshot becomes useless. A backup does not depend on the source volume, because the volume's data has been copied to the backup storage device and can be fully recovered with a restore operation.
(2) A snapshot is normally stored alongside its source volume and managed by the same volume provider, whereas a backup lives on an independent backup device with its own backup scheme and implementation, unrelated to the volume provider.
(3) cinder-backup therefore provides disaster recovery, at the cost of the larger space a backup usually needs, while a snapshot offers a fast, in-provider way to roll a volume back.
A backup of a volume can be created with the cinder backup-create command and listed with cinder backup-list, as illustrated below.
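A minimal usage sketch; the volume and backup names here are placeholders:

```bash
# Back up the volume "oop2" to the configured backup backend (e.g. Swift or Ceph)
cinder backup-create --name oop2-bak oop2

# List the existing backups and their status
cinder backup-list
```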
4 Analysis of Key Cinder Workflows
4.1 Cinder Service Startup
Cinder's main services include cinder-api, cinder-scheduler and cinder-volume. The startup of the cinder-api service is described here; the others are similar.
cinder-api service startup flow
The main function in cinder/cmd/api.py contains the following lines:
```python
def main():
    objects.register_all()
    gmr_opts.set_defaults(CONF)
    CONF(sys.argv[1:], project='cinder',
         version=version.version_string())
    config.set_middleware_defaults()
    logging.setup(CONF, "cinder")
    python_logging.captureWarnings(True)
    utils.monkey_patch()
    gmr.TextGuruMeditation.setup_autorun(version, conf=CONF)
    rpc.init(CONF)
    launcher = service.process_launcher()
    server = service.WSGIService('osapi_volume')
    launcher.launch_service(server, workers=server.workers)
    launcher.wait()
```
(1) register_all imports the modules under the /cinder/objects directory;
(2) the configuration file is then loaded, the middleware defaults are set and the logging module is initialized;
(3) the rpc module is initialized for communication with the other services;
(4) finally the WSGI server is started and listens for client requests.
4.2 Analysis of the Volume Creation Flow
Suppose the client issues the command: openstack volume create oop2 --size=1 --debug
As noted above, the request is first handled by the cinder-api service, where it is routed to the create function in /cinder/api/v2/volumes.py:
```python
def create(self, req, body):
    ...
    # Check the volume parameters (e.g. name and size) and collect them into
    # the kwargs dictionary, then call create in /cinder/volume/api.py to do
    # the actual creation.
    new_volume = self.volume_api.create(context,
                                        size,
                                        volume.get('display_name'),
                                        volume.get('display_description'),
                                        **kwargs)
    retval = self._view_builder.detail(req, new_volume)
    return retval
```
Now look at the create function in api.py:
```python
def create(self, context, size, name, description, ...):
    # Check that the user is allowed to perform this operation in the project
    check_policy(context, 'create_from_image' if image_id else 'create')

    # RPC client used to call into the cinder-scheduler service
    sched_rpcapi = (self.scheduler_rpcapi if (
        not cgsnapshot and not source_cg and
        not group_snapshot and not source_group) else None)
    # RPC client used to call into the cinder-volume service
    volume_rpcapi = (self.volume_rpcapi if (
        not cgsnapshot and not source_cg and
        not group_snapshot and not source_group) else None)

    # Call get_flow in /cinder/volume/flows/api/create_volume.py and get
    # back the api entry-point flow
    flow_engine = create_volume.get_flow(self.db,
                                         self.image_service,
                                         availability_zones,
                                         create_what,
                                         sched_rpcapi,
                                         volume_rpcapi)
```
Throughout the Cinder code base, handling an operation often means creating a flow instance and adding tasks to it to drive the workflow. Look at the implementation of get_flow:
```python
def get_flow(db_api, image_service_api, availability_zones, create_what,
             scheduler_rpcapi=None, volume_rpcapi=None):
    """Constructs and returns the api entrypoint flow.

    This flow will do the following:

    1. Inject keys & values for dependent tasks.
    2. Extracts and validates the input keys & values.
    3. Reserves the quota (reverts quota on any failures).
    4. Creates the database entry.
    5. Commits the quota.
    6. Casts to volume manager or scheduler for further processing.
    """

    flow_name = ACTION.replace(":", "_") + "_api"
    # Instantiate the flow; tasks can then be added to it
    api_flow = linear_flow.Flow(flow_name)

    # Add ExtractVolumeRequestTask: it runs the received parameters through a
    # series of validations and stores the results for the other tasks to use
    api_flow.add(ExtractVolumeRequestTask(
        image_service_api,
        availability_zones,
        rebind={'size': 'raw_size',
                'availability_zone': 'raw_availability_zone',
                'volume_type': 'raw_volume_type'}))
    # Add several more tasks:
    # QuotaReserveTask: reserves quota for the volume, rolled back on failure
    # EntryCreateTask: records the volume creation in the database; the
    #                  volume's status is "creating" at this point
    # QuotaCommitTask: commits the reservation
    api_flow.add(QuotaReserveTask(),
                 EntryCreateTask(),
                 QuotaCommitTask())

    # Cast the request over the message queue (rpc) to the scheduler or the
    # volume manager for further processing
    if scheduler_rpcapi and volume_rpcapi:
        # This will cast it out to either the scheduler or volume manager via
        # the rpc apis provided.
        # VolumeCastTask: hands the creation work over to the scheduler or the
        # volume manager, i.e. the workflow leaves the api service and
        # continues in another service
        api_flow.add(VolumeCastTask(scheduler_rpcapi, volume_rpcapi, db_api))

    # Now load (but do not run) the flow using the provided initial data.
    return taskflow.engines.load(api_flow, store=create_what)
```
The key piece of the code above is VolumeCastTask: when the flow runs this task, its execute function is called, and execute in turn calls _cast_create_volume:
```python
def _cast_create_volume(self, context, request_spec, filter_properties):
    # If the volume is not being created from an existing source volume, cast
    # to the scheduler so it can pick a host; otherwise reuse the source
    # volume's host, bypass the scheduler and send the request straight to
    # the volume manager.
    if not source_volume_ref:
        # Cast to the scheduler and let it handle whatever is needed
        # to select the target host for this volume.
        # This calls SchedulerAPI.create_volume, which asynchronously invokes
        # SchedulerManager.create_volume over the message queue, i.e.
        # create_volume in /cinder/scheduler/manager.py.
        self.scheduler_rpcapi.create_volume(
            context,
            volume,
            snapshot_id=snapshot_id,
            image_id=image_id,
            request_spec=request_spec,
            filter_properties=filter_properties)
    else:
        # Bypass the scheduler and send the request directly to the volume
        # manager.
        volume.host = source_volume_ref.host
        volume.cluster_name = source_volume_ref.cluster_name
        volume.scheduled_at = timeutils.utcnow()
        volume.save()
        if not cgsnapshot_id:
            self.volume_rpcapi.create_volume(
                context,
                volume,
                request_spec,
                filter_properties,
                allow_reschedule=False)
```
At this point cinder-api has finished its work; it asynchronously calls into the cinder-scheduler service through the message queue, and the volume-creation workflow continues there. Look at create_volume in the scheduler service's manager.py:
```python
def create_volume(self, context, volume, snapshot_id=None, image_id=None,
                  request_spec=None, filter_properties=None):
    # Make sure the scheduler is ready
    self._wait_for_scheduler()

    try:
        # Call get_flow in /cinder/scheduler/flows/create_volume.py
        flow_engine = create_volume.get_flow(context,
                                             self.driver,
                                             request_spec,
                                             filter_properties,
                                             volume,
                                             snapshot_id,
                                             image_id)
```
Again the interesting part is the implementation of get_flow:
```python
def get_flow(context, driver_api, request_spec=None,
             filter_properties=None,
             volume=None, snapshot_id=None, image_id=None):
    """Constructs and returns the scheduler entrypoint flow.

    This flow will do the following:

    1. Inject keys & values for dependent tasks.
    2. Extract a scheduler specification from the provided inputs.
    3. Use provided scheduler driver to select host and pass volume creation
       request further.
    """
    create_what = {
        'context': context,
        'raw_request_spec': request_spec,
        'filter_properties': filter_properties,
        'volume': volume,
        'snapshot_id': snapshot_id,
        'image_id': image_id,
    }

    flow_name = ACTION.replace(":", "_") + "_scheduler"
    # Instantiate the flow object
    scheduler_flow = linear_flow.Flow(flow_name)

    # This will extract and clean the spec from the starting values.
    # ExtractSchedulerSpecTask: builds a spec object from the request spec
    scheduler_flow.add(ExtractSchedulerSpecTask(
        rebind={'request_spec': 'raw_request_spec'}))

    # This will activate the desired scheduler driver (and handle any
    # driver related failures appropriately).
    # ScheduleCreateVolumeTask: activates the scheduler driver and handles
    # any subsequent failures
    scheduler_flow.add(ScheduleCreateVolumeTask(driver_api))

    # Now load (but do not run) the flow using the provided initial data.
    return taskflow.engines.load(scheduler_flow, store=create_what)
```
When ScheduleCreateVolumeTask runs, its execute function calls the FilterScheduler's schedule_create_volume to pick the best storage backend and hand the workflow over to the volume manager service:
```python
def schedule_create_volume(self, context, request_spec, filter_properties):
    # Pick the most suitable backend to store the volume about to be created
    backend = self._schedule(context, request_spec, filter_properties)

    if not backend:
        raise exception.NoValidBackend(reason=_("No weighed backends "
                                                "available"))

    backend = backend.obj
    volume_id = request_spec['volume_id']

    # Update the volume's state in the database
    updated_volume = driver.volume_update_db(context, volume_id,
                                             backend.host,
                                             backend.cluster_name)
    self._post_select_populate_filter_properties(filter_properties,
                                                 backend)

    # context is not serializable
    filter_properties.pop('context', None)

    # Call volume_rpcapi.create_volume over the message queue.
    # VolumeAPI.create_volume remotely invokes VolumeManager.create_volume,
    # i.e. create_volume in /cinder/volume/manager.py, so the creation
    # workflow moves on to the cinder-volume service.
    self.volume_rpcapi.create_volume(context, updated_volume, request_spec,
                                     filter_properties,
                                     allow_reschedule=True)
```
With that, cinder-scheduler's work is also complete. The flow continues in create_volume in cinder-volume's manager.py, which likewise builds a flow instance and adds tasks to it to carry out the creation; let us go straight to its get_flow:
```python
def get_flow(context, manager, db, driver, scheduler_rpcapi, host, volume,
             allow_reschedule, reschedule_context, request_spec,
             filter_properties, image_volume_cache=None):
    """Constructs and returns the manager entrypoint flow.

    This flow will do the following:

    1. Determines if rescheduling is enabled (ahead of time).
    2. Inject keys & values for dependent tasks.
    3. Selects 1 of 2 activated only on *failure* tasks (one to update the db
       status & notify or one to update the db status & notify & *reschedule*).
    4. Extracts a volume specification from the provided inputs.
    5. Notifies that the volume has started to be created.
    6. Creates a volume from the extracted volume specification.
    7. Attaches a on-success *only* task that notifies that the volume
       creation has ended and performs further database status updates.
    """

    flow_name = ACTION.replace(":", "_") + "_manager"
    # Instantiate the flow object
    volume_flow = linear_flow.Flow(flow_name)

    # This injects the initial starting flow values into the workflow so that
    # the dependency order of the tasks provides/requires can be correctly
    # determined.
    create_what = {
        'context': context,
        'filter_properties': filter_properties,
        'request_spec': request_spec,
        'volume': volume,
    }

    # ExtractVolumeRefTask: looks up the volume reference for the given
    # volume ID
    volume_flow.add(ExtractVolumeRefTask(db, host, set_error=False))

    retry = filter_properties.get('retry', None)

    # Always add OnFailureRescheduleTask and we handle the change of volume's
    # status when reverting the flow. Meanwhile, no need to revert process of
    # ExtractVolumeRefTask.
    do_reschedule = allow_reschedule and request_spec and retry
    volume_flow.add(OnFailureRescheduleTask(reschedule_context, db, driver,
                                            scheduler_rpcapi, do_reschedule))

    LOG.debug("Volume reschedule parameters: %(allow)s "
              "retry: %(retry)s", {'allow': allow_reschedule, 'retry': retry})

    # ExtractVolumeSpecTask: combines the volume's database record into a
    #     convenient, easy-to-consume data structure for the other tasks
    # NotifyVolumeActionTask: emits the notification that creation has started
    # CreateVolumeFromSpecTask: actually creates the volume from its spec
    # CreateVolumeOnFinishTask: once creation succeeds, updates the volume's
    #     status in the database to "available"
    volume_flow.add(ExtractVolumeSpecTask(db),
                    NotifyVolumeActionTask(db, "create.start"),
                    CreateVolumeFromSpecTask(manager,
                                             db,
                                             driver,
                                             image_volume_cache),
                    CreateVolumeOnFinishTask(db, "create.end"))

    # Now load (but do not run) the flow using the provided initial data.
    return taskflow.engines.load(volume_flow, store=create_what)
```
In the code above the actual creation happens in CreateVolumeFromSpecTask. Its execute function dispatches to the appropriate creation method according to the type of volume being created:
```python
# Call a different creation method depending on the volume type
if create_type == 'raw':
    model_update = self._create_raw_volume(volume, **volume_spec)
elif create_type == 'snap':
    model_update = self._create_from_snapshot(context, volume,
                                              **volume_spec)
elif create_type == 'source_vol':
    model_update = self._create_from_source_volume(
        context, volume, **volume_spec)
elif create_type == 'source_replica':
    model_update = self._create_from_source_replica(
        context, volume, **volume_spec)
```
In our case the creation type is raw, so _create_raw_volume is called:
```python
ret = self.driver.create_volume(volume)
```
Here the driver is the one for the backend storage type specified in the configuration file, so the call lands in the corresponding module under the drivers directory. For example, with Ceph block device storage configured as the backend, create_volume in /cinder/volume/drivers/rbd.py is invoked to create the volume; the backend configuration is sketched below.
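For reference, such an RBD backend is typically declared in cinder.conf roughly as follows (a sketch; the pool name, user and secret UUID are placeholders):

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_backend_name = ceph
# Driver class that cinder-volume loads for this backend
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```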
4.3 Analysis of Attaching a Volume to a Virtual Machine
The OpenStack setup examples usually use an LVM storage backend with the volumes accessed over iSCSI; here the backend is Ceph block device storage instead. libvirt supports attaching an rbd image directly and then accessing the image over the rbd protocol. The source-code analysis of the attach operation follows; most of the work is actually done in the Nova services.
A volume can be attached to an instance with the command nova volume-attach instance-name volume-id. This first reaches the create function in /nova/api/openstack/compute/volumes.py:
```python
def create(self, req, server_id, body):
    """Attach a volume to an instance."""
    context = req.environ['nova.context']
    context.can(vol_policies.BASE_POLICY_NAME)
    context.can(va_policies.POLICY_ROOT % 'create')

    volume_id = body['volumeAttachment']['volumeId']
    device = body['volumeAttachment'].get('device')

    instance = common.get_instance(self.compute_api, context, server_id)

    if instance.vm_state in (vm_states.SHELVED,
                             vm_states.SHELVED_OFFLOADED):
        _check_request_version(req, '2.20', 'attach_volume',
                               server_id, instance.vm_state)

    device = self.compute_api.attach_volume(context, instance,
                                            volume_id, device)

    # The attach is async
    attachment = {}
    attachment['id'] = volume_id
    attachment['serverId'] = server_id
    attachment['volumeId'] = volume_id
    attachment['device'] = device

    # TODO(justinsb): How do I return "accepted" here?
    return {'volumeAttachment': attachment}
```
The key step here is the call to attach_volume in /nova/compute/api.py:
```python
def attach_volume(self, context, instance, volume_id, device=None,
                  disk_bus=None, device_type=None):
    if device and not block_device.match_device(device):
        raise exception.InvalidDevicePath(path=device)

    is_shelved_offloaded = instance.vm_state == vm_states.SHELVED_OFFLOADED
    if is_shelved_offloaded:
        return self._attach_volume_shelved_offloaded(context,
                                                     instance,
                                                     volume_id,
                                                     device,
                                                     disk_bus,
                                                     device_type)

    return self._attach_volume(context, instance, volume_id, device,
                               disk_bus, device_type)
```
Which branch runs depends on the virtual machine's current state; our VM is running, so the _attach_volume path is taken:
```python
def _attach_volume(self, context, instance, volume_id, device,
                   disk_bus, device_type):
    # Have the target host determine the device name, update the database
    # and return the block-device-mapping object
    volume_bdm = self._create_volume_bdm(
        context, instance, device, volume_id, disk_bus=disk_bus,
        device_type=device_type)
    try:
        # This remotely calls into the cinder-volume service to check that
        # the volume can be attached and to update its state in the database
        self._check_attach_and_reserve_volume(context, volume_id, instance)
        self.compute_rpcapi.attach_volume(context, instance, volume_bdm)
    except Exception:
        with excutils.save_and_reraise_exception():
            volume_bdm.destroy()

    return volume_bdm.device_name
```
The code so far mainly checks the volume and updates the database; attach_volume is then called to carry the attach work forward. compute_rpcapi.attach_volume uses RPC to run the task on the target compute node, landing in attach_volume in /nova/compute/manager.py on that node. That function converts the earlier parameters into a driver_block_device instance and then calls _attach_volume with that instance as an argument:
```python
def _attach_volume(self, context, instance, bdm):
    context = context.elevated()
    # Since this is a volume we created, the attach method invoked here is
    # the attach function in /nova/virt/block_device.py
    bdm.attach(context, instance, self.volume_api, self.driver,
               do_check_attach=False, do_driver_attach=True)
    info = {'volume_id': bdm.volume_id}
    self._notify_about_instance_usage(
        context, instance, "volume.attach", extra_usage_info=info)
```
Look at the attach function:
```python
def attach(self, context, instance, volume_api, virt_driver,
           do_check_attach=True, do_driver_attach=False, **kwargs):
    # Fetch the information object of the volume to be attached
    volume = volume_api.get(context, self.volume_id)
    if do_check_attach:
        # Check that the volume's state allows it to be attached
        volume_api.check_attach(context, volume, instance=instance)

    volume_id = volume['id']
    context = context.elevated()

    # Return information about this compute node, e.g. its name, IP address
    # and operating system architecture
    connector = virt_driver.get_volume_connector(instance)
    # Get the connection information for the volume; for a volume stored in
    # a Ceph cluster this includes the cluster name, monitor IPs, etc.,
    # i.e. everything needed to reach the volume in the cluster
    connection_info = volume_api.initialize_connection(context,
                                                       volume_id,
                                                       connector)
    if 'serial' not in connection_info:
        connection_info['serial'] = self.volume_id
    self._preserve_multipath_id(connection_info)

    # If do_driver_attach is False, we will attach a volume to an instance
    # at boot time. So actual attach is done by instance creation code.
    if do_driver_attach:
        # Remotely call into the cinder services to fetch the volume's
        # encryption metadata
        encryption = encryptors.get_encryption_metadata(
            context, volume_api, volume_id, connection_info)

        virt_driver.attach_volume(context, connection_info, instance,
                                  self['mount_device'],
                                  disk_bus=self['disk_bus'],
                                  device_type=self['device_type'],
                                  encryption=encryption)

    self['connection_info'] = connection_info
    if self.volume_size is None:
        self.volume_size = volume.get('size')

    mode = 'rw'
    if 'data' in connection_info:
        mode = connection_info['data'].get('access_mode', 'rw')
```
In this code, initialize_connection obtains its information by remotely calling the volume service, so it is worth looking in detail at what this call does on the Cinder side, namely initialize_connection in /cinder/volume/manager.py:
```python
def initialize_connection(self, context, volume, connector):
    # TODO(jdg): Add deprecation warning

    # Make sure the driver has been initialized
    utils.require_driver_initialized(self.driver)

    # The rbd driver does not override this method, so nothing happens here
    self.driver.validate_connector(connector)

    # For the rbd driver this method is empty, because rbd does not need to
    # create a target or a portal the way iSCSI does
    model_update = self.driver.create_export(context.elevated(),
                                             volume, connector)
    if model_update:
        volume.update(model_update)
        volume.save()

    # For the Ceph rbd driver this returns information about the cluster the
    # volume lives in, e.g. monitors, IPs, secret_uuid, etc.
    conn_info = self.driver.initialize_connection(volume, connector)
    conn_info = self._parse_connection_options(context, volume, conn_info)
    LOG.info(_LI("Initialize volume connection completed successfully."),
             resource=volume)
    return conn_info
```
Note that the driver here is the one for the backend storage type set in the configuration file; since the backend we configured is a Ceph block device, the driver instance's functions all live in /cinder/volume/drivers/rbd.py. An illustrative connection info structure is sketched below.
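For orientation, the connection info returned for an RBD-backed volume is shaped roughly like the dictionary below (a sketch; the image name, monitor addresses and secret UUID are placeholders):

```python
connection_info = {
    'driver_volume_type': 'rbd',
    'data': {
        # pool/image that the hypervisor will open over the rbd protocol
        'name': 'volumes/volume-00000000-0000-0000-0000-000000000000',
        'hosts': ['192.168.0.11', '192.168.0.12'],  # Ceph monitor addresses
        'ports': ['6789', '6789'],
        'auth_enabled': True,
        'auth_username': 'cinder',
        'secret_type': 'ceph',
        'secret_uuid': '00000000-0000-0000-0000-000000000000',
    },
}
```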
Back in the Nova code, the virt_driver.attach_volume call resolves to attach_volume in /nova/virt/libvirt/driver.py:
```python
def attach_volume(self, context, connection_info, instance, mountpoint,
                  disk_bus=None, device_type=None, encryption=None):
    guest = self._host.get_guest(instance)

    disk_dev = mountpoint.rpartition("/")[2]
    bdm = {
        'device_name': disk_dev,
        'disk_bus': disk_bus,
        'device_type': device_type}

    # Note(cfb): If the volume has a custom block size, check that
    #            that we are using QEMU/KVM and libvirt >= 0.10.2. The
    #            presence of a block size is considered mandatory by
    #            cinder so we fail if we can't honor the request.
    data = {}
    if ('data' in connection_info):
        data = connection_info['data']
    if ('logical_block_size' in data or 'physical_block_size' in data):
        if ((CONF.libvirt.virt_type != "kvm" and
             CONF.libvirt.virt_type != "qemu")):
            msg = _("Volume sets block size, but the current "
                    "libvirt hypervisor '%s' does not support custom "
                    "block size") % CONF.libvirt.virt_type
            raise exception.InvalidHypervisorType(msg)

    disk_info = blockinfo.get_info_from_bdm(
        instance, CONF.libvirt.virt_type, instance.image_meta, bdm)
    self._connect_volume(connection_info, disk_info)
    if disk_info['bus'] == 'scsi':
        disk_info['unit'] = self._get_scsi_controller_max_unit(guest) + 1

    conf = self._get_volume_config(connection_info, disk_info)

    self._check_discard_for_attach_volume(conf, instance)

    state = guest.get_power_state(self._host)
    live = state in (power_state.RUNNING, power_state.PAUSED)

    if encryption:
        encryptor = self._get_volume_encryptor(connection_info,
                                               encryption)
        encryptor.attach_volume(context, **encryption)

    guest.attach_device(conf, persistent=True, live=live)
```
The key steps here are: first obtain the guest object of the virtual machine, then pack the various parameters into the conf object, and finally have the guest call attach_device to perform the attach. Look at that function:
```python
def attach_device(self, conf, persistent=False, live=False):
    """Attaches device to the guest.

    :param conf: A LibvirtConfigObject of the device to attach
    :param persistent: A bool to indicate whether the change is
                       persistent or not
    :param live: A bool to indicate whether it affect the guest
                 in running state
    """
    flags = persistent and libvirt.VIR_DOMAIN_AFFECT_CONFIG or 0
    flags |= live and libvirt.VIR_DOMAIN_AFFECT_LIVE or 0

    # Render conf as XML; libvirt can then attach the volume to the guest
    device_xml = conf.to_xml()
    if six.PY3 and isinstance(device_xml, six.binary_type):
        device_xml = device_xml.decode('utf-8')

    LOG.debug("attach device xml: %s", device_xml)
    self._domain.attachDeviceFlags(device_xml, flags=flags)
```
The main job of this code is to turn conf into the XML document that libvirt needs for attaching the volume, after which libvirt attaches the volume to the virtual machine instance; an illustrative XML is sketched below.
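For an rbd-backed volume, the device XML handed to attachDeviceFlags ends up looking roughly like this (a sketch; the image name, monitor addresses, secret UUID and target device are placeholders):

```xml
<disk type="network" device="disk">
  <driver name="qemu" type="raw"/>
  <source protocol="rbd" name="volumes/volume-00000000-0000-0000-0000-000000000000">
    <host name="192.168.0.11" port="6789"/>
    <host name="192.168.0.12" port="6789"/>
  </source>
  <auth username="cinder">
    <secret type="ceph" uuid="00000000-0000-0000-0000-000000000000"/>
  </auth>
  <target dev="vdb" bus="virtio"/>
</disk>
```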
