Nova volume attach: source code analysis


When `nova volume-attach <instance_uuid> <volume_uuid>` is executed, the main flow is as follows.

The storage backend used in this analysis is LVM + iSCSI.

1. The nova client parses the command line and calls nova-api through the RESTful API. The request and its body look like this:

POST /servers/{server_id}/os-volume_attachments

Request body:

{
  "volumeAttachment": {
    "volumeId": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
    "device": "/dev/vdd"
  }
}
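As a sketch, the same request body and URL can be built directly in Python; the server UUID below is a placeholder, and in a real deployment the endpoint and token would come from Keystone:

```python
import json

# Hypothetical server UUID; the volume UUID matches the example above.
server_id = "9f3c1a2b-0000-0000-0000-000000000000"
volume_id = "a26887c6-c47b-4654-abb5-dfadf7d3f803"

# Body matching the os-volume_attachments API shown above.
body = {
    "volumeAttachment": {
        "volumeId": volume_id,
        "device": "/dev/vdd",  # optional; Nova picks a device if omitted
    }
}

url = "/servers/%s/os-volume_attachments" % server_id
payload = json.dumps(body)
```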

2. The nova-api entry point for attaching a volume is nova/api/openstack/compute/volumes.py; the controller is VolumeAttachmentController and the handler method is create.

This method extracts the volume UUID and the device name (where the volume will appear inside the instance) from the request body, looks up the instance in the instances table by instance_id, and finally calls the attach_volume method of the api module under the compute directory:

def create(self, req, server_id, body):
    .......
    volume_id = body['volumeAttachment']['volumeId']
    device = body['volumeAttachment'].get('device')

    instance = common.get_instance(self.compute_api, context, server_id)

    if instance.vm_state in (vm_states.SHELVED,
                             vm_states.SHELVED_OFFLOADED):
        _check_request_version(req, '2.20', 'attach_volume',
                               server_id, instance.vm_state)

    try:
        device = self.compute_api.attach_volume(context, instance,
                                                volume_id, device)

This in turn calls:

nova/compute/api.py:API.attach_volume
self._attach_volume(context, instance, volume_id, device, disk_bus, device_type)

nova/compute/api.py:API._attach_volume
  step 1: volume_bdm = self._create_volume_bdm
  step 2: self._check_attach_and_reserve_volume()
  step 3: self.compute_rpcapi.attach_volume

step 1: Create the BDM entry, i.e. a record in the block_device_mapping table mapping the instance to the volume. The BDM is not created on the API node; instead an RPC request (compute_rpcapi.reserve_block_device_name) is sent to the nova-compute node hosting the instance, which creates it.
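The resulting block_device_mapping entry looks roughly like the following; the field names follow the Nova schema, but all values here are illustrative:

```python
# Illustrative block_device_mapping entry created via
# compute_rpcapi.reserve_block_device_name (values are examples only).
bdm_entry = {
    "instance_uuid": "9f3c1a2b-0000-0000-0000-000000000000",
    "volume_id": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
    "source_type": "volume",       # attaching an existing volume
    "destination_type": "volume",
    "device_name": "/dev/vdd",     # chosen on the compute node
    "boot_index": None,            # not a boot device
}
```

The source_type field is what later selects the BDM driver on the compute node.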

step 2: _check_attach_and_reserve_volume()

Its main job is to call cinderclient to fetch the volume to be attached, check that the instance and the volume are in the same availability zone, and finally update the volume's status in the Cinder database to 'attaching', which prevents other API calls from using this volume elsewhere:

def _check_attach_and_reserve_volume(self, context, volume_id, instance):
    volume = self.volume_api.get(context, volume_id)
    self.volume_api.check_availability_zone(context, volume,
                                            instance=instance)
    self.volume_api.reserve_volume(context, volume_id)

step 3: self.compute_rpcapi.attach_volume
nova/compute/rpcapi.py:ComputeAPI.attach_volume
nova-api sends an asynchronous (cast) RPC request to the compute node hosting the instance. Once nova-compute receives the RPC request and takes over, nova-api's part of the work is done:

def attach_volume(self, ctxt, instance, bdm):
    version = '4.0'
    cctxt = self.router.by_instance(ctxt, instance).prepare(
            server=_compute_host(None, instance), version=version)
    cctxt.cast(ctxt, 'attach_volume', instance=instance, bdm=bdm)
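The cast() call is fire-and-forget: nova-api does not wait for a result. A toy sketch of the difference between cast and call semantics (this models the idea, it is not oslo.messaging itself):

```python
import queue

class FakeRPCClient:
    """Toy model of cast (async, no result) vs call (sync, returns a reply)."""

    def __init__(self):
        self.inbox = queue.Queue()

    def cast(self, method, **kwargs):
        # Enqueue the request and return immediately; the caller gets nothing.
        self.inbox.put((method, kwargs))
        return None

    def call(self, method, **kwargs):
        # A real call would block until the server replies.
        self.inbox.put((method, kwargs))
        return "reply-for-%s" % method

client = FakeRPCClient()
result = client.cast("attach_volume", bdm="bdm-object")
```

Because attach is a cast, any later failure on the compute node is reported through the volume/BDM state, not through the API response.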

step 4: The nova-compute node receives the RPC request; the handler is the attach_volume method of the ComputeManager class in nova/compute/manager.py:

nova.compute.manager.ComputeManager.attach_volume
     step 5: driver_bdm = driver_block_device.convert_volume(bdm)
     step 6: self._attach_volume(context, instance, driver_bdm)

step 5: Select the BDM driver based on the source_type of the BDM instance. Since we are attaching a volume, the BDM was created with source_type 'volume', so the driver selected is DriverVolumeBlockDevice (nova/virt/block_device.py):

def convert_volume(volume_bdm):
    try:
        return convert_all_volumes(volume_bdm)[0]
    except IndexError:
        pass

def convert_all_volumes(*volume_bdms):
    source_volume = convert_volumes(volume_bdms)
    source_snapshot = convert_snapshots(volume_bdms)
    source_image = convert_images(volume_bdms)
    source_blank = convert_blanks(volume_bdms)

    return [vol for vol in
            itertools.chain(source_volume, source_snapshot,
                            source_image, source_blank)]

convert_volumes = functools.partial(_convert_block_devices,
                                    DriverVolumeBlockDevice)
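The dispatch on source_type can be sketched as a much-simplified _convert_block_devices: the real helper wraps each matching BDM in a driver class (e.g. DriverVolumeBlockDevice); this sketch only does the filtering step, using the source_type string directly:

```python
import functools

def _convert_block_devices(source_type, bdms):
    # Simplified: keep only the BDMs whose source_type matches.
    # The real Nova helper instantiates a driver class per match instead.
    return [bdm for bdm in bdms if bdm.get("source_type") == source_type]

convert_volumes = functools.partial(_convert_block_devices, "volume")
convert_snapshots = functools.partial(_convert_block_devices, "snapshot")

bdms = [{"source_type": "volume", "volume_id": "a26887c6"},
        {"source_type": "image", "image_id": "deadbeef"}]

# Only the volume-backed BDM survives, so DriverVolumeBlockDevice handles it.
volume_bdms = convert_volumes(bdms)
```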

step 6: BDM driver attach
self._attach_volume(context, instance, driver_bdm)
nova/compute/manager.py:ComputeManager._attach_volume
bdm.attach(context, instance, self.volume_api, self.driver, do_check_attach=False, do_driver_attach=True)

This calls the attach method of DriverVolumeBlockDevice
(nova/virt/block_device.py:DriverVolumeBlockDevice.attach), which performs:

step 7: connector = virt_driver.get_volume_connector(instance)  -- even if the volume being attached is a local LVM volume, this connector info is sent to Cinder
step 8: connection_info = volume_api.initialize_connection(context, volume_id, connector)
step 9: virt_driver.attach_volume(context, connection_info, instance, self['mount_device'], disk_bus=self['disk_bus'], device_type=self['device_type'], encryption=encryption)
step 13: volume_api.attach(context, volume_id, instance.uuid, self['mount_device'], mode=mode)

step 7: This method returns the compute node's IP address, OS type, system architecture, and iSCSI initiator name for Cinder to use:
connector = virt_driver.get_volume_connector(instance)
Since libvirt is in use, virt_driver is nova/virt/libvirt/driver.py
(nova.virt.libvirt.driver.LibvirtDriver.get_volume_connector):

from os_brick.initiator import connector
def get_volume_connector(self, instance):
    root_helper = utils.get_root_helper()
    return connector.get_connector_properties(
        root_helper, CONF.my_block_storage_ip,
        CONF.libvirt.volume_use_multipath,
        enforce_multipath=True,
        host=CONF.host)

def get_connector_properties(root_helper, my_ip, multipath, enforce_multipath,
                             host=None):
    iscsi = ISCSIConnector(root_helper=root_helper)
    fc = linuxfc.LinuxFibreChannel(root_helper=root_helper)

    props = {}
    props['ip'] = my_ip  # my_block_storage_ip from the compute node's nova.conf
    props['host'] = host if host else socket.gethostname()  # compute node hostname
    initiator = iscsi.get_initiator()
    if initiator:
        # iSCSI initiator name, read from /etc/iscsi/initiatorname.iscsi
        props['initiator'] = initiator
    wwpns = fc.get_fc_wwpns()
    if wwpns:
        props['wwpns'] = wwpns
    wwnns = fc.get_fc_wwnns()
    if wwnns:
        props['wwnns'] = wwnns
    props['multipath'] = (multipath and
                          _check_multipathd_running(root_helper,
                                                    enforce_multipath))
    props['platform'] = platform.machine()  # system architecture, e.g. x86_64
    props['os_type'] = sys.platform  # operating system type
    return props
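On a typical x86_64 compute node with iSCSI but no Fibre Channel HBAs, the returned connector properties look roughly like this; all values below are illustrative examples for a hypothetical node:

```python
# Example of the dict get_connector_properties() returns (illustrative values).
connector = {
    "ip": "192.168.0.10",        # CONF.my_block_storage_ip
    "host": "compute-01",        # CONF.host, or the node's hostname
    # Content of /etc/iscsi/initiatorname.iscsi on the compute node:
    "initiator": "iqn.1994-05.com.redhat:example",
    "multipath": False,          # no multipathd in this example
    "platform": "x86_64",        # platform.machine()
    "os_type": "linux2",         # sys.platform
}
```

Note that 'wwpns'/'wwnns' are absent here because the node has no FC hardware; the code only adds those keys when the lookups succeed.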

step 8: This call invokes Cinder's initialize_connection API. Cinder creates the iSCSI target and LUN, generates the auth credentials, assembles all of this into connection info, and returns it to nova-compute. It also inserts a record into the volume_attachment table in the Cinder database, recording the connector info:
connection_info = volume_api.initialize_connection(context, volume_id, connector)
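For an LVM + iSCSI backend, the returned connection_info has roughly the following shape; the target portal, IQN and CHAP credentials below are illustrative placeholders:

```python
# Illustrative connection_info returned by Cinder for an iSCSI volume.
connection_info = {
    "driver_volume_type": "iscsi",
    "data": {
        "target_portal": "192.168.0.20:3260",   # cinder-volume node
        "target_iqn": "iqn.2010-10.org.openstack:volume-a26887c6",
        "target_lun": 0,
        "auth_method": "CHAP",
        "auth_username": "example-user",        # illustrative CHAP credentials
        "auth_password": "example-pass",
        "volume_id": "a26887c6-c47b-4654-abb5-dfadf7d3f803",
    },
}
```

The driver_volume_type key is what selects the libvirt volume driver on the compute side.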

step 9: With the volume's connection_info in hand, the compute node attaches the volume to the instance:
virt_driver.attach_volume
Since libvirt is in use, this is the attach_volume method of LibvirtDriver in nova/virt/libvirt/driver.py, which mainly calls:

step 10: self._connect_volume(connection_info, disk_info)
step 11: conf = self._get_volume_config(connection_info, disk_info)
step 12: guest.attach_device(conf, persistent=True, live=live)

step 10: This method runs the iscsiadm discovery and login subcommands, i.e. it maps the LUN to a local device:
self._connect_volume(connection_info, disk_info)
All targets found by the discovery command are persisted under /var/lib/iscsi/nodes.
After login, the LUN device appears under /dev/disk/by-path.
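A sketch of the iscsiadm command lines that _connect_volume ends up running via os-brick, built here as strings rather than executed; the portal and IQN reuse the illustrative values from step 8:

```python
target_portal = "192.168.0.20:3260"
target_iqn = "iqn.2010-10.org.openstack:volume-a26887c6"

# Discover all targets exported by the portal; the results are
# persisted under /var/lib/iscsi/nodes.
discover_cmd = ("iscsiadm -m discovery -t sendtargets -p %s"
                % target_portal)

# Log in to the target; the LUN then shows up under /dev/disk/by-path.
login_cmd = ("iscsiadm -m node -T %s -p %s --login"
             % (target_iqn, target_portal))
```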

step 11: Given the LUN's /dev/disk/by-path path on the compute node, this function generates the volume's XML configuration:
conf = self._get_volume_config(connection_info, disk_info)
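Once serialized, the generated config is a libvirt <disk> element roughly like the following; in Nova the XML is produced by the config object's to_xml(), and the device path and target name here are illustrative:

```python
# Illustrative libvirt <disk> XML for an iSCSI-backed volume.
by_path = ("/dev/disk/by-path/ip-192.168.0.20:3260-iscsi-"
           "iqn.2010-10.org.openstack:volume-a26887c6-lun-0")

disk_xml = """<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source dev="%s"/>
  <target dev="vdd" bus="virtio"/>
</disk>""" % by_path
```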

step 12: The equivalent of `virsh attach-device` is invoked to attach the volume to the VM:
guest.attach_device(conf, persistent=True, live=live)

step 13: Finally, the volume's status in the Cinder database is updated to 'in-use':
volume_api.attach(context, volume_id, instance.uuid, self['mount_device'], mode=mode)
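Putting the Cinder-side status changes together, the volume walks through a small state machine during attach; a toy sketch of the transitions described above:

```python
# Toy model of the volume status transitions during attach:
# reserve_volume (step 2): available -> attaching
# volume_api.attach (step 13): attaching -> in-use
TRANSITIONS = {
    ("available", "reserve"): "attaching",
    ("attaching", "attach"): "in-use",
}

def advance(status, action):
    try:
        return TRANSITIONS[(status, action)]
    except KeyError:
        raise ValueError("invalid transition: %s via %s" % (status, action))

status = "available"
status = advance(status, "reserve")  # done by nova-api
status = advance(status, "attach")   # done by nova-compute
```

The 'attaching' intermediate state is what prevents two concurrent attach requests from grabbing the same volume.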

Reference: http://int32bit.me/2017/09/08/OpenStack%E8%99%9A%E6%8B%9F%E6%9C%BA%E6%8C%82%E8%BD%BD%E6%95%B0%E6%8D%AE%E5%8D%B7%E8%BF%87%E7%A8%8B%E5%88%86%E6%9E%90/

 

