Title: ironic study notes, day 1
Date: 2018.7.9
Reference, the official documentation: https://docs.openstack.org/ironic/latest/user/index.html
1. Rambling
I graduated this year and joined a cloud computing company as a new grad, working on products built on OpenStack.
OpenStack really is a huge distributed system: through the network it decouples users from the rich hardware resources sitting behind it.
I joined the compute team, so the main thing I need to master is Nova. For now, apart from some surface-level knowledge of the day-to-day operations Nova performs on virtual machines, the deeper material is still a complete mystery to me. But hard work pays off, a good start is half the battle, and work and life both move forward one step at a time, so all I can do right now is cheer myself on, hehe~
Keep it up, keep it up~
2. Overview
What I have figured out so far: Nova manages the lifecycle of virtual machines, while ironic manages the lifecycle of physical (bare metal) machines.
There are plenty of articles about ironic online, but as a complete beginner I still find them hard going; there is simply too much material and it feels a bit overwhelming, so all I can do is clumsily skim the official documentation to get a rough picture.
I went through the official documentation on the deployment process in which ironic interacts with Nova, ran it through Google Translate, and made the excerpts below:
3. A brief introduction to ironic
Ironic is an OpenStack project which provisions bare metal (as opposed to virtual) machines. It may be used independently or as part of an OpenStack Cloud, and integrates with the OpenStack Identity (keystone), Compute (nova), Network (neutron), Image (glance) and Object (swift) services.
4. Diagram of the ironic node deployment process
(The deployment flow diagram is in the official documentation linked above.)
5. Deploy Process
This describes a typical ironic node deployment using PXE and the Ironic Python Agent (IPA). Depending on the ironic driver interfaces used, some of the steps might be marginally different, however the majority of them will remain the same.
- A boot instance request comes in via the Nova API, through the message queue to the Nova scheduler.
- Nova scheduler applies filters and finds the eligible hypervisor. The nova scheduler also uses the flavor’s extra_specs, such as cpu_arch, to match the target physical node (I sketch a rough flavor example for myself right after this list).
- Nova compute manager claims the resources of the selected hypervisor.
- Nova compute manager creates (unbound) tenant virtual interfaces (VIFs) in the Networking service according to the network interfaces requested in the nova boot request. A caveat here is, the MACs of the ports are going to be randomly generated, and will be updated when the VIF is attached to some node to correspond to the node network interface card’s (or bond’s) MAC.
- A spawn task is created by the nova compute which contains all the information such as which image to boot from etc. It invokes the driver.spawn from the virt layer of Nova compute. During the spawn process, the virt driver does the following:
- Updates the target ironic node with the information about deploy image, instance UUID, requested capabilities and various flavor properties.
- Validates node’s power and deploy interfaces, by calling the ironic API.
- Attaches the previously created VIFs to the node. Each neutron port can be attached to any ironic port or port group, with port groups having higher priority than ports. On ironic side, this work is done by the network interface. Attachment here means saving the VIF identifier into ironic port or port group and updating VIF MAC to match the port’s or port group’s MAC, as described in bullet point 4 above (the VIF creation step).
- Generates config drive, if requested.
- Nova’s ironic virt driver issues a deploy request via the Ironic API to the Ironic conductor servicing the bare metal node (a rough sketch of the equivalent ironic API calls is at the very end of these notes).
- Virtual interfaces are plugged in and Neutron API updates DHCP port to set PXE/TFTP options. In case of using neutron network interface, ironic creates separate provisioning ports in the Networking service, while in case of flat network interface, the ports created by nova are used both for provisioning and for deployed instance networking.
- The ironic node’s boot interface prepares (i)PXE configuration and caches deploy kernel and ramdisk.
- The ironic node’s management interface issues commands to enable network boot of a node.
- The ironic node’s deploy interface caches the instance image (in case of iscsi deploy interface or most pxe_* classic drivers), and kernel and ramdisk if needed (it is needed in case of netboot for example).
- The ironic node’s power interface instructs the node to power on.
- The node boots the deploy ramdisk.
- Depending on the exact driver used, either the conductor copies the image over iSCSI to the physical node (iSCSI deploy) or the deploy ramdisk downloads the image from a temporary URL (Direct deploy). The temporary URL can be generated by Swift API-compatible object stores, for example Swift itself or RadosGW. The image deployment is done.
- The node’s boot interface switches pxe config to refer to instance images (or, in case of local boot, sets boot device to disk), and asks the ramdisk agent to soft power off the node. If the soft power off by the ramdisk agent fails, the bare metal node is powered off via IPMI/BMC call.
- The deploy interface triggers the network interface to remove provisioning ports if they were created, and binds the tenant ports to the node if not already bound. Then the node is powered on. Note: there are 2 power cycles during bare metal deployment; the first time the node is powered on when the ramdisk is booted, the second time after the image is deployed.
- The bare metal node’s provisioning state is updated to active.
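
To make the scheduling step a bit more concrete for myself, here is a minimal sketch (not from the official docs) of creating a bare metal flavor whose extra_specs the scheduler can match against a physical node, written with python-novaclient. The endpoint, credentials, flavor name and the exact extra_spec keys (cpu_arch, resources:CUSTOM_BAREMETAL) are all assumptions for illustration; which keys actually matter depends on the nova/ironic release.

```python
# A minimal sketch, assuming python-novaclient and keystoneauth1 are installed.
# Endpoint, credentials, flavor name and extra_spec keys are placeholders.
from keystoneauth1 import loading, session
from novaclient import client as nova_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',   # placeholder keystone endpoint
    username='admin',
    password='secret',
    project_name='admin',
    user_domain_name='Default',
    project_domain_name='Default',
)
nova = nova_client.Client('2.1', session=session.Session(auth=auth))

# Create a flavor sized to match the physical node.
flavor = nova.flavors.create('baremetal-general', ram=4096, vcpus=2, disk=40)

# extra_specs the scheduler can use to pick a matching ironic node;
# cpu_arch is the example from the docs, the resource-class key is how
# newer releases do the matching (both values are assumptions here).
flavor.set_keys({
    'cpu_arch': 'x86_64',
    'resources:CUSTOM_BAREMETAL': '1',
})
```

For the scheduler to actually pick a node, the ironic node's own properties would have to match these values, but I have not dug into that part yet.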

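And to help myself map the ironic-side steps to real API calls, here is a rough openstacksdk sketch of validating a node, attaching a VIF and requesting a deploy. The cloud name, node name and port UUID are placeholders, and this is only my reading of the flow above, not what nova's ironic virt driver literally executes.

```python
# A rough sketch, assuming openstacksdk is installed and a 'mycloud' entry
# exists in clouds.yaml. Node name and port UUID below are placeholders.
import openstack

conn = openstack.connect(cloud='mycloud')

node = conn.baremetal.find_node('my-node')

# "Validates node's power and deploy interfaces, by calling the ironic API."
result = conn.baremetal.validate_node(node, required=('power', 'deploy'))

# "Attaches the previously created VIFs to the node" -- the VIF id is the
# UUID of a neutron port (placeholder value below).
conn.baremetal.attach_vif_to_node(node, '11111111-2222-3333-4444-555555555555')

# The deploy request itself: ask ironic to move the node to 'active';
# the conductor then drives the PXE boot, image copy and power cycles above.
conn.baremetal.set_node_provision_state(node, 'active')

# Wait until "the bare metal node's provisioning state is updated to active".
conn.baremetal.wait_for_nodes_provision_state([node], 'active')
```

In the real Nova flow all of this happens inside the ironic virt driver; I only wrote it down to convince myself which ironic API each step corresponds to.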