Multi-queue NICs for KVM cloud VMs


NIC multi-queue
CentOS 7 and later support virtio NIC multi-queue, which can greatly improve virtual machine network performance. Configure it as follows:
NIC section of the VM's XML
Sample configuration
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='N'/>
</interface>
N: 1 - 8; at most 8 queues are supported.
Run the following command inside the guest to enable the multi-queue NIC:
# ethtool -L eth0 combined M
M: 1 - N; M must be less than or equal to N, and should not exceed the guest's vCPU count.
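A reasonable default for M is the smaller of the guest's CPU count and the N configured in the XML. A minimal sketch (N=4 and the name eth0 are illustrative assumptions; the command is echoed as a dry run rather than executed):

```shell
# Pick M = min(vCPU count, N); running more queues than vCPUs gains nothing.
# N=4 and eth0 are illustrative assumptions; echo makes this a dry run.
N=4
CPUS=$(nproc)
M=$(( CPUS < N ? CPUS : N ))
echo "ethtool -L eth0 combined $M"
```

Drop the echo (and run as root inside the guest) to actually apply the setting.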
 
Production examples
 
1. RouterOS (ROS) example
Before multi-queue was enabled:
 
Edit the XML and restart the cloud VM:
    <interface type='bridge'>
      <mac address='fa:85:92:5a:86:00'/>
      <source bridge='br_p2p1_130'/>
      <bandwidth>
        <inbound average='640'/>
        <outbound average='640'/>
      </bandwidth>
      <target dev='vnic12641.0'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
      <alias name='net0'/>
    </interface>
 
 
2. CentOS example
 
    <interface type='bridge'>
      <mac address='fa:e0:f4:7f:15:00'/>
      <source bridge='br_p2p1_30'/>
      <bandwidth>
        <inbound average='640'/>
        <outbound average='640'/>
      </bandwidth>
      <target dev='vnic12641.0'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2'/>
      <alias name='net0'/>
    </interface>
 
Restart the cloud VM, then log in to the guest and run the following command, using the NIC interface name and the queue count configured in the XML:
ethtool -L eth0 combined 2
 
 
Check whether it took effect (inside the guest, inspect the NIC's queue count and the matching kernel interrupts):
 
# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       2  # this line shows the maximum number of queues that can be set (2)
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       2  # 2 queues are currently active
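The currently active queue count can also be extracted in a script. A sketch that parses ethtool -l style output (the here-string below abbreviates the sample output above; on a real guest, pipe `ethtool -l eth0` in instead):

```shell
# Extract the "Combined" value under "Current hardware settings".
# The sample reuses (and abbreviates) the ethtool -l output shown above.
sample='Channel parameters for eth0:
Pre-set maximums:
Combined:       2
Current hardware settings:
Combined:       2'
CUR=$(echo "$sample" | awk '/Current/{cur=1} cur && /Combined:/{print $2}')
echo "current queues: $CUR"
```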
cat /proc/interrupts
 
Effect on a cloud VM under heavy traffic (two NICs with 4 queues each; the load is spread evenly across 4 physical CPUs)
 
Extension 1: to achieve the effect shown above, the vCPUs must be pinned to physical CPUs.
Pinning the cloud VM's vCPUs to physical CPUs looks like this:
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 0 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 1 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 2 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 3 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 4 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 5 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 6 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 7 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 8 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 9 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 10 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 11 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 12 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 13 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 14 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 15 --cpulist 0-7 --live --config
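The sixteen vcpupin commands above differ only in the vCPU index, so they can be generated in a loop. A sketch using the domain UUID and 0-7 cpulist from the example (commands are echoed as a dry run; remove the echo to actually run them):

```shell
# Generate the per-vCPU pinning commands shown above.
# DOMAIN and the 0-7 cpulist are taken from the example; echo = dry run.
DOMAIN=2bb61dddd8874d418b1959d88794d7c1
CMDS=$(for vcpu in $(seq 0 15); do
  echo "sudo virsh vcpupin $DOMAIN --vcpu $vcpu --cpulist 0-7 --live --config"
done)
echo "$CMDS"
```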
 
Extension 2: manual soft-interrupt binding on CentOS 6 (CentOS 7 guests can simply run systemctl start irqbalance)
Problem: we added NIC multi-queue to the cloud VM's XML, enabled it in the guest with ethtool -L eth0 combined 4, and confirmed with ethtool -l eth0 that multi-queue was in effect.
However, the customer reported that under heavy traffic and high load, iptables NAT forwarding still dropped packets badly.
           
Running sar -n DEV 2 on the host showed a PPS (packet rate) of about 170,000 packets per second, which is quite high.
 
 
 
Solution: log in to the cloud VM and bind the soft interrupts manually.
 
 
Write the target CPU number into each IRQ's smp_affinity_list file:
echo 0  >/proc/irq/27/smp_affinity_list
echo 0  >/proc/irq/28/smp_affinity_list
echo 1  >/proc/irq/29/smp_affinity_list
echo 1  >/proc/irq/30/smp_affinity_list
echo 2  >/proc/irq/31/smp_affinity_list
echo 2  >/proc/irq/32/smp_affinity_list
echo 3  >/proc/irq/33/smp_affinity_list
echo 3  >/proc/irq/34/smp_affinity_list
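These eight assignments follow a simple pattern: each queue's input/output IRQ pair is bound to the same CPU, and in this example the pairs are consecutive starting at IRQ 27. A sketch that generates the commands from that pattern (IRQ numbering and queue count are taken from the example above; the commands are echoed as a dry run):

```shell
# Generate per-IRQ affinity assignments: queue q's input/output IRQ
# pair goes to CPU q. FIRST_IRQ=27 and QUEUES=4 match the example.
FIRST_IRQ=27
QUEUES=4
CMDS=$(for q in $(seq 0 $((QUEUES - 1))); do
  for irq in $((FIRST_IRQ + 2*q)) $((FIRST_IRQ + 2*q + 1)); do
    echo "echo $q > /proc/irq/$irq/smp_affinity_list"
  done
done)
echo "$CMDS"
```

Always confirm the actual IRQ numbers in /proc/interrupts first; they vary between guests.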
 
Check whether the binding took effect:
# grep -E "eth|em|bond|virtio"  /proc/interrupts
            CPU0       CPU1      CPU2       CPU3
10:        366          0          0          0   IO-APIC-fasteoi   virtio2
24:          0          0          0          0   PCI-MSI-edge      virtio1-config
25:       4931          0          0          0   PCI-MSI-edge      virtio1-requests
26:          0          0          0          0   PCI-MSI-edge      virtio0-config
27:      512908          0          0          0   PCI-MSI-edge      virtio0-input.0
28:          5          0          0          0   PCI-MSI-edge      virtio0-output.0
29:          1        8592          0          0   PCI-MSI-edge      virtio0-input.1
30:          1          0          0          0   PCI-MSI-edge      virtio0-output.1
31:          1          0        1068          0   PCI-MSI-edge      virtio0-input.2
32:          1          0          0          0   PCI-MSI-edge      virtio0-output.2
33:          1          0          0        2214   PCI-MSI-edge      virtio0-input.3
34:          1          0          0          0   PCI-MSI-edge      virtio0-output.3
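From output like the above you can confirm that each input queue's interrupts now land on its own CPU. A sketch that reports, for each virtio0 input queue, the CPU with the highest interrupt count (the here-string reuses the sample lines above; on a real guest pipe /proc/interrupts in instead):

```shell
# For each virtio0-input.N line, find the CPU column with the largest
# count. Columns 2-5 are CPU0-CPU3, matching the sample output above.
sample='27:      512908          0          0          0   PCI-MSI-edge      virtio0-input.0
29:          1        8592          0          0   PCI-MSI-edge      virtio0-input.1
31:          1          0       1068          0   PCI-MSI-edge      virtio0-input.2
33:          1          0          0       2214   PCI-MSI-edge      virtio0-input.3'
OUT=$(echo "$sample" | awk '{
  max = 0; cpu = -1
  for (i = 2; i <= 5; i++) if ($i + 0 > max) { max = $i + 0; cpu = i - 2 }
  print $NF, "-> CPU" cpu
}')
echo "$OUT"
```

If every queue maps to a distinct CPU, the affinity binding is working as intended.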
 
On CentOS 7, NIC multi-queue and the irqbalance service are enabled by default. Although the NIC queues are not spread perfectly evenly across the CPUs, at least they are no longer concentrated on a single CPU.
 

