KVM cloud host NIC multi-queue


NIC multi-queue
Starting with CentOS 7, virtio NIC multi-queue is supported, which can greatly improve VM network performance. Configure it as follows:
VM XML NIC configuration
Example configuration:
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='N'/>
</interface>
N ranges from 1 to 8; at most 8 queues are supported.
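One way to apply the change is with virsh (a minimal sketch; "vm1" is a hypothetical domain name, and the guest needs a full power cycle, not just a reboot, for the new queue count to take effect):

virsh edit vm1        # add the <driver name='vhost' queues='N'/> line to the interface
virsh shutdown vm1    # power the guest fully off...
virsh start vm1       # ...and back on so QEMU restarts with N queues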
Run the following command inside the VM to enable the multi-queue NIC:
# ethtool -L eth0 combined M
M ranges from 1 to N (M must be less than or equal to N).
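Note that the ethtool setting does not persist across reboots. One simple way to make it persistent on CentOS 6/7 (a sketch, assuming the interface is eth0 and 4 queues are wanted):

echo '/sbin/ethtool -L eth0 combined 4' >> /etc/rc.local
chmod +x /etc/rc.local    # on CentOS 7, rc.local only runs at boot if it is executable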
 
Production environment examples
 
1. RouterOS (ROS) system (example)
Before multi-queue was enabled:
 
Modify the XML and restart the cloud host:
    <interface type='bridge'>
      <mac address='fa:85:92:5a:86:00'/>
      <source bridge='br_p2p1_130'/>
      <bandwidth>
        <inbound average='640'/>
        <outbound average='640'/>
      </bandwidth>
      <target dev='vnic12641.0'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
      <alias name='net0'/>
    </interface>
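After the restart, one quick way to confirm the setting landed in the live domain XML (a sketch; <domain> stands for the real domain name or ID):

virsh dumpxml <domain> | grep queues    # should print: <driver name='vhost' queues='4'/>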
 
 
2. CentOS system example
 
    <interface type='bridge'>
      <mac address='fa:e0:f4:7f:15:00'/>
      <source bridge='br_p2p1_30'/>
      <bandwidth>
        <inbound average='640'/>
        <outbound average='640'/>
      </bandwidth>
      <target dev='vnic12641.0'/>
      <model type='virtio'/>
      <driver name='vhost' queues='2'/>
      <alias name='net0'/>
    </interface>
 
Restart the cloud host, then log in to the VM and run the following command, matching the NIC interface name and queue count:
ethtool -L eth0 combined 2
 
 
Verify that it took effect (inside the VM, check the NIC's queue count and the corresponding kernel interrupts):

# ethtool -l eth0
Channel parameters for eth0:
Pre-set maximums:
RX:             0
TX:             0
Other:          0
Combined:       2  # this line shows at most 2 queues can be configured
Current hardware settings:
RX:             0
TX:             0
Other:          0
Combined:       2  # 2 queues are currently in effect
cat /proc/interrupts
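The full /proc/interrupts listing is long; assuming the NIC shows up as virtio0 (as in the listing further below), the queue lines can be pulled out directly:

grep virtio0 /proc/interrupts              # one input/output interrupt pair per queue
grep -c 'virtio0-input' /proc/interrupts   # prints the active queue count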
 
Effect on a cloud host under heavy traffic (two NICs with 4 queues each; the load is evenly distributed across 4 physical CPUs).
 
Extension 1: to achieve the effect shown above, the vCPUs must be pinned to physical CPUs.
Pin the cloud host's vCPUs to physical CPUs as follows (an equivalent loop form is sketched after the commands):
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 0 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 1 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 2 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 3 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 4 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 5 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 6 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 7 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 8 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 9 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 10 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 11 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 12 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 13 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 14 --cpulist 0-7 --live --config
sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu 15 --cpulist 0-7 --live --config
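Since only the vCPU index changes between commands, the same pinning can be written as a loop (equivalent to the sixteen commands above):

for vcpu in $(seq 0 15); do
  sudo virsh vcpupin 2bb61dddd8874d418b1959d88794d7c1 --vcpu "$vcpu" --cpulist 0-7 --live --config
done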
 
Extension 2: manual softirq pinning on CentOS 6 (a CentOS 7 cloud host can simply run systemctl start irqbalance)
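On CentOS 7 that amounts to:

systemctl start irqbalance     # start the IRQ balancing daemon now
systemctl enable irqbalance    # and have it start automatically at boot
systemctl status irqbalance    # confirm it is active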
Problem: we added NIC multi-queue to the cloud host's XML file, enabled it inside the guest with the ethtool -L eth0 combined 4 command, and confirmed with 'ethtool -l eth0' that multi-queue was in effect.
However, the customer reported that iptables NAT forwarding still suffered severe packet loss under heavy traffic and high load.
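For context, the forwarding setup was of this general shape (purely illustrative; the customer's actual rules were not recorded):

echo 1 > /proc/sys/net/ipv4/ip_forward                 # enable IP forwarding
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # source-NAT forwarded traffic out eth0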
           
On the host, the sar -n DEV 2 command showed a packet rate (PPS) of roughly 170,000, which is quite heavy.
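sar -n DEV 2 samples every interface every 2 seconds; the rxpck/s and txpck/s columns are the per-direction packet rates referred to above:

sar -n DEV 2
# IFACE   rxpck/s   txpck/s   rxkB/s   txkB/s ...   (values omitted here)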
 
 
 
Solution: log in to the cloud host and pin the softirqs to CPUs manually.
 
 
For each queue interrupt, write the target CPU index into its smp_affinity_list file (IRQs 27-34 are the virtio0 queue interrupts taken from /proc/interrupts; see the listing below):
echo 0  >/proc/irq/27/smp_affinity_list
echo 0  >/proc/irq/28/smp_affinity_list
echo 1  >/proc/irq/29/smp_affinity_list
echo 1  >/proc/irq/30/smp_affinity_list
echo 2  >/proc/irq/31/smp_affinity_list
echo 2  >/proc/irq/32/smp_affinity_list
echo 3  >/proc/irq/33/smp_affinity_list
echo 3  >/proc/irq/34/smp_affinity_list
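The same pinning written as a loop (a sketch, assuming the queue IRQs come in consecutive input/output pairs starting at 27, exactly as in the listing below):

irq=27
for cpu in 0 1 2 3; do
  echo $cpu > /proc/irq/$irq/smp_affinity_list           # virtio0-input.$cpu
  echo $cpu > /proc/irq/$((irq + 1))/smp_affinity_list   # virtio0-output.$cpu
  irq=$((irq + 2))
done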
 
Verify that it took effect:
# grep -E "eth|em|bond|virtio"  /proc/interrupts
            CPU0       CPU1      CPU2       CPU3
10:        366          0          0          0   IO-APIC-fasteoi   virtio2
24:          0          0          0          0   PCI-MSI-edge      virtio1-config
25:       4931          0          0          0   PCI-MSI-edge      virtio1-requests
26:          0          0          0          0   PCI-MSI-edge      virtio0-config
27:      512908          0          0          0   PCI-MSI-edge      virtio0-input.0
28:          5          0          0          0   PCI-MSI-edge      virtio0-output.0
29:          1        8592          0          0   PCI-MSI-edge      virtio0-input.1
30:          1          0          0          0   PCI-MSI-edge      virtio0-output.1
31:          1          0        1068          0   PCI-MSI-edge      virtio0-input.2
32:          1          0          0          0   PCI-MSI-edge      virtio0-output.2
33:          1          0          0        2214   PCI-MSI-edge      virtio0-input.3
34:          1          0          0          0   PCI-MSI-edge      virtio0-output.3
 
On CentOS 7, NIC multi-queue and the irqbalance service are enabled by default. The queue interrupts are not spread perfectly evenly across the CPUs, but at least they are not all concentrated on a single CPU.
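To watch the distribution evolve in real time (a convenience, reusing the grep from above):

watch -n1 "grep -E 'virtio0-(input|output)' /proc/interrupts"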
 

