DPDK Course Study Notes, Exercise 8 (Getting to Know DPVS)

I came across DPVS and followed its workflow as an exercise, to get an idea of what it does and to set up its environment. Since it is built on DPDK, this was also a chance to practice setting up a DPDK environment.

0: Summary

1: Learned what LVS can do; it is quite powerful.

2: Practiced the DPDK environment setup again: enabling multi-queue support on the NIC, configuring hugepages, and letting DPDK take over the NIC.

3: DPVS is a product built on top of DPDK. This is only a first look; no real business scenario is involved, so I did not go deep.

1: LVS practice

Reference: "LVS的介绍与使用" (Introduction and Usage of LVS), CSDN blog

1.1 Install and start nginx first. The default port is 80; this machine has multiple NICs and the service is reachable on both IPs.

bash
sudo apt-get install nginx
netstat -anop|grep nginx
sudo systemctl restart nginx 
ps afx|grep nginx
netstat -anop|grep nginx
curl 192.168.40.137:80
ifconfig
curl 192.168.40.139:80

1.2 Install LVS (ipvsadm) and test it

bash
#Simple configuration just for testing; adjust to the actual scenario
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# apt-get install ipvsadm
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# sudo modprobe ip_vs

ifconfig ens33:250 192.168.40.222/24        #temporarily add an IP to use as the virtual IP (VIP)
ifconfig
sudo ipvsadm -A -t 192.168.40.222:80 -s rr   #create the virtual service on the VIP with round-robin scheduling
ipvsadm -L -n                                #show the configuration
sudo ipvsadm -a -t 192.168.40.222:80 -r 192.168.40.139:80  #add the two real servers behind the VIP
ipvsadm -L -n
sudo ipvsadm -a -t 192.168.40.222:80 -r 192.168.40.137:80 
ipvsadm -L -n
ipvsadm -C    #clear the configuration

Once this is configured, accessing the virtual IP 192.168.40.222:80 with curl or a browser reaches nginx normally. Behind the scenes the requests are dispatched to the real servers in turn, according to the configured load-balancing policy (round robin here).
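To see the scheduling in action, a quick loop like the sketch below (using the VIP from this setup) sends a few requests and then checks the per-real-server counters:

bash
#send a handful of requests to the VIP; with "-s rr" they are spread across the real servers in turn
for i in $(seq 1 6); do curl -s -o /dev/null 192.168.40.222:80; done

#the per-real-server connection/packet counters should increase evenly
sudo ipvsadm -L -n --stats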

1.3 Additional notes

This was only a quick exercise to get familiar with LVS. In a real deployment, built around the actual business, there is more to learn: the virtual IP is what is exposed to the outside while the real IPs are not, and keepalived is usually paired with LVS to detect whether the backend services are healthy (see the sketch after the output below).

bash
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.40.137  netmask 255.255.255.0  broadcast 192.168.40.255
        inet6 fe80::20c:29ff:fe40:9a67  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:40:9a:67  txqueuelen 1000  (Ethernet)
        RX packets 655  bytes 396055 (396.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 444  bytes 41127 (41.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:250: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.40.222  netmask 255.255.255.0  broadcast 192.168.40.255
        ether 00:0c:29:40:9a:67  txqueuelen 1000  (Ethernet)

ens38: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.40.139  netmask 255.255.255.0  broadcast 192.168.40.255
        inet6 fe80::20c:29ff:fe40:9a7b  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:40:9a:7b  txqueuelen 1000  (Ethernet)
        RX packets 34291  bytes 19297727 (19.2 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24840  bytes 6069135 (6.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 504  bytes 62922 (62.9 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 504  bytes 62922 (62.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
#I have multiple NICs here; the same nginx instance is reachable on both IPs: 192.168.40.137:80 and 192.168.40.139:80
#Deploy a virtual IP 192.168.40.222:80:    ifconfig ens33:250 192.168.40.222/24    sudo ipvsadm -A -t 192.168.40.222:80 -s rr
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.40.222:80 rr
  -> 192.168.40.137:80            Route   1      0          2         
  -> 192.168.40.139:80            Route   1      0          2 
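As a pointer for the keepalived part mentioned above, a minimal virtual_server section could look like the sketch below. It simply reuses the VIP and real IPs from this test and was not actually deployed here; the TCP health check removes a real server from the pool when the check fails.

bash
# /etc/keepalived/keepalived.conf -- minimal sketch reusing the VIP/RIPs above
virtual_server 192.168.40.222 80 {
    delay_loop 6          # health-check interval in seconds
    lb_algo rr            # round robin, same as "ipvsadm -s rr"
    lb_kind DR            # direct routing, same as the "Route" forward mode shown above
    protocol TCP

    real_server 192.168.40.137 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.40.139 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}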

2: DPVS environment setup and practice

2.1 VM environment: add a NIC and configure multi-queue support

DPDK needs to take over a NIC, so here I add a new NIC in bridged mode and set it up to support multiple queues.

2.1.1 Problem 1: the new bridged NIC needs an IP; configure a static one (DHCP never succeeded)

After switching the NIC to bridged mode, DHCP never managed to obtain an address, so a static IP had to be configured instead.

On the VMware side you also need to open Edit -> Virtual Network Editor, set the network to bridged mode, and bridge it to the NIC that is actually connected to the network. I am on Wi-Fi, so I selected the wireless adapter.

bash
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.5 LTS
Release:	22.04
Codename:	jammy

#Using the default NAT NIC as a template, I first added a dhcp4 entry for the new NIC in /etc/netplan/00-installer-config.yaml, but it never got an address, so a static IP has to be configured by hand
#The laptop is on Wi-Fi; the host-side info is as follows. Reference: https://www.cnblogs.com/liulog/p/17639196.html
  Wireless LAN adapter WLAN:

   Connection-specific DNS Suffix . . :
   IPv6 Address. . . . . . . . . . . : 240e:874:10a:698d:c33d:e2de:8d02:c4d3
   Temporary IPv6 Address. . . . . . : 240e:874:10a:698d:dd2:b758:b5d3:4217
   Link-local IPv6 Address . . . . . : fe80::3c7e:3c6e:c834:4c19%7
   IPv4 Address. . . . . . . . . . . : 192.168.0.102
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.0.1

#ens37 is the newly added adapter; give it the static IP 192.168.0.111, in the same subnet as the host
root@ubuntu:/etc/netplan# cat 00-installer-config.yaml 
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: true
    ens37:
      dhcp4: false
      dhcp6: false
      addresses: [192.168.0.111/24]
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        addresses: [114.114.114.114, 8.8.8.8, 192.168.0.1]
    ens38:
      dhcp4: true
  version: 2
  renderer: networkd
  
root@ubuntu:/etc/netplan# sudo netplan apply
WARNING:root:Cannot call Open vSwitch: ovsdb-server.service is not running.
root@ubuntu:/etc/netplan# systemctl restart systemd-networkd

root@ubuntu:/etc/netplan# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.40.137  netmask 255.255.255.0  broadcast 192.168.40.255
        inet6 fe80::20c:29ff:fe40:9a67  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:40:9a:67  txqueuelen 1000  (Ethernet)
        RX packets 1254  bytes 125885 (125.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1004  bytes 186983 (186.9 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens37: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.111  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe40:9a71  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:40:9a:71  txqueuelen 1000  (Ethernet)
        RX packets 943  bytes 65395 (65.3 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 263  bytes 56196 (56.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
...

root@ubuntu:/etc/netplan# ping -I ens37 www.baidu.com
PING www.a.shifen.com (183.2.172.185) from 192.168.0.111 ens37: 56(84) bytes of data.
64 bytes from 183.2.172.185 (183.2.172.185): icmp_seq=1 ttl=52 time=80.8 ms
64 bytes from 183.2.172.185 (183.2.172.185): icmp_seq=2 ttl=52 time=66.1 ms

#Logs to check when debugging why DHCP fails, plus running the DHCP client by hand to watch the address request
dmesg | grep DHCP
sudo tail -f /var/log/syslog
sudo dhclient -v ens38
2.1.2 Enable multi-queue support on the NIC
bash
#Side note: I also tried switching back to the classic ethX naming. Edit /etc/default/grub and append net.ifnames=0 biosdevname=0 to the value of GRUB_CMDLINE_LINUX
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"    #only add the net.ifnames=0 biosdevname=0 fields; keep whatever else is already there

#After editing, run: sudo update-grub
#It takes effect after a reboot: the interfaces are indeed renamed to the eth series and the network still works, but the netplan config above still refers to the ens names, so 00-installer-config.yaml should probably be reconfigured to match?
#Better not to change the naming scheme on a whim: once changed, the old name mapping no longer lines up. If you want ethX names, change this at the very start, before doing the adapter network configuration.

#Open the *******.vmx file in the VM's directory and modify/add the following settings
ethernet1.virtualDev = "vmxnet3"
ethernet1.wakeOnPcktRcv = "TRUE"

#ens160 now supports multiple queues. The adapter name changed to ens160 because of the detour through the ethX naming above, which required redoing a whole series of settings; after reverting, the adapter came back with a different name.
root@ubuntu:/etc/netplan# cat /proc/interrupts |grep ens
  16:          0          0        163          0          0       3196          0          0   IO-APIC   16-fasteoi   vmwgfx, snd_ens1371, ens38
  19:          0          0          0          0          0          0         20        264   IO-APIC   19-fasteoi   ens33
  56:          0          0          0          0          0          7          0          6   PCI-MSI 1572864-edge      ens160-rxtx-0
  57:          0          0          0          0          0          0          0          0   PCI-MSI 1572865-edge      ens160-rxtx-1
  58:          4          0          0          0          0          0          0          0   PCI-MSI 1572866-edge      ens160-rxtx-2
  59:          0          1          0          0          0          0          0          0   PCI-MSI 1572867-edge      ens160-rxtx-3
  60:          0          1          0          4          0          0          0          0   PCI-MSI 1572868-edge      ens160-rxtx-4
  61:          0          0          0          0          0          0          0          0   PCI-MSI 1572869-edge      ens160-rxtx-5
  62:          0          0          0          0          0          1          0          0   PCI-MSI 1572870-edge      ens160-rxtx-6
  63:          0          0          0          0          0          0          1          0   PCI-MSI 1572871-edge      ens160-rxtx-7
  64:          0          0          0          0          0          0          0          0   PCI-MSI 1572872-edge      ens160-event-8
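Besides /proc/interrupts, ethtool can be used to check (and, if the driver allows it, change) the queue counts; a quick sketch with the interface name from this setup:

bash
#show how many RX/TX channels the NIC supports and how many are currently enabled
ethtool -l ens160

#raise the number of combined channels if the reported maximum allows it (example value)
sudo ethtool -L ens160 combined 8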

2.2 Build DPVS from source; install DPDK first

Just follow the setup steps in the GitHub README one by one and deal with problems as they come up.

The problem I hit: both the master branch and the latest release fail to build with the same error, triggered by -Werror=format-overflow= being enabled:

error: '%s' directive writing up to 63 bytes into a region of size between 33 and 96 [-Werror=format-overflow=]

2.2.1 Download the sources and install the underlying DPDK
bash
#First tried whether the DPVS master branch builds; sure enough it has problems, so just follow the md documentation
ubuntu@ubuntu:~/dpvs_test$ wget https://fast.dpdk.org/rel/dpdk-20.11.1.tar.xz 
ubuntu@ubuntu:~/dpvs_test$ git clone https://github.com/iqiyi/dpvs.git
ubuntu@ubuntu:~/dpvs_test$ tar xf dpdk-20.11.1.tar.xz 
ubuntu@ubuntu:~/dpvs_test/dpvs$ cp patch/dpdk-stable-20.11.1/* ../dpdk-stable-20.11.1/

ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ patch -p1 < 0001-kni-use-netlink-event-for-multicast-driver-part.patch 
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ patch -p1 < 0002-pdump-change-dpdk-pdump-tool-for-dpvs.patch 
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ patch -p1 < 0003-debug-enable-dpdk-eal-memory-debug.patch 
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ patch -p1 < 0004-ixgbe_flow-patch-ixgbe-fdir-rte_flow-for-dpvs.patch 
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ patch -p1 < 0005-bonding-allow-slaves-from-different-numa-nodes.patch 
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ patch -p1 < 0006-bonding-fix-bonding-mode-4-problems.patch 


ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ mkdir dpdklib
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ mkdir dpdkbuild

#Install meson after the build complained it was missing
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ sudo apt install meson
#Use an absolute path for the install prefix here
#ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ meson -Denable_kmods=true -Dprefix=dpdklib dpdkbuild
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ meson -Denable_kmods=true -Dprefix=/home/ubuntu/dpvs_test/dpdk-stable-20.11.1/dpdklib dpdkbuild
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ ninja -C dpdkbuild
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1$ cd dpdkbuild/
ubuntu@ubuntu:~/dpvs_test/dpdk-stable-20.11.1/dpdkbuild$ sudo ninja install

#After the build, the pkg-config path has to be exported so later builds can find this DPDK
export PKG_CONFIG_PATH=/home/ubuntu/dpvs_test/dpdk-stable-20.11.1/dpdklib/lib/x86_64-linux-gnu/pkgconfig
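A quick way to confirm the export is picked up (a small check, using the path set above) is to query the freshly installed DPDK through pkg-config:

bash
#should print 20.11.1 if the newly installed DPDK is the one being found
pkg-config --modversion libdpdk

#these flags are what the DPVS build will consume
pkg-config --cflags --libs libdpdk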
2.2.2 Configure hugepages
bash
root@ubuntu:/home/ubuntu/dpvs_test/dpdk-stable-20.11.1/dpdklib/lib/x86_64-linux-gnu/pkgconfig# cd /sys/devices/system/node/node0/hugepages/hugepages-2048kB/
root@ubuntu:/sys/devices/system/node/node0/hugepages/hugepages-2048kB# ls
free_hugepages  nr_hugepages  surplus_hugepages
root@ubuntu:/sys/devices/system/node/node0/hugepages/hugepages-2048kB# cat nr_hugepages 
0
#Set the number of 2048kB pages; 4096 pages * 2MB = 8GB, which apparently cannot be fully allocated
#Only one NUMA node on this machine
root@ubuntu:/sys/devices/system/node/node0/hugepages/hugepages-2048kB# echo 4096 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages 
#The resulting value is not 4096; it should probably be set smaller, since the VM only has 8GB of RAM in total
root@ubuntu:/sys/devices/system/node/node0/hugepages/hugepages-2048kB# cat nr_hugepages 
3558

#Mount a hugetlbfs filesystem for the hugepages
root@ubuntu:/sys/devices/system/node/node0/hugepages/hugepages-2048kB# mkdir /mnt/huge
root@ubuntu:/sys/devices/system/node/node0/hugepages/hugepages-2048kB# mount -t hugetlbfs nodev /mnt/huge
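To make the hugepage setup survive a reboot (optional, and not done in this exercise; a sketch of the usual approach, with the page count taken from above):

bash
#persist the hugepage count (adjust to what actually fits in RAM)
echo "vm.nr_hugepages = 3558" | sudo tee /etc/sysctl.d/80-hugepages.conf
sudo sysctl -p /etc/sysctl.d/80-hugepages.conf

#persist the hugetlbfs mount
echo "nodev /mnt/huge hugetlbfs defaults 0 0" | sudo tee -a /etc/fstab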
2.2.3 Load a UIO module and enable the KNI module
bash
#A UIO module is needed so the NIC can be driven from userspace; here the kernel's built-in uio_pci_generic is used (DPDK also provides igb_uio for this)
root@ubuntu:/home/ubuntu/dpvs_test/dpvs# modprobe uio_pci_generic

#Insert the KNI module built with DPDK and enable it (carrier=on); this provides the kernel-side network interface
root@ubuntu:/home/ubuntu/dpvs_test/dpdk-stable-20.11.1# insmod dpdkbuild/kernel/linux/kni/rte_kni.ko carrier=on

root@ubuntu:/home/ubuntu/dpvs_test/dpdk-stable-20.11.1# ./usertools/dpdk-devbind.py --status

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens33 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens38 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*
0000:03:00.0 'VMXNET3 Ethernet Controller 07b0' if=ens160 drv=vmxnet3 unused=vfio-pci,uio_pci_generic *Active*
2.2.4 Let DPDK take over the NIC
bash
#Back up the necessary information (IP, MAC) before the NIC is taken over
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.111  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::20c:29ff:fe40:9a71  prefixlen 64  scopeid 0x20<link>
        inet6 240e:bf:d100:9675:20c:29ff:fe40:9a71  prefixlen 64  scopeid 0x0<global>
        ether 00:0c:29:40:9a:71  txqueuelen 1000  (Ethernet)
        RX packets 16591  bytes 22748708 (22.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4947  bytes 343441 (343.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


root@ubuntu:/home/ubuntu/dpvs_test/dpdk-stable-20.11.1# ifconfig ens160 down

#Bind the NIC so DPDK takes it over, using the UIO module loaded above
root@ubuntu:/home/ubuntu/dpvs_test/dpdk-stable-20.11.1# ./usertools/dpdk-devbind.py -b uio_pci_generic 0000:03:00.0
root@ubuntu:/home/ubuntu/dpvs_test/dpdk-stable-20.11.1# ./usertools/dpdk-devbind.py --status

Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'VMXNET3 Ethernet Controller 07b0' drv=uio_pci_generic unused=vmxnet3,vfio-pci

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens33 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens38 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*
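As a side note (not part of the original flow): if the port ever needs to be handed back to the kernel, the same tool can rebind it to its original driver:

bash
#rebind the port to the kernel vmxnet3 driver and bring the interface back up
sudo ./usertools/dpdk-devbind.py -b vmxnet3 0000:03:00.0
sudo ifconfig ens160 192.168.0.111/24 up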
2.2.5 Test a DPDK example
bash
#Built the helloworld example with the static target in its Makefile, then ran it directly (build commands sketched after the output)
root@ubuntu:/home/ubuntu/dpvs_test/dpdk-stable-20.11.1/examples/helloworld/build# ./helloworld-static 
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_vmxnet3 (15ad:7b0) device: 0000:03:00.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
hello from core 1
hello from core 2
hello from core 3
hello from core 4
hello from core 5
hello from core 6
hello from core 7
hello from core 0
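
For reference, the build itself is roughly the following (a sketch; it assumes the PKG_CONFIG_PATH exported earlier, since the example Makefiles locate DPDK through pkg-config):

bash
export PKG_CONFIG_PATH=/home/ubuntu/dpvs_test/dpdk-stable-20.11.1/dpdklib/lib/x86_64-linux-gnu/pkgconfig
cd /home/ubuntu/dpvs_test/dpdk-stable-20.11.1/examples/helloworld
make static          #produces build/helloworld-static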

2.3 Build and install DPVS

The newer versions kept running into build problems, so in the end I used dpvs-1.9.4.tar.gz for this exercise.

bash
#The NIC is already taken over by DPDK; now move on to DPVS
root@ubuntu:/home/ubuntu/dpvs_test/dpdk-stable-20.11.1# cd ../dpvs/
root@ubuntu:/home/ubuntu/dpvs_test/dpvs# export PKG_CONFIG_PATH=/home/ubuntu/dpvs_test/dpdk-stable-20.11.1/dpdklib/lib/x86_64-linux-gnu/pkgconfig

#Install the libraries the build complained about
root@ubuntu:/home/ubuntu/dpvs_test/dpvs# apt-get install pkg-config
root@ubuntu:/home/ubuntu/dpvs_test/dpvs# sudo apt-get install libnuma-dev

root@ubuntu:/home/ubuntu/dpvs_test# tar -xf dpvs-1.9.4.tar.gz
root@ubuntu:/home/ubuntu/dpvs_test# cd dpvs-1.9.4

#Building the dpvs source cloned from master fails with the printf/format-overflow problem mentioned above
#With the 1.9.4 release the build hits an MD5 API problem and needs an older OpenSSL
wget https://www.openssl.org/source/openssl-1.1.1.tar.gz
tar -xf openssl-1.1.1.tar.gz 
cd openssl-1.1.1
./config 
make
sudo make install

sudo apt install autoconf
sudo apt install libpopt-dev

#Build succeeds
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4# make
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4# make install

#The target executables we need for the following steps have been generated
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ls
dpip  dpvs  ipvsadm  keepalived

#Prepare the config file. Inside it, adjust the rx/tx queue_number values and map each CPU to its queue one-to-one; do not exceed the number of available CPUs (see the excerpt after this block)
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4# cp conf/dpvs.conf.single-nic.sample /etc/dpvs.conf

#Leftover from installing the older OpenSSL: its library directory is not yet known to the dynamic loader
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ./dpvs 
./dpvs: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ldconfig /usr/local/lib/

#There are still some errors: the hugepage amount I set above is too large, and the rest would have to be tracked down and adapted in the source one by one.
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ./dpvs 
current thread affinity is set to FF
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB 
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
EAL:   Invalid NUMA socket, default to 0
EAL: Probe PCI driver: net_vmxnet3 (15ad:7b0) device: 0000:03:00.0 (socket 0)
...
NETIF: Ethdev port_id=0 invalid rss_hf: 0x3afbc, valid value: 0x514
NETIF: Ethdev port_id=0 invalid rx_offload: 0x3, valid value: 0x82a1d
NETIF: Ethdev port_id=0 invalid tx_offload: 0x1000c, valid value: 0x802d
NETIF: dpdk_set_mc_list: rte_eth_dev_set_mc_addr_list is not supported, enable all multicast.
...
NETIF: netif_port_start: dpdk0 update rss reta failed (cause: failed dpdk api)

2.4 DPVS practice

My understanding is that DPVS provides roughly the same functionality as the LVS setup above, just implemented on top of DPDK.

bash
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ./dpvs &

root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ./dpip link show
1: dpdk0: socket 0 mtu 1500 rx-queue 4 tx-queue 4
    UP 10000 Mbps full-duplex fixed-nego 
    addr 00:0C:29:40:9A:71 OF_TX_TCP_CSUM OF_TX_UDP_CSUM 
#This does not run cleanly yet; the main focus is the logic of the dpvs executable itself. For a VM environment like this, the dpvs code would also need modifications.
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ./dpip addr add 192.168.0.111/24 dev dpdk0
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ./ipvsadm -A -t 192.168.0.111:80 -s rr
root@ubuntu:/home/ubuntu/dpvs_test/dpvs-1.9.4/bin# ./ipvsadm -a -t 192.168.0.111:80 -r 192.168.40.137:80 -b
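
The -b flag selects full-NAT forwarding. According to the upstream DPVS tutorial, a full-NAT service also needs at least one local address (LADDR) on the DPDK port before it can forward traffic, so the remaining steps would look roughly like the sketch below (192.168.0.112 is an arbitrary unused address in this subnet, picked only for illustration, and this was not verified in this environment):

bash
#add a LADDR used for source address translation in full-NAT mode
./ipvsadm --add-laddr -z 192.168.0.112 -t 192.168.0.111:80 -F dpdk0

#then, from another host on 192.168.0.0/24, the VIP should answer
curl 192.168.0.111:80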