Configuring Linux NICs: vlan/bond/bridge/macvlan/ipvlan/macvtap modes

Linux NIC modes

Linux NICs support non-VLAN, VLAN, bond, bridge, macvlan, ipvlan, and other modes. The sections below walk through matching switch-side and server-side configuration examples for each.

Prerequisites:

  • A physical switch; the examples use an H3C S5130 layer-3 switch
  • A physical server; the examples use Ubuntu 22.04 LTS

On the switch, create two example VLANs, vlan 10 and vlan 20, together with their VLAN interfaces.

<H3C>system-view

[H3C]vlan 10 20

[H3C]interface Vlan-interface 10
[H3C-Vlan-interface10]ip address 172.16.10.1 24
[H3C-Vlan-interface10]undo shutdown
[H3C-Vlan-interface10]exit
[H3C]

[H3C]interface Vlan-interface 20
[H3C-Vlan-interface20]ip address 172.16.20.1 24
[H3C-Vlan-interface20]undo shutdown
[H3C-Vlan-interface20]exit
[H3C]

NIC non-VLAN mode

In non-VLAN mode the NIC is assigned an IP address directly, and the uplink switch port is configured as an access port. Access ports typically connect bare-metal servers or office terminal devices.

The topology is shown below.

Switch configuration: set each port to access mode and assign it to the corresponding VLAN.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]
[H3C]interface GigabitEthernet 1/0/2
[H3C-GigabitEthernet1/0/2]port link-type access
[H3C-GigabitEthernet1/0/2]port access vlan 20
[H3C-GigabitEthernet1/0/2]exit
[H3C]

Server 1 configuration: assign the IP address directly to the NIC.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Server 2 configuration: assign the IP address directly to the NIC.

root@server2:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.20.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.20.1
  version: 2

Apply the network configuration

netplan apply
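For a quick, non-persistent test, the same addressing can also be applied directly with ip commands instead of netplan (a sketch, assuming the interface name and addresses from server1 above; these settings are lost on reboot):

```shell
# Transient equivalent of server1's static netplan configuration (root required).
ip addr add 172.16.10.10/24 dev enp1s0
ip link set enp1s0 up
ip route add default via 172.16.10.1
```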

Check the server's interfaces

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever

Ping server2 from server1 to test connectivity. The layer-3 switch routes between VLANs, so hosts in the two layer-2-isolated subnets can reach each other.

root@server1:~# ping 172.16.20.10 -c 4
PING 172.16.20.10 (172.16.20.10) 56(84) bytes of data.
64 bytes from 172.16.20.10: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.20.10: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.20.10: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.20.10: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.20.10 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms

NIC VLAN mode

In VLAN mode, the uplink switch port must be configured as a trunk that permits the required VLANs.

The topology is shown below.

Switch configuration: set the port to trunk mode and permit the required VLANs.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type trunk
[H3C-GigabitEthernet1/0/1]port trunk permit vlan 10 20
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: create VLAN subinterfaces on the NIC.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: true
  vlans:
    vlan10:
      id: 10
      link: enp1s0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: enp1s0
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300
  version: 2
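Netplan renders the vlans: section above into 802.1Q subinterfaces; a transient equivalent using plain ip commands would look like this (a sketch, not persistent across reboots):

```shell
# Create tagged 802.1Q subinterfaces vlan10 and vlan20 on enp1s0.
ip link add link enp1s0 name vlan10 type vlan id 10
ip link add link enp1s0 name vlan20 type vlan id 20
ip addr add 172.16.10.10/24 dev vlan10
ip addr add 172.16.20.10/24 dev vlan20
ip link set vlan10 up
ip link set vlan20 up
```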

Check the interfaces: two VLAN subinterfaces, vlan10 and vlan20, have been created.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
10: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
11: vlan20@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateways through vlan10 and vlan20

root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms
root@server1:~#
root@server1:~# ping 172.16.20.1 -c 4
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 172.16.20.1: icmp_seq=4 ttl=64 time=0.047 ms

--- 172.16.20.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms

NIC bond mode

In bond mode, the peer switch must be configured with a matching link aggregation group.

The topology is shown below.

Switch configuration: create a dynamic (LACP) link aggregation group, add ports g1/0/1 and g1/0/3 to it, then configure the aggregate interface as a trunk.

<H3C>system-view
[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]link-aggregation mode dynamic
[H3C-Bridge-Aggregation1]quit

[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-aggregation group 1
[H3C-GigabitEthernet1/0/1]exit

[H3C]interface GigabitEthernet 1/0/3
[H3C-GigabitEthernet1/0/3]port link-aggregation group 1
[H3C-GigabitEthernet1/0/3]exit

[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]port link-type trunk
[H3C-Bridge-Aggregation1]port trunk permit vlan 10 20
[H3C-Bridge-Aggregation1]exit

Server configuration

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
    enp2s0:
      dhcp4: no
  bonds:
    bond0:
      interfaces:
        - enp1s0
        - enp2s0
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer2+3
  vlans:
    vlan10:
      id: 10
      link: bond0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: bond0
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300

Check the interfaces: a bond0 interface has been created, with two VLAN subinterfaces vlan10 and vlan20 on top of it. enp1s0 and enp2s0 show "master bond0", confirming both NICs are members of bond0.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
8: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
9: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever

Check the bond state: Bonding Mode reads IEEE 802.3ad Dynamic link aggregation, and each Slave Interface section below lists the details of one member interface.

root@server1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.15.0-60-generic

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP active: on
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: ae:fd:60:48:84:1a
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 2
        Actor Key: 9
        Partner Key: 1
        Partner Mac Address: fc:60:9b:35:ad:18

Slave Interface: enp1s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr: 7c:b5:9b:59:0a:71
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: ae:fd:60:48:84:1a
    port key: 9
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: fc:60:9b:35:ad:18
    oper key: 1
    port priority: 32768
    port number: 2
    port state: 61

Slave Interface: enp2s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 3
Permanent HW addr: e4:54:e8:dc:e5:88
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: ae:fd:60:48:84:1a
    port key: 9
    port priority: 255
    port number: 2
    port state: 63
details partner lacp pdu:
    system priority: 32768
    system mac address: fc:60:9b:35:ad:18
    oper key: 1
    port priority: 32768
    port number: 1
    port state: 61
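Individual fields of this file can be pulled out for quick health checks, for example (a sketch; the sysfs path is standard for the bonding driver):

```shell
# Confirm the negotiated mode and list the member interfaces.
grep "Bonding Mode" /proc/net/bonding/bond0
grep "Slave Interface" /proc/net/bonding/bond0
cat /sys/class/net/bond0/bonding/slaves
```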

Test connectivity to the switch gateway address:

root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.64 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.59 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.95 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.93 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 1.589/1.776/1.953/0.165 ms
root@server1:~# 

Shut down one member interface and test again; the ping still succeeds.

root@server1:~# ip link set dev enp2s0 down
root@server1:~# ip link show enp2s0
3: enp2s0: <BROADCAST,MULTICAST,SLAVE> mtu 1500 qdisc fq_codel master bond0 state DOWN mode DEFAULT group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
root@server1:~# 
root@server1:~# ping 172.16.10.1 -c 4
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.54 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.64 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.73 ms
64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.47 ms

--- 172.16.10.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 1.470/1.844/2.732/0.516 ms

NIC bridge mode

In bridge mode, the peer switch port can be configured in either access or trunk mode.

The topology is shown below.

Switch configuration: in this example the port is set to access mode and assigned to the corresponding VLAN.

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration: the physical NIC is enslaved to a bridge, and the IP address is configured on the bridge interface br0.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      interfaces: [enp1s0]
      addresses: [172.16.10.10/24]
      routes:
      - to: default
        via: 172.16.10.1
        metric: 100
        on-link: true
      mtu: 1500
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      parameters:
        stp: true
        forward-delay: 4

Check the interfaces

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0e:d0:7e:31:9c:74 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::cd0:7eff:fe31:9c74/64 scope link 
       valid_lft forever preferred_lft forever

Inspect the bridge and its ports; the bridge currently has a single physical port, enp1s0.

root@server1:~# apt install -y bridge-utils
root@server1:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.0ed07e319c74       yes             enp1s0
root@server1:~# 
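The iproute2 bridge tool provides the same view without installing bridge-utils (a sketch):

```shell
# List bridge ports, and show bridge-specific details such as STP state.
bridge link show
ip -d link show br0
```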

In a KVM environment, virtual machines attached to this bridge can be assigned IP addresses in the same subnet as the physical NIC, so they are reached just as conveniently as physical hosts.

NIC macvlan mode

macvlan (MAC Virtual LAN) is a network virtualization technique provided by the Linux kernel. It allows multiple virtual interfaces to be created on top of one physical NIC, each with its own MAC address and, optionally, its own IP address for communication. Virtual machines or containers attached via macvlan share the host's subnet and broadcast domain.
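One self-contained way to observe this behavior is to move a macvlan interface into a network namespace (a sketch requiring root; the names demo and mv0 and the address 172.16.10.50 are illustrative, not from the original setup):

```shell
# Give a network namespace its own bridge-mode macvlan off enp1s0.
ip netns add demo
ip link add mv0 link enp1s0 type macvlan mode bridge
ip link set mv0 netns demo
ip netns exec demo ip addr add 172.16.10.50/24 dev mv0
ip netns exec demo ip link set mv0 up
ip netns exec demo ping -c 1 172.16.10.1   # reaches the physical network directly
```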

In macvlan mode, the peer switch port can be configured as access or trunk; in trunk mode macvlan combines well with VLANs.

The topology is shown below.

macvlan IP mode

In this setup the uplink switch port is in access mode, and the macvlan parent NIC and its subinterfaces are assigned IP addresses from the same subnet.

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit

Server configuration. macvlan supports several modes; bridge mode is used here, and the configuration is persisted via a networkd-dispatcher script.

cat >/etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add macvlan1 link enp1s0 type macvlan mode bridge
EOF

chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh

Configure netplan

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
    macvlan0:
      addresses:
        - 172.16.10.11/24
    macvlan1:
      addresses:
        - 172.16.10.12/24
  version: 2

Apply the network configuration

netplan apply

Check the interfaces: two macvlan interfaces have been created with IP addresses in the same subnet as the parent NIC, and each has its own MAC address.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
13: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
14: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.12/24 brd 172.16.10.255 scope global macvlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateway

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 

macvlan VLAN mode

In this setup the uplink switch port is in trunk mode. The macvlan parent NIC carries no IP address; instead, each macvlan interface gets its own VLAN subinterface.

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type trunk
[H3C-GigabitEthernet1/0/1]port trunk permit vlan 10 20
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration. macvlan supports several modes; bridge mode is used here, and the configuration is persisted via a networkd-dispatcher script.

cat >/etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add macvlan1 link enp1s0 type macvlan mode bridge
EOF

chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh

Configure netplan: the two macvlan interfaces macvlan0 and macvlan1 carry VLAN subinterfaces vlan10 and vlan20 respectively.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
    macvlan0:
      dhcp4: false
    macvlan1:
      dhcp4: false
  vlans:
    vlan10:
      id: 10
      link: macvlan0
      addresses: [ "172.16.10.10/24" ]
      routes:
        - to: default
          via: 172.16.10.1
          metric: 200
    vlan20:
      id: 20
      link: macvlan1
      addresses: [ "172.16.20.10/24" ]
      routes:
        - to: default
          via: 172.16.20.1
          metric: 300
  version: 2
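The same stacking, a VLAN subinterface on top of a macvlan interface, can be sketched with transient ip commands (not persistent; shown for macvlan0/vlan10 only):

```shell
# A bridge-mode macvlan on enp1s0 carrying one tagged subinterface.
ip link add macvlan0 link enp1s0 type macvlan mode bridge
ip link add link macvlan0 name vlan10 type vlan id 10
ip addr add 172.16.10.10/24 dev vlan10
ip link set macvlan0 up
ip link set vlan10 up
```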

Apply the network configuration

netplan apply

Check the interfaces: two macvlan interfaces and their corresponding VLAN subinterfaces have been created.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
11: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
12: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever
13: vlan10@macvlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
       valid_lft forever preferred_lft forever
    inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link 
       valid_lft forever preferred_lft forever
14: vlan20@macvlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
       valid_lft forever preferred_lft forever
    inet6 fe80::d073:75ff:fe14:b204/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity from both VLAN interfaces to the external gateways

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 
root@server1:~# ping -c 3 172.16.20.1 
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.35 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=1.48 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=1.46 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.353/1.429/1.477/0.054 ms
root@server1:~# 

NIC ipvlan mode

ipvlan (IP Virtual LAN) is a network virtualization technique provided by the Linux kernel. It creates multiple virtual interfaces on one physical NIC, each with its own IP address.

ipvlan is similar to macvlan: both carve multiple virtual interfaces out of one parent interface. The main difference is that ipvlan subinterfaces all share the parent NIC's MAC address, while each can be configured with a different IP address.
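Creating one of each side by side makes the difference visible (a sketch; the names ipv0 and mcv0 are illustrative):

```shell
ip link add ipv0 link enp1s0 type ipvlan mode l2      # inherits enp1s0's MAC
ip link add mcv0 link enp1s0 type macvlan mode bridge # gets its own MAC
ip -brief link show ipv0 mcv0
```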

In ipvlan mode, the peer switch port can again be configured as access or trunk; in trunk mode ipvlan combines well with VLANs.

The topology is shown below.

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit
[H3C]

Server configuration. ipvlan supports three modes (l2, l3, l3s); l3 mode is used here, and the configuration is persisted.

cat >/etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh<<EOF
#! /bin/bash
ip link add ipvlan0 link enp1s0 type ipvlan mode l3
ip link add ipvlan1 link enp1s0 type ipvlan mode l3
EOF
chmod o+x,g+x,u+x /etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh

Configure netplan

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
    ipvlan0:
      addresses:
        - 172.16.10.11/24
    ipvlan1:
      addresses:
        - 172.16.10.12/24
  version: 2

Apply the network configuration

netplan apply

Check the interfaces: two ipvlan interfaces have been created with IP addresses in the same subnet as the parent NIC, and each shares the parent NIC's MAC address.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
9: ipvlan0@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.11/24 brd 172.16.10.255 scope global ipvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb5:9b00:159:a71/64 scope link 
       valid_lft forever preferred_lft forever
10: ipvlan1@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.12/24 brd 172.16.10.255 scope global ipvlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::7cb5:9b00:259:a71/64 scope link 
       valid_lft forever preferred_lft forever

Test connectivity to the gateway

root@server1:~# ping -c 3 172.16.10.1
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
root@server1:~# 

NIC macvtap mode

An alternative to a bridge for giving KVM virtual machines external connectivity is the Linux macvtap driver. macvtap is useful when you don't want to create a regular bridge but still want hosts on the local network to reach the virtual machines.

The main difference from a bridge is that macvtap attaches directly to a network interface on the KVM host. This direct attachment bypasses most of the code involved in connecting to and using a software bridge, shortening the data path; the shorter path typically improves throughput and reduces latency to external systems.
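Outside of libvirt, a macvtap interface can also be created manually with ip; the character device /dev/tapN that appears is what the hypervisor opens (a sketch requiring root):

```shell
# Create a bridge-mode macvtap on enp1s0 and locate its tap device node.
ip link add link enp1s0 name macvtap0 type macvtap mode bridge
ip link set macvtap0 up
ls /dev/tap"$(cat /sys/class/net/macvtap0/ifindex)"
```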

The topology is shown below.

Switch configuration

<H3C>system-view
[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-type access
[H3C-GigabitEthernet1/0/1]port access vlan 10
[H3C-GigabitEthernet1/0/1]exit

Host NIC configuration

root@server1:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Install the KVM virtualization stack and create two virtual machines, each given a macvtap subinterface allocated from the enp1s0 parent NIC.

virt-install \
  --name vm1 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --noautoconsole \
  --import \
  --autostart \
  --network type=direct,source=enp1s0,source_mode=bridge,model=virtio

virt-install \
  --name vm2 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --noautoconsole \
  --import \
  --autostart \
  --network type=direct,source=enp1s0,source_mode=bridge,model=virtio

Check the interfaces: two new macvtap interfaces have been created.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7eb5:9bff:fe59:a71/64 scope link 
       valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:bb:15:22 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: macvtap0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:41:8f:a3 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe41:8fa3/64 scope link 
       valid_lft forever preferred_lft forever
7: macvtap1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
    link/ether 52:54:00:93:2c:4a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe93:2c4a/64 scope link 
       valid_lft forever preferred_lft forever

Configure an IP address on VM 1

root@vm1:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.11/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Configure an IP address on VM 2

root@vm2:~# cat /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    enp1s0:
      dhcp4: false
      addresses:
        - 172.16.10.12/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.10.1
  version: 2

Test connectivity to the gateway

root@vm1:~# ping 172.16.10.1 -c 3
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.38 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.75 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=4.34 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.382/2.491/4.344/1.318 ms

Combined bond, VLAN, and bridge configuration

Bond the server's two NICs into bond0, create two VLAN subinterfaces on top of the bond, and attach each to its own Linux bridge. Virtual machines created under the different bridges then belong to different VLANs.
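For experimentation, the same stack can be built transiently with ip commands before committing it to netplan (a sketch, lost on reboot; it assumes the switch-side LACP group is already configured):

```shell
# bond0 (LACP) -> VLAN subinterfaces -> one bridge per VLAN.
ip link add bond0 type bond mode 802.3ad lacp_rate fast miimon 100
ip link set enp1s0 down && ip link set enp1s0 master bond0
ip link set enp2s0 down && ip link set enp2s0 master bond0
ip link add link bond0 name vlan10 type vlan id 10
ip link add link bond0 name vlan20 type vlan id 20
ip link add br10 type bridge
ip link add br20 type bridge
ip link set vlan10 master br10
ip link set vlan20 master br20
for dev in enp1s0 enp2s0 bond0 vlan10 vlan20 br10 br20; do ip link set "$dev" up; done
```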

The topology is shown below.

Switch configuration: create a dynamic link aggregation group and add ports g1/0/1 and g1/0/3 to it. Configure the aggregate interface as a trunk permitting VLANs 8, 10, and 20, with vlan 8 as the native (PVID) VLAN for management.

<H3C>system-view
[H3C]interface Vlan-interface 8
[H3C-Vlan-interface8]ip address 172.16.8.1 24
[H3C-Vlan-interface8]exit
[H3C]

[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]link-aggregation mode dynamic
[H3C-Bridge-Aggregation1]quit

[H3C]interface GigabitEthernet 1/0/1
[H3C-GigabitEthernet1/0/1]port link-aggregation group 1
[H3C-GigabitEthernet1/0/1]exit

[H3C]interface GigabitEthernet 1/0/3
[H3C-GigabitEthernet1/0/3]port link-aggregation group 1
[H3C-GigabitEthernet1/0/3]exit

[H3C]interface Bridge-Aggregation 1
[H3C-Bridge-Aggregation1]port link-type trunk 
[H3C-Bridge-Aggregation1]port trunk permit vlan 8 10 20 
[H3C-Bridge-Aggregation1]port trunk pvid vlan 8
[H3C-Bridge-Aggregation1]undo port trunk permit vlan 1
[H3C-Bridge-Aggregation1]exit
[H3C]

Server NIC configuration. Note that bond0 carries the management IP address, matching the switch's native vlan 8.

root@server1:~# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: false
    enp2s0:
      dhcp4: false
  bonds:
    bond0:
      dhcp4: false
      dhcp6: false
      interfaces:
        - enp1s0
        - enp2s0
      addresses:
        - 172.16.8.10/24
      nameservers:
        addresses:
          - 223.5.5.5
          - 223.6.6.6
      routes:
        - to: default
          via: 172.16.8.1
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        mii-monitor-interval: 100
        transmit-hash-policy: layer2+3
  bridges:
    br10:
      interfaces: [ vlan10 ]
    br20:
      interfaces: [ vlan20 ]
  vlans:
    vlan10:
      id: 10
      link: bond0
    vlan20:
      id: 20
      link: bond0

Check the interfaces: bond0 has been created, with two VLAN subinterfaces vlan10 and vlan20 on top of it, each enslaved to its bridge.

root@server1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.8.10/24 brd 172.16.8.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::acfd:60ff:fe48:841a/64 scope link 
       valid_lft forever preferred_lft forever
16: br10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:df:66:ab:c2:4b brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ecdf:66ff:feab:c24b/64 scope link 
       valid_lft forever preferred_lft forever
17: br20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9e:4d:f4:0a:6d:13 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9c4d:f4ff:fe0a:6d13/64 scope link 
       valid_lft forever preferred_lft forever
18: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br10 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
19: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br20 state UP group default qlen 1000
    link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
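Note that both slaves report bond0's MAC (their hardware addresses survive as `permaddr`). With `mode: 802.3ad` and `transmit-hash-policy: layer2+3`, the driver picks a slave per flow by XOR-hashing MAC and IP fields. A simplified sketch based on the formula in the kernel bonding documentation (the gateway MAC below is hypothetical, and the real driver also mixes in the packet type ID):

```python
import ipaddress

def l23_slave(src_mac, dst_mac, src_ip, dst_ip, n_slaves=2):
    """Simplified layer2+3 transmit hash: XOR of the last MAC octets
    and the IPv4 addresses, folded down, modulo the slave count."""
    h = src_mac[-1] ^ dst_mac[-1]
    h ^= int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= h >> 16
    h ^= h >> 8
    return h % n_slaves

bond_mac = bytes.fromhex("aefd6048841a")   # bond0's MAC from `ip a` above
gw_mac   = bytes.fromhex("00e0fc000001")   # hypothetical gateway MAC

# Every packet of one flow hashes to the same slave (no reordering);
# different flows may spread across both slaves.
s1 = l23_slave(bond_mac, gw_mac, "172.16.8.10", "172.16.8.1")
s2 = l23_slave(bond_mac, gw_mac, "172.16.8.10", "172.16.8.1")
```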

Check the created bridges

bash
root@server1:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br10            8000.eedf66abc24b       no              vlan10
br20            8000.9e4df40a6d13       no              vlan20
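`brctl` comes from the legacy bridge-utils package (iproute2's `bridge link` shows the same mapping); for scripting, its tabular output parses easily. A sketch against the output above:

```python
# Parse a `brctl show` table into {bridge: [ports]}.
OUTPUT = """\
bridge name     bridge id               STP enabled     interfaces
br10            8000.eedf66abc24b       no              vlan10
br20            8000.9e4df40a6d13       no              vlan20
"""

def parse_brctl(text):
    bridges = {}
    current = None
    for line in text.splitlines()[1:]:      # skip the header row
        cols = line.split()
        if len(cols) == 4:                  # row starting a new bridge
            current = cols[0]
            bridges[current] = [cols[3]]
        elif len(cols) == 1 and current:    # continuation row: extra port
            bridges[current].append(cols[0])
    return bridges

print(parse_brctl(OUTPUT))  # {'br10': ['vlan10'], 'br20': ['vlan20']}
```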

Test connectivity from the bond0 IP to the external gateway

bash
root@server1:~# ping 172.16.8.1 -c 3
PING 172.16.8.1 (172.16.8.1) 56(84) bytes of data.
64 bytes from 172.16.8.1: icmp_seq=1 ttl=255 time=1.55 ms
64 bytes from 172.16.8.1: icmp_seq=2 ttl=255 time=1.61 ms
64 bytes from 172.16.8.1: icmp_seq=3 ttl=255 time=1.62 ms

--- 172.16.8.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.550/1.593/1.620/0.030 ms
root@server1:~# 

Install a KVM virtualization environment on server1, then create two KVM networks, each bound to a different bridge

xml
cat >br10-network.xml<<EOF
<network>
  <name>br10-net</name>
  <forward mode="bridge"/>
  <bridge name="br10"/>
</network>
EOF
cat >br20-network.xml<<EOF
<network>
  <name>br20-net</name>
  <forward mode="bridge"/>
  <bridge name="br20"/>
</network>
EOF

virsh net-define br10-network.xml
virsh net-define br20-network.xml
virsh net-start br10-net
virsh net-start br20-net
virsh net-autostart br10-net
virsh net-autostart br20-net
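The two XML definitions differ only in the network name and bridge name, so they can be generated instead of hand-written. A sketch using the stdlib XML builder (`bridge_network_xml` is a hypothetical helper; the element layout matches libvirt's bridged-network format used above):

```python
import xml.etree.ElementTree as ET

def bridge_network_xml(name, bridge):
    """Build a libvirt <network> definition for an existing host bridge."""
    net = ET.Element("network")
    ET.SubElement(net, "name").text = name
    ET.SubElement(net, "forward", mode="bridge")
    ET.SubElement(net, "bridge", name=bridge)
    return ET.tostring(net, encoding="unicode")

for vlan in (10, 20):
    # Write each definition out, then `virsh net-define br10-network.xml` etc.
    print(bridge_network_xml(f"br{vlan}-net", f"br{vlan}"))
```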

List the newly created networks

bash
root@server1:~# virsh net-list
 Name       State    Autostart   Persistent
---------------------------------------------
 br10-net   active   yes         yes
 br20-net   active   yes         yes
 default    active   yes         yes

Create two virtual machines, each attached to a different network

bash
virt-install \
  --name vm1 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --import \
  --autostart \
  --noautoconsole \
  --network network=br10-net

virt-install \
  --name vm2 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --import \
  --autostart \
  --noautoconsole \
  --network network=br20-net

List the created virtual machines

bash
root@server1:~# virsh list
 Id   Name   State
----------------------
 13   vm1    running
 14   vm2    running

Configure a vlan10 IP address for vm1

bash
virsh console vm1
cat >/etc/netplan/00-installer-config.yaml<<EOF
network:
  ethernets:
    enp1s0:
      addresses:
      - 172.16.10.10/24
      nameservers:
        addresses:
        - 223.5.5.5
      routes:
      - to: default
        via: 172.16.10.1
  version: 2
EOF
netplan apply

Configure a vlan20 IP address for vm2

bash
virsh console vm2
cat >/etc/netplan/00-installer-config.yaml<<EOF
network:
  ethernets:
    enp1s0:
      addresses:
      - 172.16.20.10/24
      nameservers:
        addresses:
        - 223.5.5.5
      routes:
      - to: default
        via: 172.16.20.1
  version: 2
EOF
netplan apply
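A common mistake at this step is a gateway outside the VM's subnet, which makes `netplan apply` succeed but leaves the default route unusable. The address/gateway pairs used above can be checked with the stdlib `ipaddress` module:

```python
import ipaddress

# (address/prefix, gateway) pairs from the two VM configs above.
VMS = {
    "vm1": ("172.16.10.10/24", "172.16.10.1"),
    "vm2": ("172.16.20.10/24", "172.16.20.1"),
}

def gateway_on_link(cidr, gw):
    """True if the gateway lies inside the interface's own subnet."""
    iface = ipaddress.ip_interface(cidr)
    return ipaddress.ip_address(gw) in iface.network

for name, (cidr, gw) in VMS.items():
    print(f"{name}: gateway {gw} on-link: {gateway_on_link(cidr, gw)}")
```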

Log in to vm1 and test connectivity from vm1 to its external gateway

bash
root@vm1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:a4:aa:9d brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fea4:aa9d/64 scope link 
       valid_lft forever preferred_lft forever
root@vm1:~# 
root@vm1:~# ping 172.16.10.1 -c 3
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.51 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=7.10 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.10 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.505/3.568/7.101/2.509 ms
root@vm1:~# 
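The `rtt min/avg/max/mdev` summary is derived from the per-reply times; mdev is the population standard deviation of the samples. Recomputing it from the three (rounded) RTTs printed above reproduces the summary to within rounding:

```python
import math

# Per-reply RTTs from the ping output above, in ms (as printed, rounded).
rtts = [1.51, 7.10, 2.10]

avg  = sum(rtts) / len(rtts)
var  = sum((r - avg) ** 2 for r in rtts) / len(rtts)   # population variance
mdev = math.sqrt(var)

print(f"min/avg/max/mdev = {min(rtts):.3f}/{avg:.3f}/{max(rtts):.3f}/{mdev:.3f}")
```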

Log in to vm2 and test connectivity from vm2 to its external gateway

bash
root@vm2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:89:61:da brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe89:61da/64 scope link 
       valid_lft forever preferred_lft forever
root@vm2:~# 
root@vm2:~# ping 172.16.20.1 -c 3
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.73 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=2.00 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=2.00 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.732/1.911/2.003/0.126 ms
root@vm2:~# 