OVS Faucet Tutorial Notes (Part 1)

Published: 2025-06-10

Official documentation:

OVS Faucet Tutorial

1、Setting Up OVS

1. Get a copy of the Open vSwitch source repository using Git, then cd into the new directory:

root@server1:~# git clone https://github.com/openvswitch/ovs.git
Cloning into 'ovs'...
remote: Enumerating objects: 180427, done.
remote: Counting objects: 100% (284/284), done.
remote: Compressing objects: 100% (138/138), done.
remote: Total 180427 (delta 190), reused 149 (delta 146), pack-reused 180143 (from 3)
Receiving objects: 100% (180427/180427), 129.60 MiB | 3.19 MiB/s, done.
Resolving deltas: 100% (143083/143083), done.
root@server1:~# 
root@server1:~# cd ovs
root@server1:~/ovs# pwd
/root/ovs

2. The tutorial's step 2 covers choosing how to install and test OVS; here we build from source, so skip straight to step 3.

3. Install the OVS build dependencies:

root@server1:~/ovs# apt-get build-dep openvswitch

4.  Configure and build Open vSwitch:

root@server1:~/ovs# ./boot.sh
libtoolize: putting auxiliary files in AC_CONFIG_AUX_DIR, 'build-aux'.
libtoolize: copying file 'build-aux/ltmain.sh'
libtoolize: putting macros in AC_CONFIG_MACRO_DIRS, 'm4'.
libtoolize: copying file 'm4/libtool.m4'
libtoolize: copying file 'm4/ltoptions.m4'
libtoolize: copying file 'm4/ltsugar.m4'
libtoolize: copying file 'm4/ltversion.m4'
libtoolize: copying file 'm4/lt~obsolete.m4'
configure.ac:27: installing 'build-aux/compile'
configure.ac:43: installing 'build-aux/config.guess'
configure.ac:43: installing 'build-aux/config.sub'
configure.ac:22: installing 'build-aux/install-sh'
configure.ac:22: installing 'build-aux/missing'
Makefile.am: installing 'build-aux/depcomp'
root@server1:~/ovs# 

What ./boot.sh does:

boot.sh is the script in the Open vSwitch source repository that initializes the build system. It:

  • runs libtoolize, aclocal, autoconf, automake, and related tools;
  • generates the configure script;
  • creates required directories such as build-aux/ and m4/;
  • copies in a few auxiliary build files.

If this step completes cleanly, your build environment is working and you can go on to configure and compile.

root@server1:~/ovs# ./configure
root@server1:~/ovs# make -j4

./configure

This command checks your system environment and generates the matching Makefile.

make -j4

This step actually compiles OVS; -j4 runs the build in parallel on 4 CPU cores to speed it up. When it finishes, the key executables and tools are in place:

  • ovs-vswitchd

  • ovsdb-server

  • ovs-ofctl

  • and the other core tools, under the ./vswitchd/ and ./utilities/ directories (a quick check is sketched below)
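
As a quick sanity check (a sketch; these paths assume the in-tree build above), you can ask the freshly built tools for their version:

root@server1:~/ovs# ./vswitchd/ovs-vswitchd --version
root@server1:~/ovs# ./utilities/ovs-ofctl --version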

 5. Try out the sandbox by running:

root@server1:~/ovs# make sandbox

You can exit the sandbox with exit or Control+D. 
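Inside the sandbox, the usual OVS commands are wired up to the sandboxed daemons rather than the real system, so it is safe to poke around; for instance (a minimal check):

ovs-vsctl show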

2、Setting up Faucet

1、Get a copy of the Faucet source repository using Git, then cd into the new directory:

root@server1:~# git clone https://github.com/faucetsdn/faucet.git
root@server1:~# cd faucet
root@server1:~/faucet# 

At this point I checked out the latest tag:

root@server1:~/faucet# latest_tag=$(git describe --tags $(git rev-list --tags --max-count=1))
root@server1:~/faucet# git checkout $latest_tag
Note: switching to '1.10.11'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 74fa257a Merge pull request #4489 from gizmoguy/docker-python3-12.0.1
root@server1:~/faucet#

You have now successfully, in the faucet repository:

  1. fetched the latest tag (version 1.10.11):

    latest_tag=$(git describe --tags $(git rev-list --tags --max-count=1))

  2. checked out that version:

    git checkout $latest_tag

You are now in "detached HEAD" state, meaning you are sitting on the snapshot of a tag rather than on any branch.
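
If you would rather keep a local branch pinned at this tag than stay on a detached HEAD, you can create one (optional; the branch name here is arbitrary):

root@server1:~/faucet# git switch -c faucet-1.10.11 1.10.11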

2. Build a docker container image: 

root@server1:~/faucet# docker build -t faucet/faucet -f Dockerfile.faucet .
[+] Building 330.6s (8/8) FINISHED                                                                                                                                                                                                               docker:default
 => [internal] load build definition from Dockerfile.faucet                                                                                                                                                                                                0.0s
 => => transferring dockerfile: 267B                                                                                                                                                                                                                       0.0s
 => [internal] load metadata for docker.io/faucet/python3:12.0.1                                                                                                                                                                                           5.1s
 => [internal] load .dockerignore                                                                                                                                                                                                                          0.0s
 => => transferring context: 74B                                                                                                                                                                                                                           0.0s
 => [internal] load build context                                                                                                                                                                                                                          0.9s
 => => transferring context: 45.64MB                                                                                                                                                                                                                       0.8s
 => [1/3] FROM docker.io/faucet/python3:12.0.1@sha256:98c3bbe0db19f33a49835e1dd225a23d6981544b1cf6db8ec118ddaf3db9fd34                                                                                                                                     7.3s
 => => resolve docker.io/faucet/python3:12.0.1@sha256:98c3bbe0db19f33a49835e1dd225a23d6981544b1cf6db8ec118ddaf3db9fd34                                                                                                                                     0.0s
 => => sha256:ec64d71e712d40b0f4a5daf9c6b02376197eaa3c740334466db66822ed2f0cc2 12.66MB / 12.66MB                                                                                                                                                           5.4s
 => => sha256:9bf88deb21bcda88685e30fab42664f209df7f52432bb43da2edbc4758dd7948 1.62kB / 1.62kB                                                                                                                                                             0.0s
 => => sha256:8af2c5d8f9a71ed3ee45079942f685e4d4dda504f9d61ba5a8be6938668c097a 7.07kB / 7.07kB                                                                                                                                                             0.0s
 => => sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8 3.41MB / 3.41MB                                                                                                                                                             3.8s
 => => sha256:c3cdf40b8bda8e4ca4be0f5fa7f1d128907271efcbc72cbfc7c8b0f939ec25ea 619.60kB / 619.60kB                                                                                                                                                         4.5s
 => => sha256:98c3bbe0db19f33a49835e1dd225a23d6981544b1cf6db8ec118ddaf3db9fd34 5.42kB / 5.42kB                                                                                                                                                             0.0s
 => => extracting sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8                                                                                                                                                                  0.2s
 => => sha256:cf6fce90ba3a1de06c8a7eb74c0a5780b6bde4b12c4a82fa64b3baf68f8ee216 239B / 239B                                                                                                                                                                 4.7s
 => => extracting sha256:c3cdf40b8bda8e4ca4be0f5fa7f1d128907271efcbc72cbfc7c8b0f939ec25ea                                                                                                                                                                  0.2s
 => => sha256:22580357404ef4f7f02a6a4106c78f0f394c0cdbd1c4b0aff00af855b9d7ec13 3.13MB / 3.13MB                                                                                                                                                             6.9s
 => => sha256:44a7b9f2e5e0e553b0d0ddad0690291b2e556b414d00634f2c8450bafe690a37 44.81kB / 44.81kB                                                                                                                                                           6.3s
 => => extracting sha256:ec64d71e712d40b0f4a5daf9c6b02376197eaa3c740334466db66822ed2f0cc2                                                                                                                                                                  0.7s
 => => sha256:42a768f9b0e7e29dde6c42c2c654f4303a303834aa2e6a20190f915f836c2264 486B / 486B                                                                                                                                                                 6.2s
 => => extracting sha256:cf6fce90ba3a1de06c8a7eb74c0a5780b6bde4b12c4a82fa64b3baf68f8ee216                                                                                                                                                                  0.0s
 => => extracting sha256:22580357404ef4f7f02a6a4106c78f0f394c0cdbd1c4b0aff00af855b9d7ec13                                                                                                                                                                  0.3s
 => => extracting sha256:44a7b9f2e5e0e553b0d0ddad0690291b2e556b414d00634f2c8450bafe690a37                                                                                                                                                                  0.0s
 => => extracting sha256:42a768f9b0e7e29dde6c42c2c654f4303a303834aa2e6a20190f915f836c2264                                                                                                                                                                  0.0s
 => [2/3] COPY ./ /faucet-src/                                                                                                                                                                                                                             0.6s
 => [3/3] RUN ./faucet-src/docker/install-faucet.sh                                                                                                                                                                                                      317.2s
 => exporting to image                                                                                                                                                                                                                                     0.5s 
 => => exporting layers                                                                                                                                                                                                                                    0.5s 
 => => writing image sha256:52b854da3b80fb1d13bd2cbf6c2390ceb7576d368a9f7b8264da4c4e2fb02566                                                                                                                                                               0.0s 
 => => naming to docker.io/faucet/faucet                                                                                                                                                                                                                   0.0s 
root@server1:~/faucet#  

3. Create an installation directory under the faucet directory for the docker image to use:

root@server1:~/faucet# mkdir inst                                                                                                                                                                                                                               
root@server1:~/faucet# pwd                                                                                                                                                                                                                                      
/root/faucet

 The Faucet configuration will go in inst/faucet.yaml and its main log will appear in inst/faucet.log

4. Create a container and start Faucet:

root@server1:~/faucet# docker run -d --name faucet --restart=always -v $(pwd)/inst/:/etc/faucet/ -v $(pwd)/inst/:/var/log/faucet/ -p 6653:6653 -p 9302:9302 faucet/faucet
0ae2e2fbd31838535098b5cf700f3af79920db12a3c148885425214b3e4285b9
root@server1:~/faucet# 
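To confirm the container came up and Faucet started cleanly, the standard Docker commands work (output omitted here):

root@server1:~/faucet# docker ps --filter name=faucet
root@server1:~/faucet# docker logs faucet | tail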

3、Overview

Three exercises follow:

  1. Switching
  2. Routing
  3. ACLs

Each exercise will be observed at three layers, top to bottom:

  1. The controller: Faucet.
  2. The flow tables: the OpenFlow subsystem in Open vSwitch.
  3. The datapath: the Open vSwitch datapath.

On the controller side, two files matter most:

  • faucet.yaml: configures the switches and so on.
  • faucet.log: the controller's log.

On the flow-table side, two commands do most of the work:

  • ovs-ofctl: manages the OVS flow tables.
  • ovs-appctl: manages the OVS daemons.

The datapath is effectively a cache, used to accelerate packet forwarding. It can be implemented in the Linux kernel or in userspace (e.g. DPDK). Its cached entries can be dumped directly, as sketched below.
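
For example, once the sandbox switch is running, ovs-appctl can dump the datapath flow cache (a sketch; the cache stays empty until traffic actually flows):

ovs-appctl dpctl/dump-flows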

4、Switching

4.1 Defining the switch in the controller

1. Edit inst/faucet.yaml:

dps:
    switch-1:
        dp_id: 0x1
        timeout: 3600
        arp_neighbor_timeout: 3600
        interfaces:
            1:
                native_vlan: 100
            2:
                native_vlan: 100
            3:
                native_vlan: 100
            4:
                native_vlan: 200
            5:
                native_vlan: 200
vlans:
    100:
    200:

2. Restart Faucet:

root@server1:~/faucet/inst# docker restart faucet
faucet
root@server1:~/faucet/inst# docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED       STATUS          PORTS                                                                                      NAMES
0ae2e2fbd318   faucet/faucet   "/usr/local/bin/entr…"   2 weeks ago   Up 16 seconds   0.0.0.0:6653->6653/tcp, [::]:6653->6653/tcp, 0.0.0.0:9302->9302/tcp, [::]:9302->9302/tcp   faucet

3. The log reports an error:

 root@server1:~/faucet/inst# cat faucet.log

May 27 13:36:28 faucet ERROR    New config bad (DP switch-1: L2 timeout must be > ARP timeout * 2) - rejecting

What the error means:

In the configuration of datapath (DP) switch-1, the L2 timeout must be greater than twice the ARP timeout; otherwise Faucet rejects the new configuration. With arp_neighbor_timeout: 3600, the L2 timeout must exceed 2 × 3600 = 7200 seconds, so the original timeout: 3600 fails the check.

4. Fix faucet.yaml:

root@server1:~/faucet/inst# vi faucet.yaml
root@server1:~/faucet/inst# cat faucet.yaml
dps:
    switch-1:
        dp_id: 0x1
        timeout: 8000  # L2 entry timeout, in seconds (must exceed 2 x ARP timeout)
        arp_neighbor_timeout: 3600 # ARP entry timeout, in seconds
        interfaces:
            1:
                native_vlan: 100
            2:
                native_vlan: 100
            3:
                native_vlan: 100
            4:
                native_vlan: 200
            5:
                native_vlan: 200
vlans:
    100:
    200:
root@server1:~/faucet/inst# 
  • dps: the datapaths, i.e. the list of switches that Faucet controls.

  • switch-1: the name of a logical switch; you can choose it freely, it simply identifies this switch.

  • dp_id: 0x1: the unique Datapath ID of this switch, in hexadecimal. It must match the DPID of your OpenFlow switch (with OVS, for example: ovs-vsctl set bridge br0 other-config:datapath-id=0000000000000001).

  • 1: through 5:: ports 1-5 of switch-1.

  • native_vlan: 100/200: the VLAN a port belongs to by default (i.e. an access port). Here:

    • ports 1, 2, 3 belong to VLAN 100

    • ports 4, 5 belong to VLAN 200

  • ⚠️ Note: these interfaces are "access" ports by default, with no trunking or 802.1Q tags; a trunk variant is sketched after this list.
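
For contrast, if a port did need to carry several VLANs with 802.1Q tags, Faucet expresses that with tagged_vlans instead of native_vlan. A hypothetical fragment (port 6 is not part of this tutorial's setup):

            6:
                tagged_vlans: [100, 200]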

5. Restart Faucet:

root@server1:~/faucet/inst# docker restart faucet
faucet
root@server1:~/faucet/inst# docker ps
CONTAINER ID   IMAGE           COMMAND                  CREATED       STATUS         PORTS                                                                                      NAMES
0ae2e2fbd318   faucet/faucet   "/usr/local/bin/entr…"   2 weeks ago   Up 4 seconds   0.0.0.0:6653->6653/tcp, [::]:6653->6653/tcp, 0.0.0.0:9302->9302/tcp, [::]:9302->9302/tcp   faucet
root@server1:~/faucet/inst# 

6. Check the log

To make the log easier to follow, clear it first:
echo -n > faucet.log

Before running docker restart faucet, start tailing the log live:
root@server1:~/faucet/inst# tail -f faucet.log
Jun 07 07:13:55 faucet INFO     version 1.10.11
Jun 07 07:13:55 faucet INFO     Reloading configuration
Jun 07 07:13:55 faucet INFO     configuration /etc/faucet/faucet.yaml changed, analyzing differences
Jun 07 07:13:55 faucet INFO     Add new datapath DPID 1 (0x1)

This log line shows that the Faucet controller successfully parsed the configuration and added a new switch (datapath):

In detail:

  • Add new datapath: a new datapath (i.e. an OpenFlow switch) has been added to Faucet's configuration.

  • DPID 1 (0x1): the switch's Datapath ID, decimal 1, hexadecimal 0x1, matching dp_id: 0x1 in the config.

This tells us:

  • Faucet parsed the configuration file successfully (the earlier config error has been fixed).

Faucet is now waiting for a switch with datapath ID 0x1 to connect to it over OpenFlow.

4.2 Creating the OVS switch

1. Enter the OVS sandbox:

root@server1:~# cd ovs
root@server1:~/ovs# tutorial/ovs-sandbox

2.  Create a new OVS bridge

Run:
ovs-vsctl add-br br0 \
         -- set bridge br0 other-config:datapath-id=0000000000000001 \
         -- add-port br0 p1 -- set interface p1 ofport_request=1 \
         -- add-port br0 p2 -- set interface p2 ofport_request=2 \
         -- add-port br0 p3 -- set interface p3 ofport_request=3 \
         -- add-port br0 p4 -- set interface p4 ofport_request=4 \
         -- add-port br0 p5 -- set interface p5 ofport_request=5 \
         -- set-controller br0 tcp:127.0.0.1:6653 \
         -- set controller br0 connection-mode=out-of-band

root@server1:~/ovs# ovs-vsctl show
410ef4cd-3f7c-41c9-b123-a3bcd76e611d
    Bridge br0
        Controller "tcp:127.0.0.1:6653"
            is_connected: true
        Port p5
            Interface p5
        Port br0
            Interface br0
                type: internal
        Port p2
            Interface p2
        Port p1
            Interface p1
        Port p3
            Interface p3
        Port p4
            Interface p4
root@server1:~/ovs# ovs-vsctl get-controller br0
tcp:127.0.0.1:6653
root@server1:~/ovs# 
  • other-config:datapath-id=0000000000000001: sets this bridge's OpenFlow datapath ID to 0x1.

🧠 Why set the datapath-id?

The Faucet controller identifies each OpenFlow switch that connects to it by its Datapath ID (DPID).
 

  • set interface p1 ofport_request=1: sets the OpenFlow port number of interface p1 to 1.

🔧 Why set ofport_request?

In the Faucet configuration, the port number is the integer used in faucet.yaml to identify an interface, e.g.:
 

interfaces:
  1:
    native_vlan: 100

If the port number OVS assigns automatically is not 1, Faucet cannot match the interface to its configuration.

So you explicitly set interface p1's OpenFlow port number to 1. (A way to verify both settings is sketched below.)
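
To double-check that the DPID and the port numbering both took effect, ovs-ofctl can show the bridge from the OpenFlow side; the first line of its output carries the dpid, and each port is listed with its OpenFlow number:

root@server1:~/ovs# ovs-ofctl show br0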

  • ovs-vsctl set-controller br0 tcp:127.0.0.1:6653

    • Sets the OpenFlow controller address of the Open vSwitch bridge br0 to 127.0.0.1:6653.

    • That is, br0 will try to connect to port 6653 on the local machine (localhost), the port Faucet or another OpenFlow controller usually listens on.

  • ovs-vsctl set controller br0 connection-mode=out-of-band

    • Sets the controller connection of the OVS bridge br0 to out-of-band mode.

🔍 What is connection-mode?

Open vSwitch supports two controller connection modes:

  • in-band: controller traffic travels over the data plane (e.g. you reach the controller via the br0 interface itself). The controller must first install rules that let its own TCP connection through.
  • out-of-band: the controller connection goes over a management interface and does not depend on the data plane. OpenFlow control traffic and data traffic are fully separated, which is safer and more stable.

4.3 Controller log when OVS connects

Now, if you look at inst/faucet.log again, you should see that Faucet recognized and configured the new switch and its ports:

(While running the ovs-vsctl add-br br0 ... command above, keep tail -f faucet.log running and you can watch the log change in real time.)

Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 port desc stats
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 delta in up state: set() => {1, 2, 3, 4, 5, 4294967294}
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 1 fabricating ADD status True
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 status change: Port 1 up status True reason ADD state 0
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 1 (1) up
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 100 vid:100 untagged: Port 1,Port 2,Port 3
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 2 fabricating ADD status True
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 status change: Port 2 up status True reason ADD state 0
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 2 (2) up
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 100 vid:100 untagged: Port 1,Port 2,Port 3
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 3 fabricating ADD status True
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 status change: Port 3 up status True reason ADD state 0
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 3 (3) up
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 100 vid:100 untagged: Port 1,Port 2,Port 3
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 4 fabricating ADD status True
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 status change: Port 4 up status True reason ADD state 0
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 4 (4) up
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 200 vid:200 untagged: Port 4,Port 5
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 5 fabricating ADD status True
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 status change: Port 5 up status True reason ADD state 0
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 5 (5) up
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 200 vid:200 untagged: Port 4,Port 5
Jun 07 07:18:17 faucet.valve ERROR    DPID 1 (0x1) switch-1 send_flow_msgs: DP not up
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Cold start configuring DP
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 1 (1) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 2 (2) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 3 (3) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 4 (4) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 5 (5) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 100 vid:100 untagged: Port 1,Port 2,Port 3
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 200 vid:200 untagged: Port 4,Port 5
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 table ID 0 table config match_types: (('eth_dst', True), ('eth_type', False), ('in_port', False), ('vlan_vid', False)) name: vlan next_tables: ['eth_src'] output: True set_fields: ('vlan_vid',) size: 32 vlan_port_scale: 3
table ID 1 table config match_types: (('eth_dst', True), ('eth_src', False), ('eth_type', False), ('in_port', False), ('vlan_vid', False)) miss_goto: eth_dst name: eth_src next_tables: ['eth_dst', 'flood'] output: True set_fields: ('vlan_vid', 'eth_dst') size: 64 table_id: 1 vlan_port_scale: 4.1
table ID 2 table config exact_match: True match_types: (('eth_dst', False), ('vlan_vid', False)) miss_goto: flood name: eth_dst output: True size: 64 table_id: 2 vlan_port_scale: 4.1
table ID 3 table config match_types: (('eth_dst', True), ('in_port', False), ('vlan_vid', False)) name: flood output: True size: 96 table_id: 3 vlan_port_scale: 8.0

In particular:

Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Cold start configuring DP
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 1 (1) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 2 (2) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 3 (3) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 4 (4) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Port 5 (5) configured
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 100 vid:100 untagged: Port 1,Port 2,Port 3
Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 Configuring VLAN 200 vid:200 untagged: Port 4,Port 5

 

These log lines show that the Faucet controller completed the initial configuration of the whole switch (DPID 1). Line by line:

✅ Configuration status

✅ 1. Cold-start initialization
Cold start configuring DP
Faucet detected a "cold start" and began reconfiguring the entire datapath (switch). In Faucet (and in network controllers generally), a cold start is the startup path in which, once controller and switch are connected, the controller fully re-initializes all of the switch's configuration.
✅ 2. Ports configured
Port 1 (1) configured
Port 2 (2) configured
...
Port 5 (5) configured
All five (physical or logical) ports were initialized from faucet.yaml without errors.

✅ 3. VLANs configured

Configuring VLAN 100 vid:100 untagged: Port 1,Port 2,Port 3
Configuring VLAN 200 vid:200 untagged: Port 4,Port 5

The VLAN configuration took effect as intended:

  • VLAN 100: ports 1/2/3

  • VLAN 200: ports 4/5

  • All of these ports join their VLAN "untagged".

4.4 OVS log when connecting to the controller

Check ~/ovs/sandbox/ovs-vswitchd.log:

2025-06-07T07:18:17.560Z|00050|rconn|INFO|br0<->tcp:127.0.0.1:6653: connecting...
2025-06-07T07:18:17.561Z|00051|connmgr|INFO|br0: added primary controller "tcp:127.0.0.1:6653"
2025-06-07T07:18:17.561Z|00052|connmgr|INFO|br0: added service controller "punix:/root/ovs/sandbox/br0.mgmt"
2025-06-07T07:18:17.563Z|00053|vconn|DBG|tcp:127.0.0.1:6653: sent (Success): OFPT_HELLO (OF1.5) (xid=0x1):
 version bitmap: 0x01, 0x02, 0x03, 0x04, 0x05, 0x06
2025-06-07T07:18:17.564Z|00054|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_HELLO (OF1.3) (xid=0x4228901d):
 version bitmap: 0x01, 0x02, 0x03, 0x04
2025-06-07T07:18:17.564Z|00055|vconn|DBG|tcp:127.0.0.1:6653: negotiated OpenFlow version 0x04 (we support version 0x06 and earlier, peer supports version 0x04 and earlier)
2025-06-07T07:18:17.564Z|00056|rconn|INFO|br0<->tcp:127.0.0.1:6653: connected
2025-06-07T07:18:17.564Z|00057|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FEATURES_REQUEST (OF1.3) (xid=0x4228901e):
2025-06-07T07:18:17.566Z|00058|vconn|DBG|tcp:127.0.0.1:6653: sent (Success): OFPT_FEATURES_REPLY (OF1.3) (xid=0x4228901e): dpid:0000000000000001
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
2025-06-07T07:18:17.568Z|00059|vconn|DBG|tcp:127.0.0.1:6653: received: OFPST_PORT_DESC request (OF1.3) (xid=0x4228901f): port=ANY
2025-06-07T07:18:17.571Z|00060|vconn|DBG|tcp:127.0.0.1:6653: sent (Success): OFPST_PORT_DESC reply (OF1.3) (xid=0x4228901f):
 1(p1): addr:aa:55:aa:55:01:3f
     config:     0
     state:      LIVE
     speed: 0 Mbps now, 0 Mbps max
 2(p2): addr:aa:55:aa:55:01:3c
     config:     0
     state:      LIVE
     speed: 0 Mbps now, 0 Mbps max
 3(p3): addr:aa:55:aa:55:01:3d
     config:     0
     state:      LIVE
     speed: 0 Mbps now, 0 Mbps max
 4(p4): addr:aa:55:aa:55:01:40
     config:     0
     state:      LIVE
     speed: 0 Mbps now, 0 Mbps max
 5(p5): addr:aa:55:aa:55:01:3e
     config:     0
     state:      LIVE
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br0): addr:3a:d2:b3:f1:ef:49
     config:     0
     state:      LIVE
     speed: 0 Mbps now, 0 Mbps max

This is the standard debug output of Open vSwitch (OVS) starting up and establishing a connection with an SDN controller (such as Faucet); the format matches the usual ovs-vswitchd.log (on a normal install typically under /var/log/openvswitch/, here under the sandbox directory).
 

🧩 Walking through the log

The key events, step by step:
🧱 Connection establishment

2025-06-07T07:18:17.560Z|00050|rconn|INFO|br0<->tcp:127.0.0.1:6653: connecting...
Bridge br0 is attempting to connect to the controller (at 127.0.0.1:6653).

added primary controller "tcp:127.0.0.1:6653"
added service controller "punix:/root/ovs/sandbox/br0.mgmt"

A primary controller (OpenFlow control) and a service controller (a Unix-socket management connection, as used in the Faucet/sandbox environment) were added.

🤝 Protocol negotiation

sent: OFPT_HELLO (OF1.5)
received: OFPT_HELLO (OF1.3)
negotiated OpenFlow version 0x04

OVS supports up to OpenFlow 1.5 (0x06 is its highest version), but the controller (Faucet) only supports up to 1.3, so the two sides settle on OpenFlow 1.3.
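
As an aside, if you want to skip the negotiation dance entirely, you can pin the bridge to the controller's version (optional; the tutorial does not require it):

ovs-vsctl set bridge br0 protocols=OpenFlow13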

🔍 Switch feature announcement

received: OFPT_FEATURES_REQUEST
sent: OFPT_FEATURES_REPLY: dpid:0000000000000001

The controller requests the switch's features; the switch (DPID 0x1) reports:

  • 254 tables (n_tables)

  • support for the various statistics capabilities (flow, port, group, queue, etc.)

🧵 Port descriptions

received: OFPST_PORT_DESC request
sent: OFPST_PORT_DESC reply:

The controller requests port information; the switch returns:

  • p1: aa:55:aa:55:01:3f, LIVE
  • p2: aa:55:aa:55:01:3c, LIVE
  • p3: aa:55:aa:55:01:3d, LIVE
  • p4: aa:55:aa:55:01:40, LIVE
  • p5: aa:55:aa:55:01:3e, LIVE
  • LOCAL(br0): 3a:d2:b3:f1:ef:49, LIVE

All ports are LIVE with speed 0, meaning no real link is attached (typical of sandbox/test environments, or links that are not up).

📌 Summary: what does this log tell us?

  • OVS started up and successfully connected to, negotiated with, and registered at the Faucet controller.

  • OpenFlow 1.3 was successfully negotiated.

  • The switch's DPID is 0x1; the controller requested features and port information, and got back 5 active ports plus 1 local port.

  • Initialization went smoothly: the controller and switch have completed their "handshake", and flow installation is about to begin.

4.5 Flow tables pushed by the controller

4.5.1 Flow table overview

"docs/architecture.rst in the Faucet documentation" describes the full set of tables:

  • Table 0: Port-based ACLs
  • Table 1: Ingress VLAN processing
  • Table 2: VLAN-based ACLs
  • Table 3: Ingress L2 processing, MAC learning
  • Table 4: L3 forwarding for IPv4
  • Table 5: L3 forwarding for IPv6
  • Table 6: Virtual IP processing, e.g. for router IP addresses implemented by Faucet
  • Table 7: Egress L2 processing
  • Table 8: Flooding

Faucet is a deliberately lean, "as-needed" controller; it does not install a pile of features by default the way some SDN controllers do. It:

  • sets up flow tables only for the features enabled in faucet.yaml.

4.5.2 Controller-side log of the pushed tables

The controller-side log (inst/faucet.log):

Jun 07 07:18:17 faucet.valve INFO     DPID 1 (0x1) switch-1 table ID 0 table config match_types: (('eth_dst', True), ('eth_type', False), ('in_port', False), ('vlan_vid', False)) name: vlan next_tables: ['eth_src'] output: True set_fields: ('vlan_vid',) size: 32 vlan_port_scale: 3
table ID 1 table config match_types: (('eth_dst', True), ('eth_src', False), ('eth_type', False), ('in_port', False), ('vlan_vid', False)) miss_goto: eth_dst name: eth_src next_tables: ['eth_dst', 'flood'] output: True set_fields: ('vlan_vid', 'eth_dst') size: 64 table_id: 1 vlan_port_scale: 4.1
table ID 2 table config exact_match: True match_types: (('eth_dst', False), ('vlan_vid', False)) miss_goto: flood name: eth_dst output: True size: 64 table_id: 2 vlan_port_scale: 4.1
table ID 3 table config match_types: (('eth_dst', True), ('in_port', False), ('vlan_vid', False)) name: flood output: True size: 96 table_id: 3 vlan_port_scale: 8.0

This Faucet log is the output of it building the OpenFlow flow-table structure for switch-1 during the cold start. Each log entry corresponds to one flow table:

🔢 Overview: Faucet uses multiple tables to implement a flexible L2 forwarding pipeline

  • Table 0, vlan: VLAN admission and tagging
  • Table 1, eth_src: source-MAC learning (learning switch)
  • Table 2, eth_dst: destination-MAC lookup and forwarding
  • Table 3, flood: broadcast and flooding

🧩 Entry by entry

Table ID 0: vlan
table ID 0 table config match_types: (('eth_dst', True), ...) name: vlan next_tables: ['eth_src'] output: True set_fields: ('vlan_vid',)

  • Role: classify traffic by ingress port and assign the VLAN VID.

  • Action: set the VLAN ID (set_fields: ('vlan_vid',)).

  • Next table: jump to the eth_src table (ID 1) for source-MAC learning.

Table ID 1: eth_src
name: eth_src next_tables: ['eth_dst', 'flood'] output: True set_fields: ('vlan_vid', 'eth_dst')

  • Role: learn source MACs (who arrived on which port);

  • Action: set some fields and jump onward.

  • Next tables:

    • on a hit, go to eth_dst to look up the destination MAC;

    • if the destination MAC is not found, go to flood.

Table ID 2: eth_dst
name: eth_dst output: True exact_match: True miss_goto: flood

  • Role: exact lookup of the output port for a destination MAC.

  • miss_goto: on a miss, flood (jump to the flood table).

Table ID 3: flood

name: flood output: True

  • Role: broadcast (e.g. ARP requests) and unknown-destination flooding.

  • Output: forward to all ports in the VLAN except the ingress port.

📊 Other fields

  • vlan_port_scale: Faucet's estimate of how the table's size scales with the number of VLAN ports (a tuning hint).

  • size: the default table capacity (e.g. 64 means at most 64 flow entries).

✅ Summary

What you see here is the OpenFlow forwarding pipeline that Faucet builds for the switch from faucet.yaml, following:

VLAN admission → source-MAC learning → destination-MAC lookup → broadcast/flooding

This pipeline supports both L2 learning switching and broadcast-type traffic (such as ARP), making it well suited to building a controlled, VLAN-based L2 network.

To see what the flow rules in each table actually look like, run:
ovs-ofctl dump-flows br0

4.5.3 OVS-side log of the pushed flows

Check ~/ovs/sandbox/ovs-vswitchd.log:

(the log of the controller installing flows into the OVS bridge)
2025-06-07T07:18:17.585Z|00067|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289023): DEL table:255 priority=0 actions=drop
2025-06-07T07:18:17.585Z|00068|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289025): ADD table:3 priority=8240,dl_dst=01:00:0c:cc:cc:cc cookie:0x5adc15c0 out_port:0 actions=drop
2025-06-07T07:18:17.585Z|00069|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289026): ADD table:3 priority=8240,dl_dst=01:00:0c:cc:cc:cd cookie:0x5adc15c0 out_port:0 actions=drop
2025-06-07T07:18:17.585Z|00070|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289027): ADD table:3 priority=8240,dl_vlan=100,dl_dst=ff:ff:ff:ff:ff:ff cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:1,output:2,output:3
2025-06-07T07:18:17.585Z|00071|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289028): ADD table:3 priority=8240,dl_vlan=200,dl_dst=ff:ff:ff:ff:ff:ff cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:4,output:5
2025-06-07T07:18:17.585Z|00072|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289029): ADD table:3 priority=8236,dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0 cookie:0x5adc15c0 out_port:0 actions=drop
2025-06-07T07:18:17.586Z|00073|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228902a): ADD table:3 priority=8216,dl_vlan=100,dl_dst=01:80:c2:00:00:00/ff:ff:ff:00:00:00 cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:1,output:2,output:3
2025-06-07T07:18:17.587Z|00074|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228902b): ADD table:3 priority=8216,dl_vlan=100,dl_dst=01:00:5e:00:00:00/ff:ff:ff:00:00:00 cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:1,output:2,output:3
2025-06-07T07:18:17.587Z|00075|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228902c): ADD table:3 priority=8216,dl_vlan=200,dl_dst=01:80:c2:00:00:00/ff:ff:ff:00:00:00 cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:4,output:5
2025-06-07T07:18:17.587Z|00076|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228902d): ADD table:3 priority=8216,dl_vlan=200,dl_dst=01:00:5e:00:00:00/ff:ff:ff:00:00:00 cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:4,output:5
2025-06-07T07:18:17.587Z|00077|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228902e): ADD table:3 priority=8208,dl_vlan=100,dl_dst=33:33:00:00:00:00/ff:ff:00:00:00:00 cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:1,output:2,output:3
2025-06-07T07:18:17.587Z|00078|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228902f): ADD table:3 priority=8208,dl_vlan=200,dl_dst=33:33:00:00:00:00/ff:ff:00:00:00:00 cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:4,output:5
2025-06-07T07:18:17.587Z|00079|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289030): ADD table:3 priority=8192,dl_vlan=100 cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:1,output:2,output:3
2025-06-07T07:18:17.588Z|00080|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289031): ADD table:3 priority=8192,dl_vlan=200 cookie:0x5adc15c0 out_port:0 actions=pop_vlan,output:4,output:5
2025-06-07T07:18:17.588Z|00081|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289032): ADD table:3 priority=0 cookie:0x5adc15c0 out_port:0 actions=drop
2025-06-07T07:18:17.589Z|00082|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289033): ADD table:2 priority=0 cookie:0x5adc15c0 out_port:0 actions=goto_table:3
2025-06-07T07:18:17.589Z|00083|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289034): ADD table:1 priority=20490,dl_type=0x9000 cookie:0x5adc15c0 out_port:0 actions=drop
2025-06-07T07:18:17.589Z|00084|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289035): ADD table:1 priority=20480,dl_src=ff:ff:ff:ff:ff:ff cookie:0x5adc15c0 out_port:0 actions=drop
2025-06-07T07:18:17.589Z|00085|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289036): ADD table:1 priority=20480,dl_src=0e:00:00:00:00:01 cookie:0x5adc15c0 out_port:0 actions=drop
2025-06-07T07:18:17.589Z|00086|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289037): ADD table:1 priority=4096,dl_vlan=100 cookie:0x5adc15c0 out_port:0 actions=CONTROLLER:96,goto_table:2
2025-06-07T07:18:17.589Z|00087|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289038): ADD table:1 priority=4096,dl_vlan=200 cookie:0x5adc15c0 out_port:0 actions=CONTROLLER:96,goto_table:2
2025-06-07T07:18:17.589Z|00088|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289039): ADD table:1 priority=0 cookie:0x5adc15c0 out_port:0 actions=goto_table:2
2025-06-07T07:18:17.590Z|00089|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228903a): ADD priority=4096,in_port=1,vlan_tci=0x0000/0x1fff cookie:0x5adc15c0 out_port:0 actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
2025-06-07T07:18:17.590Z|00090|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228903b): ADD priority=4096,in_port=2,vlan_tci=0x0000/0x1fff cookie:0x5adc15c0 out_port:0 actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
2025-06-07T07:18:17.590Z|00091|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228903c): ADD priority=4096,in_port=3,vlan_tci=0x0000/0x1fff cookie:0x5adc15c0 out_port:0 actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
2025-06-07T07:18:17.590Z|00092|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228903d): ADD priority=4096,in_port=4,vlan_tci=0x0000/0x1fff cookie:0x5adc15c0 out_port:0 actions=push_vlan:0x8100,set_field:4296->vlan_vid,goto_table:1
2025-06-07T07:18:17.590Z|00093|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228903e): ADD priority=4096,in_port=5,vlan_tci=0x0000/0x1fff cookie:0x5adc15c0 out_port:0 actions=push_vlan:0x8100,set_field:4296->vlan_vid,goto_table:1
2025-06-07T07:18:17.590Z|00094|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x4228903f): ADD priority=0 cookie:0x5adc15c0 out_port:0 actions=drop

(Google Gemini)

OpenFlow flow-table modification log, analyzed

These log lines record a series of OpenFlow (OF1.3) flow modification (OFPT_FLOW_MOD) commands sent to the switch. They tell the switch how to handle different kinds of traffic.


Overview of OpenFlow flow modifications

OpenFlow is a protocol that lets a central controller program the forwarding behavior of switches. A flow modification instructs the switch to add, modify, or delete entries in its flow tables. Each entry defines:

  • Match fields: the conditions a packet must satisfy (e.g. source/destination MAC, VLAN ID, input port).
  • Priority: when several flows match a packet, the highest-priority one wins.
  • Actions: what to do with a matching packet (e.g. forward to a port, drop, push/pop a VLAN tag, send to the controller, jump to another table).
  • Table: tables are processed in sequence; a packet can be sent on to a later table with the goto_table action.

The modifications in detail

What the log shows, table by table:

Table 255 (delete)

  • DEL table:255 priority=0 actions=drop: in an OpenFlow 1.3 delete, table 255 means "all tables" (OFPTT_ALL), so this command wipes every existing flow before the new pipeline is installed.

Table 3 (VLAN- and MAC-based forwarding/filtering)

This table handles forwarding or dropping based on VLAN and destination MAC.

  • Drop traffic to specific MAC addresses:
    • dl_dst=01:00:0c:cc:cc:cc
    • dl_dst=01:00:0c:cc:cc:cd
    • dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0
    These entries, at priorities 8240 and 8236, drop frames addressed to these MACs, which typically belong to proprietary or control-plane protocols that should not be forwarded normally.
  • VLAN 100 forwarding:
    • Frames with dl_vlan=100 and destination MAC ff:ff:ff:ff:ff:ff (broadcast), 01:80:c2:00:00:00/ff:ff:ff:00:00:00 or 01:00:5e:00:00:00/ff:ff:ff:00:00:00 (multicast), or 33:33:00:00:00:00/ff:ff:00:00:00:00 (IPv6 multicast) have their VLAN tag popped and are output to ports 1, 2 and 3.
    • A catch-all rule at priority 8192 for dl_vlan=100 likewise pops the VLAN tag and outputs to ports 1, 2 and 3.
  • VLAN 200 forwarding:
    • As for VLAN 100, frames with dl_vlan=200 and the broadcast/multicast destinations above have their VLAN tag popped and are output to ports 4 and 5.
    • A catch-all rule at priority 8192 for dl_vlan=200 pops the VLAN tag and outputs to ports 4 and 5.
  • Default drop:
    • ADD table:3 priority=0 cookie:0x5adc15c0 out_port:0 actions=drop: the lowest-priority rule in table 3; any traffic that matched no higher-priority rule is dropped.

Table 2 (jump rule)

This table is trivial for now: it sends everything on to table 3.

  • ADD table:2 priority=0 cookie:0x5adc15c0 out_port:0 actions=goto_table:3: the only rule in table 2 (priority 0 matches everything) jumps to table 3.

Table 1 (initial classification and VLAN handling)

This table does early filtering and decides whether the controller needs to see the packet.

  • Drop specific traffic:
    • dl_type=0x9000 (Ethernet loopback/test frames)
    • dl_src=ff:ff:ff:ff:ff:ff (broadcast source MAC)
    • dl_src=0e:00:00:00:00:01 (a specific reserved source MAC, Faucet's default virtual MAC)
    These entries, at priorities 20490 and 20480, drop certain invalid or unwanted traffic.
  • Controller interaction for VLANs 100 and 200:
    • Traffic with dl_vlan=100 or dl_vlan=200 (priority 4096) has up to 96 bytes sent to the controller and then jumps to table 2. This lets the controller inspect the traffic, e.g. for MAC learning or other control-plane work.
  • Default jump:
    • ADD table:1 priority=0 cookie:0x5adc15c0 out_port:0 actions=goto_table:2: the lowest-priority rule in table 1; anything else proceeds to table 2.

Table 0 (rules with no explicit table ID)

These rules name no table ID, which means they go into the first table a packet hits (table 0).

  • Ingress VLAN tagging:
    • Traffic with in_port=1,vlan_tci=0x0000/0x1fff (untagged) gets push_vlan:0x8100 (add an 802.1Q tag), set_field:4196->vlan_vid (set VLAN ID 100, since 4196 is 0x1064 and the VID is the low 12 bits, 0x64 = 100), then goto_table:1.
      • The syntax set_field:4196->vlan_vid is curious and somewhat misleading. OpenFlow 1.3 defines the vlan_vid field as a 13-bit field where bit 12 is set to 1 if the VLAN header is present. Thus, since 4196 is 0x1064, this action sets VLAN value 0x64, which in decimal is 100.
    • Traffic on in_port=2,vlan_tci=0x0000/0x1fff is likewise tagged with VLAN 100 and sent to table 1.
    • Traffic on in_port=3,vlan_tci=0x0000/0x1fff is likewise tagged with VLAN 100 and sent to table 1.
    • Traffic on in_port=4,vlan_tci=0x0000/0x1fff gets push_vlan:0x8100 and set_field:4296->vlan_vid (VLAN ID 200, since 4296 is 0x10C8 and 0xC8 = 200), then goto_table:1.
    • Traffic on in_port=5,vlan_tci=0x0000/0x1fff is likewise tagged with VLAN 200 and sent to table 1.
    In other words, untagged traffic entering on ports 1, 2, 3 is tagged VLAN 100, untagged traffic entering on ports 4, 5 is tagged VLAN 200, and all of it then proceeds to table 1 for further processing.
  • Default drop:
    • ADD priority=0 cookie:0x5adc15c0 out_port:0 actions=drop: the bottom-most default-drop rule; anything that matched no ingress rule above is dropped.

Summary

Taken together, these OpenFlow rules configure a switch that:

  1. Segments VLANs: incoming untagged traffic is assigned to VLAN 100 or VLAN 200 according to its ingress port.
  2. Filters MACs and protocols: drops frames to certain broadcast/multicast control MACs, and certain special EtherTypes (such as 0x9000).
  3. Interacts with the controller: VLAN 100 and 200 traffic is copied to the controller before being forwarded.
  4. Floods per VLAN:
    • VLAN 100 traffic, after its tag is stripped, floods to ports 1, 2, 3.
    • VLAN 200 traffic, after its tag is stripped, floods to ports 4, 5.
  5. Processes in stages: traffic flows from the initial table (implicit table 0), through table 1 for classification and VLAN handling, into table 2, and finally into table 3 for the fine-grained forwarding or drop decision.
  6. Drops by default: every table, and the pipeline as a whole, ends in a default-drop rule, so nothing unmatched is ever forwarded by accident.

Configurations like this are common in software-defined networking (SDN) environments, implementing network policy, isolating tenants or traffic classes, and giving fine-grained control over forwarding.

4.6 Inspecting the flows installed in OVS

4.6.1 Full flow dump

ovs-ofctl dump-flows br0

root@server1:~/ovs# ovs-ofctl dump-flows br0
 cookie=0x5adc15c0, duration=622.877s, table=0, n_packets=0, n_bytes=0, priority=4096,in_port=p1,vlan_tci=0x0000/0x1fff actions=mod_vlan_vid:100,resubmit(,1)
 cookie=0x5adc15c0, duration=622.877s, table=0, n_packets=0, n_bytes=0, priority=4096,in_port=p2,vlan_tci=0x0000/0x1fff actions=mod_vlan_vid:100,resubmit(,1)
 cookie=0x5adc15c0, duration=622.877s, table=0, n_packets=0, n_bytes=0, priority=4096,in_port=p3,vlan_tci=0x0000/0x1fff actions=mod_vlan_vid:100,resubmit(,1)
 cookie=0x5adc15c0, duration=622.876s, table=0, n_packets=0, n_bytes=0, priority=4096,in_port=p4,vlan_tci=0x0000/0x1fff actions=mod_vlan_vid:200,resubmit(,1)
 cookie=0x5adc15c0, duration=622.876s, table=0, n_packets=0, n_bytes=0, priority=4096,in_port=p5,vlan_tci=0x0000/0x1fff actions=mod_vlan_vid:200,resubmit(,1)
 cookie=0x5adc15c0, duration=622.876s, table=0, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x5adc15c0, duration=622.878s, table=1, n_packets=0, n_bytes=0, priority=20490,dl_type=0x9000 actions=drop
 cookie=0x5adc15c0, duration=622.878s, table=1, n_packets=0, n_bytes=0, priority=20480,dl_src=ff:ff:ff:ff:ff:ff actions=drop
 cookie=0x5adc15c0, duration=622.878s, table=1, n_packets=0, n_bytes=0, priority=20480,dl_src=0e:00:00:00:00:01 actions=drop
 cookie=0x5adc15c0, duration=622.877s, table=1, n_packets=0, n_bytes=0, priority=4096,dl_vlan=100 actions=CONTROLLER:96,resubmit(,2)
 cookie=0x5adc15c0, duration=622.877s, table=1, n_packets=0, n_bytes=0, priority=4096,dl_vlan=200 actions=CONTROLLER:96,resubmit(,2)
 cookie=0x5adc15c0, duration=622.877s, table=1, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,2)
 cookie=0x5adc15c0, duration=622.878s, table=2, n_packets=0, n_bytes=0, priority=0 actions=resubmit(,3)
 cookie=0x5adc15c0, duration=622.881s, table=3, n_packets=0, n_bytes=0, priority=8240,dl_dst=01:00:0c:cc:cc:cc actions=drop
 cookie=0x5adc15c0, duration=622.881s, table=3, n_packets=0, n_bytes=0, priority=8240,dl_dst=01:00:0c:cc:cc:cd actions=drop
 cookie=0x5adc15c0, duration=622.881s, table=3, n_packets=0, n_bytes=0, priority=8240,dl_vlan=100,dl_dst=ff:ff:ff:ff:ff:ff actions=strip_vlan,output:p1,output:p2,output:p3
 cookie=0x5adc15c0, duration=622.881s, table=3, n_packets=0, n_bytes=0, priority=8240,dl_vlan=200,dl_dst=ff:ff:ff:ff:ff:ff actions=strip_vlan,output:p4,output:p5
 cookie=0x5adc15c0, duration=622.881s, table=3, n_packets=0, n_bytes=0, priority=8236,dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0 actions=drop
 cookie=0x5adc15c0, duration=622.880s, table=3, n_packets=0, n_bytes=0, priority=8216,dl_vlan=100,dl_dst=01:80:c2:00:00:00/ff:ff:ff:00:00:00 actions=strip_vlan,output:p1,output:p2,output:p3
 cookie=0x5adc15c0, duration=622.880s, table=3, n_packets=0, n_bytes=0, priority=8216,dl_vlan=100,dl_dst=01:00:5e:00:00:00/ff:ff:ff:00:00:00 actions=strip_vlan,output:p1,output:p2,output:p3
 cookie=0x5adc15c0, duration=622.880s, table=3, n_packets=0, n_bytes=0, priority=8216,dl_vlan=200,dl_dst=01:80:c2:00:00:00/ff:ff:ff:00:00:00 actions=strip_vlan,output:p4,output:p5
 cookie=0x5adc15c0, duration=622.879s, table=3, n_packets=0, n_bytes=0, priority=8216,dl_vlan=200,dl_dst=01:00:5e:00:00:00/ff:ff:ff:00:00:00 actions=strip_vlan,output:p4,output:p5
 cookie=0x5adc15c0, duration=622.879s, table=3, n_packets=0, n_bytes=0, priority=8208,dl_vlan=100,dl_dst=33:33:00:00:00:00/ff:ff:00:00:00:00 actions=strip_vlan,output:p1,output:p2,output:p3
 cookie=0x5adc15c0, duration=622.879s, table=3, n_packets=0, n_bytes=0, priority=8208,dl_vlan=200,dl_dst=33:33:00:00:00:00/ff:ff:00:00:00:00 actions=strip_vlan,output:p4,output:p5
 cookie=0x5adc15c0, duration=622.879s, table=3, n_packets=0, n_bytes=0, priority=8192,dl_vlan=100 actions=strip_vlan,output:p1,output:p2,output:p3
 cookie=0x5adc15c0, duration=622.879s, table=3, n_packets=0, n_bytes=0, priority=8192,dl_vlan=200 actions=strip_vlan,output:p4,output:p5
 cookie=0x5adc15c0, duration=622.879s, table=3, n_packets=0, n_bytes=0, priority=0 actions=drop
root@server1:~/ovs# 

4.6.2 Helper commands for a cleaner flow dump

Simplify the format, using OpenFlow 1.3:

root@server1:~/ovs/sandbox# dump-flows () {
>   ovs-ofctl -OOpenFlow13 --names --no-stat dump-flows "$@" \
>     | sed 's/cookie=0x5adc15c0, //'
> }
root@server1:~/ovs/sandbox# save-flows () {
>   ovs-ofctl -OOpenFlow13 --no-names --sort dump-flows "$@"
> }
root@server1:~/ovs/sandbox# diff-flows () {
>   ovs-ofctl -OOpenFlow13 diff-flows "$@" | sed 's/cookie=0x5adc15c0 //'
> }
root@server1:~/ovs/sandbox# 

This defines a custom command named dump-flows that simplifies and tidies the output when inspecting the OpenFlow flows Faucet installed in Open vSwitch.

In detail:

ovs-ofctl -OOpenFlow13 --names --no-stat dump-flows "$@"

  • -OOpenFlow13: use the OpenFlow 1.3 protocol, which is what Faucet speaks;

  • --names: show friendly names (e.g. port names) rather than raw numbers;

  • --no-stat: omit the flow statistics (packet count, byte count) for a cleaner listing;

  • dump-flows "$@": forward whatever arguments you give dump-flows on to ovs-ofctl, e.g. a bridge name (and, as sketched below, an optional flow match).

| sed 's/cookie=0x5adc15c0, //'

  • sed strips the fixed cookie value cookie=0x5adc15c0;

  • flows installed by Faucet usually carry this fixed cookie, marking them as Faucet-managed;

  • stripping it just declutters the output, since it repeats on every line.
4.6.3 Simplified flow dump

dump-flows br0

root@server1:~/ovs# dump-flows br0
 priority=4096,in_port=p1,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
 priority=4096,in_port=p2,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
 priority=4096,in_port=p3,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
 priority=4096,in_port=p4,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4296->vlan_vid,goto_table:1
 priority=4096,in_port=p5,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4296->vlan_vid,goto_table:1
 priority=0 actions=drop
 table=1, priority=20490,dl_type=0x9000 actions=drop
 table=1, priority=20480,dl_src=ff:ff:ff:ff:ff:ff actions=drop
 table=1, priority=20480,dl_src=0e:00:00:00:00:01 actions=drop
 table=1, priority=4096,dl_vlan=100 actions=CONTROLLER:96,goto_table:2
 table=1, priority=4096,dl_vlan=200 actions=CONTROLLER:96,goto_table:2
 table=1, priority=0 actions=goto_table:2
 table=2, priority=0 actions=goto_table:3
 table=3, priority=8240,dl_dst=01:00:0c:cc:cc:cc actions=drop
 table=3, priority=8240,dl_dst=01:00:0c:cc:cc:cd actions=drop
 table=3, priority=8240,dl_vlan=100,dl_dst=ff:ff:ff:ff:ff:ff actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8240,dl_vlan=200,dl_dst=ff:ff:ff:ff:ff:ff actions=pop_vlan,output:p4,output:p5
 table=3, priority=8236,dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0 actions=drop
 table=3, priority=8216,dl_vlan=100,dl_dst=01:80:c2:00:00:00/ff:ff:ff:00:00:00 actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8216,dl_vlan=100,dl_dst=01:00:5e:00:00:00/ff:ff:ff:00:00:00 actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8216,dl_vlan=200,dl_dst=01:80:c2:00:00:00/ff:ff:ff:00:00:00 actions=pop_vlan,output:p4,output:p5
 table=3, priority=8216,dl_vlan=200,dl_dst=01:00:5e:00:00:00/ff:ff:ff:00:00:00 actions=pop_vlan,output:p4,output:p5
 table=3, priority=8208,dl_vlan=100,dl_dst=33:33:00:00:00:00/ff:ff:00:00:00:00 actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8208,dl_vlan=200,dl_dst=33:33:00:00:00:00/ff:ff:00:00:00:00 actions=pop_vlan,output:p4,output:p5
 table=3, priority=8192,dl_vlan=100 actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8192,dl_vlan=200 actions=pop_vlan,output:p4,output:p5
 table=3, priority=0 actions=drop

Explanation of vlan_tci

https://learningnetwork.cisco.com/s/question/0D56e0000CytEbjCQE/what-is-the-tci-field

The Tag Control Information (TCI) field is a 16-bit field that is added to an ethernet header, when a frame is passed over a trunk link.

This contains three things:

  1. VLAN ID
  2. DEI (Drop Eligible Indicator)
  3. Priority Code Point

The VLAN ID is 12-bits, and should be well understood by everyone here.

The DEI field is one bit. It indicates if a frame is eligible to be dropped in times of congestion.

This field was previously known as the CFI field.

The priority code point field is 3 bits.

This is where the 802.1p CoS bits are written in the frame.

This provides 8 classes to represent the frame's priority level.
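
Applied to the flows above: vlan_tci=0x0000/0x1fff masks the low 13 bits of the TCI (the 12-bit VLAN ID plus the bit OVS uses to flag that an 802.1Q header is present) and requires them to be zero, so in practice these rules match frames arriving without a VLAN tag, exactly the traffic that still needs a VLAN assigned.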

Table 0 (vlan)
  • Ingress VLAN processing
priority=4096,in_port=p1,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
priority=4096,in_port=p2,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
priority=4096,in_port=p3,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
priority=4096,in_port=p4,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4296->vlan_vid,goto_table:1
priority=4096,in_port=p5,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4296->vlan_vid,goto_table:1
priority=0 actions=drop

Table 0: ingress VLAN tagging

This table is the entry point for packets arriving at the bridge. Its main job is to make sure all incoming untagged traffic is assigned to a specific VLAN.

  • VLAN 100 assignment (ports p1, p2, p3):
    • priority=4096,in_port=p1/p2/p3,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4196->vlan_vid,goto_table:1
    • An untagged packet arriving on port p1, p2 or p3 is tagged with VLAN ID 100, then sent on to table 1 for further processing.
  • VLAN 200 assignment (ports p4, p5):
    • priority=4096,in_port=p4/p5,vlan_tci=0x0000/0x1fff actions=push_vlan:0x8100,set_field:4296->vlan_vid,goto_table:1
    • Likewise, an untagged packet arriving on port p4 or p5 is tagged with VLAN ID 200, then sent on to table 1.
  • Default drop:
    • priority=0 actions=drop
    • Any packet entering the bridge that matches none of the VLAN assignment rules above (for example, one already carrying a different VLAN tag) is dropped.
Table 1 (eth_src)
  • Ingress L2 processing, MAC learning
 table=1, priority=20490,dl_type=0x9000 actions=drop
 table=1, priority=20480,dl_src=ff:ff:ff:ff:ff:ff actions=drop
 table=1, priority=20480,dl_src=0e:00:00:00:00:01 actions=drop
 table=1, priority=4096,dl_vlan=100 actions=CONTROLLER:96,goto_table:2
 table=1, priority=4096,dl_vlan=200 actions=CONTROLLER:96,goto_table:2
 table=1, priority=0 actions=goto_table:2

Table 1: early filtering and controller interaction

This table does some initial filtering and decides whether the OpenFlow controller needs to be involved.

  • Drop specific traffic:
    • priority=20490,dl_type=0x9000 actions=drop (drop Ethernet loopback/test frames)
    • priority=20480,dl_src=ff:ff:ff:ff:ff:ff actions=drop (drop packets with a broadcast source MAC)
    • priority=20480,dl_src=0e:00:00:00:00:01 actions=drop (drop packets from this specific invalid source MAC)
    These rules make sure certain unwanted or invalid traffic is dropped immediately.
  • Controller interaction for VLANs 100 and 200:
    • priority=4096,dl_vlan=100 actions=CONTROLLER:96,goto_table:2
    • priority=4096,dl_vlan=200 actions=CONTROLLER:96,goto_table:2
    • If a packet carries VLAN ID 100 or 200, a copy (up to 96 bytes) is sent to the OpenFlow controller, and the packet then proceeds to table 2. This is how the controller sees traffic for MAC learning and other control-plane functions.
  • Default jump to table 2:
    • priority=0 actions=goto_table:2
    • Anything else reaching table 1 that matched no higher-priority drop or controller rule proceeds to table 2.
Table 2 (eth_dst)
  • Egress L2 processing
table=2, priority=0 actions=goto_table:3

Table 2: forwarding by destination MAC; on a miss, jump to table 3 for flooding

At this point the table contains only its table-miss rule, since no MACs have been learned yet; learned-MAC entries will be installed here later.

  • Jump to table 3:
    • priority=0 actions=goto_table:3
    • For now, every packet entering table 2 proceeds to table 3.
Table 3 (flood)
  • Flooding
 table=3, priority=8240,dl_dst=01:00:0c:cc:cc:cc actions=drop
 table=3, priority=8240,dl_dst=01:00:0c:cc:cc:cd actions=drop
 table=3, priority=8236,dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0 actions=drop

These are safety rules that the Faucet controller installs by default to protect network stability. They block forwarding of:

  • CDP (01:00:0c:cc:cc:cc): drop
  • VTP (01:00:0c:cc:cc:cd): drop
  • STP/LLDP/LACP and other IEEE control protocols (01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0): drop

These rules do not affect normal host traffic (IP forwarding or L2 MAC-learned forwarding); they only stop control frames that should not be relayed or forwarded.

 table=3, priority=8240,dl_vlan=100,dl_dst=ff:ff:ff:ff:ff:ff actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8240,dl_vlan=200,dl_dst=ff:ff:ff:ff:ff:ff actions=pop_vlan,output:p4,output:p5
 table=3, priority=8192,dl_vlan=100 actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8192,dl_vlan=200 actions=pop_vlan,output:p4,output:p5

These OpenFlow rules again come from Faucet's Table 3 (L2 forwarding/flood); they are the key entries implementing broadcast and flooding behavior per VLAN.

✅ At a glance

  • VLAN 100 + broadcast MAC → pop the VLAN tag, output to p1/p2/p3: broadcast frames in VLAN 100 (e.g. ARP) flood to those three ports.
  • VLAN 200 + broadcast MAC → pop the VLAN tag, output to p4/p5: likewise for VLAN 200 broadcasts.
  • VLAN 100 + any remaining destination MAC → pop the VLAN tag, output to p1/p2/p3: unknown-destination flooding within VLAN 100.
  • VLAN 200 + any remaining destination MAC → pop the VLAN tag, output to p4/p5: likewise for VLAN 200.

 table=3, priority=8216,dl_vlan=100,dl_dst=01:80:c2:00:00:00/ff:ff:ff:00:00:00 actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8216,dl_vlan=100,dl_dst=01:00:5e:00:00:00/ff:ff:ff:00:00:00 actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8216,dl_vlan=200,dl_dst=01:80:c2:00:00:00/ff:ff:ff:00:00:00 actions=pop_vlan,output:p4,output:p5
 table=3, priority=8216,dl_vlan=200,dl_dst=01:00:5e:00:00:00/ff:ff:ff:00:00:00 actions=pop_vlan,output:p4,output:p5
 table=3, priority=8208,dl_vlan=100,dl_dst=33:33:00:00:00:00/ff:ff:00:00:00:00 actions=pop_vlan,output:p1,output:p2,output:p3
 table=3, priority=8208,dl_vlan=200,dl_dst=33:33:00:00:00:00/ff:ff:00:00:00:00 actions=pop_vlan,output:p4,output:p5
 table=3, priority=0 actions=drop

This part of the dump shows Faucet's default handling of multicast traffic per VLAN: it matches several multicast MAC address ranges, strips the VLAN tag (pop_vlan) and floods them (output:...), and ends with the default drop rule.

4.7 Simulated tests

4.7.1 Tracing 

root@server1:~/ovs/sandbox# ovs-appctl ofproto/trace br0 in_port=p1
Flow: in_port=1,vlan_tci=0x0000,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,dl_type=0x0000

bridge("br0")
-------------
 0. in_port=1,vlan_tci=0x0000/0x1fff, priority 4096, cookie 0x5adc15c0
    push_vlan:0x8100
    set_field:4196->vlan_vid
    goto_table:1
 1. dl_vlan=100, priority 4096, cookie 0x5adc15c0
    CONTROLLER:96
    goto_table:2
 2. priority 0, cookie 0x5adc15c0
    goto_table:3
 3. dl_vlan=100, priority 8192, cookie 0x5adc15c0
    pop_vlan
    output:1
     >> skipping output to input port
    output:2
    output:3

Final flow: unchanged
Megaflow: recirc_id=0,eth,in_port=1,dl_src=00:00:00:00:00:00,dl_dst=00:00:00:00:00:00,dl_type=0x0000
Datapath actions: push_vlan(vid=100,pcp=0),userspace(pid=0,controller(reason=1,dont_send=1,continuation=0,recirc_id=1,rule_cookie=0x5adc15c0,controller_id=0,max_len=96)),pop_vlan,2,3

table 0: tag with VLAN 100, go to table 1.
table 1: copy to the controller and go to table 2; CONTROLLER:96 means the Faucet controller would see the first 96 bytes of the packet, typically for MAC learning or ACL logging.

table 2: no forwarding entry for the destination MAC yet, so go to table 3 to flood.

table 3: pop the VLAN tag and output on the VLAN's ports (minus the ingress port).

In the end, the packet content is unchanged.


root@server1:~/faucet/inst# tail -f faucet.log

root@server1:~/ovs/sandbox# tail -f ovs-vswitchd.log 

Neither log shows any flows being installed.

(Conclusion: no flows were pushed. Without -generate, ofproto/trace only simulates the packet and has no side effects; note the dont_send=1 in the controller() action above, meaning the PACKET_IN was never actually delivered to Faucet.)

4.7.2 Triggering MAC Learning

Save the current flow table: 

root@server1:~/ovs/sandbox# save-flows br0 > flows1

Simulate a packet entering on port p1 with dl_src=00:11:11:00:00:00 and dl_dst=00:22:22:00:00:00 (this time with -generate, so side effects actually happen):

Keep the live log tails running:

root@server1:~/faucet/inst# tail -f faucet.log

root@server1:~/ovs/sandbox# tail -f ovs-vswitchd.log 

root@server1:~/ovs# ovs-appctl ofproto/trace br0 in_port=p1,dl_src=00:11:11:00:00:00,dl_dst=00:22:22:00:00:00 -generate
Flow: in_port=1,vlan_tci=0x0000,dl_src=00:11:11:00:00:00,dl_dst=00:22:22:00:00:00,dl_type=0x0000

bridge("br0")
-------------
 0. in_port=1,vlan_tci=0x0000/0x1fff, priority 4096, cookie 0x5adc15c0
    push_vlan:0x8100
    set_field:4196->vlan_vid
    goto_table:1
 1. dl_vlan=100, priority 4096, cookie 0x5adc15c0
    CONTROLLER:96
    goto_table:2
 2. priority 0, cookie 0x5adc15c0
    goto_table:3
 3. dl_vlan=100, priority 8192, cookie 0x5adc15c0
    pop_vlan
    output:1
     >> skipping output to input port
    output:2
    output:3

Final flow: unchanged
Megaflow: recirc_id=0,eth,in_port=1,dl_src=00:11:11:00:00:00,dl_dst=00:22:22:00:00:00,dl_type=0x0000
Datapath actions: push_vlan(vid=100,pcp=0),userspace(pid=0,controller(reason=1,dont_send=0,continuation=0,recirc_id=2,rule_cookie=0x5adc15c0,controller_id=0,max_len=96)),pop_vlan,2,3
root@server1:~/ovs# 

root@server1:~/ovs/sandbox# tail -f ovs-vswitchd.log 

2025-06-07T07:44:13.189Z|00811|vconn|DBG|tcp:127.0.0.1:6653: sent (Success): OFPT_PACKET_IN (OF1.3) (xid=0x0): table_id=1 cookie=0x5adc15c0 total_len=18 in_port=1 (via action) data_len=18 (unbuffered)
dl_vlan=100,dl_vlan_pcp=0,vlan_tci1=0x0000,dl_src=00:11:11:00:00:00,dl_dst=00:22:22:00:00:00,dl_type=0x05ff
2025-06-07T07:44:13.199Z|00812|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289040): ADD table:2 priority=8192,dl_vlan=100,dl_dst=00:11:11:00:00:00 cookie:0x5adc15c0 idle:11855 out_port:0 actions=pop_vlan,output:1
2025-06-07T07:44:13.200Z|00813|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289041): ADD table:1 priority=8191,in_port=1,dl_vlan=100,dl_src=00:11:11:00:00:00 cookie:0x5adc15c0 hard:7855 out_port:0 actions=goto_table:2

✅ First entry (PACKET_IN)
2025-06-07T07:44:13.189Z|00811|vconn|DBG|tcp:127.0.0.1:6653: sent (Success): OFPT_PACKET_IN (OF1.3) (xid=0x0): table_id=1 cookie=0x5adc15c0 total_len=18 in_port=1 (via action) data_len=18 (unbuffered)
dl_vlan=100,dl_vlan_pcp=0,vlan_tci1=0x0000,dl_src=00:11:11:00:00:00,dl_dst=00:22:22:00:00:00,dl_type=0x05ff

Meaning:

  • OVS sent a PACKET_IN message to Faucet for the controller to process.

  • The message was triggered in table 1; the ingress port is port 1 and the packet carries VLAN ID 100.

  • The source MAC is 00:11:11:00:00:00, the destination MAC is 00:22:22:00:00:00.

  • dl_type=0x05ff is not a real EtherType: values below 0x0600 are 802.3 length fields, and OVS uses 0x05ff to represent a frame that has no Ethernet II type (here, the dummy frame produced by -generate).

  • The data length is only 18 bytes, i.e. a minimal synthetic test frame.

✅ Second entry (FLOW_MOD installing the destination rule)
2025-06-07T07:44:13.199Z|00812|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289040): ADD table:2 priority=8192,dl_vlan=100,dl_dst=00:11:11:00:00:00 cookie:0x5adc15c0 idle:11855 out_port:0 actions=pop_vlan,output:1

Meaning:

  • Faucet pushes a new flow into table 2, the eth_dst table (the L2 destination-forwarding table in this pipeline; it is not an IPv4 FIB in this minimal L2 configuration).

  • Match: destination MAC 00:11:11:00:00:00 in VLAN 100.

  • Actions: pop_vlan (remove the VLAN tag) and output on port 1.

  • In other words, Faucet has learned where this MAC lives and installed the reverse forwarding entry, with an idle timeout of 11855 seconds (an equivalent hand-written rule is sketched below).
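
For illustration only (Faucet installs this rule itself; you would never add it by hand in this setup), the same entry expressed as an ovs-ofctl command would look like:

ovs-ofctl -O OpenFlow13 add-flow br0 'table=2,priority=8192,dl_vlan=100,dl_dst=00:11:11:00:00:00,idle_timeout=11855,actions=pop_vlan,output:1'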

✅ Third entry (FLOW_MOD recording the learned source MAC)
2025-06-07T07:44:13.200Z|00813|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289041): ADD table:1 priority=8191,in_port=1,dl_vlan=100,dl_src=00:11:11:00:00:00 cookie:0x5adc15c0 hard:7855 out_port:0 actions=goto_table:2

Meaning:

  • Faucet adds a rule to table 1, the eth_src (MAC-learning) table.

  • Match: ingress port 1, VLAN 100, source MAC 00:11:11:00:00:00.

  • Action: continue to table 2 for the destination lookup.

  • The hard timeout is 7855 seconds, after which the learned entry expires automatically.

  • This shows Faucet doing dynamic MAC learning and keeping the resulting flow entries for a bounded lifetime.

🔁 Summary of the sequence (the resulting flows can be verified as shown below):

  1. The switch receives a VLAN 100 packet on port 1 with an unknown destination MAC.

  2. No table entry matches it, so a PACKET_IN is sent to Faucet.

  3. Faucet learns the source MAC and installs rules for it in both directions (table 1 for the source, table 2 for the destination).

  4. Subsequent similar traffic no longer reaches the controller; the switch forwards it locally using the installed flows.
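
To verify the two freshly installed flows without the diff-flows helper, you can dump just the affected tables (the bridge name and table numbers come from the trace above):

ovs-ofctl -O OpenFlow13 dump-flows br0 table=1
ovs-ofctl -O OpenFlow13 dump-flows br0 table=2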

root@server1:~/faucet/inst# tail -f faucet.log

Jun 07 07:44:13 faucet.valve INFO     DPID 1 (0x1) switch-1 L2 learned on Port 1 00:11:11:00:00:00 (L2 type 0x0000, L2 dst 00:22:22:00:00:00, L3 src None, L3 dst None) Port 1 VLAN 100 (1 hosts total)

This log line confirms that the Faucet controller has learned the host's MAC address (note the "1 hosts total" counter).

root@server1:~/ovs# diff-flows flows1 br0
+table=1 priority=8191,in_port=1,dl_vlan=100,dl_src=00:11:11:00:00:00 hard_timeout=7855 actions=goto_table:2
+table=2 priority=8192,dl_vlan=100,dl_dst=00:11:11:00:00:00 idle_timeout=11855 actions=pop_vlan,output:1
root@server1:~/ovs# 

The entries prefixed with + are flows that exist on br0 but not in the saved flows1 snapshot, i.e. the two newly installed rules.

To demonstrate the usefulness of the learned MAC, try tracing (with side effects) a packet arriving on p2 (or p3) and destined to the address learned on p1, like this:

root@server1:~/ovs/sandbox# ovs-appctl ofproto/trace br0 in_port=p2,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00 -generate
Flow: in_port=2,vlan_tci=0x0000,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x0000

bridge("br0")
-------------
 0. in_port=2,vlan_tci=0x0000/0x1fff, priority 4096, cookie 0x5adc15c0
    push_vlan:0x8100
    set_field:4196->vlan_vid
    goto_table:1
 1. dl_vlan=100, priority 4096, cookie 0x5adc15c0
    CONTROLLER:96
    goto_table:2
 2. dl_vlan=100,dl_dst=00:11:11:00:00:00, priority 8192, cookie 0x5adc15c0
    pop_vlan
    output:1

Final flow: unchanged
Megaflow: recirc_id=0,eth,in_port=2,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x0000
Datapath actions: push_vlan(vid=100,pcp=0),userspace(pid=0,controller(reason=1,dont_send=0,continuation=0,recirc_id=3,rule_cookie=0x5adc15c0,controller_id=0,max_len=96)),pop_vlan,1
root@server1:~/ovs/sandbox# 

The first time you run this command, you will notice that it sends the packet to the controller, so that Faucet can learn p2's source address 00:22:22:00:00:00.

Check ovs-vswitchd.log:

2025-06-07T08:08:07.400Z|01392|vconn|DBG|tcp:127.0.0.1:6653: sent (Success): OFPT_PACKET_IN (OF1.3) (xid=0x0): table_id=1 cookie=0x5adc15c0 total_len=18 in_port=2 (via action) data_len=18 (unbuffered)
dl_vlan=100,dl_vlan_pcp=0,vlan_tci1=0x0000,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x05ff
2025-06-07T08:08:07.404Z|01393|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289042): ADD table:2 priority=8192,dl_vlan=100,dl_dst=00:22:22:00:00:00 cookie:0x5adc15c0 idle:11850 out_port:0 actions=pop_vlan,output:2
2025-06-07T08:08:07.405Z|01394|vconn|DBG|tcp:127.0.0.1:6653: received: OFPT_FLOW_MOD (OF1.3) (xid=0x42289043): ADD table:1 priority=8191,in_port=2,dl_vlan=100,dl_src=00:22:22:00:00:00 cookie:0x5adc15c0 hard:7850 out_port:0 actions=goto_table:2

Check inst/faucet.log; you can see that p2's MAC has been learned too:


Jun 07 08:08:07 faucet.valve INFO     DPID 1 (0x1) switch-1 L2 learned on Port 2 00:22:22:00:00:00 (L2 type 0x0000, L2 dst 00:11:11:00:00:00, L3 src None, L3 dst None) Port 2 VLAN 100 (2 hosts total)

Similarly, diff-flows now reports both learned MACs:

root@server1:~/ovs# diff-flows flows1 br0
+table=1 priority=8191,in_port=1,dl_vlan=100,dl_src=00:11:11:00:00:00 hard_timeout=7855 actions=goto_table:2
+table=1 priority=8191,in_port=2,dl_vlan=100,dl_src=00:22:22:00:00:00 hard_timeout=7850 actions=goto_table:2
+table=2 priority=8192,dl_vlan=100,dl_dst=00:11:11:00:00:00 idle_timeout=11855 actions=pop_vlan,output:1
+table=2 priority=8192,dl_vlan=100,dl_dst=00:22:22:00:00:00 idle_timeout=11850 actions=pop_vlan,output:2
root@server1:~/ovs# 

Then, if you re-run either of the ofproto/trace commands (with or without -generate), you can see that the packets go back and forth without any further MAC learning, e.g.:

root@server1:~/ovs# ovs-appctl ofproto/trace br0 in_port=p2,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00 -generate
Flow: in_port=2,vlan_tci=0x0000,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x0000

bridge("br0")
-------------
 0. in_port=2,vlan_tci=0x0000/0x1fff, priority 4096, cookie 0x5adc15c0
    push_vlan:0x8100
    set_field:4196->vlan_vid
    goto_table:1
 1. in_port=2,dl_vlan=100,dl_src=00:22:22:00:00:00, priority 8191, cookie 0x5adc15c0
    goto_table:2
 2. dl_vlan=100,dl_dst=00:11:11:00:00:00, priority 8192, cookie 0x5adc15c0
    pop_vlan
    output:1

Final flow: unchanged
Megaflow: recirc_id=0,eth,in_port=2,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x0000
Datapath actions: 1
root@server1:~/ovs# 

4.8 Performance

slow path

  • Handled by the userspace daemon ovs-vswitchd.
  • ovs-vswitchd understands and executes the OpenFlow protocol.

fast path (i.e. the datapath)

  • Option 1: the Linux kernel module.
  • Option 2: DPDK (userspace), bypassing the kernel entirely.
  • In essence the fast path is a multi-level cache, and caching is the key to performance.

Packet processing order (see the dump-flows sketch after this list):

  • → the packet first reaches the datapath
    • → first-level cache: the microflow cache
    • → second-level cache: the megaflow cache
  • → on a cache miss, up to the slow path → flow translation produces ODP actions → back down to the datapath → a megaflow entry is installed where possible

The megaflow cache is the cache that matters most when tuning and debugging performance.
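
To see what the datapath cache actually holds, you can dump the installed megaflow entries; ovs-appctl dpctl/dump-flows is the standard way to do this from inside the sandbox (against a real kernel datapath, ovs-dpctl dump-flows behaves similarly):

ovs-appctl dpctl/dump-flows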

Looking again at the earlier trace output:

root@server1:~/ovs# ovs-appctl ofproto/trace br0 in_port=p2,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00 -generate
Flow: in_port=2,vlan_tci=0x0000,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x0000

bridge("br0")
-------------
 0. in_port=2,vlan_tci=0x0000/0x1fff, priority 4096, cookie 0x5adc15c0
    push_vlan:0x8100
    set_field:4196->vlan_vid
    goto_table:1
 1. in_port=2,dl_vlan=100,dl_src=00:22:22:00:00:00, priority 8191, cookie 0x5adc15c0
    goto_table:2
 2. dl_vlan=100,dl_dst=00:11:11:00:00:00, priority 8192, cookie 0x5adc15c0
    pop_vlan
    output:1

Final flow: unchanged
Megaflow: recirc_id=0,eth,in_port=2,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x0000
Datapath actions: 1
root@server1:~/ovs# 

The last two lines are the interesting ones:

Megaflow: recirc_id=0,eth,in_port=2,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x0000
Datapath actions: 1

Note that the datapath actions are simply "output to port 1": the OpenFlow pipeline pushed a VLAN tag and then popped it again, so the pair cancels out and vanishes from the cached translation. The Megaflow line lists the fields the cached entry matches on, which leads to the general rules:

  • The more specific a megaflow is (the more fields it matches), the fewer packets it can absorb, so the average per-packet cost rises.

  • The broader the match (the fewer fields), the higher the cache hit rate and the better the overall performance.

  • When designing OVS flow tables and rules, keep the match granularity under control so that a single megaflow can cover many individual flows (a hypothetical contrast is sketched below).
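
As a purely hypothetical illustration (these entries are invented for contrast, written in the same notation as the Megaflow line above):

Megaflow A (narrow): recirc_id=0,eth,in_port=2,dl_src=00:22:22:00:00:00,dl_dst=00:11:11:00:00:00,dl_type=0x0000
  Only packets from this exact source to this exact destination hit the entry.

Megaflow B (broad): recirc_id=0,eth,in_port=2,dl_dst=00:11:11:00:00:00
  Any packet entering on p2 toward this destination hits it, so one cached translation covers many conversations.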