[SFC-160] SFC VPP Renderer can't work with Honeycomb and VPP Created: 06/Sep/16  Updated: 19/Oct/17  Resolved: 08/Sep/16

Status: Resolved
Project: sfc
Component/s: General
Affects Version/s: unspecified
Fix Version/s: None

Type: Bug
Reporter: Yi Yang Assignee: Yi Yang
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Attachments: GIF File ScreenHunter_03 Sep. 07 15.41.gif    
External issue ID: 6640
Priority: Highest

 Description   

Honeycomb and VPP introduced changes to support a plugin infrastructure, so the ODL SFC VPP renderer has to adapt to these changes in order to work correctly.

Honeycomb and VPP were not ready for integration testing when the SFC VPP renderer was merged.

This bug is a blocking issue; the renderer cannot work until it is fixed.



 Comments   
Comment by A H [ 06/Sep/16 ]

Once a patch to stable/boron is available, please help us analyze the footprint of the patch. To better assess the impact of this bug and its fix, could someone from your team please help us identify the following:
Severity: Could you elaborate on the severity of this bug? Is this a BLOCKER such that we cannot release Boron without it? Is there a workaround such that we can write a release note and fix in future Boron SR1?
Testing: Could you also elaborate on the testing of this patch? How extensively has this patch been tested? Is it covered by any unit tests or system tests?
Impact: Does this fix impact any dependent projects?

Comment by Yi Yang [ 07/Sep/16 ]

https://git.opendaylight.org/gerrit/45283 will fix this issue.

Comment by Yi Yang [ 07/Sep/16 ]

patch size:

java/org/opendaylight/sfc/sfc_vpp_renderer/renderer/VppNodeManager.java | 9
java/org/opendaylight/sfc/sfc_vpp_renderer/renderer/VppRspProcessor.java | 157 +++++-----
yang/v3po.yang | 117 +++++++
yang/vpp-nsh.yang | 48 +--
4 files changed, 236 insertions(+), 95 deletions(-)

Comment by Brady Johnson [ 07/Sep/16 ]

Fixed with these patches:

master
https://git.opendaylight.org/gerrit/45284

stable/boron
https://git.opendaylight.org/gerrit/45283

Comment by Yi Yang [ 07/Sep/16 ]

I think it is difficult for Jenkins to test the SFC VPP renderer, since it needs Honeycomb and VPP. GBP (groupbasedpolicy) also integrates Honeycomb and VPP, so maybe Vladimir or Keith can help with this.

My integration test:

1. Start 6 VMs: classifier1, SFF1(VPP1), SF1, SFF2(VPP2), SF2, classifier2.
classifier1 and classifier2 run my ovs-dpdk (https://github.com/yyang13/ovs_nsh_patches); they act as classifiers that encapsulate and decapsulate VxLAN-gpe+NSH. Here is the br-int info:

f13fc1c6-0ffd-417f-855b-20ce88bdcb4e
    Bridge br-int
        Port "vxlangpe0"
            Interface "vxlangpe0"
                type: vxlan
                options: {dst_port="4790", exts=gpe, key=flow, remote_ip=flow}
        Port br-int
            Interface br-int
                type: internal
        Port "vxlan0"
            Interface "vxlan0"
                type: vxlan
                options: {key=flow, remote_ip=flow}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port veth-br
            Interface veth-br

2. SFF1(VPP1) and SFF2(VPP2) run vpp and honeycomb; nsh_sfc must be installed as a plugin:

sudo start vpp
sudo sh ./nsh/minimal-distribution/target/minimal-distribution-1.16.12-SNAPSHOT-hc/minimal-distribution-1.16.12-SNAPSHOT/honeycomb

3. Run an NSH-aware SF on SF1 and SF2:
sudo ifconfig eth2 <SFIP> netmask 255.255.255.0 up
sudo python ./vxlan_tool.py -i eth2 --do=forward -v on
vxlan_tool.py comes from the sfc project: https://git.opendaylight.org/gerrit/gitweb?p=sfc.git;a=blob;f=sfc-test/nsh-tools/vxlan_tool.py;h=d7b75316f0e29c1dc09ec63b17a054c2fcdf20af;hb=HEAD

4. Run ODL SFC on the host the VMs are running on and install the features below:
feature:install odl-sfc-vpp-renderer
feature:install odl-sfc-ui

5. Add SFs, SFFs, SFC, SFP and RSP

PUT http://10.240.224.185:8181/restconf/config/service-function:service-functions

{
    "service-functions": {
        "service-function": [
            {
                "name": "firewall1-vpp1",
                "type": "service-function-type:firewall",
                "sf-data-plane-locator": [
                    {
                        "name": "eth0-11",
                        "ip": "192.168.20.10",
                        "port": 4790,
                        "service-function-forwarder": "SFF1-VPP1",
                        "transport": "service-locator:vxlan-gpe"
                    }
                ],
                "nsh-aware": true,
                "ip-mgmt-address": "192.168.20.10"
            },
            {
                "name": "dpi2-vpp2",
                "type": "service-function-type:dpi",
                "sf-data-plane-locator": [
                    {
                        "name": "eth0-12",
                        "ip": "192.168.20.13",
                        "port": 4790,
                        "service-function-forwarder": "SFF2-VPP2",
                        "transport": "service-locator:vxlan-gpe"
                    }
                ],
                "nsh-aware": true,
                "ip-mgmt-address": "192.168.20.13"
            }
        ]
    }
}

PUT http://10.240.224.185:8181/restconf/config/service-function-forwarder:service-function-forwarders

{
    "service-function-forwarders": {
        "service-function-forwarder": [
            {
                "name": "SFF1-VPP1",
                "ip-mgmt-address": "192.168.10.11",
                "service-function-dictionary": [
                    {
                        "name": "firewall1-vpp1",
                        "sff-sf-data-plane-locator": {
                            "sf-dpl-name": "eth0-11",
                            "sff-dpl-name": "vxlangpe-11"
                        }
                    }
                ],
                "service-function-forwarder-vpp:sff-netconf-node-type": "netconf-node-type-honeycomb",
                "sff-data-plane-locator": [
                    {
                        "name": "vxlangpe-11",
                        "data-plane-locator": {
                            "transport": "service-locator:vxlan-gpe",
                            "port": 4790,
                            "ip": "192.168.20.11"
                        }
                    }
                ]
            },
            {
                "name": "SFF2-VPP2",
                "ip-mgmt-address": "192.168.10.12",
                "service-function-dictionary": [
                    {
                        "name": "dpi2-vpp2",
                        "sff-sf-data-plane-locator": {
                            "sf-dpl-name": "eth0-12",
                            "sff-dpl-name": "vxlangpe-12"
                        }
                    }
                ],
                "service-function-forwarder-vpp:sff-netconf-node-type": "netconf-node-type-honeycomb",
                "sff-data-plane-locator": [
                    {
                        "name": "vxlangpe-12",
                        "data-plane-locator": {
                            "transport": "service-locator:vxlan-gpe",
                            "port": 4790,
                            "ip": "192.168.20.12"
                        }
                    }
                ]
            }
        ]
    }
}

PUT http://10.240.224.185:8181/restconf/config/service-function-chain:service-function-chains/

{
    "service-function-chains": {
        "service-function-chain": [
            {
                "name": "SFCVPP",
                "symmetric": "true",
                "sfc-service-function": [
                    {
                        "name": "firewall-abstract1",
                        "type": "service-function-type:firewall"
                    },
                    {
                        "name": "dpi-abstract1",
                        "type": "service-function-type:dpi"
                    }
                ]
            }
        ]
    }
}

PUT http://10.240.224.185:8181/restconf/config/service-function-path:service-function-paths/

{
    "service-function-paths": {
        "service-function-path": [
            {
                "name": "SFCVPP-Path",
                "service-chain-name": "SFCVPP",
                "starting-index": 255,
                "symmetric": "true"
            }
        ]
    }
}

PUT http://10.240.224.185:8181/restconf/operations/rendered-service-path:create-rendered-path/

{
    "input": {
        "name": "SFCVPP-RSP1",
        "parent-service-function-path": "SFCVPP-Path",
        "symmetric": "true"
    }
}
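The request bodies above can also be built and sent programmatically. Below is a minimal Python sketch for the create-rendered-path call; it is illustrative only: the host/port come from the comment above, while the admin/admin credentials and the use of POST for the RPC are assumptions about a typical ODL setup, not something stated in this ticket.

```python
import base64
import json
import urllib.request

# The create-rendered-path body from the step above.
body = {
    "input": {
        "name": "SFCVPP-RSP1",
        "parent-service-function-path": "SFCVPP-Path",
        "symmetric": "true",
    }
}
payload = json.dumps(body).encode()

def send_rpc(url, data, user="admin", password="admin"):
    """Send a RESTCONF RPC with HTTP basic auth (credentials assumed)."""
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        url,
        data=data,
        method="POST",  # RESTCONF operations are normally invoked with POST
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic " + auth},
    )
    return urllib.request.urlopen(req)

# Example call (needs a running controller, so left commented out):
# send_rpc("http://10.240.224.185:8181/restconf/operations/"
#          "rendered-service-path:create-rendered-path/", payload)
```

The same helper works for the PUT requests above by changing the method and URL.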

6. At this point, the SFC renderer has successfully created vxlan-gpe ports, nsh entries and nsh maps on SFF1(VPP1) and SFF2(VPP2).

vagrant@vpp1:~/honeycomb$ sudo vppctl show vxlan-gpe
[0] local: 192.168.20.11 remote: 192.168.20.10 vxlan VNI 0 next-protocol nsh fibs: (encap 0, decap 0)
[1] local: 192.168.20.11 remote: 192.168.20.12 vxlan VNI 0 next-protocol nsh fibs: (encap 0, decap 0)
[2] local: 192.168.20.11 remote: 192.168.20.9 vxlan VNI 0 next-protocol nsh fibs: (encap 0, decap 0)
vl_client_get_first_plugin_msg_id:266: plugin 'export_eb694f98' not registered
vagrant@vpp1:~/honeycomb$ sudo vppctl show nsh entry
nsh ver 0 len 6 (24 bytes) md_type 1 next_protocol 3
service path 4 service index 255
c1 0 c2 0 c3 0 c4 0
nsh ver 0 len 6 (24 bytes) md_type 1 next_protocol 3
service path 4 service index 254
c1 0 c2 0 c3 0 c4 0
nsh ver 0 len 6 (24 bytes) md_type 1 next_protocol 3
service path 8388612 service index 254
c1 0 c2 0 c3 0 c4 0
nsh ver 0 len 6 (24 bytes) md_type 1 next_protocol 3
service path 8388612 service index 253
c1 0 c2 0 c3 0 c4 0
vl_client_get_first_plugin_msg_id:266: plugin 'export_eb694f98' not registered
vagrant@vpp1:~/honeycomb$ sudo vppctl show nsh map
nsh entry nsp: 4 nsi: 255 maps to nsp: 4 nsi: 255 encapped by VXLAN GPE intf: 3
nsh entry nsp: 4 nsi: 254 maps to nsp: 4 nsi: 254 encapped by VXLAN GPE intf: 4
nsh entry nsp: 8388612 nsi: 254 maps to nsp: 8388612 nsi: 254 encapped by VXLAN GPE intf: 3
nsh entry nsp: 8388612 nsi: 253 maps to nsp: 8388612 nsi: 253 encapped by VXLAN GPE intf: 5
vl_client_get_first_plugin_msg_id:266: plugin 'export_eb694f98' not registered
vagrant@vpp1:~/honeycomb$

vagrant@vpp2:~$ sudo vppctl show vxlan-gpe
[0] local: 192.168.20.12 remote: 192.168.20.13 vxlan VNI 0 next-protocol nsh fibs: (encap 0, decap 0)
[1] local: 192.168.20.12 remote: 192.168.20.14 vxlan VNI 0 next-protocol nsh fibs: (encap 0, decap 0)
[2] local: 192.168.20.12 remote: 192.168.20.11 vxlan VNI 0 next-protocol nsh fibs: (encap 0, decap 0)
vl_client_get_first_plugin_msg_id:266: plugin 'export_eb694f98' not registered
vagrant@vpp2:~$ sudo vppctl show nsh entry
nsh ver 0 len 6 (24 bytes) md_type 1 next_protocol 3
service path 4 service index 254
c1 0 c2 0 c3 0 c4 0
nsh ver 0 len 6 (24 bytes) md_type 1 next_protocol 3
service path 4 service index 253
c1 0 c2 0 c3 0 c4 0
nsh ver 0 len 6 (24 bytes) md_type 1 next_protocol 3
service path 8388612 service index 255
c1 0 c2 0 c3 0 c4 0
nsh ver 0 len 6 (24 bytes) md_type 1 next_protocol 3
service path 8388612 service index 254
c1 0 c2 0 c3 0 c4 0
vl_client_get_first_plugin_msg_id:266: plugin 'export_eb694f98' not registered
vagrant@vpp2:~$ sudo vppctl show nsh map
nsh entry nsp: 4 nsi: 254 maps to nsp: 4 nsi: 254 encapped by VXLAN GPE intf: 3
nsh entry nsp: 4 nsi: 253 maps to nsp: 4 nsi: 253 encapped by VXLAN GPE intf: 4
nsh entry nsp: 8388612 nsi: 255 maps to nsp: 8388612 nsi: 255 encapped by VXLAN GPE intf: 3
nsh entry nsp: 8388612 nsi: 254 maps to nsp: 8388612 nsi: 254 encapped by VXLAN GPE intf: 5
vl_client_get_first_plugin_msg_id:266: plugin 'export_eb694f98' not registered
vagrant@vpp2:~$
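A side note on the path IDs in the output above: path 4 is the forward RSP, and 8388612 equals 0x800000 + 4, which suggests (my reading of the output, not confirmed against the SFC code) that the reverse path of a symmetric RSP carries the forward path ID with a high marker bit set. The arithmetic checks out:

```python
FORWARD_NSP = 4          # forward path seen in "show nsh entry"
REVERSE_FLAG = 0x800000  # assumed marker bit for the reverse path

# OR-ing the flag onto the forward path ID yields the second nsp value
# that appears in the vppctl output above.
reverse_nsp = REVERSE_FLAG | FORWARD_NSP
print(reverse_nsp)  # 8388612
```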

7. Now we can ping and wget the namespace on host2.

vagrant@host1:~$ sudo ip netns exec app ping 10.0.100.14
PING 10.0.100.14 (10.0.100.14) 56(84) bytes of data.
64 bytes from 10.0.100.14: icmp_seq=1 ttl=64 time=3.98 ms
64 bytes from 10.0.100.14: icmp_seq=2 ttl=64 time=4.63 ms
64 bytes from 10.0.100.14: icmp_seq=3 ttl=64 time=4.12 ms
64 bytes from 10.0.100.14: icmp_seq=4 ttl=64 time=4.38 ms
64 bytes from 10.0.100.14: icmp_seq=5 ttl=64 time=4.19 ms
^C
--- 10.0.100.14 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4017ms
rtt min/avg/max/mdev = 3.984/4.264/4.632/0.223 ms
vagrant@host1:~$

vagrant@host1:~$ sudo ip netns exec app wget http://10.0.100.14/
--2016-09-07 02:13:38--  http://10.0.100.14/
Connecting to 10.0.100.14:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1677 (1.6K) [text/html]
Saving to: ‘index.html’

100%[=====================================>] 1,677 --.-K/s in 0.02s

2016-09-07 02:13:38 (89.1 KB/s) - ‘index.html’ saved [1677/1677]

vagrant@host1:~$

Comment by Yi Yang [ 07/Sep/16 ]

Test network topology

Comment by Yi Yang [ 07/Sep/16 ]

Attachment ScreenHunter_03 Sep. 07 15.41.gif has been added with description: Test network topology

Comment by Brady Johnson [ 07/Sep/16 ]

(In reply to A H from comment #1)
> Once a patch to stable/boron available, please help us analyze the footprint
> of the patch. To better assess the impact of this bug and fix, could
> someone from your team please help us identify the following:
> Severity: Could you elaborate on the severity of this bug? Is this a
> BLOCKER such that we cannot release Boron without it? Is there a workaround
> such that we can write a release note and fix in future Boron SR1?
> Testing: Could you also elaborate on the testing of this patch? How
> extensively has this patch been tested? Is it covered by any unit tests or
> system tests?
> Impact: Does this fix impact any dependent projects?

Severity: Yes it’s a blocker. No known workaround. Cannot delay to SR1. Without this patch, the functionality will NOT work.

Testing: Yes we tested the patch and it was verified. Unfortunately, no unit tests/system tests yet. See comment 5 for details about the testing: https://bugs.opendaylight.org/show_bug.cgi?id=6640#c5

Impact: No impact to any dependent projects.

Comment by A H [ 08/Sep/16 ]

Has this bug been verified as fixed in the latest Boron RC 3.1 Build?

Comment by Yi Yang [ 08/Sep/16 ]

(In reply to A H from comment #8)
> Has this bug been verified as fixed in the latest Boron RC 3.1 Build?

Where can I get this RC3.1 Build?

Comment by Yi Yang [ 08/Sep/16 ]

(In reply to A H from comment #8)
> Has this bug been verified as fixed in the latest Boron RC 3.1 Build?

An, I think this one https://nexus.opendaylight.org/content/repositories/autorelease-1477/org/opendaylight/integration/distribution-karaf/0.5.0-Boron/distribution-karaf-0.5.0-Boron.tar.gz should be what I need to verify.

I verified it and it worked as expected, so I confirm this bug is fixed in the Boron RC 3.1 build.

Generated at Wed Feb 07 20:38:48 UTC 2024 using Jira 8.20.10#820010-sha1:ace47f9899e9ee25d7157d59aa17ab06aee30d3d.