[NETVIRT-900] FIB entries not showing all IP's with dualstack VM IP's Created: 11/Sep/17  Updated: 03/May/18

Status: Verified
Project: netvirt
Component/s: General
Affects Version/s: Nitrogen
Fix Version/s: None

Type: Bug
Reporter: RajaRajan Manickam Assignee: Philippe Guibert
Resolution: Unresolved Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Attachments: File logs.tar.gz     Text File ticket_9138_traceopenstack_odl.txt     Text File ticket_9138_traceopenstack_odl_test2.txt     Text File ticket_9138_traceopenstack_odl_test3_binary_karaf7.txt    
Issue Links:
Relates
relates to NETVIRT-964 upon router-interface-delete IPv4 sub... Resolved
External issue ID: 9138
Priority: High

 Description   

Description:

FIB entries do not show all IPs for dual-stack VM IPs.
Steps:

1. Create a network, an IPv4 subnet and an IPv6 (SLAAC) subnet, boot VMs, and check that all IP addresses got assigned.
Status: PASS

2. Create a network, add a subnet to it, and check the FIB entries.

Status: FAIL. FIB entries are missing.
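
The two steps above can be sketched with the OpenStack CLI and the ODL karaf console. This is only an illustrative sketch: the network/subnet names, CIDRs, image and flavor are assumptions, not taken from the reporter's setup, and it assumes the netvirt fibmanager karaf shell is available.

```shell
# Step 1: network with an IPv4 subnet and an IPv6 SLAAC subnet, then boot VMs.
# (names "net1", "v4sub", "v6sub", "vm1" and the CIDRs are illustrative)
openstack network create net1
openstack subnet create v4sub --network net1 --subnet-range 10.0.0.0/24
openstack subnet create v6sub --network net1 --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    --subnet-range 2001:db8:0:1::/64
openstack server create vm1 --image cirros --flavor m1.tiny --network net1
# ... boot the remaining VMs the same way, then confirm both address
# families were assigned to each VM:
openstack server list --long

# Step 2: attach a subnet to a router, then inspect the FIB from the
# ODL karaf console (fib-show is the netvirt fibmanager command):
openstack router create r1
openstack router add subnet r1 v4sub
#   opendaylight-user@root> fib-show
```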



 Comments   
Comment by RajaRajan Manickam [ 11/Sep/17 ]

Attachment logs.tar.gz has been added with description: Logs attached

Comment by Vivekanandan Narasimhan [ 12/Sep/17 ]

This issue was reverified in the RC3 build and the problem did not appear.

Rajarajan, please attach your comments to the bug and then move the bug state appropriately.

Comment by RajaRajan Manickam [ 12/Sep/17 ]

(In reply to Vivekanandan Narasimhan from comment #1)
> This issue was reverified in the RC3 build and the problem did not appear.
>
> Rajarajan, please attach your comments to the bug and then move the bug state
> appropriately.

I have reduced the priority from blocker, since the issue is fixed in the RC3 build.

Will update the same once the issue is verified with dual-stack VMs.

Comment by Philippe Guibert [ 14/Sep/17 ]

Hi Rajarajan,

I understood that the problem seems to be fixed in RC3.
As the initial problem was seen with an older version (RC2?), do we conclude that this issue is on hold?

Comment by RajaRajan Manickam [ 18/Sep/17 ]

(In reply to Philippe Guibert from comment #3)
> Hi Rajarajan,
>
> I understood that the problem seems to be fixed in RC3.
> As the initial problem has been seen with older version ( RC2 ?), do we
> conclude that this issue is on hold ?

Hi Philippe,

This issue is reproducible on both RC3 and RC3.1.

This will be a blocker for further testing, since when I associate a subnet to the router I expect table 21 to be programmed. But only a few VM IPs get programmed, and many of them do not.

Because of this, the data path will not work. Shall I move it from critical to blocker?

Thanks,
RajaRajan
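
For context, table 21 referenced above is the L3 FIB table of the netvirt OVS pipeline. One way to check on a compute node whether a given VM IP was actually programmed is to dump that table on the integration bridge. The bridge name `br-int` is the usual default and the sample IP is illustrative; both are assumptions here.

```shell
# Dump the L3 FIB table (21) on the integration bridge and look for a VM IP.
# "br-int" and "10.0.0.5" are assumed names/addresses for illustration.
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int table=21 | grep '10.0.0.5'
```

If the grep returns nothing for an IP that Neutron reports as assigned, that VM's route was not programmed, which is the symptom described in this comment.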

Comment by Philippe Guibert [ 21/Sep/17 ]

Hi rajarajan,

It seems this test is not visible in CSIT, right?

Can I have more information on the setup you used:

  • how many VMs are booted (what is the minimum number of VMs needed to reproduce the issue)?
  • when you say "create network", you create a new network and a new subnet attached to the router, right?
Comment by RajaRajan Manickam [ 09/Oct/17 ]

Hi Philippe,

Yes. This test case is not part of CSIT yet.

  • how many VMs are booted (what is the minimum number of VMs needed to reproduce the issue)?

Answer: It can be reproduced easily with only 4 VMs.

  • when you say "create network", you create a new network and a new subnet attached to the router, right?

Answer: Yes. Create a new network and subnet and attach it to the router.

Please let me know if any more info is required.

Thanks,
RajaRajan

Comment by Vivekanandan Narasimhan [ 18/Oct/17 ]

The steps posted by Rajarajan are not very clear.

If the following steps work on top-of-trunk Nitrogen (not IPv6-review-based workspaces), then we can actually close the bug:

1. Create a Network - N1

2. Create an IPv4 subnet on that network - V4N1

3. Create an IPv6 subnet on that network - V6N1

4. Create a Router R1 and add only the IPv6 subnet to that router

5. Make sure the DHCP Agent running is capable of supporting both IPv4 and IPv6 (like upstream CSIT)

5. Then boot 4 VMs on the Network (not the subnet). All those VMs will come up as DualStack VMs.

6. Make sure all the 4 VMs got both the IPv4 and IPv6 addresses correctly.

7. Make sure FIB shows only the IPv6 address prefixes of V6N1 subnet.

8. Just make sure that IPv6 to IPv6 communication between all the 4 VMs is working.

9. Just make sure that IPv4 to IPv4 communication between all the 4 VMs is working.

10. Create another new network N2 and an IPv4 subnet V4N2 on that new network.

11. Boot 2 more VMs with network N2.

11. Add that subnet V4N2 to the same router R1 of Step 4.

10. Make sure FIB now additionally shows the two IPv4 VM prefixes of V4N2 subnet.

11. Make sure that both those VMs of step 11 are able to communicate with each other on IPv4 prefixes.

12. Make sure that both those VMs of N2 of Step 11 are not able to communicate over IPv4 to the VMs on Network N1.

13. Now delete subnet V6N1 from Router R1.

14. Make sure all IPv6 FIB Entries of V6N1 disappear.

15. Then delete subnet V4N2 from Router R1.

16. Make sure all the IPv4 FIB Entries of V4N2 disappear.

17. Delete the Router R1. Make sure VRF Table for R1 disappears.

<Test Ends>
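
The plan above can be driven with the OpenStack CLI roughly as follows. This is a sketch using the plan's own names (N1, V4N1, V6N1, R1, N2, V4N2); the CIDRs, image and flavor are illustrative assumptions, the step numbers in the comments follow the plan as written (including its duplicated numbers), and the FIB checks between commands are done in the ODL karaf console (e.g. `fib-show`).

```shell
# Steps 1-4: network N1 with IPv4 + IPv6 subnets; router R1 gets only V6N1.
openstack network create N1
openstack subnet create V4N1 --network N1 --subnet-range 10.1.0.0/24
openstack subnet create V6N1 --network N1 --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    --subnet-range 2001:db8:1::/64
openstack router create R1
openstack router add subnet R1 V6N1

# Step 5: boot 4 VMs on the network (not a specific subnet); they come up
# dual-stack. Step 6: verify each VM got both an IPv4 and an IPv6 address.
for i in 1 2 3 4; do
  openstack server create vm$i --image cirros --flavor m1.tiny --network N1
done
openstack server list --long

# Steps 10-11: second network N2 with IPv4 subnet V4N2, 2 more VMs,
# then V4N2 is added to the same router R1.
openstack network create N2
openstack subnet create V4N2 --network N2 --subnet-range 10.2.0.0/24
openstack server create vm5 --image cirros --flavor m1.tiny --network N2
openstack server create vm6 --image cirros --flavor m1.tiny --network N2
openstack router add subnet R1 V4N2

# Steps 13-17: detach the subnets and delete the router, checking after
# each step (via fib-show in karaf) that the matching FIB entries disappear.
openstack router remove subnet R1 V6N1
openstack router remove subnet R1 V4N2
openstack router delete R1
```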

Comment by Philippe Guibert [ 19/Oct/17 ]

Hi,
I did some basic testing with head branch of nitrogen.

  • 13aa5277638c - (HEAD, gerrit/stable/nitrogen) BUG 9221: Improve logical SFF handling (3 days ago) <Jaime Caamaño Ruiz>
  • 87fa9a02ee34 - BUG 9220: don't use tun_gpe_np as match field (3 days ago) <Jaime Caamaño Ruiz>

Please see log file https://jira.opendaylight.org/secure/attachment/14194/ticket_9138_traceopenstack_odl.txt.

According to the above plan, I could observe the following:

  • at step 7, even though the IPv4 subnet is not attached to the router, the IPv4 addresses are displayed in the FIB, along with IPv6.
  • conversely, at step 14, the IPv4 IPs of the first network N1 were also removed when the IPv6 subnet was detached.

Note also that my run differs from the plan in two points:

  • I created only 3 VMs for N1 and 1 VM for N2.
  • for N2, I created the VM after having attached the IPv4 subnet of N2 to router R1.
    Hopefully this does not impact the test result.
Comment by Philippe Guibert [ 19/Oct/17 ]

File https://jira.opendaylight.org/secure/attachment/14195/ticket_9138_traceopenstack_odl_test2.txt follows another series of steps.
I followed the series and did not detect any missing FIB entries, except at step 9, where almost all IPv6 entries were missing (indeed, only the gateway IPv6 was still present).

Comment by Philippe Guibert [ 19/Oct/17 ]

Adding to this, in file https://jira.opendaylight.org/secure/attachment/14196/ticket_9138_traceopenstack_odl_test3_binary_karaf7.txt, using the following build:
https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/karaf/0.7.0/karaf-0.7.0.tar.gz
date: Tue Sep 19 21:06:55

I could observe some missing entries.

I ran 2 VMs with an IPv4 subnet (VM1, VM2), then I ran 2 other VMs after IPv6 was added (VM3, VM4).
I could see the following differences:

At step 4, the original VMs take more or less time to acquire an IPv6 address (VM2 takes longer than VM1).
At step 7, there is a missing IPv4 and IPv6 entry (it is VM2's addressing that is missing).

As before, step 9 flushes almost all IPv6 entries.

Regards,

Comment by Philippe Guibert [ 19/Oct/17 ]

2 issues encountered:

  • issue 1: some IPv6 entries are not present.
    Seen only on karaf-0.7; this is the present issue.
    As a test has been done with the latest netvirt Nitrogen, I can say that the issue is not reproduced anymore.
  • issue 2: when the IPv4 subnet is dissociated, the IPv6 entries are removed too.
    This should be part of a separate ticket.
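
Besides the karaf console, the FIB state discussed throughout these comments can also be inspected over RESTCONF via the `odl-fib` model. This sketch assumes the default RESTCONF port (8181) and default `admin/admin` credentials, both of which may differ in a given deployment.

```shell
# Dump the configured FIB entries (one vrfTables entry per VPN/VRF);
# port 8181 and admin/admin credentials are assumed defaults.
curl -s -u admin:admin \
  http://localhost:8181/restconf/config/odl-fib:fibEntries | python -m json.tool
```
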
Comment by Philippe Guibert [ 25/Oct/17 ]

Issue is FIXED in the latest stable RC build.
Closing the issue.

Generated at Wed Feb 07 20:22:45 UTC 2024 using Jira 8.20.10#820010-sha1:ace47f9899e9ee25d7157d59aa17ab06aee30d3d.