[OPNFLWPLUG-411] Flows are not cleaned up when mininet disconnects Created: 23/Apr/15 Updated: 27/Sep/21 Resolved: 06/Jun/15 |
|
| Status: | Resolved |
| Project: | OpenFlowPlugin |
| Component/s: | General |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | ||
| Reporter: | Jan Medved | Assignee: | Kamal Rameshan |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Operating System: All |
| External issue ID: | 3048 |
| Comments |
| Comment by Jan Medved [ 23/Apr/15 ] |
|
Flows and nodes are not cleaned up from the operational data store when mininet disconnects from the controller. |
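One way to observe the symptom is to query the operational inventory over RESTCONF after stopping mininet. This is a hedged sketch: it assumes a Lithium-era controller on localhost with RESTCONF on port 8181 and the default admin/admin credentials, which may differ in your setup.

```shell
# After 'sudo mn -c' / stopping mininet, the operational inventory should
# no longer list the switches; on affected builds the nodes and their
# flows are still present in the response.
curl -u admin:admin \
  http://localhost:8181/restconf/operational/opendaylight-inventory:nodes
```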
| Comment by Kamal Rameshan [ 29/Apr/15 ] |
|
This might be related to: https://bugs.opendaylight.org/show_bug.cgi?id=3085 |
| Comment by Abhijit Kumbhare [ 04/May/15 ] |
|
This is likely fixed - correct, Kamal? |
| Comment by Kamal Rameshan [ 04/May/15 ] |
|
Too early to say, Abhijit. The patch I submitted might fix this, as I have encountered the same issue many times. There is one scenario, which Vaclav mentioned, that might require some reconsideration of the patch I submitted. Still in progress. |
| Comment by Abhijit Kumbhare [ 06/May/15 ] |
|
Jan, is this still happening, and is it still a blocker? Thanks. |
| Comment by Jamo Luhrsen [ 03/Jun/15 ] |
|
Hi Jan, we have a job that tries to scale using mininet; it is able to get to 300 switches before it runs into trouble at 400. Part of the test checks that all switches are gone after mininet is stopped, so at least with 300 we are seeing that they are removed. Just FYI. JamO |
| Comment by Jamo Luhrsen [ 03/Jun/15 ] |
|
And, as I looked closer, the reason the test did not scale to 400 switches was something else. Just FYI. |
| Comment by Kamal Rameshan [ 04/Jun/15 ] |
|
Hi Jamo, are you running your tests in a 3-node cluster? On a 3-node cluster, I have seen a single mininet instance not performing well with more than 250 nodes. I divided the nodes across 3 mininet VMs and things went considerably more smoothly. |
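The workaround above can be sketched with standard mininet commands. These are hypothetical invocations: the controller address (10.0.0.5:6633) and topology sizes are illustrative, and each command assumes mininet is installed and run as root.

```shell
# Single mininet VM driving all 300 switches (what the CI job does):
sudo mn --topo linear,300 --controller=remote,ip=10.0.0.5,port=6633

# Split across 3 mininet VMs, ~100 switches each; run one of these per VM:
sudo mn --topo linear,100 --controller=remote,ip=10.0.0.5,port=6633
```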
| Comment by Kamal Rameshan [ 05/Jun/15 ] |
|
Patch to be reviewed: https://git.opendaylight.org/gerrit/#/c/21915/ |
| Comment by Kamal Rameshan [ 05/Jun/15 ] |
|
The above patch might solve this issue. Please re-test. I have seen that duplicate NodeRemoved notifications cause the transaction chain to fail, which would cause other deserving node removals to fail as well. I tried with 300 nodes and all the nodes got cleaned up. |
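The failure mode described above can be illustrated with a small, self-contained sketch: if the first NodeRemoved notification submits a delete and a duplicate submits a second delete for the same node, the second transaction fails and poisons the chain, blocking removals for other nodes. One guard is to process only the first notification per node id. All names here (`NodeRemovalGuard`, `shouldProcess`, `onRemovalComplete`) are hypothetical and not part of the actual patch or the OpenDaylight API.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of deduplicating NodeRemoved notifications so that
// only one delete per node is ever submitted to the transaction chain.
public class NodeRemovalGuard {
    private final Set<String> inFlight = ConcurrentHashMap.newKeySet();

    // Returns true only for the first notification for a given node id;
    // duplicates return false and should be dropped by the caller.
    public boolean shouldProcess(String nodeId) {
        return inFlight.add(nodeId);
    }

    // Called once the removal transaction for this node has completed,
    // so a genuine later reconnect/disconnect can be handled again.
    public void onRemovalComplete(String nodeId) {
        inFlight.remove(nodeId);
    }

    public static void main(String[] args) {
        NodeRemovalGuard guard = new NodeRemovalGuard();
        System.out.println(guard.shouldProcess("openflow:1")); // true
        System.out.println(guard.shouldProcess("openflow:1")); // false: duplicate dropped
        guard.onRemovalComplete("openflow:1");
        System.out.println(guard.shouldProcess("openflow:1")); // true again
    }
}
```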
| Comment by Jan Medved [ 05/Jun/15 ] |
|
Can you please re-test with the Lithium plugin? |
| Comment by Jamo Luhrsen [ 06/Jun/15 ] |
|
Can we mark this fixed/resolved? The switch scalability test in CI is passing for stable/lithium; the same job is there for the lithium-redesign plugin, and that made it past 400 OK. Unfortunately, |
| Comment by Abhijit Kumbhare [ 06/Jun/15 ] |
|
Good to know. |