[NETVIRT-589] stable/boron not usable in OPNFV test framework - DHCP times out Created: 03/Apr/17  Updated: 20/Apr/17  Resolved: 20/Apr/17

Status: Resolved
Project: netvirt
Component/s: General
Affects Version/s: Boron
Fix Version/s: None

Type: Bug
Reporter: Nikolas Hermanns Assignee: Vyshakh Krishnan
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Attachments: Zip Archive full-inventory.zip     Zip Archive odl-dhcp-issue.zip    
External issue ID: 8142
Priority: High

 Description   

Hey,

We are seeing major issues using stable/boron (nearly SR-3) in the OPNFV test pipeline. This is the test that is run:
2017-03-31 15:51:36,569 - openstack_utils - INFO - Creating neutron network sdnvpn-8-1...
2017-03-31 15:51:36,919 - openstack_utils - DEBUG - Network '130d6058-1321-4d4b-bc89-99c62e6dc971' created successfully
2017-03-31 15:51:36,919 - openstack_utils - DEBUG - Creating Subnet....
2017-03-31 15:51:37,166 - openstack_utils - DEBUG - Subnet '98a442fa-01d8-4241-a07d-af5dc22b7259' created successfully
2017-03-31 15:51:37,166 - openstack_utils - DEBUG - Creating Router...
2017-03-31 15:51:37,300 - openstack_utils - DEBUG - Router '80e53bf6-1426-46f8-9c2f-ee96d6efc644' created successfully
2017-03-31 15:51:37,301 - openstack_utils - DEBUG - Adding router to subnet...
2017-03-31 15:51:38,899 - openstack_utils - DEBUG - Interface added successfully.
2017-03-31 15:51:38,899 - openstack_utils - DEBUG - Adding gateway to router...
2017-03-31 15:51:40,276 - openstack_utils - DEBUG - Gateway added successfully.
2017-03-31 15:51:40,276 - sndvpn_test_utils - DEBUG - Creating network sdnvpn-8-2
2017-03-31 15:51:40,497 - sndvpn_test_utils - DEBUG - Creating subnet sdnvpn-8-2-subnet in network 773cd3bd-24a5-4886-9cb0-87fe7426c818 with cidr 10.10.20.0/24
2017-03-31 15:51:41,175 - openstack_utils - INFO - Creating security group 'sdnvpn-sg'...
2017-03-31 15:51:41,308 - openstack_utils - DEBUG - Security group 'sdnvpn-sg' with ID=65206fea-7be2-49b6-abce-b32c3c46a61a created successfully.
2017-03-31 15:51:41,309 - openstack_utils - DEBUG - Adding ICMP rules in security group 'sdnvpn-sg'...
2017-03-31 15:51:41,309 - openstack_utils - DEBUG - Security_group format set (no port range mentioned)
2017-03-31 15:51:41,455 - openstack_utils - DEBUG - Adding SSH rules in security group 'sdnvpn-sg'...
2017-03-31 15:51:41,456 - openstack_utils - DEBUG - Security_group format set (port range included)
2017-03-31 15:51:41,610 - openstack_utils - DEBUG - Security_group format set (port range included)
2017-03-31 15:51:41,773 - openstack_utils - DEBUG - Security_group format set (no port range mentioned) Neutron server returns request_ids: ['req-3afdd79b-09c5-41c0-a3d5-edc077407a21']
2017-03-31 15:51:41,872 - openstack_utils - ERROR - Bad security group format.One of the port range is not properly set:range min: 80,range max: None
2017-03-31 15:51:41,872 - sndvpn_test_utils - INFO - Creating instance 'sdnvpn-8-2'...
2017-03-31 15:51:41,873 - sndvpn_test_utils - DEBUG - Configuration:
name=sdnvpn-8-2
flavor=m1.tiny
image=1018ece1-b44c-4fef-857e-bb82f2a61f2f
network=773cd3bd-24a5-4886-9cb0-87fe7426c818
secgroup=65206fea-7be2-49b6-abce-b32c3c46a61a
hypervisor=
fixed_ip=None
files=None
userdata=
None

2017-03-31 15:51:41,889 - keystoneauth.identity.v2 - DEBUG - Making authentication request to http://192.168.37.10:5000/v2.0/tokens
2017-03-31 15:51:43,592 - keystoneauth.identity.v2 - DEBUG - Making authentication request to http://192.168.37.10:5000/v2.0/tokens
2017-03-31 15:51:51,896 - sndvpn_test_utils - DEBUG - Instance 'sdnvpn-8-2' booted successfully. IP='10.10.20.3'.
2017-03-31 15:51:51,896 - sndvpn_test_utils - DEBUG - Adding 'sdnvpn-8-2' to security group 'sdnvpn-sg'...
2017-03-31 15:51:51,905 - keystoneauth.identity.v2 - DEBUG - Making authentication request to http://192.168.37.10:5000/v2.0/tokens
2017-03-31 15:51:53,080 - sndvpn_test_utils - INFO - Creating instance 'sdnvpn-8-1'...
2017-03-31 15:51:53,081 - sndvpn_test_utils - DEBUG - Configuration:
name=sdnvpn-8-1
flavor=m1.tiny
image=1018ece1-b44c-4fef-857e-bb82f2a61f2f
network=130d6058-1321-4d4b-bc89-99c62e6dc971
secgroup=65206fea-7be2-49b6-abce-b32c3c46a61a
hypervisor=
fixed_ip=None
files=None
userdata=
#!/bin/sh
set 10.10.20.3
while true; do
for i do
ip=$i
ping -c 1 $ip 2>&1 >/dev/null
RES=$?
if [ "Z$RES" = "Z0" ] ; then
echo ping $ip OK
else echo ping $ip KO
fi
done
sleep 1
done

2017-03-31 15:51:53,095 - keystoneauth.identity.v2 - DEBUG - Making authentication request to http://192.168.37.10:5000/v2.0/tokens
2017-03-31 15:51:54,357 - keystoneauth.identity.v2 - DEBUG - Making authentication request to http://192.168.37.10:5000/v2.0/tokens
2017-03-31 15:52:02,684 - sndvpn_test_utils - DEBUG - Instance 'sdnvpn-8-1' booted successfully. IP='10.10.10.7'.
2017-03-31 15:52:02,684 - sndvpn_test_utils - DEBUG - Adding 'sdnvpn-8-1' to security group 'sdnvpn-sg'...
2017-03-31 15:52:03,493 - sdnvpn-results - INFO - Create VPN with eRT==iRT
2017-03-31 15:52:03,649 - sdnvpn-testcase-8 - DEBUG - VPN created details: {u'bgpvpn': {u'export_targets': [u'88:88'], u'name': u'sdnvpn-7', u'route_distinguishers': [u'18:18'], u'routers': [], u'import_targets': [u'88:88'], u'networks':
[], u'tenant_id': u'a94d4d97de9d4200ba9beca8d05c83d5', u'route_targets': [], u'project_id': u'a94d4d97de9d4200ba9beca8d05c83d5', u'type': u'l3', u'id': u'07c91f3f-df7d-4177-87c4-6320127359e8'}}
2017-03-31 15:52:03,649 - sdnvpn-results - INFO - Associate router 'sdnvpn-8-1-router' and net 'sdnvpn-8-2' to the VPN.
2017-03-31 15:52:04,965 - sndvpn_test_utils - DEBUG - Waiting for router 07c91f3f-df7d-4177-87c4-6320127359e8 to associate with BGPVPN 80e53bf6-1426-46f8-9c2f-ee96d6efc644
2017-03-31 15:52:06,129 - sndvpn_test_utils - DEBUG - Waiting for network 07c91f3f-df7d-4177-87c4-6320127359e8 to associate with BGPVPN 773cd3bd-24a5-4886-9cb0-87fe7426c818
2017-03-31 15:52:07,311 - sndvpn_test_utils - INFO - Waiting for instance f6e5c6f9-dd81-491a-aab5-dad988fc42b7 to get a DHCP lease...
2017-03-31 15:55:13,490 - sndvpn_test_utils - ERROR - Instance f6e5c6f9-dd81-491a-aab5-dad988fc42b7 seems to have failed leasing an IP.
2017-03-31 15:55:13,491 - sndvpn_test_utils - INFO - Waiting for instance 431b3a75-2aad-40b4-8d5b-bc16cf29a1a0 to get a DHCP lease...
^CTraceback (most recent call last):
File "./run_tests.py", line 107, in <module>
main()
File "./run_tests.py", line 78, in main
result = t.main()
File "/home/opnfv/repos/sdnvpn/sdnvpn/test/functest/testcase_8.py", line 122, in main
instances_up = test_utils.wait_for_instances_up(vm_1, vm_2)
File "/home/opnfv/repos/sdnvpn/sdnvpn/lib/utils.py", line 270, in wait_for_instances_up
check = [wait_for_instance(instance) for instance in args]
File "/home/opnfv/repos/sdnvpn/sdnvpn/lib/utils.py", line 259, in wait_for_instance

The VM image we are using sends 3 DHCP requests, which takes around 120 seconds. All internal transport tunnels are already created before that.

When I log into the VM after those 120 seconds and run "ifup eth0", I immediately get an IP.

Attached you will find the OVS flows both when it is not working and after we have waited 2 minutes. The flows from the controller node and from the compute nodes are both included. What can be seen is that not even table 0 contains the in-port flow. One more interesting observation: either the DHCP request goes through immediately (in less than 10 seconds) or it takes this long.
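
For reference, the flow and group dumps in the attachment can be reproduced with commands along these lines on the controller and compute nodes (assuming the integration bridge is br-int), and the in-guest workaround is simply a re-run of the DHCP client:

# Dump all OpenFlow 1.3 flows and groups on the integration bridge
ovs-ofctl -O OpenFlow13 dump-flows br-int
ovs-ofctl -O OpenFlow13 dump-groups br-int

# Inside the guest, once the boot-time DHCP attempts have given up
# (a cirros-style image is assumed here):
sudo ifup eth0        # or: sudo udhcpc -i eth0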



 Comments   
Comment by Nikolas Hermanns [ 03/Apr/17 ]

Attachment odl-dhcp-issue.zip has been added with description: tables

Comment by Kency Kurian [ 04/Apr/17 ]

Hi Nikolas,

Could you please let us know how many VMs are being spawned? Does it fail for all VMs initially and work fine once "ifup eth0" is run after 2 minutes?

>>One more interesting thing is either the dhcp request goes through directly (less than 10 seconds) or it needs this long time. <<

Do you mean that the DHCP request is actually being sent within 10 seconds, and not within 120 seconds as we expect?

Comment by Nikolas Hermanns [ 04/Apr/17 ]

Hey Kency,

thanks for looking into this.

I spawn 2 VMs; both hit the issue with a probability of around 70%. Any additional VM has the same issue.

In the rarer cases where the flows are pushed immediately, the DHCP request is answered right away and the VM is booted up in about 10 seconds.

So, in short, two cases:
1. [broken, ~70% of the time] No connectivity at all for the first ~2 minutes.
2. [working, ~30% of the time] Direct connectivity, so the DHCP request is answered immediately.

In the attachment you will see that I dumped all the flow tables twice, both times for case 1. The not-working dump was taken directly after the VM was spawned; the working one after ~2 minutes.

Thanks! Nikolas

Comment by Nikolas Hermanns [ 05/Apr/17 ]

I have a new finding!

In the OVS logs I can see a lot of these messages:
2017-04-05T10:32:59.834Z|00621|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:32:59.834Z|00622|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:32:59.924Z|00623|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.002Z|00624|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.013Z|00625|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.013Z|00626|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.014Z|00627|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.014Z|00628|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.026Z|00629|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.026Z|00630|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.045Z|00631|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.045Z|00632|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:00.045Z|00633|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPGMFC_GROUP_EXISTS error reply to OFPT_GROUP_MOD message
2017-04-05T10:33:10.045Z|00634|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: 316 flow_mods in the 9 s starting 10 s ago (316 adds)
2017-04-05T10:33:10.187Z|00635|connmgr|INFO|Dropped 8 log messages in last 11 seconds (most recently, 7 seconds ago) due to excessive rate
2017-04-05T10:33:10.187Z|00636|connmgr|INFO|br-int<->tcp:192.0.2.7:6653: sending OFPBAC_BAD_OUT_GROUP error reply to OFPT_FLOW_MOD message

This is related to this bug:
https://bugs.opendaylight.org/show_bug.cgi?id=8132

After a VM is deleted, groups and flows are not cleaned up.

On a freshly deployed system with a clean ODL and a clean OVS, everything seems to work. After creating and removing VMs and networks for a while, we end up with a bad group entry in OVS:
group_id=210001,type=all
in ODL:
"buckets": {
"bucket": [
{
"action": [
{
"openflowplugin-extension-nicira-action:nx-resubmit":

{ "table": 55 }

,
"order": 1
},
{
"order": 0,
"set-field": {
"tunnel":

{ "tunnel-id": 1 }

}
}
],
"bucket-id": 0,
"watch_group": 4294967295,
"watch_port": 4294967295,
"weight": 0
}
]
},
"group-id": 210001,
"group-name": "3c94c809-663c-4ae0-9313-706d1aaf8310",
"group-type": "group-all"

So the group is not correctly synced!

So I think we still have 2 issues here:
1. No cleanup when VMs, networks and BGPVPNs are deleted.
2. A group cannot be modified.

Comment by Nikolas Hermanns [ 05/Apr/17 ]

Just to state it one more time:
in OVS the group is really empty!

OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
group_id=210002,type=all,bucket=actions=group:210001,bucket=actions=load:0x200->NXM_NX_REG6[],resubmit(,220)
group_id=150004,type=all,bucket=actions=set_field:fa:16:3e:12:a0:e5->eth_src,set_field:fa:16:3e:0a:c4:e1->eth_dst,load:0x100->NXM_NX_REG6[],resubmit(,220)
group_id=5000,type=all,bucket=actions=CONTROLLER:65535,bucket=actions=resubmit(,17),bucket=actions=resubmit(,81)
group_id=200002,type=all,bucket=actions=output:48
group_id=210001,type=all
48(tun69773c7a1b0): addr:7a:48:85:85:fe:f9
49(tun441d8835c6b): addr:42:d9:3b:ca:c8:98
50(tape83f7b84-88): addr:fe:16:3e:0a:c4:e1
LOCAL(br-int): addr:e2:4a:ad:35:bc:47

Note group_id=210001 here: there is nothing after it, i.e. the group has no buckets. In ODL it says:
"openflowplugin-extension-nicira-action:nx-resubmit": {
"table": 55
},
which means: resubmit to table 55, where the packet matches on the DHCP port and is then sent to the controller (Neutron DHCP).
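
For comparison, a correctly synced group in OVS should carry the bucket that ODL has in its config DS; based on the JSON above, it would look roughly like this (the exact rendering is approximate; ovs-ofctl may show the tunnel id as load:0x1->NXM_NX_TUN_ID[] instead of set_field):

group_id=210001,type=all,bucket=actions=set_field:0x1->tun_id,resubmit(,55)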

Comment by Periyasamy Palanisamy [ 05/Apr/17 ]

In the tables_not_working log, I don't see any flows/groups for Neutron VMs on either compute node; I only see flows/groups for the transparent port created on the DPN for the flat provider network. It looks like no VMs were spawned. At this stage, group_id=210001 is empty because there are no VMs.

What do you see in ODL for group id 210001? Does it exist in the inventory config DS or not?

When you add a new VM back into this network, is it only the ELAN local BC group (210001) that is not updated with a bucket? What do you see in the Wireshark trace? Is it a group-mod or a group-add? What about the other flows for the VM?
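
A sketch of how both sides could be compared (the RESTCONF path, the RESTCONF port and the DPN id are assumptions; the controller address is taken from the OVS log above):

# What ODL thinks group 210001 should contain (config DS); replace
# <dpn-id> with the datapath id of the compute node.
curl -s -u admin:admin \
  "http://192.0.2.7:8181/restconf/config/opendaylight-inventory:nodes/node/openflow:<dpn-id>/flow-node-inventory:group/210001"

# What OVS actually has for the same group:
ovs-ofctl -O OpenFlow13 dump-groups br-int | grep 'group_id=210001'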

Comment by Nikolas Hermanns [ 05/Apr/17 ]

This is what I see in ODL, as already written in the comment above:
"buckets": {
"bucket": [
{
"action": [
{
"openflowplugin-extension-nicira-action:nx-resubmit":

{ "table": 55 }

,
"order": 1
},
{
"order": 0,
"set-field": {
"tunnel":

{ "tunnel-id": 1 }

}
}
],
"bucket-id": 0,
"watch_group": 4294967295,
"watch_port": 4294967295,
"weight": 0
}
]
},
"group-id": 210001,
"group-name": "3c94c809-663c-4ae0-9313-706d1aaf8310",
"group-type": "group-all"

Keep in mind that I have reproduced the error a few times, so what you see here is not from the original occurrence when the bug was found.

Comment by Nikolas Hermanns [ 05/Apr/17 ]

So very shortly before the VM is started I see the following in OVS:

3 computes:
OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
group_id=200000,type=all,bucket=actions=set_field:00:72:2b:16:87:6c->eth_dst,load:0x300->NXM_NX_REG6[],resubmit(,220)
group_id=5000,type=all,bucket=actions=CONTROLLER:65535,bucket=actions=resubmit(,17),bucket=actions=resubmit(,81)
group_id=210002,type=all,bucket=actions=group:210001,bucket=actions=load:0x300->NXM_NX_REG6[],resubmit(,220)
group_id=210001,type=all
group_id=150010,type=all,bucket=actions=set_field:fa:16:3e:33:a2:cc->eth_src,set_field:fa:16:3e:51:eb:2d->eth_dst,load:0x1000->NXM_NX_REG6[],resubmit(,220)
group_id=150003,type=all,bucket=actions=set_field:fe:16:3e:0c:92:f0->eth_src,set_field:fa:16:3e:0c:92:f0->eth_dst,load:0x500->NXM_NX_REG6[],resubmit(,220)

in ODL:
{
  "buckets": {
    "bucket": [
      {
        "action": [
          {
            "openflowplugin-extension-nicira-action:nx-resubmit": { "table": 220 },
            "order": 2
          },
          {
            "openflowplugin-extension-nicira-action:nx-reg-load": {
              "dst": { "end": 31, "nx-reg": "nicira-match:nxm-nx-reg6", "start": 0 },
              "value": 768
            },
            "order": 1
          },
          {
            "order": 0,
            "set-field": {
              "ethernet-match": {
                "ethernet-destination": { "address": "00:72:2b:16:87:6c" }
              }
            }
          }
        ],
        "bucket-id": 0,
        "watch_group": 4294967295,
        "watch_port": 4294967295,
        "weight": 0
      }
    ]
  },
  "group-id": 200000,
  "group-name": "7e763a79-bf3f-409a-80c7-6ea478b783eb",
  "group-type": "group-all"
},
{
  "buckets": {},
  "group-id": 210001,
  "group-name": "271c0841-0427-409f-b37c-94aea5175748",
  "group-type": "group-all"
},

See the full inventory in the attached logs.

Comment by Nikolas Hermanns [ 05/Apr/17 ]

Attachment full-inventory.zip has been added with description: full-inventory before vm is booted

Comment by Periyasamy Palanisamy [ 06/Apr/17 ]

group_id=210002,type=all,bucket=actions=group:210001,bucket=actions=load:0x300->NXM_NX_REG6[],resubmit(,220)
group_id=210001,type=all

It looks like the above groups were created for the ELAN instance of the flat provider network and don't have any VMs in them.
But when a VM is added, a group_mod with a bucket is sent to OVS for 210001, and it is not clear why OVS throws the OFPGMFC_GROUP_EXISTS error. That error should only be thrown for an OFPGC_ADD request, not for OFPGC_MODIFY.

Can we look at a tcpdump to see what kind of group request is sent?
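
A capture along these lines should show it (controller IP and OpenFlow port taken from the ovs-vswitchd log above):

# On the compute node, capture the OpenFlow session between OVS and ODL
tcpdump -i any -s 0 -w openflow.pcap host 192.0.2.7 and tcp port 6653

# Then open openflow.pcap in Wireshark and filter on OFPT_GROUP_MOD
# (OpenFlow 1.3 message type 15) to see whether 210001 is sent as a
# group-add or a group-modify, and with which buckets.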

Comment by Nikolas Hermanns [ 06/Apr/17 ]

Reproduced again, this time fetching logs and a thread dump both while ODL was hanging and afterwards:

https://drive.google.com/open?id=0B_Rr7XjF0yoHc2EwT1psVGQwRVE

Comment by Periyasamy Palanisamy [ 07/Apr/17 ]

I see there are 6 threads blocked in BgpConfigurationManager while advertising routes to BGP. It looks like there is an issue establishing the neighbor session with the BGP peer. This causes the thread that invokes BgpConfigurationManager#replay to hold BgpConfigurationManager's intrinsic lock for a long time; eventually other threads trying to advertise routes, etc. are blocked on this lock.
As a result, around 7 threads are unusable by ODL (DJC, DCN notification, etc.), which leads to the 2-minute hang and flows not being programmed in the switches (in the attached log, table 17 flow rules are not programmed for some of the VMs).

We need to address the following.

1. We need to resolve the BGP neighbor establishment issue. In karaf.log, I see a lot of errors like:
2017-04-06 14:54:52,403 | ERROR | pool-46-thread-1 | BgpConfigurationManager | 318 - org.opendaylight.netvirt.bgpmanager-impl - 0.3.3.SNAPSHOT | Replay:startBgp() received exception: "org.apache.thrift.transport.TTransportException"

Is there an issue with setting up 6WIND Quagga with ODL? Can you look into it?
If you resolve this, we may not see the thread blocking issue.

2. Holding BgpConfigurationManager's intrinsic lock for all method invocations is incorrect. It makes the system unusable in this situation and has to be addressed. Suneelu/Siva, can you have a look?
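
A few quick checks on the node running Quagga might help narrow down point 1 (the service names, file paths and commands below assume a typical 6WIND Quagga/ODL setup and are not taken from this deployment):

# Are the Quagga daemons up? zrpcd is assumed to be the Thrift endpoint
# that ODL's bgpmanager connects to in the 6WIND integration.
systemctl status bgpd zrpcd

# Is the BGP neighbor session actually established?
vtysh -c 'show bgp neighbors'

# How often does the Thrift connection from ODL fail? (karaf.log path may differ)
grep -c 'TTransportException' /opt/opendaylight/data/log/karaf.log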

Comment by Nikolas Hermanns [ 07/Apr/17 ]

Hey,

Good finding! Yes, Quagga BGP is not working yet on this machine. This was raised as a bug internally for OPNFV; I will put more effort into that bug now. You are right, we need to remove this sync in addVRF. Can we still pull that into SR-3?

Br Nikolas

Comment by Nikolas Hermanns [ 07/Apr/17 ]

just for more information!

$ cat overcloud-controller-0.odl.thread.dump | grep BLOCKED -B 1 -A 6

"ForkJoinPool-1-worker-0" #1072 daemon prio=5 os_prio=0 tid=0x00007f3fb4853000 nid=0x46521 waiting for monitor entry [0x00007f3f34568000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.opendaylight.netvirt.bgpmanager.BgpConfigurationManager.addPrefix(BgpConfigurationManager.java:1918)
        - waiting to lock <0x000000008e1fe548> (a org.opendaylight.netvirt.bgpmanager.BgpConfigurationManager)
        at org.opendaylight.netvirt.bgpmanager.BgpManager.advertisePrefix(BgpManager.java:122)
        at Proxyf51ee27f_be1b_4724_a075_2d313a52fe71.advertisePrefix(Unknown Source)
        at Proxy4be9080d_62c9_4555_b8c8_5c3f04181a72.advertisePrefix(Unknown Source)
        at org.opendaylight.netvirt.vpnmanager.VpnInterfaceManager.addPrefixToBGP(VpnInterfaceManager.java:965)

"ForkJoinPool-1-worker-2" #1070 daemon prio=5 os_prio=0 tid=0x00007f3fb4856800 nid=0x46520 waiting for monitor entry [0x00007f3f0c347000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.opendaylight.netvirt.bgpmanager.BgpConfigurationManager.addPrefix(BgpConfigurationManager.java:1918)
        - waiting to lock <0x000000008e1fe548> (a org.opendaylight.netvirt.bgpmanager.BgpConfigurationManager)
        at org.opendaylight.netvirt.bgpmanager.BgpManager.advertisePrefix(BgpManager.java:122)
        at Proxyf51ee27f_be1b_4724_a075_2d313a52fe71.advertisePrefix(Unknown Source)
        at Proxy4be9080d_62c9_4555_b8c8_5c3f04181a72.advertisePrefix(Unknown Source)
        at org.opendaylight.netvirt.vpnmanager.VpnInterfaceManager.addPrefixToBGP(VpnInterfaceManager.java:965)

"ForkJoinPool-1-worker-3" #1071 daemon prio=5 os_prio=0 tid=0x00007f3f78206000 nid=0x4651f waiting for monitor entry [0x00007f3f1bdfe000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.opendaylight.netvirt.bgpmanager.BgpConfigurationManager.addPrefix(BgpConfigurationManager.java:1918)
        - waiting to lock <0x000000008e1fe548> (a org.opendaylight.netvirt.bgpmanager.BgpConfigurationManager)
        at org.opendaylight.netvirt.bgpmanager.BgpManager.advertisePrefix(BgpManager.java:122)
        at Proxyf51ee27f_be1b_4724_a075_2d313a52fe71.advertisePrefix(Unknown Source)
        at Proxy4be9080d_62c9_4555_b8c8_5c3f04181a72.advertisePrefix(Unknown Source)
        at org.opendaylight.netvirt.vpnmanager.VpnInterfaceManager.addPrefixToBGP(VpnInterfaceManager.java:965)

Comment by Nikolas Hermanns [ 07/Apr/17 ]

I checked: it affects the master branch as well, to the same extent.

Comment by Periyasamy Palanisamy [ 12/Apr/17 ]

Vyshakh, can you get https://git.opendaylight.org/gerrit/#/c/54578/ merged? It also has to be cherry-picked into stable/boron.
