Details
Type: Bug
Status: Resolved
Resolution: Done
Version: Boron
Operating System: All
Platform: All
Bug ID: 7717
Description
ODL Version: distribution-karaf-0.5.2-Boron-SR2
OpenStack Version: Mitaka
Setup Details: 1 Control Node + 1 ODL node
ODL node:
Configuration:
Hard disk - 20 GB
Cores - 4 CPU
RAM - 8 GB
Control Node and Compute Node:
Configuration:
Hard disk - 30 GB
Cores - 4 CPU
RAM - 15 GB
Steps to reproduce the issue:
1) Start OpenStack and ODL, using 2 GB for the ODL heap memory (see the setenv sketch after these steps)
2) Run a script that continuously creates a large number of networks/subnets:
for j in {1..20}; do
    for i in {1..100}; do    # inner range was lost in the original formatting; 1..100 matches the reported maximum of 2000 networks
        neutron net-create vx-net$j-$i
        neutron subnet-create vx-net$j-$i 10.$j.$i.0/24 --name vx-subnet$j$i --enable-dhcp \
            --allocation-pool start=10.$j.$i.5,end=10.$j.$i.254
        sleep 5
    done
    sleep 60
done
3) Monitor the heap memory usage using jmap or jconsole (see the monitoring sketch after these steps)
4) Check whether the flows are installed properly for the port creations (see the ovs-ofctl sketch after these steps)
5) Repeat steps 1-3 with a different heap memory size for ODL
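For step 1, the ODL heap size can be set through the standard Karaf environment variables before starting the controller. A minimal sketch, assuming the default distribution layout and that the startup scripts honor JAVA_MIN_MEM/JAVA_MAX_MEM (the 2048m value matches the 2G run; adjust per run):

cd distribution-karaf-0.5.2-Boron-SR2
export JAVA_MIN_MEM=2048m    # initial heap; match the size under test
export JAVA_MAX_MEM=2048m    # maximum heap; 2G/4G/8G/16G per run
./bin/karaf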
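For step 3, heap usage can also be sampled from the command line in addition to jconsole. A small sketch using standard JDK tools; the pgrep pattern for locating the Karaf process is an assumption about how it appears in the process list:

ODL_PID=$(pgrep -f org.apache.karaf.main.Main)   # locate the Karaf JVM
jmap -heap $ODL_PID                              # one-shot heap summary
jstat -gcutil $ODL_PID 5000                      # heap/GC utilisation sampled every 5 seconds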
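For step 4, flow installation can be verified directly on the OVS side. A sketch assuming the default br-int integration bridge and OpenFlow 1.3 (the bridge name may differ per deployment):

ovs-vsctl show                                       # confirm the connection to the ODL controller is still up
ovs-ofctl -O OpenFlow13 dump-flows br-int | wc -l    # the flow count should grow as networks/ports are added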
The heap memory monitoring for each run (step 3) shows that the heap fills up quickly as the number of networks/subnets grows. Eventually the heap runs out and an OOM may occur. At that point ODL stops functioning due to excessive GC, drops connections to OVS (or vice versa), and no more flows are installed for new port creations.
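To make the failure easier to capture, GC logging and an automatic heap dump on OOM can be enabled before starting ODL. A sketch assuming the standard Karaf EXTRA_JAVA_OPTS hook; the flags are plain HotSpot options, not taken from the original report:

export EXTRA_JAVA_OPTS="-verbose:gc -XX:+PrintGCDetails -Xloggc:/tmp/odl-gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/odl-oom.hprof"
./bin/karaf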
The following shows test results with ODL heap memory and the maximum number of networks/subnets that can be created before the ODL server runs out of memory:
o 2 G: 600 networks
o 4 G: 1000 networks
o 8 G: 1600 networks
o 16 G: 2000 networks
The heap dumps indicate that most of the heap memory usage comes from this local cache:
ProviderNetworkManagerImpl::nodeToProviderMaping
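For reference, heap dumps like the one above can be captured with standard JDK tooling and inspected with a heap analyzer such as Eclipse MAT; file names below are illustrative:

ODL_PID=$(pgrep -f org.apache.karaf.main.Main)
jmap -histo:live $ODL_PID | head -n 30                        # class histogram of live objects
jmap -dump:live,format=b,file=/tmp/odl-heap.hprof $ODL_PID    # full dump for offline analysis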