[YANGTOOLS-165] OFplugin has a strong-referenced unbounded cache Created: 14/May/14 Updated: 10/Apr/22 Resolved: 16/May/14 |
|
| Status: | Verified |
| Project: | yangtools |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | ||
| Reporter: | Robert Varga | Assignee: | Michal Rehak |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Operating System: Linux |
| External issue ID: | 1006 |
| Description |
|
During performance testing we found that the OF plugin keeps a strong-referenced cache which is bounded only by time and dominated by strong references. This means that as throughput goes up, so does the memory pressure. Make the depth of the cache configurable and make it use soft values, so that it a) has an upper bound on retained memory irrespective of throughput and b) reacts to memory pressure by sacrificing this information instead of running out of memory. |
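As an illustrative sketch only (the OF plugin's actual cache implementation, key/value types and configuration are not shown in this issue), a Guava cache with a configurable maximum size and soft values would look roughly like this:

```java
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Sketch under assumptions: the key/value types, depth and retention time
// below are placeholders, not the OF plugin's real configuration.
public final class BoundedReplayCache {
    private final Cache<Long, byte[]> cache;

    public BoundedReplayCache(final long configuredDepth) {
        cache = CacheBuilder.newBuilder()
                .maximumSize(configuredDepth)           // upper bound independent of throughput
                .softValues()                           // values may be reclaimed under memory pressure
                .expireAfterWrite(10, TimeUnit.SECONDS) // keep the existing time-based eviction
                .build();
    }

    public void put(final long xid, final byte[] message) {
        cache.put(xid, message);
    }

    public byte[] get(final long xid) {
        return cache.getIfPresent(xid);
    }
}
```

With soft values the garbage collector can discard the cached replay data instead of letting the process run out of memory, while maximumSize() caps retention regardless of throughput.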
| Comments |
| Comment by Robert Varga [ 14/May/14 ] |
|
So the cache is used to support a 'replay' history feature. A cache is the wrong data structure for this use case, as its performance decreases with the number of elements retained, and the time-based eviction policy incurs a heavy penalty once the retention time is exceeded. What you really want is a fixed-depth ArrayBlockingQueue, with the depth made configurable. To store a new packet, offer it to the queue (ArrayBlockingQueue q; if (!q.offer(msg)) { ... }, see the sketch below), which gives you a guaranteed replay buffer with excellent O(1) performance. Given the high number of switches we want to support, I would suggest wrapping the message in a SoftReference so we automatically throw the debug data out before running out of memory. Note that ArrayBlockingQueue uses a preallocated backing array, so be careful to use reasonable depth values. |
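A minimal, self-contained sketch of the structure described above, assuming a drop-oldest policy when the buffer is full and a SoftReference wrapper per message (the class and method names are illustrative, not taken from an actual patch):

```java
import java.lang.ref.SoftReference;
import java.util.concurrent.ArrayBlockingQueue;

// Fixed-depth replay buffer; the depth should come from configuration.
final class ReplayBuffer<T> {
    private final ArrayBlockingQueue<SoftReference<T>> queue;

    ReplayBuffer(final int depth) {
        // ArrayBlockingQueue preallocates its backing array, so keep the depth reasonable.
        queue = new ArrayBlockingQueue<>(depth);
    }

    void store(final T msg) {
        final SoftReference<T> ref = new SoftReference<>(msg);
        // O(1): if the buffer is full, drop the oldest entry and retry the offer.
        while (!queue.offer(ref)) {
            queue.poll();
        }
    }
}
```

Since offer() and poll() on ArrayBlockingQueue are constant-time operations, storing a packet stays O(1) no matter how many messages are retained, and the SoftReference wrapper lets the garbage collector reclaim the debug data under memory pressure.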
| Comment by Robert Varga [ 14/May/14 ] |
|
If you need time-based eviction, you can wrap the message into a small holder class (class MessageTime { ... }) that records its enqueue timestamp before inserting it into the queue. Then have a background thread periodically compute a cutoff (long cutoff = System.nanoTime() - RETENTION_TIME;) and call q.remove(m) on entries older than the cutoff; the removal may be a no-op because an enqueue thread may have removed the entry already. See the sketch below. |
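A possible shape of that wrapper and cleanup task, filling in the parts elided from the comment above (the RETENTION_TIME value, field names, scheduling interval and the peek/remove loop are assumptions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Holder recording when a message was enqueued.
final class MessageTime<T> {
    final T message;
    final long enqueuedNanos = System.nanoTime();

    MessageTime(final T message) {
        this.message = message;
    }
}

// Fixed-depth buffer with periodic time-based eviction.
final class TimedReplayBuffer<T> {
    private static final long RETENTION_TIME = TimeUnit.SECONDS.toNanos(10); // assumed value

    private final ArrayBlockingQueue<MessageTime<T>> queue;
    private final ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();

    TimedReplayBuffer(final int depth) {
        queue = new ArrayBlockingQueue<>(depth);
        cleaner.scheduleAtFixedRate(this::evictExpired, 1, 1, TimeUnit.SECONDS);
    }

    void store(final T msg) {
        final MessageTime<T> m = new MessageTime<>(msg);
        // Drop the oldest entry if the buffer is full.
        while (!queue.offer(m)) {
            queue.poll();
        }
    }

    private void evictExpired() {
        final long cutoff = System.nanoTime() - RETENTION_TIME;
        MessageTime<T> m;
        // peek() returns the oldest element, so stop at the first entry still within the window.
        while ((m = queue.peek()) != null && m.enqueuedNanos - cutoff < 0) {
            queue.remove(m); // May be a no-op: an enqueue thread may have removed it already.
        }
    }
}
```

The remove() call tolerates the entry having been dropped concurrently by a store(), which matches the caveat in the comment above.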
| Comment by Michal Rehak [ 15/May/14 ] |
|
removing bulk transaction cache |
| Comment by Michal Rehak [ 16/May/14 ] |
|
cbench is showing speed improvements (up to 5%), and memory consumption is lower (200 MB) |