[CONTROLLER-1920] CommitTransactionPayload results in humongous objects being allocated Created: 23/Sep/19 Updated: 10/Oct/19 Resolved: 10/Oct/19 |
|
| Status: | Resolved |
| Project: | controller |
| Component/s: | clustering |
| Affects Version/s: | Nitrogen SR3, Oxygen SR4, Sodium, Fluorine SR3, Neon SR2 |
| Fix Version/s: | Magnesium, Sodium SR1, Neon SR3 |
| Type: | Bug | Priority: | Medium |
| Reporter: | Robert Varga | Assignee: | Robert Varga |
| Resolution: | Done | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Description |
|
When CDS is faced with large transactions, it can easily allocate byte[]s whose size is a couple of megabytes. This is problematic with G1GC, as objects exceeding one half of the region size are allocated as humongous objects directly in the old generation. Region size varies between 1 MiB and 32 MiB and is recommended to be sized at 1/2048 of the heap. The source of these allocations is CommitTransactionPayload, which retains its serialized form so it can be written out either to persistence or to followers. We should be able to store this serialized form in a format which does not rely on such large objects.
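A minimal sketch of the chunking idea, with hypothetical names (this is not the actual controller implementation): an OutputStream that accumulates written bytes in a list of fixed-size chunks, so that no single byte[] ever exceeds a configured cap. Keeping the cap below half the G1 region size keeps the allocations out of the humongous path.

```java
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: stores written bytes in fixed-size chunks instead of
// one contiguous byte[]. No array ever exceeds maxChunkSize, so choosing a
// cap below half the G1 region size avoids humongous allocations.
final class ChunkedByteArrayOutputStream extends OutputStream {
    private final int maxChunkSize;
    private final List<byte[]> chunks = new ArrayList<>();
    private byte[] current;
    private int offset;

    ChunkedByteArrayOutputStream(final int maxChunkSize) {
        this.maxChunkSize = maxChunkSize;
        this.current = new byte[maxChunkSize];
    }

    @Override
    public void write(final int b) {
        if (offset == maxChunkSize) {
            // Current chunk is full: seal it and start a fresh one.
            chunks.add(current);
            current = new byte[maxChunkSize];
            offset = 0;
        }
        current[offset++] = (byte) b;
    }

    // Returns the written data as a list of arrays, none larger than
    // maxChunkSize; the last chunk is trimmed to its actual length.
    List<byte[]> toChunks() {
        final List<byte[]> result = new ArrayList<>(chunks);
        if (offset != 0) {
            final byte[] last = new byte[offset];
            System.arraycopy(current, 0, last, 0, offset);
            result.add(last);
        }
        return result;
    }
}
```

A serializer writing through such a stream produces the same byte sequence as before, just split across region-friendly pieces that can be replayed to persistence or followers chunk by chunk.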
|
| Comments |
| Comment by Robert Varga [ 07/Oct/19 ] |
|
https://www.oracle.com/technetwork/articles/java/g1gc-1984535.html details the handling of humongous objects and notes that the region size should be increased to eliminate such objects if they end up causing old-generation fragmentation and back-to-back concurrent cycles. Given the overall overhead of that approach, a much better strategy is simply to cap the size of the arrays we allocate here. |
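For reference, the region-size tuning the article describes is done with the `-XX:G1HeapRegionSize` HotSpot flag (the deployment specifics here are illustrative, not taken from the ticket):

```
# Force 32 MiB regions so only objects over 16 MiB become humongous
java -XX:+UseG1GC -XX:G1HeapRegionSize=32m ...
```

This merely raises the humongous threshold for the whole heap, which is why capping our own allocation sizes is the preferred fix. |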