[CONTROLLER-1872] Excessive byte array copy in Akka Created: 13/Nov/18 Updated: 04/Dec/18 Resolved: 04/Dec/18 |
|
| Status: | Resolved |
| Project: | controller |
| Component/s: | None |
| Affects Version/s: | Oxygen SR3 |
| Fix Version/s: | Neon, Fluorine SR2 |
| Type: | Bug | Priority: | Medium |
| Reporter: | Michael Vorburger | Assignee: | Tom Pantelis |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Description |
|
One of the very top "TLAB allocation" sources (cumulatively many tens of GiBs - that's a hell of a lot, no?) observed in a JFR recording from internal scale testing, and which seems to contribute to excessive GC, is the code path below. Is this just "too bad" and "simply the way Akka has to work" and we accept this, or is there any way we could (get Akka to, or work with Akka to) have this optimized to cause less object allocation?

There are actually two distinct paths; the first, bigger one is:

    byte[] java.util.Arrays.copyOf(byte[], int) 55539
    void java.io.ByteArrayOutputStream.grow(int) 37343
    void java.io.ByteArrayOutputStream.ensureCapacity(int) 37343
    void java.io.ByteArrayOutputStream.write(byte[], int, int) 37343
    void java.io.ObjectOutputStream$BlockDataOutputStream.write(byte[], int, int, boolean) 27262
    void java.io.ObjectOutputStream.write(byte[]) 27262
    void org.opendaylight.controller.cluster.datastore.persisted.CommitTransactionPayload$Proxy.writeExternal(ObjectOutput) 27262
    void java.io.ObjectOutputStream.writeExternalData(Externalizable) 27262
    void java.io.ObjectOutputStream.writeOrdinaryObject(Object, ObjectStreamClass, boolean) 27262
    void java.io.ObjectOutputStream.writeObject0(Object, boolean) 27262
    void java.io.ObjectOutputStream.writeObject(Object) 27262
    void org.opendaylight.controller.cluster.raft.persisted.SimpleReplicatedLogEntry$Proxy.writeExternal(ObjectOutput) 27262
    void java.io.ObjectOutputStream.writeExternalData(Externalizable) 27262
    void java.io.ObjectOutputStream.writeOrdinaryObject(Object, ObjectStreamClass, boolean) 27262
    void java.io.ObjectOutputStream.writeObject0(Object, boolean) 27262
    void java.io.ObjectOutputStream.writeObject(Object) 27262
    void akka.serialization.JavaSerializer.$anonfun$toBinary$1(Object, ObjectOutputStream) 27262
    void akka.serialization.JavaSerializer$$Lambda$862.398419640.apply$mcV$sp() 27262
    Object scala.runtime.java8.JFunction0$mcV$sp.apply() 27262
    Object scala.util.DynamicVariable.withValue(Object, Function0) 27262
    byte[] akka.serialization.JavaSerializer.toBinary(Object) 27262
    MessageFormats$PersistentPayload$Builder akka.persistence.serialization.MessageSerializer.payloadBuilder$1(Object) 27262
    MessageFormats$PersistentPayload$Builder akka.persistence.serialization.MessageSerializer.$anonfun$persistentPayloadBuilder$1(MessageSerializer, Object) 27262
    Object akka.persistence.serialization.MessageSerializer$$Lambda$878.273383069.apply() 27262
    Object scala.util.DynamicVariable.withValue(Object, Function0) 27262
    MessageFormats$PersistentPayload$Builder akka.persistence.serialization.MessageSerializer.persistentPayloadBuilder(Object) 27262
    MessageFormats$PersistentMessage$Builder akka.persistence.serialization.MessageSerializer.persistentMessageBuilder(PersistentRepr) 27262
    byte[] akka.persistence.serialization.MessageSerializer.toBinary(Object) 27262
    byte[] akka.serialization.Serialization.$anonfun$serialize$1(Serialization, Object) 27262
    Object akka.serialization.Serialization$$Lambda$877.275215371.apply() 27262
    Try scala.util.Try$.apply(Function0) 27262
    Try akka.serialization.Serialization.serialize(Object) 27262
    byte[] akka.persistence.journal.leveldb.LeveldbStore.persistentToBytes(PersistentRepr) 27262
    byte[] akka.persistence.journal.leveldb.LeveldbStore.persistentToBytes$(LeveldbStore, PersistentRepr) 27262
    byte[] akka.persistence.journal.leveldb.LeveldbJournal.persistentToBytes(PersistentRepr) 27262
    void akka.persistence.journal.leveldb.LeveldbStore.addToMessageBatch(PersistentRepr, Set, WriteBatch) 27262
    void akka.persistence.journal.leveldb.LeveldbStore.$anonfun$asyncWriteMessages$5(LeveldbStore, ObjectRef, WriteBatch, PersistentRepr) 27262
    Object akka.persistence.journal.leveldb.LeveldbStore.$anonfun$asyncWriteMessages$5$adapted(LeveldbStore, ObjectRef, WriteBatch, PersistentRepr) 27262
    Object akka.persistence.journal.leveldb.LeveldbStore$$Lambda$875.1221692884.apply(Object) 27262
    void scala.collection.immutable.List.foreach(Function1) 27262
    void akka.persistence.journal.leveldb.LeveldbStore.$anonfun$asyncWriteMessages$4(LeveldbStore, ObjectRef, ObjectRef, WriteBatch, AtomicWrite) 27262
    void akka.persistence.journal.leveldb.LeveldbStore$$Lambda$874.735864755.apply$mcV$sp() 27262
    Object scala.runtime.java8.JFunction0$mcV$sp.apply() 27262
    Try scala.util.Try$.apply(Function0) 27262
    Try akka.persistence.journal.leveldb.LeveldbStore.$anonfun$asyncWriteMessages$3(LeveldbStore, ObjectRef, ObjectRef, WriteBatch, AtomicWrite) 27262
    Object akka.persistence.journal.leveldb.LeveldbStore$$Lambda$873.510803830.apply(Object) 27262
    Builder scala.collection.TraversableLike.$anonfun$map$1(Function1, Builder, Object) 27262
    Object scala.collection.TraversableLike$$Lambda$283.599282230.apply(Object) 27262
    void scala.collection.Iterator.foreach(Function1) 27262
    void scala.collection.Iterator.foreach$(Iterator, Function1) 27262
    void scala.collection.AbstractIterator.foreach(Function1) 27262
    void scala.collection.IterableLike.foreach(Function1) 27262
    void scala.collection.IterableLike.foreach$(IterableLike, Function1) 27262
    void scala.collection.AbstractIterable.foreach(Function1) 27262
    Object scala.collection.TraversableLike.map(Function1, CanBuildFrom) 27262
    Object scala.collection.TraversableLike.map$(TraversableLike, Function1, CanBuildFrom) 27262
    Object scala.collection.AbstractTraversable.map(Function1, CanBuildFrom) 27262
    Seq akka.persistence.journal.leveldb.LeveldbStore.$anonfun$asyncWriteMessages$2(LeveldbStore, Seq, ObjectRef, ObjectRef, WriteBatch) 27262
    Object akka.persistence.journal.leveldb.LeveldbStore$$Lambda$872.1336396232.apply(Object) 27262
    Object akka.persistence.journal.leveldb.LeveldbStore.withBatch(Function1) 27262
    Object akka.persistence.journal.leveldb.LeveldbStore.withBatch$(LeveldbStore, Function1) 27262
    Object akka.persistence.journal.leveldb.LeveldbJournal.withBatch(Function1) 27262
    Seq akka.persistence.journal.leveldb.LeveldbStore.$anonfun$asyncWriteMessages$1(LeveldbStore, Seq, ObjectRef, ObjectRef) 27262
    Object akka.persistence.journal.leveldb.LeveldbStore$$Lambda$871.1879377751.apply() 27262

The other one is:

    byte[] akka.protobuf.AbstractMessageLite.toByteArray() 27218
    ByteString akka.remote.transport.AkkaPduProtobufCodec$.constructPayload(ByteString) 9987
    boolean akka.remote.transport.AkkaProtocolHandle.write(ByteString) 9987
    boolean akka.remote.EndpointWriter.writeSend(EndpointManager$Send) 9987
    Object akka.remote.EndpointWriter$$anonfun$4.applyOrElse(Object, Function1) 9975
    void akka.actor.Actor.aroundReceive(PartialFunction, Object) 9975
    void akka.actor.Actor.aroundReceive$(Actor, PartialFunction, Object) 9975
    void akka.remote.EndpointActor.aroundReceive(PartialFunction, Object) 9975
    void akka.actor.ActorCell.receiveMessage(Object) 9975
    void akka.actor.ActorCell.invoke(Envelope) 9975
    void akka.dispatch.Mailbox.processMailbox(int, long) 9975
    void akka.dispatch.Mailbox.run() 9975
    boolean akka.dispatch.Mailbox.exec() 9975
    int akka.dispatch.forkjoin.ForkJoinTask.doExec() 9975
    void akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinTask) 9975
    void akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool$WorkQueue) 9975
    void akka.dispatch.forkjoin.ForkJoinWorkerThread.run() 9975 |
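For context on the first path: JavaSerializer writes through an ObjectOutputStream backed by a ByteArrayOutputStream, and each time that buffer fills up, grow() calls Arrays.copyOf(), which is the frame at the top of the trace. A minimal standalone sketch of that allocation pattern (the class name and payload size here are purely illustrative, not taken from the ODL code):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;

    // Illustrative sketch of the allocation pattern seen in the JFR: serializing a
    // payload through an ObjectOutputStream backed by a default-sized
    // ByteArrayOutputStream. Every time the internal buffer fills up,
    // ByteArrayOutputStream.grow() calls Arrays.copyOf(), the top frame above.
    public class SerializationCopyDemo {
        public static void main(String[] args) throws IOException {
            byte[] payload = new byte[1_000_000]; // stand-in for a large serialized payload

            // The default constructor starts with a 32-byte buffer, so writing ~1 MiB
            // triggers ~15 doublings, each allocating and copying a new array.
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(payload);
            }
            byte[] serialized = bos.toByteArray(); // one final copy of the whole buffer
            System.out.println("serialized " + serialized.length + " bytes");
        }
    }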
| Comments |
| Comment by Robert Varga [ 14/Nov/18 ] |
|
These are normal – we cannot know the final size of the allocation until we have traversed the object. Doing a two-phase traversal wastes more CPU cycles than the incurred GC/copies, which use exponential resizing. |
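As a rough back-of-the-envelope illustration of the exponential-resize argument (numbers illustrative only, not measured): with doubling, the total bytes copied across all grow() calls stay below twice the final serialized size, i.e. amortized constant cost per byte written.

    // Illustrative only: total bytes copied when a buffer doubles from an initial
    // capacity up to a final size n is at most 2n, which is the trade-off described
    // above versus a two-phase (measure-then-serialize) traversal.
    public class ExponentialResizeCost {
        public static void main(String[] args) {
            long finalSize = 1_000_000;
            long capacity = 32;   // ByteArrayOutputStream default initial capacity
            long copied = 0;
            while (capacity < finalSize) {
                copied += capacity; // grow() copies the current array contents
                capacity *= 2;
            }
            System.out.println("bytes copied: " + copied + " (final size " + finalSize + ")");
        }
    }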
| Comment by Tom Pantelis [ 16/Nov/18 ] |
|
It would eliminate some re-allocations if JavaSerializer.toBinary specified a larger initial size for the ByteArrayOutputStream. This is similar to |
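A hedged sketch of the kind of change suggested here (the 4096-byte initial size is only a placeholder for illustration, not the value used in any actual fix): pre-size the ByteArrayOutputStream so typical payloads serialize without intermediate grow()/copyOf() cycles.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;

    // Sketch only: give the stream a generous initial capacity so typical payloads
    // are written without triggering ByteArrayOutputStream.grow().
    public final class PreSizedSerializer {
        private static final int INITIAL_BUFFER_SIZE = 4096; // placeholder value

        public static byte[] toBinary(Object obj) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream(INITIAL_BUFFER_SIZE);
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(obj);
            }
            return bos.toByteArray();
        }
    }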
| Comment by Tom Pantelis [ 03/Dec/18 ] |