CONTROLLER-2073: Ditch Akka persistence from sal-akka-raft


    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Medium
    • 10.0.0
    • None
    • Component: clustering

      Our current interplay between persistence and replication is rather flawed, as we allow far too many generic things to be pluggable through interfaces which are not designed to handle them.

      There are four aspects here:

      1. we need to provide the ability to efficiently send journal entries to remote peers
      2. we need to persist those journal entries to storage, the durability of which is optional, EXCEPT for ServerConfigurationPayloads, which are always durable (see the sketch after this list)
      3. we use a custom journal plugin, based on atomix.io's requirements – which boil down to providing RAFT on top of a rolling set of files. As Atomix has moved away from Java and now does its work in Go, we have adopted their code in CONTROLLER-2071.
      4. we use a custom snapshot plugin to store snapshots
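
      As a minimal illustration of the durability rule in (2) – the types below are placeholders for the sketch, not the actual sal-akka-raft classes:

      // Hypothetical sketch: durable storage is optional in general, but
      // ServerConfigurationPayloads must always be made durable.
      interface Payload {
      }

      final class ServerConfigurationPayload implements Payload {
          // cluster membership change; must never be lost
      }

      final class DurabilityPolicy {
          private final boolean persistenceEnabled;

          DurabilityPolicy(final boolean persistenceEnabled) {
              this.persistenceEnabled = persistenceEnabled;
          }

          // True when the entry has to hit stable storage before we act on it.
          boolean requiresDurability(final Payload payload) {
              return persistenceEnabled || payload instanceof ServerConfigurationPayload;
          }
      }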

      We use an on-heap cache of entries serialized to byte[] to serve (1), and we use Akka Persistence to provide (2) via a custom hack which overrides the normal durability knobs – all the while plugging (3) and (4) into it.
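
      Roughly, the current shape of (1) is an on-heap map from journal index to serialized bytes – the class below is a made-up illustration of that pattern, not the real implementation:

      import java.util.concurrent.ConcurrentNavigableMap;
      import java.util.concurrent.ConcurrentSkipListMap;

      // Every replicated entry is pinned on heap as a byte[] until all peers catch up.
      final class InMemoryEntryCache {
          private final ConcurrentNavigableMap<Long, byte[]> cache = new ConcurrentSkipListMap<>();

          void put(final long index, final byte[] serialized) {
              cache.put(index, serialized);
          }

          // Entries from fromIndex onwards, e.g. to build an AppendEntries batch.
          Iterable<byte[]> from(final long fromIndex) {
              return cache.tailMap(fromIndex, true).values();
          }

          // Free memory once all peers have acknowledged entries up to index.
          void evictUpTo(final long index) {
              cache.headMap(index, true).clear();
          }
      }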

      So we are dealing with 4 layers of indirection:

      • sal-distributed-datastore, talks to
      • sal-akka-raft, talks to
      • AbstractPersistentActor, talks to
      • sal-akka-segmented-journal, talks to
      • atomix-storage (which then talks to Kryo, but we are ditching that in CONTROLLER-2072)

      At the end of the day, the serialized view of each journal entry is available from atomix-storage, through the familiar growing 64-bit index, with the contents held in a (set of) memory-mapped files. When we need to send a stream of entries to a peer, we should be able to just replay them from there, without the need to keep anything at all on heap.
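
      A hedged sketch of what that replay could look like – EntryReader, PeerChannel and FollowerCatchUp are hypothetical stand-ins, not the actual atomix-storage or sal-akka-raft API:

      import java.io.IOException;
      import java.nio.ByteBuffer;

      interface EntryReader extends AutoCloseable {
          // Position the reader at the given journal index.
          void seek(long index) throws IOException;

          // Next serialized entry as a read-only view backed by the memory-mapped
          // segment, or null once the end of the journal is reached.
          ByteBuffer readNext() throws IOException;

          @Override
          void close() throws IOException;
      }

      interface PeerChannel {
          // Hand the bytes to the transport; no intermediate byte[] copy is made here.
          void send(ByteBuffer entry) throws IOException;
      }

      final class FollowerCatchUp {
          // Replay entries in [fromIndex, toIndex] to a lagging peer straight from storage.
          static void replay(final EntryReader reader, final PeerChannel peer,
                  final long fromIndex, final long toIndex) throws IOException {
              reader.seek(fromIndex);
              long index = fromIndex;
              ByteBuffer entry;
              while (index <= toIndex && (entry = reader.readNext()) != null) {
                  peer.send(entry);
                  index++;
              }
          }
      }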

      The second part of the picture here is the Akka Persistence protocol – which requires each object to be stored into a byte[] before anything happens to it. This absolutely sucks when combined with huge datastore writes and G1 GC's treatment of humongous objects (for which we already provide a workaround).

      What we really need is a streaming input/output interface, where on one side we have a Payload (say, a DataTree delta, in its native form) and on the other side we have a sequence of binary blocks (i.e. SegmentedJournal entries), with the translation between the two working in terms of DataInput/DataOutput (essentially), without intermediate byte[] or other buffers which would hold the entirety of the serialized form.
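
      One possible shape of that bridge, purely as a sketch – StreamablePayload, BlockSink and BlockingJournalOutput are invented names, and a real implementation would live inside the journal rather than buffer blocks like this:

      import java.io.DataOutput;
      import java.io.DataOutputStream;
      import java.io.IOException;
      import java.io.OutputStream;

      // A payload serializes itself directly to a DataOutput, never to a whole byte[].
      interface StreamablePayload {
          void writeTo(DataOutput out) throws IOException;
      }

      // An OutputStream which hands completed fixed-size blocks to the journal as it goes,
      // so the full serialized form of a large payload never exists as a single object.
      final class BlockingJournalOutput extends OutputStream {
          interface BlockSink {
              void acceptBlock(byte[] block, int length) throws IOException;
          }

          private final byte[] block;
          private final BlockSink sink;
          private int used;

          BlockingJournalOutput(final int blockSize, final BlockSink sink) {
              this.block = new byte[blockSize];
              this.sink = sink;
          }

          @Override
          public void write(final int b) throws IOException {
              block[used++] = (byte) b;
              if (used == block.length) {
                  flushBlock();
              }
          }

          @Override
          public void flush() throws IOException {
              // Emit any buffered bytes as a final, possibly short, block.
              if (used != 0) {
                  flushBlock();
              }
          }

          private void flushBlock() throws IOException {
              sink.acceptBlock(block, used);
              used = 0;
          }

          // Stream a payload into the sink in blockSize-d chunks.
          static void persist(final StreamablePayload payload, final BlockSink sink, final int blockSize)
                  throws IOException {
              try (DataOutputStream out = new DataOutputStream(new BlockingJournalOutput(blockSize, sink))) {
                  payload.writeTo(out);
              }
          }
      }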

      So, long story short, this issue is about ditching the use of AbstractPersistentActor and replacing it with an explicit interface very much aligned with what atomix-storage provides, i.e. storage of bytes identified by logical numbers which we can read again if the need arises.
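
      A rough sketch of what such an explicit interface could look like – EntryJournal and its method names are illustrative only, not a committed design:

      import java.io.IOException;
      import java.nio.ByteBuffer;

      // Byte-oriented journal keyed by a monotonically growing 64-bit index,
      // replacing the AbstractPersistentActor indirection.
      interface EntryJournal extends AutoCloseable {
          // Index which will be assigned to the next appended entry.
          long nextIndex();

          // Append serialized bytes and return the index they were stored under.
          long append(ByteBuffer bytes) throws IOException;

          // Re-read a previously appended entry, e.g. for replication or recovery.
          ByteBuffer read(long index) throws IOException;

          // Make everything up to and including index durable, e.g. for
          // ServerConfigurationPayloads which must never be lost.
          void syncTo(long index) throws IOException;

          // Discard entries up to and including index, e.g. after a snapshot.
          void discardTo(long index) throws IOException;

          @Override
          void close() throws IOException;
      }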

            Assignee: Unassigned
            Reporter: Robert Varga (rovarga)
