controller / CONTROLLER-1582

We should have a common event/message delivery configuration


    Details

    • Type: Improvement
    • Status: Open
    • Resolution: Unresolved
    • Affects Version/s: unspecified
    • Fix Version/s: None
    • Component/s: clustering
    • Labels:
      None
    • Environment:

      Operating System: All
      Platform: All

      Description

      As it stands today, how an event/message is delivered is largely not configurable; instead, it depends on what kind of event/message it is. We have at least 7 kinds of events today. See the notes below, from:
      https://lists.opendaylight.org/pipermail/dev/2016-September/002805.html

      At best this is frustrating, at worst it's causing core parts of OpenDaylight to have many different code paths which are poorly understood and potentially broken.

      Ideally, we could have some common base class for all of our events/messages listeners/registrations/handlers and then have them include some configuration about how they should be delivered. Even if not all configurations for all kinds of events/messages are allowed, the common language and code paths would help a lot.
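As a rough illustration of the idea, the common base could attach an explicit delivery policy to every listener registration. This is a minimal sketch only; none of these type names (`DeliveryConfig`, `EventRegistration`, the enums) exist in MD-SAL today and are purely hypothetical:

```java
public class DeliveryPolicySketch {

    /** Where an event is delivered (hypothetical). */
    enum Scope { LOCAL_NODE, SHARD_LEADER, ALL_NODES, REGISTERED_NODE }

    /** How many times an event may be delivered (hypothetical). */
    enum Guarantee { BEST_EFFORT, AT_MOST_ONCE, AT_LEAST_ONCE }

    /** Delivery configuration carried by a listener registration. */
    static final class DeliveryConfig {
        final Scope scope;
        final Guarantee guarantee;
        final boolean ordered;

        DeliveryConfig(Scope scope, Guarantee guarantee, boolean ordered) {
            this.scope = scope;
            this.guarantee = guarantee;
            this.ordered = ordered;
        }
    }

    /** Common base for all event/message listener registrations. */
    interface EventRegistration<T> extends AutoCloseable {
        DeliveryConfig deliveryConfig();
    }

    public static void main(String[] args) {
        // A YANG Notification listener expressed in the common language:
        // local-only, best-effort, unordered (matching today's behaviour).
        DeliveryConfig yangNotif =
            new DeliveryConfig(Scope.LOCAL_NODE, Guarantee.BEST_EFFORT, false);
        System.out.println(yangNotif.scope + " " + yangNotif.guarantee);
    }
}
```

Even if an implementation rejected some combinations (e.g. ALL_NODES with AT_LEAST_ONCE), the shared vocabulary would make the differences between event kinds explicit rather than implicit in separate code paths.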

      > MD-SAL has (at least) 7 kinds of events:
      > * YANG Notifications
      >   * delivered locally, only on the same node that raised them
      >   * best-effort delivery
      >   * code triggered
      > * Data (Tree) Change Notifications
      >   * delivered to the shard leader for the data that was changed
      >   * raised when data is changed in the data store
      >   * only triggered by the data store
      >   * boundaries of writes aren't necessarily preserved
      >   * on reboot, you get one big notification for all the data that was there before
      > * Clustered Data Change Notifications
      >   * same as Data (Tree) Change Notifications, but go to all nodes in the cluster
      >   * need another mechanism to suppress them on some nodes
      >     * the singleton service does this for you
      >   * unclear if delivery is zero-or-more, at-most-once, at-least-once, or something else
      > * Global RPCs (2 events)
      >   * delivered locally on the same node where the call was made
      > * Mounted RPCs ???
      >   * routed to the node with a NETCONF connection and forwarded
      > * Mounted YANG Notifications
      >   * can't get them via RESTCONF, but otherwise like YANG Notifications
      > * Routed RPCs
      >   * delivered to the (last or first, but effectively random) node that registered to handle it
      >   * if you're careful about who registers, you can govern where it goes
      >     * the singleton service does this for
      >     * otherwise,
      >
      >
      > Different delivery:
      > * we'd really like to have shard-leader delivery for improved performance
      >   * RPCs/requests end up where the data is
      >
      > If we agree this is a problem:
      > * we either need to clean up our mess
      > * or we could do that + rely on an off-the-shelf message bus
      >   * tracing, tapping, parsing, and plugging in from outside are all well-defined
      >   * we would get defined delivery semantics (both who and how many)
      >   * ordering between events
      >   * the OSGi event system exists and can bridge to anything
      >     * why don't we use this? at least at first?
      >
      > * brokered vs. brokerless?
      >   * brokered tends to give delivery guarantees, but has external requirements
      >
      > Potential issues:
      > * ordering: only Data Change Notifications are ordered
      > * delivery semantics: only Data Change Notifications are guaranteed
      > * performance: latency vs. throughput
      >   * could you make Java function call vs. message a runtime option, rather than a compile-time one?
      >   * currently we have apps that will make use of O(10^6) "messages"/sec
      >   * real users (AT&T) use O(10^3) in their deployment
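The seven event kinds quoted above could be collected into a single table of their current (hard-coded) delivery semantics. A sketch, with purely illustrative names that do not appear in the MD-SAL APIs:

```java
// Hypothetical summary of the delivery semantics each MD-SAL event kind
// hard-codes today, per the list in this issue's description.
public enum EventKind {
    YANG_NOTIFICATION          ("local node",             false),
    DATA_TREE_CHANGE           ("shard leader",           true),
    CLUSTERED_DATA_TREE_CHANGE ("all nodes",              true),
    GLOBAL_RPC                 ("calling node",           false),
    MOUNTED_RPC                ("NETCONF-connected node", false),
    MOUNTED_YANG_NOTIFICATION  ("local node",             false),
    ROUTED_RPC                 ("registering node",       false);

    private final String deliveredTo;
    private final boolean ordered; // only data change notifications are ordered

    EventKind(String deliveredTo, boolean ordered) {
        this.deliveredTo = deliveredTo;
        this.ordered = ordered;
    }

    public String deliveredTo() { return deliveredTo; }
    public boolean isOrdered() { return ordered; }
}
```

Making such a table an explicit, queryable part of the API, rather than folklore spread across seven code paths, is essentially what this issue proposes.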

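The last potential issue above asks whether "direct Java call vs. message" could be a runtime choice rather than a compile-time one. A minimal sketch of what that could look like, with hypothetical names (no such `Dispatcher` exists in MD-SAL):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Hypothetical sketch: the same listener can be invoked via a direct Java
// call (low latency, as needed for O(10^6) "messages"/sec apps) or via a
// queue (enabling tracing/tapping), chosen when the dispatcher is built
// rather than baked into each event kind's code path.
public class Dispatcher<T> {
    private final Consumer<T> listener;
    private final BlockingQueue<T> queue; // null means direct-call mode

    public Dispatcher(Consumer<T> listener, boolean viaQueue) {
        this.listener = listener;
        this.queue = viaQueue ? new LinkedBlockingQueue<>() : null;
    }

    public void dispatch(T event) {
        if (queue == null) {
            listener.accept(event); // plain method call, no copy or hand-off
        } else {
            queue.add(event);       // deferred; a drain step delivers later
        }
    }

    /** Deliver any queued events on the caller's thread (sketch only). */
    public void drain() {
        if (queue != null) {
            T event;
            while ((event = queue.poll()) != null) {
                listener.accept(event);
            }
        }
    }
}
```

In direct mode `dispatch` degenerates to a function call, so the queued path's overhead is only paid when message-bus semantics are actually requested.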
        Attachments


          Activity

            People

            Assignee:
            Unassigned
            Reporter:
            Colin Dixon (colindixon)
            Votes:
            0
            Watchers:
            3

              Dates

              Created:
              Updated: