[NETCONF-861] Netconf device mount with invalid payloads Created: 21/Feb/22  Updated: 08/Nov/22  Resolved: 02/Nov/22

Status: Resolved
Project: netconf
Component/s: None
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Medium
Reporter: Rohini Ambika Assignee: Yaroslav Lastivka
Resolution: Cannot Reproduce Votes: 0
Labels: pt
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified


 Description   

Encountered a couple of issues while mounting a device to the controller using the clustered topology features (odl-restconf-all, odl-netconf-clustered-topology):

  1. Mounting a device with an invalid IP in the payload - the payload is not validated and the device is not prevented from mounting. Also, no error messages are logged.
  2. Mounting with a host name in the payload - if we provide the host name of the device in the <host> field of the payload, the mount does not happen.

We would like to know whether these are expected behaviors or whether fixes need to be applied for these scenarios.

Error logs for 1 & 2: 

07:24:45.492 ERROR [opendaylight-cluster-data-notification-dispatcher-56] member-1-shard-topology-config: Error notifying listener org.opendaylight.mdsal.binding.dom.adapter.BindingClusteredDOMDataTreeChangeListenerAdapter@8a00d20
java.lang.NullPointerException: null
at java.util.Objects.requireNonNull(Objects.java:221) ~[?:?]
at org.opendaylight.netconf.topology.singleton.impl.NetconfTopologyManager.startNetconfDeviceContext(NetconfTopologyManager.java:186) ~[?:?]
at org.opendaylight.netconf.topology.singleton.impl.NetconfTopologyManager.onDataTreeChanged(NetconfTopologyManager.java:159) ~[?:?]
at org.opendaylight.mdsal.binding.dom.adapter.BindingDOMDataTreeChangeListenerAdapter.onDataTreeChanged(BindingDOMDataTreeChangeListenerAdapter.java:37) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.DataTreeChangeListenerActor.dataTreeChanged(DataTreeChangeListenerActor.java:84) ~[bundleFile:?]
at org.opendaylight.controller.cluster.datastore.DataTreeChangeListenerActor.handleReceive(DataTreeChangeListenerActor.java:45) ~[bundleFile:?]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) [bundleFile:?]
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) [bundleFile:?]
at scala.PartialFunction.applyOrElse(PartialFunction.scala:189) [bundleFile:?]
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:188) [bundleFile:?]
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) [bundleFile:?]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:244) [bundleFile:?]
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:245) [bundleFile:?]
at akka.actor.Actor.aroundReceive(Actor.scala:537) [bundleFile:?]
at akka.actor.Actor.aroundReceive$(Actor.scala:535) [bundleFile:?]
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) [bundleFile:?]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:580) [bundleFile:?]
at akka.actor.ActorCell.invoke(ActorCell.scala:548) [bundleFile:?]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) [bundleFile:?]
at akka.dispatch.Mailbox.run(Mailbox.scala:231) [bundleFile:?]
at akka.dispatch.Mailbox.exec(Mailbox.scala:243) [bundleFile:?]
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290) [?:?]
at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020) [?:?]
at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656) [?:?]
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594) [?:?]
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183) [?:?]


 Comments   
Comment by Rohini Ambika [ 28/Feb/22 ]

Hi rovarga, could you please check whether this is the expected behavior or whether we need fixes, so that we can look into it.

Comment by Rohini Ambika [ 05/Apr/22 ]

Added the below error messages to the logs when a payload with an invalid IP/invalid hostname is mounted:

ERROR | opendaylight-cluster-data-notification-dispatcher-34 | NetconfTopologyManager | 289 - org.opendaylight.netconf.topology-singleton - 2.0.14 | Unable to connect to device Uri{_nodeId} ,invalid payload

ERROR | opendaylight-cluster-data-notification-dispatcher-55 | NetconfTopologyManager | 289 - org.opendaylight.netconf.topology-singleton - 2.0.14 | Unable to connect to device Uri{_nodeId} ,invalid payload

 

Added error logs for the case when a device is mounted with multiple node-ids:

ERROR | opendaylight-cluster-data-notification-dispatcher-41 | NetconfTopologyManager | 289 - org.opendaylight.netconf.topology-singleton - 2.0.14 | RemoteDevice{Uri{_nodeId}} was already configured

Comment by Robert Varga [ 12/Jul/22 ]

This needs triaging on current master.

Comment by Yaroslav Lastivka [ 26/Oct/22 ]

I have sent a request with the payload as below

{
  "node": {
    "node-id": "test-node",
    "netconf-node-topology:host": "192.168.56.28",
    "netconf-node-topology:port": "17830",
    "netconf-node-topology:username":"admin",
    "netconf-node-topology:password":"admin",
    "netconf-node-topology:tcp-only":"false",
    "netconf-node-topology:keepalive-delay": "0"
  }
}

and with the hostname in the payload

{
  "node": {
    "node-id": "test-node",
    "netconf-node-topology:host": "admin4",
    "netconf-node-topology:port": "17830",
    "netconf-node-topology:username":"admin",
    "netconf-node-topology:password":"admin",
    "netconf-node-topology:tcp-only":"false",
    "netconf-node-topology:keepalive-delay": "0"
  }
} 

In both cases the connection was established successfully and everything works well.
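
For reference, a minimal sketch of how such a payload can be PUT to the topology, assuming an RFC 8040 style endpoint (/rests/data/...) on localhost:8181 and admin/admin credentials; both the URL and the credentials are assumptions, adjust them to your deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class MountDevice {
    public static void main(String[] args) throws Exception {
        // The same JSON body as in the comment above.
        String body = """
            {
              "node": {
                "node-id": "test-node",
                "netconf-node-topology:host": "192.168.56.28",
                "netconf-node-topology:port": "17830",
                "netconf-node-topology:username": "admin",
                "netconf-node-topology:password": "admin",
                "netconf-node-topology:tcp-only": "false",
                "netconf-node-topology:keepalive-delay": "0"
              }
            }""";
        String auth = Base64.getEncoder()
            .encodeToString("admin:admin".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8181/rests/data/"
                + "network-topology:network-topology/topology=topology-netconf/node=test-node"))
            .header("Content-Type", "application/json")
            .header("Authorization", "Basic " + auth)
            .PUT(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        // RFC 8040: 201 Created on first mount, 204 No Content on replace.
        System.out.println(response.statusCode());
    }
}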

Comment by Ivan Hrasko [ 26/Oct/22 ]

The host in the payload is validated according to the topology node YANG model; thus:

Mounting a device with invalid ip in the payload -  Payload is not validated and the device is not prevented from mounting. Also no error messages logged

is not a valid bug report.
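
For illustration: the host leaf uses the ietf-inet-types host union (ip-address | domain-name), so a value is accepted if it parses as either. Note that a string that is not a valid IP address may still be a syntactically valid domain name, which is presumably why an "invalid IP" can pass model validation. A Guava-based sketch of this check follows; it is a loose approximation for illustration, not the actual RESTCONF validation code:

import com.google.common.net.InetAddresses;
import com.google.common.net.InternetDomainName;

public final class HostCheck {
    // Roughly mirrors the ietf-inet-types host union: a literal IP address
    // or a syntactically valid domain name.
    static boolean looksLikeValidHost(String host) {
        return InetAddresses.isInetAddress(host) || InternetDomainName.isValid(host);
    }

    public static void main(String[] args) {
        System.out.println(looksLikeValidHost("192.168.56.28")); // true (IPv4 literal)
        System.out.println(looksLikeValidHost("admin4"));        // true (single-label domain name)
        System.out.println(looksLikeValidHost("not a host!"));   // false
    }
}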

Comment by Ivan Hrasko [ 26/Oct/22 ]

We were able to mount a device using a hostname in host; thus:

Mounting with host name in payload - If we provided host name of the device in <host> field in payload, mounting is not happening.

is not a valid bug report.

Comment by Yaroslav Lastivka [ 26/Oct/22 ]

I was able to create two nodes with the same IP and different node-ids:

{
    "network-topology:topology": [
        {
            "topology-id": "topology-netconf",
            "node": [
                {
                    "node-id": "test-node-1",
                    "netconf-node-topology:connection-status": "connected",
                    "netconf-node-topology:username": "admin",
                    "netconf-node-topology:password": "admin",
                    "netconf-node-topology:available-capabilities": {
                        "available-capability": [
                            {
                                "capability": "urn:ietf:params:netconf:capability:exi:1.0",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "urn:ietf:params:netconf:capability:candidate:1.0",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "urn:ietf:params:netconf:base:1.0",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "urn:ietf:params:netconf:base:1.1",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "(urn:opendaylight:params:xml:ns:yang:netconf:monitoring?revision=2022-07-18)odl-netconf-monitoring",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "(urn:ietf:params:xml:ns:yang:ietf-yang-types?revision=2013-07-15)ietf-yang-types",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "(urn:ietf:params:xml:ns:yang:ietf-inet-types?revision=2013-07-15)ietf-inet-types",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "(urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring?revision=2010-10-04)ietf-netconf-monitoring",
                                "capability-origin": "device-advertised"
                            }
                        ]
                    },
                    "netconf-node-topology:host": "192.168.56.28",
                    "netconf-node-topology:port": 17830,
                    "netconf-node-topology:clustered-connection-status": {
                        "netconf-master-node": "akka://opendaylight-cluster-data@192.168.56.25:2550"
                    },
                    "netconf-node-topology:tcp-only": false,
                    "netconf-node-topology:keepalive-delay": 0
                },
                {
                    "node-id": "test-node-2",
                    "netconf-node-topology:connection-status": "connected",
                    "netconf-node-topology:username": "admin",
                    "netconf-node-topology:password": "admin",
                    "netconf-node-topology:available-capabilities": {
                        "available-capability": [
                            {
                                "capability": "urn:ietf:params:netconf:capability:exi:1.0",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "urn:ietf:params:netconf:capability:candidate:1.0",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "urn:ietf:params:netconf:base:1.0",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "urn:ietf:params:netconf:base:1.1",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "(urn:opendaylight:params:xml:ns:yang:netconf:monitoring?revision=2022-07-18)odl-netconf-monitoring",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "(urn:ietf:params:xml:ns:yang:ietf-yang-types?revision=2013-07-15)ietf-yang-types",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "(urn:ietf:params:xml:ns:yang:ietf-inet-types?revision=2013-07-15)ietf-inet-types",
                                "capability-origin": "device-advertised"
                            },
                            {
                                "capability": "(urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring?revision=2010-10-04)ietf-netconf-monitoring",
                                "capability-origin": "device-advertised"
                            }
                        ]
                    },
                    "netconf-node-topology:host": "192.168.56.28",
                    "netconf-node-topology:port": 17830,
                    "netconf-node-topology:clustered-connection-status": {
                        "netconf-master-node": "akka://opendaylight-cluster-data@192.168.56.25:2550"
                    },
                    "netconf-node-topology:tcp-only": false,
                    "netconf-node-topology:keepalive-delay": 0
                }
            ]
        }
    ]
}

Comment by Yaroslav Lastivka [ 28/Oct/22 ]

The error is caused by patch https://git.opendaylight.org/gerrit/c/netconf/+/100400

Comment by Yaroslav Lastivka [ 28/Oct/22 ]

rohiniambika, can you please provide steps to reproduce the NullPointerException?

Comment by Ivan Hrasko [ 02/Nov/22 ]

The NPE occurs because of:

final NetconfNode netconfNode = requireNonNull(node.augmentation(NetconfNode.class));

The node is not a correct NetconfNode for unknown reasons. It could be a missing feature installation or a cluster misconfiguration. No idea...
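
A hypothetical defensive variant of that line would skip such nodes and log instead, in the spirit of the error messages quoted earlier in this ticket. This is a sketch rather than a proposed patch; LOG and the surrounding method context are assumptions:

// Skip nodes without the NetconfNode augmentation instead of failing the
// whole notification with an NPE.
final NetconfNode netconfNode = node.augmentation(NetconfNode.class);
if (netconfNode == null) {
    LOG.error("Unable to connect to device {}, invalid payload", node.getNodeId());
    return;
}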

Comment by Ivan Hrasko [ 02/Nov/22 ]

Cannot reproduce the issue. Config data are validated according to the YANG model and devices connect successfully. In addition, some of the problems reported here are caused by a "patch" provided by the bug reporter.
