[NETCONF-438] carbon: IllegalStateException: Can't create ProxyReadTransaction Created: 04/Jul/17  Updated: 31/Jan/22  Resolved: 31/Jan/22

Status: Resolved
Project: netconf
Component/s: netconf
Affects Version/s: None
Fix Version/s: None

Type: Bug
Reporter: Peter Gubka Assignee: Kostiantyn Nosach
Resolution: Cannot Reproduce Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Attachments: File [NETCONF-438] Steps to reproduce.rtf    
External issue ID: 8797

 Description   

It happens when accessing netconf-test-device's mount point.
It has been present in the last few jobs of https://jenkins.opendaylight.org/releng/view/netconf/job/netconf-csit-3node-clustering-all-carbon/.

E.g.
(#328) https://logs.opendaylight.org/releng/jenkins092/netconf-csit-3node-clustering-all-carbon/328/log.html.gz#s1-s3-t8-k2-k1-k2-k1-k4-k1
(#330) https://logs.opendaylight.org/releng/jenkins092/netconf-csit-3node-clustering-all-carbon/330/log.html.gz#s1-s3-t10-k2-k1-k2-k1-k4-k1
(#331) https://logs.opendaylight.org/releng/jenkins092/netconf-csit-3node-clustering-all-carbon/331/log.html.gz#s1-s8-t10-k2-k1-k2-k1-k4-k1

The netconf device is configured on node1, but the failure can probably happen on any node, since it occurred on node1 (#328) and on node3 (#330, #331).

The underlying "Caused by" exception differs between runs. In #331 it says
Caused by: akka.pattern.AskTimeoutException: Recipient[Actor[akka://opendaylight-cluster-data/user/akka.tcp:opendaylight-cluster-data@10.29.13.100:2550_netconf-test-device#-917568787]] had already been terminated. Sender[null] sent the message of type "org.opendaylight.netconf.topology.singleton.messages.transactions.NewReadTransactionRequest".

while in jobs #328 and #330 it says
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [5 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at scala.concurrent.Await.result(package.scala)
at org.opendaylight.netconf.topology.singleton.impl.ProxyDOMDataBroker.newReadOnlyTransaction(ProxyDOMDataBroker.java:68)
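The trace above shows `newReadOnlyTransaction` blocking on `Await.result` for a fixed 5-second timeout and failing when the master actor never replies. A minimal, hypothetical sketch of that pattern using plain `java.util.concurrent` instead of Akka's ask (the method name and exception message mirror the log; the rest of the implementation is a guess, and the sketch uses a 100 ms timeout instead of 5 seconds to keep it fast):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ProxyReadSketch {
    // Hypothetical stand-in for asking the master actor for a new read
    // transaction; never completes, simulating a terminated recipient.
    static CompletableFuture<Object> askMasterForReadTransaction() {
        return new CompletableFuture<>();
    }

    // Mirrors the observed pattern: block with a fixed timeout and wrap any
    // failure in IllegalStateException, which is what surfaces in the CSIT log.
    static Object newReadOnlyTransaction() {
        try {
            return askMasterForReadTransaction().get(100, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            throw new IllegalStateException("Can't create ProxyReadTransaction", e);
        }
    }

    public static void main(String[] args) {
        try {
            newReadOnlyTransaction();
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage() + ": " + e.getCause());
        }
    }
}
```

This is why both an `AskTimeoutException` (recipient already terminated) and a plain `TimeoutException` (no reply within the deadline) show up as the same top-level IllegalStateException.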

No Akka problems (such as unreachable nodes) are present in the log files.



 Comments   
Comment by Vratko Polak [ 24/Oct/17 ]

"IllegalStateException: Can't create ProxyReadTransaction" of the "Recipient had already been terminated." kind happened once again [4], this time on Oxygen.

Looking at the karaf log [5], I see an immediately preceding error:

2017-10-20 12:23:30,358 | ERROR | lt-dispatcher-39 | ImmediateFuture | 12 - com.google.guava - 22.0.0 | RuntimeException while executing runnable CallbackListener{org.opendaylight.netconf.topology.singleton.impl.actors.NetconfNodeActor$3@7af56307} with executor MoreExecutors.directExecutor()
java.lang.IllegalStateException: Mount point already exists
at com.google.common.base.Preconditions.checkState(Preconditions.java:456)[12:com.google.guava:22.0.0]
at org.opendaylight.controller.md.sal.dom.broker.impl.mount.DOMMountPointServiceImpl.createMountPoint(DOMMountPointServiceImpl.java:41)[256:org.opendaylight.controller.sal-broker-impl:1.7.0]
at Proxyd3b73199_9f0a_43a1_969f_51190269a8a8.createMountPoint(Unknown Source)[:]
at Proxybfaf194c_19df_4949_b4b0_544bb75affb1.createMountPoint(Unknown Source)[:]
at org.opendaylight.netconf.sal.connect.netconf.sal.NetconfDeviceSalProvider$MountInstance.onTopologyDeviceConnected(NetconfDeviceSalProvider.java:124)[313:org.opendaylight.netconf.sal-netconf-connector:1.7.0]
at org.opendaylight.netconf.topology.singleton.impl.SlaveSalFacade.registerSlaveMountPoint(SlaveSalFacade.java:49)[317:org.opendaylight.netconf.topology-singleton:1.4.0]
at org.opendaylight.netconf.topology.singleton.impl.actors.NetconfNodeActor$3.onSuccess(NetconfNodeActor.java:265)[317:org.opendaylight.netconf.topology-singleton:1.4.0]
at org.opendaylight.netconf.topology.singleton.impl.actors.NetconfNodeActor$3.onSuccess(NetconfNodeActor.java:261)[317:org.opendaylight.netconf.topology-singleton:1.4.0]
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1237)[12:com.google.guava:22.0.0]
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:399)[12:com.google.guava:22.0.0]
at com.google.common.util.concurrent.ImmediateFuture.addListener(ImmediateFuture.java:41)[12:com.google.guava:22.0.0]
at com.google.common.util.concurrent.Futures.addCallback(Futures.java:1209)[12:com.google.guava:22.0.0]
at org.opendaylight.netconf.topology.singleton.impl.actors.NetconfNodeActor.registerSlaveMountPoint(NetconfNodeActor.java:261)[317:org.opendaylight.netconf.topology-singleton:1.4.0]
at org.opendaylight.netconf.topology.singleton.impl.actors.NetconfNodeActor.onReceive(NetconfNodeActor.java:177)[317:org.opendaylight.netconf.topology-singleton:1.4.0]
at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:165)[99:com.typesafe.akka.actor:2.4.18]
at akka.actor.Actor$class.aroundReceive(Actor.scala:502)[99:com.typesafe.akka.actor:2.4.18]
at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:95)[99:com.typesafe.akka.actor:2.4.18]
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)[99:com.typesafe.akka.actor:2.4.18]
at akka.actor.ActorCell.invoke(ActorCell.scala:495)[99:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)[99:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.run(Mailbox.scala:224)[99:com.typesafe.akka.actor:2.4.18]
at akka.dispatch.Mailbox.exec(Mailbox.scala:234)[99:com.typesafe.akka.actor:2.4.18]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)[331:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)[331:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)[331:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)[331:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc]
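The trace shows `DOMMountPointServiceImpl.createMountPoint` rejecting a second registration via Guava's `Preconditions.checkState`. A hypothetical stand-alone sketch of that guard, using a plain `HashMap` registry in place of the real mount-point service (class and method names here are illustrative, not the actual API):

```java
import java.util.HashMap;
import java.util.Map;

public class MountPointRegistry {
    private final Map<String, Object> mountPoints = new HashMap<>();

    // Mirrors the checkState guard: a second createMountPoint for the same
    // identifier fails with the message seen in the karaf log.
    public void createMountPoint(String id, Object mountPoint) {
        if (mountPoints.putIfAbsent(id, mountPoint) != null) {
            throw new IllegalStateException("Mount point already exists");
        }
    }

    public void destroyMountPoint(String id) {
        mountPoints.remove(id);
    }

    public static void main(String[] args) {
        MountPointRegistry registry = new MountPointRegistry();
        registry.createMountPoint("netconf-test-device", new Object());
        try {
            // Two racing slave registrations would trip the guard like this.
            registry.createMountPoint("netconf-test-device", new Object());
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under the "two topology writers" theory below, both writers would attempt `registerSlaveMountPoint` for the same device, and the second attempt hits exactly this guard.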

I believe this is just another symptom (aside from NETCONF-479) of two netconf topology writers being installed, so a fix should be on the Int/Dist side.

[4] https://logs.opendaylight.org/releng/jenkins092/netconf-csit-3node-clustering-all-oxygen/22/log.html.gz#s1-s13-t5-k3-k1-k2-k1-k4-k1
[5] https://logs.opendaylight.org/releng/jenkins092/netconf-csit-3node-clustering-all-oxygen/22/odl2_karaf.log.gz

Generated at Wed Feb 07 20:15:02 UTC 2024 using Jira 8.20.10#820010-sha1:ace47f9899e9ee25d7157d59aa17ab06aee30d3d.