<!-- 
RSS generated by JIRA (8.20.10#820010-sha1:ace47f9899e9ee25d7157d59aa17ab06aee30d3d) at Wed Feb 07 19:56:24 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>OpenDaylight JIRA</title>
    <link>https://jira.opendaylight.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>8.20.10</version>
        <build-number>820010</build-number>
        <build-date>22-06-2022</build-date>
    </build-info>


<item>
            <title>[CONTROLLER-1763] On restarting ODL on one node, ODL on another node dies in a clustered setup</title>
                <link>https://jira.opendaylight.org/browse/CONTROLLER-1763</link>
                <project id="10113" key="CONTROLLER">controller</project>
                    <description>&lt;p&gt;Description of problem:&lt;br/&gt;
On running low stress longevity tests using Browbeat+Rally (creating 40 neutron resources 2 at a time and deleting them, over and over again), in a clustered ODL setup, ODL on controller-1 hits OOM after about 42 hours into the test. ODL on controller-2 is functional at that point but ODL on controller-0 seems to be running and ports are up but is non-functional (see BZ &lt;a href=&quot;https://bugzilla.redhat.com/show_bug.cgi?id=1486060&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://bugzilla.redhat.com/show_bug.cgi?id=1486060&lt;/a&gt;). When ODL on controller-0 is restarted to make it functional again at around 16:01 UTC 08/28/2017, ODL on controller-2 dies at around 16:04 UTC 08/28/2017. ODL on controller-1 which hit OOM is left alone.&lt;/p&gt;

&lt;p&gt;Here we can see the karaf process count going to 0 on controller-2 around 16:04 UTC 08/28/2017: &lt;a href=&quot;https://snapshot.raintank.io/dashboard/snapshot/chxdQkhAw3X8l9LS2HNNzCZGQHQvWubO?orgId=2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://snapshot.raintank.io/dashboard/snapshot/chxdQkhAw3X8l9LS2HNNzCZGQHQvWubO?orgId=2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The heap is dumped before the process dies, however it can be clearly seen that the 2G heap is not reached here: &lt;a href=&quot;https://snapshot.raintank.io/dashboard/snapshot/RMuDksXZ61ql2kMA47wqBHUXQeYWG05g?orgId=2&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://snapshot.raintank.io/dashboard/snapshot/RMuDksXZ61ql2kMA47wqBHUXQeYWG05g?orgId=2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Max heap used is around 1.4G &lt;/p&gt;


&lt;p&gt;Setup:&lt;br/&gt;
3 ODLs&lt;br/&gt;
3 OpenStack Controllers&lt;br/&gt;
3 Compute nodes&lt;/p&gt;

&lt;p&gt;ODL RPM from upstream: python-networking-odl-11.0.0-0.20170806093629.2e78dca.el7ost.noarch&lt;/p&gt;

&lt;p&gt;Test:&lt;br/&gt;
Create 40 neutron resources (routers, networks, etc.) 2 at a time using Rally and delete them, over and over again. This is a long-running, low-stress test.&lt;/p&gt;



&lt;p&gt;Additional info:&lt;br/&gt;
ODL Controller-0 Logs:&lt;br/&gt;
&lt;a href=&quot;http://8.43.86.1:8088/smalleni/karaf-controller-0.log.tar.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://8.43.86.1:8088/smalleni/karaf-controller-0.log.tar.gz&lt;/a&gt;&lt;br/&gt;
ODL Controller-1 Logs:&lt;br/&gt;
&lt;a href=&quot;http://8.43.86.1:8088/smalleni/karaf-controller-1.log.tar.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://8.43.86.1:8088/smalleni/karaf-controller-1.log.tar.gz&lt;/a&gt;&lt;br/&gt;
&lt;a href=&quot;http://8.43.86.1:8088/smalleni/karaf-controller-1-rollover.log.tar.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://8.43.86.1:8088/smalleni/karaf-controller-1-rollover.log.tar.gz&lt;/a&gt;&lt;br/&gt;
ODL Controller-2 Logs:&lt;br/&gt;
&lt;a href=&quot;http://8.43.86.1:8088/smalleni/karaf-controller-2.log.tar.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://8.43.86.1:8088/smalleni/karaf-controller-2.log.tar.gz&lt;/a&gt;&lt;/p&gt;</description>
                <environment>&lt;p&gt;Operating System: All&lt;br/&gt;
Platform: All&lt;/p&gt;</environment>
        <key id="26317">CONTROLLER-1763</key>
            <summary>On restarting ODL on one node, ODL on another node dies in a clustered setup</summary>
                <type id="10104" iconUrl="https://jira.opendaylight.org/secure/viewavatar?size=xsmall&amp;avatarId=10303&amp;avatarType=issuetype">Bug</type>
                                                <status id="5" iconUrl="https://jira.opendaylight.org/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="green"/>
                                    <resolution id="10003">Cannot Reproduce</resolution>
                                        <assignee username="vorburger">Michael Vorburger</assignee>
                                    <reporter username="smalleni@redhat.com">Sai Sindhur Malleni</reporter>
                        <labels>
                    </labels>
                <created>Mon, 28 Aug 2017 22:14:19 +0000</created>
                <updated>Fri, 15 Sep 2017 03:41:35 +0000</updated>
                            <resolved>Fri, 15 Sep 2017 03:41:35 +0000</resolved>
                                    <version>Carbon</version>
                                                    <component>clustering</component>
                        <due></due>
                            <votes>0</votes>
                                    <watches>6</watches>
                                                                                                                <comments>
                            <comment id="52670" author="vorburger" created="Tue, 29 Aug 2017 09:26:47 +0000"  >&lt;p&gt;&amp;gt; ODL on controller-2 dies&lt;br/&gt;
&amp;gt; ODL Controller-2 Logs: karaf-controller-2.log.tar.gz&lt;/p&gt;

&lt;p&gt;Interestingly, this log does NOT even have the &quot;famous last words before suicide&quot; re. &quot;... shutting down JVM...&quot;, as in &lt;a href=&quot;https://jira.opendaylight.org/browse/CONTROLLER-1761&quot; title=&quot;Uncaught error from thread ... shutting down JVM since &amp;#39;akka.jvm-exit-on-fatal-error&amp;#39; is enabled&quot; class=&quot;issue-link&quot; data-issue-key=&quot;CONTROLLER-1761&quot;&gt;&lt;del&gt;CONTROLLER-1761&lt;/del&gt;&lt;/a&gt; ... so it would be interesting to first better understand what it actually died of.  For example, if it were a &quot;OutOfMemoryError: unable to create new native thread&quot; then that would not be in regular logs, but probably would have gone to STDOUT by the JVM, which means on a RPM installed ODL started by a systemd service, it should appear e.g. in something like &quot;systemctl status opendaylight&quot; (or a better command to consult the full systemd journal of that service?) - can you provide us with what that has?&lt;/p&gt;

&lt;p&gt;Also, that log does however have a ton of these:&lt;/p&gt;

&lt;p&gt;    akka.pattern.AskTimeoutException: Ask timed out on &lt;a href=&quot;#236873956)]&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;ActorSelection[Anchor(akka://opendaylight-cluster-data/), Path(/user/shardmanager-operational/member-2-shard-default-operational#236873956)]&lt;/a&gt; after &lt;span class=&quot;error&quot;&gt;&amp;#91;30000 ms&amp;#93;&lt;/span&gt;. Sender&lt;span class=&quot;error&quot;&gt;&amp;#91;null&amp;#93;&lt;/span&gt; sent message of type &quot;org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransaction&lt;/p&gt;

&lt;p&gt;and then this:&lt;/p&gt;

&lt;p&gt;    186 - com.typesafe.akka.slf4j - 2.4.18 | Remote connection to null failed with java.net.ConnectException: Connection refused: /172.16.0.18:2550&lt;/p&gt;

&lt;p&gt;meaning the cluster is in real bad shape here anyway...&lt;/p&gt;</comment>
                            <comment id="52671" author="smalleni@redhat.com" created="Tue, 29 Aug 2017 11:23:32 +0000"  >&lt;p&gt;Michael,&lt;/p&gt;

&lt;p&gt;We had to redeploy the environment, so I no longer have access to it. Yeah, and it doesn&apos;t seem to be OOM, since we have the Grafana chart showing that the heap didn&apos;t go anywhere close to 2G.&lt;/p&gt;</comment>
                            <comment id="52672" author="smalleni@redhat.com" created="Tue, 29 Aug 2017 11:24:24 +0000"  >&lt;p&gt;ODL RPM used for the sake of completeness: opendaylight-6.2.0-0.1.20170817rel1931.el7.noarch&lt;/p&gt;</comment>
                            <comment id="52673" author="vorburger" created="Wed, 6 Sep 2017 16:35:38 +0000"  >&lt;p&gt;Wondering if &lt;a href=&quot;https://jira.opendaylight.org/browse/CONTROLLER-1755&quot; title=&quot;RaftActor lastApplied index moves backwards&quot; class=&quot;issue-link&quot; data-issue-key=&quot;CONTROLLER-1755&quot;&gt;&lt;del&gt;CONTROLLER-1755&lt;/del&gt;&lt;/a&gt; may have helped fix this - let&apos;s re-test and confirm whether it is still seen.&lt;/p&gt;</comment>
                            <comment id="52674" author="vorburger" created="Wed, 13 Sep 2017 13:02:27 +0000"  >&lt;p&gt;&amp;gt; Jamo Luhrsen 2017-09-12 17:45:28 UTC&lt;br/&gt;
&amp;gt; *** This bug has been marked as a duplicate of &lt;a href=&quot;https://jira.opendaylight.org/browse/CONTROLLER-1756&quot; title=&quot;OOM due to huge Map in ShardDataTree&quot; class=&quot;issue-link&quot; data-issue-key=&quot;CONTROLLER-1756&quot;&gt;&lt;del&gt;CONTROLLER-1756&lt;/del&gt;&lt;/a&gt; ***&lt;/p&gt;

&lt;p&gt;Jamo, I&apos;m not 100% convinced that Sridhar and Stephen agree ...&lt;/p&gt;</comment>
                            <comment id="52675" author="vorburger" created="Wed, 13 Sep 2017 13:43:22 +0000"  >&lt;p&gt;Based on internal discussions, we would like to re-open this, remove the dupe to the OOM issue which is possibly causing more confusion (we&apos;re NOT sure it is linked to OOM), and let Sridhar re-confirm whether this is indeed solved now (by the OOM fix) or still an issue.&lt;/p&gt;</comment>
                            <comment id="52676" author="vorburger" created="Wed, 13 Sep 2017 16:46:18 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.opendaylight.org/browse/CONTROLLER-1751&quot; title=&quot;Sporadic cluster failure when member is restarted in OF cluster test&quot; class=&quot;issue-link&quot; data-issue-key=&quot;CONTROLLER-1751&quot;&gt;&lt;del&gt;CONTROLLER-1751&lt;/del&gt;&lt;/a&gt; seems related, or this may even be a dupe of 9006, except Sridhar can now apparently easily reproduce it, it&apos;s not so &quot;sporadic&quot; as that one is reported to be (full details coming up from him)...&lt;/p&gt;</comment>
                            <comment id="52677" author="sgaddam@redhat.com" created="Wed, 13 Sep 2017 17:19:30 +0000"  >&lt;p&gt;In a fresh ODL Cluster setup with 3 controllers and 3 compute nodes, we observed that after bringing down two Controllers, the third controller was not responding to any curl requests&lt;span class=&quot;error&quot;&gt;&amp;#91;#&amp;#93;&lt;/span&gt; even though &quot;systemctl status opendaylight&quot; shows that it&apos;s running.&lt;/p&gt;

&lt;p&gt;Steps used to reproduce the scenario:&lt;br/&gt;
1. In the stable cluster the leader was on controller-1&lt;br/&gt;
2. Stopped (i.e., systemctl stop opendaylight) ODL on controller-1 (i.e., cluster leader)&lt;br/&gt;
3. Observed that cluster leader moved to controller-2&lt;br/&gt;
4. Started (i.e., systemctl start opendaylight) ODL on controller-1. Observed that cluster leader remained on controller-2&lt;br/&gt;
5. Stopped/started ODL on controller-1 a couple of times. Did not see any issue; things &lt;span class=&quot;error&quot;&gt;&amp;#91;#&amp;#93;&lt;/span&gt; looked normal.&lt;br/&gt;
6. Now stopped ODL on controller-1 and controller-2 (which was the leader).&lt;br/&gt;
7. I could see that ODL running on Controller-0 is no longer responding to any curl requests&lt;span class=&quot;error&quot;&gt;&amp;#91;#&amp;#93;&lt;/span&gt;, even though systemctl shows that it&apos;s running.&lt;/p&gt;

&lt;p&gt;PFA &lt;span class=&quot;error&quot;&gt;&amp;#91;*&amp;#93;&lt;/span&gt; the logs from controller-0, 1 and 2.&lt;br/&gt;
Step 6 (i.e., ODL on controller-2 was stopped) at &quot;2017-09-13 12:13:03&quot; time.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;#&amp;#93;&lt;/span&gt; curl -s -u &quot;admin:admin&quot; -X GET &lt;a href=&quot;http://$ODL_IP:8081/restconf/operational/network-topology:network-topology/topology/netvirt:1&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://$ODL_IP:8081/restconf/operational/network-topology:network-topology/topology/netvirt:1&lt;/a&gt; | python -mjson.tool&lt;br/&gt;
&lt;span class=&quot;error&quot;&gt;&amp;#91;*&amp;#93;&lt;/span&gt; controller0-karaf.log, controller1-karaf.log and controller2-karaf.log&lt;/p&gt;

&lt;p&gt;PS: We could see the following exceptions in Karaf.log&lt;br/&gt;
307:2017-09-13 10:35:34,770 | WARN  | saction-28-30&apos;}} | DeadlockMonitor                  | 134 - org.opendaylight.controller.config-manager - 0.6.2.SNAPSHOT | ModuleIdentifier{factoryName=&apos;binding-broker-impl&apos;, instanceName=&apos;binding-broker-impl&apos;} did not finish after 9964 ms&lt;/p&gt;

&lt;p&gt;akka.pattern.AskTimeoutException: Ask timed out on &lt;a href=&quot;#1759700434)]&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;ActorSelection[Anchor(akka://opendaylight-cluster-data/), Path(/user/shardmanager-operational/member-0-shard-default-operational#1759700434)]&lt;/a&gt; after &lt;span class=&quot;error&quot;&gt;&amp;#91;30000 ms&amp;#93;&lt;/span&gt;. Sender&lt;span class=&quot;error&quot;&gt;&amp;#91;null&amp;#93;&lt;/span&gt; sent message of type &quot;org.opendaylight.controller.cluster.datastore.messages.ReadyLocalTransaction&quot;.&lt;br/&gt;
        at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)&lt;span class=&quot;error&quot;&gt;&amp;#91;185:com.typesafe.akka.actor:2.4.18&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)&lt;span class=&quot;error&quot;&gt;&amp;#91;185:com.typesafe.akka.actor:2.4.18&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)&lt;span class=&quot;error&quot;&gt;&amp;#91;181:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)&lt;span class=&quot;error&quot;&gt;&amp;#91;181:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)&lt;span class=&quot;error&quot;&gt;&amp;#91;181:org.scala-lang.scala-library:2.11.11.v20170413-090219-8a413ba7cc&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)&lt;span class=&quot;error&quot;&gt;&amp;#91;185:com.typesafe.akka.actor:2.4.18&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)&lt;span class=&quot;error&quot;&gt;&amp;#91;185:com.typesafe.akka.actor:2.4.18&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)&lt;span class=&quot;error&quot;&gt;&amp;#91;185:com.typesafe.akka.actor:2.4.18&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)&lt;span class=&quot;error&quot;&gt;&amp;#91;185:com.typesafe.akka.actor:2.4.18&amp;#93;&lt;/span&gt;&lt;br/&gt;
        at java.lang.Thread.run(Thread.java:748)&lt;span class=&quot;error&quot;&gt;&amp;#91;:1.8.0_141&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;2017-09-13 12:13:29,166 | WARN  | lt-dispatcher-67 | NettyTransport                   | 186 - com.typesafe.akka.slf4j - 2.4.18 | Remote connection to null failed with java.net.ConnectException: Connection refused: /172.16.0.21:2550&lt;/p&gt;</comment>
                            <comment id="52678" author="tpantelis" created="Wed, 13 Sep 2017 17:22:41 +0000"  >&lt;p&gt;With 2 nodes down, there&apos;s no longer a majority so datastore requests will fail.&lt;/p&gt;</comment>
                            <comment id="52684" author="sgaddam@redhat.com" created="Wed, 13 Sep 2017 17:24:22 +0000"  >&lt;p&gt;Attachment controller0-karaf.tar.gz has been added with description: Karaf.log on Controller0 node&lt;/p&gt;</comment>
                            <comment id="52685" author="sgaddam@redhat.com" created="Wed, 13 Sep 2017 17:25:39 +0000"  >&lt;p&gt;Attachment controller1-karaf.tar.gz has been added with description: Karaf.log on Controller1 node&lt;/p&gt;</comment>
                            <comment id="52686" author="sgaddam@redhat.com" created="Wed, 13 Sep 2017 17:26:06 +0000"  >&lt;p&gt;Attachment controller2-karaf.tar.gz has been added with description: Karaf.log on Controller2 node&lt;/p&gt;</comment>
                            <comment id="52679" author="tpantelis" created="Wed, 13 Sep 2017 17:30:42 +0000"  >&lt;p&gt;2017-09-13 12:13:48,723 | ERROR | lt-dispatcher-57 | LocalThreePhaseCommitCohort      | 211 - org.opendaylight.controller.sal-distributed-datastore - 1.5.2.SNAPSHOT | Failed to prepare transaction member-0-datastore-operational-fe-0-chn-8-txn-0-0 on backend&lt;br/&gt;
ReadFailedException{message=Error executeRead ReadData for path /(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)network-topology/topology/topology[{(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)topology-id=ovsdb:1}]/node/node[{(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)node-id=ovsdb://uuid/8f8c62e7-b501-46ef-b462-c430bedaa2a2}], errorList=[RpcError [message=Error executeRead ReadData for path /(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)network-topology/topology/topology[{(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)topology-id=ovsdb:1}]/node/node[{(urn:TBD:params:xml:ns:yang:network-topology?revision=2013-10-21)node-id=ovsdb://uuid/8f8c62e7-b501-46ef-b462-c430bedaa2a2}], severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.controller.md.sal.common.api.data.DataStoreUnavailableException: Could not commit transaction member-0-datastore-operational-fe-0-chn-8-txn-0-0. Shard member-0-shard-default-operational currently has no leader. Try again later.]]}&lt;/p&gt;

&lt;p&gt;That&apos;s expected with 2 nodes down. Datastore requests will time out after a period of time after some retries.&lt;/p&gt;</comment>
                            <comment id="52680" author="tpantelis" created="Thu, 14 Sep 2017 20:10:41 +0000"  >&lt;p&gt;Based on my last comment, is there still an issue here that needs to be addressed or can we close it?&lt;/p&gt;</comment>
                            <comment id="52681" author="vorburger" created="Thu, 14 Sep 2017 22:52:08 +0000"  >&lt;p&gt;This issue now seems to have confused 2 different things, as far as I understand:&lt;/p&gt;

&lt;p&gt;Sridhar&apos;s #c9 is a non-issue according to Tom&apos;s explanation in #c10 &amp;amp; #c14.&lt;/p&gt;

&lt;p&gt;Sai&apos;s original #c0 is about a mysterious death of a cluster node.  That seems curious and would be interesting to understand better, but is not actionable without further information requested by me in #c1.&lt;/p&gt;

&lt;p&gt;To unblock Nitrogen release, and reduce general confusion, I&apos;m therefore temporarily closing this as RESOLVED INVALID.  I&apos;m intending to re-open this issue if and when new information (req. in #c1) re. the mysterious death of a node becomes available.&lt;/p&gt;</comment>
                            <comment id="52682" author="smalleni@redhat.com" created="Fri, 15 Sep 2017 00:58:08 +0000"  >&lt;p&gt;Agree with Michael in Comment #16, these are two separate issues. I haven&apos;t been able to reproduce the scenario of mysterious death of ODL on one node when ODL on another node is restarted (original issue addressed by this bug). I am fine with resolving the bug as invalid for now and will hopefully get more actionable data in the next round of scale testing.&lt;/p&gt;</comment>
                            <comment id="52683" author="sgaddam@redhat.com" created="Fri, 15 Sep 2017 03:41:35 +0000"  >&lt;p&gt;(In reply to Sai Sindhur Malleni from comment #17)&lt;br/&gt;
&amp;gt; Agree with Michael in Comment #16, these are two separate issues. I haven&apos;t&lt;br/&gt;
&amp;gt; been able to reproduce the scenario of mysterious death of ODL on one node&lt;br/&gt;
&amp;gt; when ODL on another node is restarted (original issue addressed by this&lt;br/&gt;
&amp;gt; bug). I am fine with resolving the bug as invalid for now and will hopefully&lt;br/&gt;
&amp;gt; get more actionable data in the next round of scale testing.&lt;/p&gt;

&lt;p&gt;I&apos;m fine with closing this bug as Invalid, and we can revisit it later (if reproduced again).&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10000">
                    <name>Blocks</name>
                                                                <inwardlinks description="is blocked by">
                                        <issuelink>
            <issuekey id="26309">CONTROLLER-1755</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="26310">CONTROLLER-1756</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="13682" name="controller0-karaf.tar.gz" size="122440" author="SridharG" created="Wed, 13 Sep 2017 17:24:22 +0000"/>
                            <attachment id="13683" name="controller1-karaf.tar.gz" size="115768" author="SridharG" created="Wed, 13 Sep 2017 17:25:39 +0000"/>
                            <attachment id="13684" name="controller2-karaf.tar.gz" size="82267" author="SridharG" created="Wed, 13 Sep 2017 17:26:06 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                            <customfield id="customfield_11400" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10208" key="com.atlassian.jira.plugin.system.customfieldtypes:textfield">
                        <customfieldname>External issue ID</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9064</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10201" key="com.atlassian.jira.plugin.system.customfieldtypes:url">
                        <customfieldname>External issue URL</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[https://bugs.opendaylight.org/show_bug.cgi?id=9064]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                <customfield id="customfield_10204" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>ODL SR Target Milestone</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10320"><![CDATA[Nitrogen]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10202" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Priority</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10313"><![CDATA[Highest]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10000" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>0|i02skv:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>