<!-- 
RSS generated by JIRA (8.20.10#820010-sha1:ace47f9899e9ee25d7157d59aa17ab06aee30d3d) at Wed Feb 07 20:23:01 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>OpenDaylight JIRA</title>
    <link>https://jira.opendaylight.org</link>
    <description>This file is an XML representation of an issue</description>
<language>en-us</language>
    <build-info>
        <version>8.20.10</version>
        <build-number>820010</build-number>
        <build-date>22-06-2022</build-date>
    </build-info>


<item>
            <title>[NETVIRT-1015] NatEvpnUtil: getExtNwProvTypeFromRouterName : external network UUID is not available for router 53678e60-fadf-4eac-ac7c-23fad8c99dc2</title>
                <link>https://jira.opendaylight.org/browse/NETVIRT-1015</link>
                <project id="10144" key="NETVIRT">netvirt</project>
                    <description>&lt;p&gt;3-node tests where ODL1 is taken down and brought back into service. The following occurs when ODL1 is brought back up.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://logs.opendaylight.org/releng/jenkins092/netvirt-csit-3node-openstack-ocata-upstream-stateful-carbon/184/odl_1/odl1_err_warn_exception.log.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://logs.opendaylight.org/releng/jenkins092/netvirt-csit-3node-openstack-ocata-upstream-stateful-carbon/184/odl_1/odl1_err_warn_exception.log.gz&lt;/a&gt;&lt;/p&gt;


&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2017-11-20 03:43:35,420 | ERROR | eChangeHandler-0 | NeutronvpnManager                | 338 - org.opendaylight.netvirt.neutronvpn-impl - 0.4.3.SNAPSHOT | createSubnetmapNode: Subnetmap node for subnet ID 1b71e2fd-772e-4984-ac49-091ffcd5f8ec already exists, returning
2017-11-20 03:43:35,482 | WARN  | eChangeHandler-0 | CentralizedSwitchChangeListener  | 334 - org.opendaylight.netvirt.vpnmanager-impl - 0.4.3.SNAPSHOT | No router data found for router id 846ad560-cd27-4e81-a81f-f97248ff8509
2017-11-20 03:43:35,501 | ERROR | eChangeHandler-0 | NeutronvpnManager                | 338 - org.opendaylight.netvirt.neutronvpn-impl - 0.4.3.SNAPSHOT | createSubnetmapNode: Subnetmap node for subnet ID 22a1844a-1435-4738-b9d6-0fed96d8c59e already exists, returning
2017-11-20 03:43:35,558 | ERROR | eChangeHandler-0 | AsyncDataTreeChangeListenerBase  | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Thread terminated due to uncaught exception: AsyncDataTreeChangeListenerBase-DataTreeChangeHandler-0
java.lang.NullPointerException
2017-11-20 03:43:45,484 | ERROR | eChangeHandler-0 | NatEvpnUtil                      | 343 - org.opendaylight.netvirt.natservice-impl - 0.4.3.SNAPSHOT | getExtNwProvTypeFromRouterName : external network UUID is not available for router 53678e60-fadf-4eac-ac7c-23fad8c99dc2
2017-11-20 03:43:45,491 | ERROR | eChangeHandler-0 | NatEvpnUtil                      | 343 - org.opendaylight.netvirt.natservice-impl - 0.4.3.SNAPSHOT | getExtNwProvTypeFromRouterName : external network UUID is not available for router 53678e60-fadf-4eac-ac7c-23fad8c99dc2
2017-11-20 03:43:45,492 | ERROR | eChangeHandler-0 | NatEvpnUtil                      | 343 - org.opendaylight.netvirt.natservice-impl - 0.4.3.SNAPSHOT | getExtNwProvTypeFromRouterName : external network UUID is not available for router 53678e60-fadf-4eac-ac7c-23fad8c99dc2
2017-11-20 03:43:45,492 | ERROR | eChangeHandler-0 | NatEvpnUtil                      | 343 - org.opendaylight.netvirt.natservice-impl - 0.4.3.SNAPSHOT | getExtNwProvTypeFromRouterName : external network UUID is not available for router 53678e60-fadf-4eac-ac7c-23fad8c99dc2
2017-11-20 03:43:45,493 | ERROR | eChangeHandler-0 | NatEvpnUtil                      | 343 - org.opendaylight.netvirt.natservice-impl - 0.4.3.SNAPSHOT | getExtNwProvTypeFromRouterName : external network UUID is not available for router 53678e60-fadf-4eac-ac7c-23fad8c99dc2
2017-11-20 03:43:45,493 | ERROR | eChangeHandler-0 | NatEvpnUtil                      | 343 - org.opendaylight.netvirt.natservice-impl - 0.4.3.SNAPSHOT | getExtNwProvTypeFromRouterName : external network UUID is not available for router 53678e60-fadf-4eac-ac7c-23fad8c99dc2
2017-11-20 03:43:45,559 | WARN  | nPool-1-worker-0 | NeutronPortChangeListener        | 338 - org.opendaylight.netvirt.neutronvpn-impl - 0.4.3.SNAPSHOT | Interface 58c61dda-b7fb-45b1-a02e-ebcfc133425a is already present
2017-11-20 03:43:45,592 | ERROR | eChangeHandler-0 | VpnSubnetRouteHandler            | 334 - org.opendaylight.netvirt.vpnmanager-impl - 0.4.3.SNAPSHOT | SUBNETROUTE: onSubnetAddedToVpn: SubnetOpDataEntry for subnet 1b71e2fd-772e-4984-ac49-091ffcd5f8ec with ip 90.0.0.0/24 and vpn 53678e60-fadf-4eac-ac7c-23fad8c99dc2 already detected to be present
2017-11-20 03:43:45,692 | ERROR | eChangeHandler-0 | VpnSubnetRouteHandler            | 334 - org.opendaylight.netvirt.vpnmanager-impl - 0.4.3.SNAPSHOT | SUBNETROUTE: onSubnetAddedToVpn: SubnetOpDataEntry for subnet 22a1844a-1435-4738-b9d6-0fed96d8c59e with ip 100.0.0.0/24 and vpn 53678e60-fadf-4eac-ac7c-23fad8c99dc2 already detected to be present
2017-11-20 03:43:45,728 | WARN  | nPool-1-worker-2 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Job VPN-53678e60-fadf-4eac-ac7c-23fad8c99dc2 took 10175ms to complete
2017-11-20 03:43:45,732 | WARN  | nPool-1-worker-1 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Job VPN-ba01eac7-29b1-481a-807d-f67c14abc058 took 10173ms to complete
2017-11-20 03:43:45,844 | WARN  | nPool-1-worker-3 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Job 420d899d-379d-4a5e-8cba-9bdfe0ee0fd9 took 10287ms to complete
2017-11-20 03:43:45,899 | ERROR | nPool-1-worker-2 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Exception when executing jobEntry: JobEntry{key=&apos;afec949a-c73a-4006-a10a-89fc7ee70721&apos;, mainWorker=ItmTepAddWorker  { Configured Dpn List : [DPNTEPsInfo [_dPNID=88256775084985, _key=DPNTEPsInfoKey [_dPNID=88256775084985], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=88256775084985:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=afec949a-c73a-4006-a10a-89fc7ee70721], _zoneName=afec949a-c73a-4006-a10a-89fc7ee70721, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=247085905742467, _key=DPNTEPsInfoKey [_dPNID=247085905742467], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=247085905742467:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class 
org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=afec949a-c73a-4006-a10a-89fc7ee70721], _zoneName=afec949a-c73a-4006-a10a-89fc7ee70721, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=176858005830847, _key=DPNTEPsInfoKey [_dPNID=176858005830847], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=176858005830847:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=afec949a-c73a-4006-a10a-89fc7ee70721], _zoneName=afec949a-c73a-4006-a10a-89fc7ee70721, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]]] }, rollbackWorker=null, retryCount=0, futures=null}
java.util.ConcurrentModificationException
2017-11-20 03:43:45,942 | ERROR | nPool-1-worker-3 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Exception when executing jobEntry: JobEntry{key=&apos;d7bb65a1-4ae7-4691-907b-8cdbf54a2d76&apos;, mainWorker=ItmTepAddWorker  { Configured Dpn List : [DPNTEPsInfo [_dPNID=88256775084985, _key=DPNTEPsInfoKey [_dPNID=88256775084985], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=88256775084985:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=d7bb65a1-4ae7-4691-907b-8cdbf54a2d76], _zoneName=d7bb65a1-4ae7-4691-907b-8cdbf54a2d76, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=247085905742467, _key=DPNTEPsInfoKey [_dPNID=247085905742467], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=247085905742467:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class 
org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=d7bb65a1-4ae7-4691-907b-8cdbf54a2d76], _zoneName=d7bb65a1-4ae7-4691-907b-8cdbf54a2d76, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=176858005830847, _key=DPNTEPsInfoKey [_dPNID=176858005830847], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=176858005830847:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=d7bb65a1-4ae7-4691-907b-8cdbf54a2d76], _zoneName=d7bb65a1-4ae7-4691-907b-8cdbf54a2d76, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]]] }, rollbackWorker=null, retryCount=0, futures=null}
java.util.ConcurrentModificationException
2017-11-20 03:43:45,944 | ERROR | nPool-1-worker-3 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Exception when executing jobEntry: JobEntry{key=&apos;53678e60-fadf-4eac-ac7c-23fad8c99dc2&apos;, mainWorker=ItmTepAddWorker  { Configured Dpn List : [DPNTEPsInfo [_dPNID=88256775084985, _key=DPNTEPsInfoKey [_dPNID=88256775084985], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=88256775084985:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=53678e60-fadf-4eac-ac7c-23fad8c99dc2], _zoneName=53678e60-fadf-4eac-ac7c-23fad8c99dc2, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=247085905742467, _key=DPNTEPsInfoKey [_dPNID=247085905742467], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=247085905742467:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class 
org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=53678e60-fadf-4eac-ac7c-23fad8c99dc2], _zoneName=53678e60-fadf-4eac-ac7c-23fad8c99dc2, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=176858005830847, _key=DPNTEPsInfoKey [_dPNID=176858005830847], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=176858005830847:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=53678e60-fadf-4eac-ac7c-23fad8c99dc2], _zoneName=53678e60-fadf-4eac-ac7c-23fad8c99dc2, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]]] }, rollbackWorker=null, retryCount=0, futures=null}
java.util.ConcurrentModificationException
2017-11-20 03:43:45,947 | ERROR | nPool-1-worker-3 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Exception when executing jobEntry: JobEntry{key=&apos;d8ef35ed-8282-42f9-ab00-5c424c01ddbf&apos;, mainWorker=ItmTepAddWorker  { Configured Dpn List : [DPNTEPsInfo [_dPNID=88256775084985, _key=DPNTEPsInfoKey [_dPNID=88256775084985], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=88256775084985:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=d8ef35ed-8282-42f9-ab00-5c424c01ddbf], _zoneName=d8ef35ed-8282-42f9-ab00-5c424c01ddbf, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=247085905742467, _key=DPNTEPsInfoKey [_dPNID=247085905742467], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=247085905742467:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class 
org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=d8ef35ed-8282-42f9-ab00-5c424c01ddbf], _zoneName=d8ef35ed-8282-42f9-ab00-5c424c01ddbf, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=176858005830847, _key=DPNTEPsInfoKey [_dPNID=176858005830847], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=176858005830847:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=d8ef35ed-8282-42f9-ab00-5c424c01ddbf], _zoneName=d8ef35ed-8282-42f9-ab00-5c424c01ddbf, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]]] }, rollbackWorker=null, retryCount=0, futures=null}
java.util.ConcurrentModificationException
2017-11-20 03:43:45,954 | ERROR | nPool-1-worker-3 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Exception when executing jobEntry: JobEntry{key=&apos;1c6a94be-45cd-4c3a-b86e-3b9217ab9205&apos;, mainWorker=ItmTepAddWorker  { Configured Dpn List : [DPNTEPsInfo [_dPNID=88256775084985, _key=DPNTEPsInfoKey [_dPNID=88256775084985], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=88256775084985:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=1c6a94be-45cd-4c3a-b86e-3b9217ab9205], _zoneName=1c6a94be-45cd-4c3a-b86e-3b9217ab9205, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=247085905742467, _key=DPNTEPsInfoKey [_dPNID=247085905742467], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=247085905742467:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class 
org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=1c6a94be-45cd-4c3a-b86e-3b9217ab9205], _zoneName=1c6a94be-45cd-4c3a-b86e-3b9217ab9205, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=176858005830847, _key=DPNTEPsInfoKey [_dPNID=176858005830847], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=176858005830847:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=1c6a94be-45cd-4c3a-b86e-3b9217ab9205], _zoneName=1c6a94be-45cd-4c3a-b86e-3b9217ab9205, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]]] }, rollbackWorker=null, retryCount=0, futures=null}
java.util.ConcurrentModificationException
2017-11-20 03:43:45,986 | ERROR | nPool-1-worker-2 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Exception when executing jobEntry: JobEntry{key=&apos;332fa0a2-c555-4219-bb74-85a0f54857fe&apos;, mainWorker=ItmTepAddWorker  { Configured Dpn List : [DPNTEPsInfo [_dPNID=88256775084985, _key=DPNTEPsInfoKey [_dPNID=88256775084985], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=88256775084985:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=332fa0a2-c555-4219-bb74-85a0f54857fe], _zoneName=332fa0a2-c555-4219-bb74-85a0f54857fe, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=247085905742467, _key=DPNTEPsInfoKey [_dPNID=247085905742467], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=247085905742467:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class 
org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=332fa0a2-c555-4219-bb74-85a0f54857fe], _zoneName=332fa0a2-c555-4219-bb74-85a0f54857fe, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=176858005830847, _key=DPNTEPsInfoKey [_dPNID=176858005830847], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=176858005830847:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=332fa0a2-c555-4219-bb74-85a0f54857fe], _zoneName=332fa0a2-c555-4219-bb74-85a0f54857fe, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]]] }, rollbackWorker=null, retryCount=0, futures=null}
java.util.ConcurrentModificationException
2017-11-20 03:43:45,990 | ERROR | nPool-1-worker-2 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Exception when executing jobEntry: JobEntry{key=&apos;27ac5507-66fa-4df8-9658-8649f377afbc&apos;, mainWorker=ItmTepAddWorker  { Configured Dpn List : [DPNTEPsInfo [_dPNID=88256775084985, _key=DPNTEPsInfoKey [_dPNID=88256775084985], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=88256775084985:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=27ac5507-66fa-4df8-9658-8649f377afbc], _zoneName=27ac5507-66fa-4df8-9658-8649f377afbc, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=247085905742467, _key=DPNTEPsInfoKey [_dPNID=247085905742467], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=247085905742467:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.23]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class 
org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=27ac5507-66fa-4df8-9658-8649f377afbc], _zoneName=27ac5507-66fa-4df8-9658-8649f377afbc, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=176858005830847, _key=DPNTEPsInfoKey [_dPNID=176858005830847], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=176858005830847:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=27ac5507-66fa-4df8-9658-8649f377afbc], _zoneName=27ac5507-66fa-4df8-9658-8649f377afbc, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]]] }, rollbackWorker=null, retryCount=0, futures=null}
java.util.ConcurrentModificationException
2017-11-20 03:43:46,013 | ERROR | nPool-1-worker-2 | DataStoreJobCoordinator          | 293 - org.opendaylight.genius.mdsalutil-api - 0.2.3.SNAPSHOT | Exception when executing jobEntry: JobEntry{key=&apos;5a70e14d-58f3-4f46-ab00-722055aa5b78&apos;, mainWorker=ItmTepAddWorker  { Configured Dpn List : [DPNTEPsInfo [_dPNID=88256775084985, _key=DPNTEPsInfoKey [_dPNID=88256775084985], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=88256775084985:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.15.91]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=5a70e14d-58f3-4f46-ab00-722055aa5b78], _zoneName=5a70e14d-58f3-4f46-ab00-722055aa5b78, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]], DPNTEPsInfo [_dPNID=176858005830847, _key=DPNTEPsInfoKey [_dPNID=176858005830847], _tunnelEndPoints=[TunnelEndPoints [_gwIpAddress=IpAddress [_ipv4Address=Ipv4Address [_value=0.0.0.0]], _interfaceName=176858005830847:tunnel_port:0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _key=TunnelEndPointsKey [_portname=tunnel_port, _vLANID=0, _ipAddress=IpAddress [_ipv4Address=Ipv4Address [_value=10.29.13.247]], _tunnelType=class org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan], _optionTunnelTos=0, _portname=tunnel_port, _subnetMask=IpPrefix [_ipv4Prefix=Ipv4Prefix [_value=0.0.0.0/0]], _tunnelType=class 
org.opendaylight.yang.gen.v1.urn.opendaylight.genius.interfacemanager.rev160406.TunnelTypeVxlan, _tzMembership=[TzMembership [_key=TzMembershipKey [_zoneName=5a70e14d-58f3-4f46-ab00-722055aa5b78], _zoneName=5a70e14d-58f3-4f46-ab00-722055aa5b78, augmentation=[]]], _vLANID=0, _optionOfTunnel=false, augmentation=[]]], augmentation=[]]] }, rollbackWorker=null, retryCount=0, futures=null}
java.util.ConcurrentModificationException
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</description>
                <environment></environment>
        <key id="28842">NETVIRT-1015</key>
            <summary>NatEvpnUtil: getExtNwProvTypeFromRouterName : external network UUID is not available for router 53678e60-fadf-4eac-ac7c-23fad8c99dc2</summary>
                <type id="10104" iconUrl="https://jira.opendaylight.org/secure/viewavatar?size=xsmall&amp;avatarId=10303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="5" iconUrl="https://jira.opendaylight.org/images/icons/priorities/trivial.svg">Lowest</priority>
                        <status id="5" iconUrl="https://jira.opendaylight.org/images/icons/statuses/resolved.png" description="A resolution has been taken, and it is awaiting verification by reporter. From here issues are either reopened, or are closed.">Resolved</status>
                    <statusCategory id="3" key="done" colorName="green"/>
                                    <resolution id="10000">Done</resolution>
                                        <assignee username="shague">Sam Hague</assignee>
                                    <reporter username="shague">Sam Hague</reporter>
                        <labels>
                            <label>csit:3node</label>
                    </labels>
                <created>Mon, 20 Nov 2017 15:48:26 +0000</created>
                <updated>Mon, 17 Sep 2018 19:13:24 +0000</updated>
                            <resolved>Mon, 17 Sep 2018 19:13:24 +0000</resolved>
                                    <version>Oxygen</version>
                    <version>Fluorine</version>
                                                    <component>General</component>
                        <due></due>
                            <votes>0</votes>
                                    <watches>5</watches>
                                                                                                                <comments>
                            <comment id="62177" author="shague@redhat.com" created="Fri, 6 Apr 2018 12:36:18 +0000"  >&lt;p&gt;still present:&#160;&lt;a href=&quot;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-openstack-queens-upstream-stateful-oxygen/233/odl_1/odl1_karaf.log.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-openstack-queens-upstream-stateful-oxygen/233/odl_1/odl1_karaf.log.gz&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="63515" author="shague@redhat.com" created="Tue, 19 Jun 2018 21:18:00 +0000"  >&lt;p&gt;Still seen: &lt;a href=&quot;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-openstack-queens-upstream-stateful-fluorine/119/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-openstack-queens-upstream-stateful-fluorine/119/&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="63849" author="vorburger" created="Mon, 2 Jul 2018 16:35:59 +0000"  >&lt;p&gt;I&apos;ve picked up on the DataStoreJobCoordinator ConcurrentModificationException shown above because I thought it was curious, but grepping for &quot;ConcurrentModificationException&quot; in odl&lt;span class=&quot;error&quot;&gt;&amp;#91;1-3&amp;#93;&lt;/span&gt;_karaf.log.gz in this job I no longer see those, so that seems to have been fixed; this issue is now only about the other ERROR from NatEvpnUtil, &quot;external network UUID is not available&quot;. Perhaps edit the Description?&lt;/p&gt;</comment>
                            <comment id="64577" author="shague@redhat.com" created="Tue, 7 Aug 2018 13:12:17 +0000"  >&lt;p&gt;This error looks to be coming because the neutron northbound ds is down. The ODL nodes do not have a leader and are quarantined so the ds operations are failing.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-0cmb-1ctl-2cmp-openstack-queens-upstream-stateful-fluorine/23/odl_1/odl1_err_warn_exception.log.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-0cmb-1ctl-2cmp-openstack-queens-upstream-stateful-fluorine/23/odl_1/odl1_err_warn_exception.log.gz&lt;/a&gt;&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2018-08-06T15:43:11,882 | WARN  | opendaylight-cluster-data-akka.actor.default-dispatcher-2 | ReliableDeliverySupervisor       | 41 - com.typesafe.akka.slf4j - 2.5.11 | Association with remote system [akka.tcp://opendaylight-cluster-data@10.30.170.219:2550] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://opendaylight-cluster-data@10.30.170.219:2550]] Caused by: [Connection refused: /10.30.170.219:2550]
2018-08-06T15:43:14,002 | WARN  | qtp1641743688-115 | ServletHandler                   | 164 - org.eclipse.jetty.util - 9.3.21.v20170918 | 
javax.servlet.ServletException: org.opendaylight.neutron.spi.ReadFailedRuntimeException: ReadFailedException{message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, errorList=[RpcError [message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: Shard member-1-shard-default-config currently has no leader. Try again later.]]}
Caused by: org.opendaylight.neutron.spi.ReadFailedRuntimeException: ReadFailedException{message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, errorList=[RpcError [message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: Shard member-1-shard-default-config currently has no leader. Try again later.]]}
Caused by: org.opendaylight.controller.md.sal.common.api.data.ReadFailedException: Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks
	at org.opendaylight.controller.sal.core.compat.ReadFailedExceptionAdapter.newWithCause(ReadFailedExceptionAdapter.java:28) ~[?:?]
	at org.opendaylight.controller.sal.core.compat.ReadFailedExceptionAdapter.newWithCause(ReadFailedExceptionAdapter.java:18) ~[?:?]
	at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:91) ~[?:?]
	at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:40) ~[?:?]
	at org.opendaylight.controller.md.sal.common.api.MappingCheckedFuture.mapException(MappingCheckedFuture.java:60) ~[?:?]
	at org.opendaylight.controller.md.sal.common.api.MappingCheckedFuture.wrapInExecutionException(MappingCheckedFuture.java:64) ~[?:?]
	at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:713) ~[?:?]
	at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:713) ~[?:?]
	at com.google.common.util.concurrent.SettableFuture.setException(SettableFuture.java:54) ~[?:?]
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: Shard member-1-shard-default-config currently has no leader. Try again later.
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-default-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.createNoShardLeaderException(ShardManager.java:955) ~[?:?]
2018-08-06T15:43:14,035 | WARN  | qtp1641743688-115 | HttpChannel                      | 164 - org.eclipse.jetty.util - 9.3.21.v20170918 | //10.30.170.217:8181/controller/nb/v2/neutron/networks
javax.servlet.ServletException: javax.servlet.ServletException: org.opendaylight.neutron.spi.ReadFailedRuntimeException: ReadFailedException{message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, errorList=[RpcError [message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: Shard member-1-shard-default-config currently has no leader. Try again later.]]}
Caused by: javax.servlet.ServletException: org.opendaylight.neutron.spi.ReadFailedRuntimeException: ReadFailedException{message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, errorList=[RpcError [message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: Shard member-1-shard-default-config currently has no leader. Try again later.]]}
Caused by: org.opendaylight.neutron.spi.ReadFailedRuntimeException: ReadFailedException{message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, errorList=[RpcError [message=Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks, severity=ERROR, errorType=APPLICATION, tag=operation-failed, applicationTag=null, info=null, cause=org.opendaylight.mdsal.common.api.DataStoreUnavailableException: Shard member-1-shard-default-config currently has no leader. Try again later.]]}
Caused by: org.opendaylight.controller.md.sal.common.api.data.ReadFailedException: Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks
	at org.opendaylight.controller.sal.core.compat.ReadFailedExceptionAdapter.newWithCause(ReadFailedExceptionAdapter.java:28) ~[?:?]
	at org.opendaylight.controller.sal.core.compat.ReadFailedExceptionAdapter.newWithCause(ReadFailedExceptionAdapter.java:18) ~[?:?]
	at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:91) ~[?:?]
	at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:40) ~[?:?]
	at org.opendaylight.controller.md.sal.common.api.MappingCheckedFuture.mapException(MappingCheckedFuture.java:60) ~[?:?]
	at org.opendaylight.controller.md.sal.common.api.MappingCheckedFuture.wrapInExecutionException(MappingCheckedFuture.java:64) ~[?:?]
	at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:713) ~[?:?]
	at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:713) ~[?:?]
	at com.google.common.util.concurrent.SettableFuture.setException(SettableFuture.java:54) ~[?:?]
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: Shard member-1-shard-default-config currently has no leader. Try again later.
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-default-config currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.createNoShardLeaderException(ShardManager.java:955) ~[?:?]
2018-08-06T15:43:15,212 | WARN  | qtp1641743688-116 | BrokerFacade                     | 328 - org.opendaylight.netconf.restconf-nb-bierman02 - 1.8.0 | Error reading /(urn:opendaylight:neutron?revision=2015-07-12)neutron/hostconfigs from datastore OPERATIONAL
org.opendaylight.controller.md.sal.common.api.data.ReadFailedException: Error executeRead ReadData for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/hostconfigs
	at org.opendaylight.controller.sal.core.compat.ReadFailedExceptionAdapter.newWithCause(ReadFailedExceptionAdapter.java:28) [234:org.opendaylight.controller.sal-core-compat:1.8.0]
	at org.opendaylight.controller.sal.core.compat.ReadFailedExceptionAdapter.newWithCause(ReadFailedExceptionAdapter.java:18) [234:org.opendaylight.controller.sal-core-compat:1.8.0]
	at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:91) [414:org.opendaylight.yangtools.util:2.0.9]
	at org.opendaylight.yangtools.util.concurrent.ExceptionMapper.apply(ExceptionMapper.java:40) [414:org.opendaylight.yangtools.util:2.0.9]
	at org.opendaylight.controller.md.sal.common.api.MappingCheckedFuture.mapException(MappingCheckedFuture.java:60) [229:org.opendaylight.controller.sal-common-api:1.8.0]
	at org.opendaylight.controller.md.sal.common.api.MappingCheckedFuture.wrapInExecutionException(MappingCheckedFuture.java:64) [229:org.opendaylight.controller.sal-common-api:1.8.0]
Caused by: org.opendaylight.mdsal.common.api.DataStoreUnavailableException: Shard member-1-shard-default-operational currently has no leader. Try again later.
Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-1-shard-default-operational currently has no leader. Try again later.
	at org.opendaylight.controller.cluster.datastore.shardmanager.ShardManager.createNoShardLeaderException(ShardManager.java:955) ~[?:?]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;</comment>
                            <comment id="64580" author="vorburger" created="Tue, 7 Aug 2018 13:32:50 +0000"  >&lt;p&gt;&amp;gt; This error looks to be coming because the neutron northbound ds is down.&lt;br/&gt;
&amp;gt; The ODL nodes do not have a leader and are quarantined so the ds operations are failing.&lt;/p&gt;


&lt;p&gt;Erm, wait; you&apos;re saying that the NatEvpnUtil error (above) is due to this? Are you sure? Or is this an entirely new thing now?&lt;/p&gt;

&lt;p&gt;Anyway, this new error isn&apos;t anything to &quot;fix&quot; in Neutron - the question now is why the cluster died in this CSIT.&lt;/p&gt;</comment>
                            <comment id="64586" author="shague@redhat.com" created="Tue, 7 Aug 2018 14:58:27 +0000"  >&lt;p&gt;Yeah, I mean the whole cluster is down, so reads are failing; it just happens that these are sometimes northbound reads. As to what to fix - that depends. If we think the lower layers can be changed to be more reliable, then that is a fix. If not, then the apps will need to change to handle the issue, for example by reading later when the cluster is available, or perhaps by using caches. I think this falls under the MDSAL best-practices work, where the applications are not well designed to handle problems.&lt;/p&gt;</comment>
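The mitigation Sam describes (applications reading again later, once the cluster is available) can be sketched as a generic retry-with-backoff wrapper around a datastore read. This is an illustrative helper under stated assumptions: `RetryingReader`, its parameters, and the linear backoff policy are made up for this sketch and are not an existing ODL or MDSAL API.

```java
import java.util.concurrent.Callable;

// Hypothetical retry helper: re-attempt a read that fails transiently
// (e.g. a ReadFailedException wrapping NoShardLeaderException) instead of
// failing the northbound request outright. Names are illustrative only.
final class RetryingReader {
    static <T> T readWithRetry(Callable<T> read, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return read.call();                    // succeed: return immediately
            } catch (Exception e) {
                last = e;                              // remember the latest failure
                Thread.sleep(backoffMillis * attempt); // linear backoff between attempts
            }
        }
        throw last;                                    // exhausted: surface last error
    }
}
```

In a real application the catch clause would match only the transient exception types; retrying on every `Exception` is kept here for brevity.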
                            <comment id="64588" author="jluhrsen" created="Tue, 7 Aug 2018 15:36:44 +0000"  >&lt;p&gt;here is the full &lt;a href=&quot;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-0cmb-1ctl-2cmp-openstack-queens-upstream-stateful-fluorine/23/odl_1/odl1_karaf.log.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;karaf.log &lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="64634" author="vorburger" created="Tue, 14 Aug 2018 17:20:20 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.opendaylight.org/secure/ViewProfile.jspa?name=shague&quot; class=&quot;user-hover&quot; rel=&quot;shague&quot;&gt;shague&lt;/a&gt; and &lt;a href=&quot;https://jira.opendaylight.org/secure/ViewProfile.jspa?name=jluhrsen&quot; class=&quot;user-hover&quot; rel=&quot;jluhrsen&quot;&gt;jluhrsen&lt;/a&gt; re. the point we discussed during the weekly kernel projects call today about this issue, where I promised that I would&#160;look into&#160;improving the propagation of datastore errors from neutron to the driver so that it (OpenStack driver) can notify operators and retry, I just saw that we actually already did this, but quite recently - that was &lt;a href=&quot;https://git.opendaylight.org/gerrit/#/c/72735/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;c/72735&lt;/a&gt;&#160;for&#160;&lt;a href=&quot;https://jira.opendaylight.org/browse/NEUTRON-157&quot; title=&quot;Conflicting modification for path /(urn:opendaylight:neutron?revision=2015-07-12)neutron/networks/network/network[{(urn:opendaylight:neutron?revision=2015-07-12)uuid=b674297c-3ae3-4940-b3b8-149cb2da8161}&quot; class=&quot;issue-link&quot; data-issue-key=&quot;NEUTRON-157&quot;&gt;&lt;del&gt;NEUTRON-157&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
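The improvement vorburger references (c/72735, propagating datastore errors to the OpenStack driver so it can notify operators and retry) amounts to mapping transient datastore failures to a retryable HTTP status instead of a generic 500. The `ErrorMapper` class below is a hypothetical sketch of that idea, not the actual NEUTRON-157 patch; the class name, the string matching, and the status choices are assumptions.

```java
// Hypothetical sketch: walk the cause chain of a failed northbound request
// and surface transient datastore unavailability (DataStoreUnavailableException /
// NoShardLeaderException, "currently has no leader") as HTTP 503 so the client
// knows it may retry later, rather than a generic 500. Illustrative only.
final class ErrorMapper {
    static int toHttpStatus(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c.getClass().getSimpleName().contains("Unavailable")
                    || (c.getMessage() != null && c.getMessage().contains("no leader"))) {
                return 503; // Service Unavailable: client should retry later
            }
        }
        return 500; // anything else is an unexpected server error
    }
}
```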
                            <comment id="64701" author="tpantelis" created="Tue, 21 Aug 2018 21:29:31 +0000"  >&lt;p&gt;That log is from odl1, which apparently was restarted 6 times in a 3-hour period:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2018-08-06T12:39:29,643 | INFO  | opendaylight-cluster-data-akka.actor.default-dispatcher-3 | Slf4jLogger                      | 41 - com.typesafe.akka.slf4j - 2.5.11 | Slf4jLogger started
2018-08-06T14:23:10,238 | INFO  | opendaylight-cluster-data-akka.actor.default-dispatcher-2 | Slf4jLogger                      | 41 - com.typesafe.akka.slf4j - 2.5.11 | Slf4jLogger started
2018-08-06T14:49:20,546 | INFO  | opendaylight-cluster-data-akka.actor.default-dispatcher-2 | Slf4jLogger                      | 41 - com.typesafe.akka.slf4j - 2.5.11 | Slf4jLogger started
2018-08-06T15:05:43,794 | INFO  | opendaylight-cluster-data-akka.actor.default-dispatcher-3 | Slf4jLogger                      | 41 - com.typesafe.akka.slf4j - 2.5.11 | Slf4jLogger started
2018-08-06T15:39:18,545 | INFO  | opendaylight-cluster-data-akka.actor.default-dispatcher-3 | Slf4jLogger                      | 41 - com.typesafe.akka.slf4j - 2.5.11 | Slf4jLogger started
2018-08-06T15:48:57,768 | INFO  | opendaylight-cluster-data-akka.actor.default-dispatcher-3 | Slf4jLogger                      | 41 - com.typesafe.akka.slf4j - 2.5.11 | Slf4jLogger started
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;IPs:&lt;/p&gt;

&lt;p&gt;odl1: 10.30.170.226&lt;br/&gt;
odl2: 10.30.170.218&lt;br/&gt;
odl3: 10.30.170.219&lt;/p&gt;

&lt;p&gt;After the first startup, the other 2 nodes joined and the cluster was formed around 12:39:48. At 14:23:10, odl1 restarted. At 14:25:41, odl1 lost connection with odl2 (10.30.170.218), and odl2 re-joined at 14:29:57. At 14:35:25, odl1 lost connection with odl3 (10.30.170.219), and odl3 re-joined at 14:39:38. At 14:49:20, odl1 was restarted again.&lt;/p&gt;

&lt;p&gt;The DataStoreUnavailableExceptions via neutron occurred between 15:42 - 15:44, after the 15:39:18 restart. odl1 re-joined odl2 at 15:39:21. At 15:41:25, odl1 lost connection to both odl2 and odl3:&lt;/p&gt;

&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2018-08-06T15:41:25,443 | WARN  | opendaylight-cluster-data-akka.actor.default-dispatcher-54 | ReliableDeliverySupervisor       | 41 - com.typesafe.akka.slf4j - 2.5.11 | Association with remote system [akka.tcp://opendaylight-cluster-data@10.30.170.218:2550] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
2018-08-06T15:41:26,275 | WARN  | opendaylight-cluster-data-akka.actor.default-dispatcher-54 | ReliableDeliverySupervisor       | 41 - com.typesafe.akka.slf4j - 2.5.11 | Association with remote system [akka.tcp://opendaylight-cluster-data@10.30.170.219:2550] has failed, address is now gated for [5000] ms. Reason: [Disassociated] 
2018-08-06T15:41:29,830 | WARN  | opendaylight-cluster-data-akka.actor.default-dispatcher-38 | ClusterCoreDaemon                | 41 - com.typesafe.akka.slf4j - 2.5.11 | Cluster Node [akka.tcp://opendaylight-cluster-data@10.30.170.226:2550] - Marking node(s) as UNREACHABLE [Member(address = akka.tcp://opendaylight-cluster-data@10.30.170.218:2550, status = Up)]. Node roles [member-1, dc-default]
2018-08-06T15:41:30,485 | WARN  | opendaylight-cluster-data-akka.actor.default-dispatcher-38 | NettyTransport                   | 41 - com.typesafe.akka.slf4j - 2.5.11 | Remote connection to [null] failed with java.net.ConnectException: Connection refused: /10.30.170.218:2550
2018-08-06T15:41:30,491 | WARN  | opendaylight-cluster-data-akka.actor.default-dispatcher-38 | ReliableDeliverySupervisor       | 41 - com.typesafe.akka.slf4j - 2.5.11 | Association with remote system [akka.tcp://opendaylight-cluster-data@10.30.170.218:2550] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://opendaylight-cluster-data@10.30.170.218:2550]] Caused by: [Connection refused: /10.30.170.218:2550]
2018-08-06T15:41:30,826 | WARN  | opendaylight-cluster-data-akka.actor.default-dispatcher-54 | ClusterCoreDaemon                | 41 - com.typesafe.akka.slf4j - 2.5.11 | Cluster Node [akka.tcp://opendaylight-cluster-data@10.30.170.226:2550] - Marking node(s) as UNREACHABLE [Member(address = akka.tcp://opendaylight-cluster-data@10.30.170.219:2550, status = Up)]. Node roles [member-1, dc-default]
2018-08-06T15:41:31,300 | WARN  | opendaylight-cluster-data-akka.actor.default-dispatcher-54 | NettyTransport                   | 41 - com.typesafe.akka.slf4j - 2.5.11 | Remote connection to [null] failed with java.net.ConnectException: Connection refused: /10.30.170.219:2550
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;odl2 re-joined at 15:45:06:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2018-08-06T15:45:06,921 | INFO  | opendaylight-cluster-data-akka.actor.default-dispatcher-65 | Cluster(akka://opendaylight-cluster-data) | 41 - com.typesafe.akka.slf4j - 2.5.11 | Cluster Node [akka.tcp://opendaylight-cluster-data@10.30.170.226:2550] - Node [akka.tcp://opendaylight-cluster-data@10.30.170.218:2550] is JOINING, roles [member-2, dc-default]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;odl3 re-joined at 15:45:20:&lt;/p&gt;
&lt;div class=&quot;preformatted panel&quot; style=&quot;border-width: 1px;&quot;&gt;&lt;div class=&quot;preformattedContent panelContent&quot;&gt;
&lt;pre&gt;2018-08-06T15:45:20,125 | INFO  | opendaylight-cluster-data-akka.actor.default-dispatcher-63 | Cluster(akka://opendaylight-cluster-data) | 41 - com.typesafe.akka.slf4j - 2.5.11 | Cluster Node [akka.tcp://opendaylight-cluster-data@10.30.170.226:2550] - Node [akka.tcp://opendaylight-cluster-data@10.30.170.219:2550] is JOINING, roles [member-3, dc-default]
&lt;/pre&gt;
&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So the neutron requests failed as expected, since odl1 had lost connection to both odl2 and odl3. Is this expected for the tests?&lt;/p&gt;</comment>
                            <comment id="64702" author="shague@redhat.com" created="Tue, 21 Aug 2018 21:31:12 +0000"  >&lt;p&gt;Still present in jobs; more triage info needs to be added: &lt;a href=&quot;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-0cmb-1ctl-2cmp-openstack-queens-upstream-stateful-oxygen/32/odl_1/odl1_err_warn_exception.log.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-0cmb-1ctl-2cmp-openstack-queens-upstream-stateful-oxygen/32/odl_1/odl1_err_warn_exception.log.gz&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="64703" author="tpantelis" created="Tue, 21 Aug 2018 21:44:58 +0000"  >&lt;p&gt;What is the test suite actually doing? From my analysis above, it is taking down 2 nodes, in which case you would expect DS DataStoreUnavailableException/NoShardLeaderException failures. Notice the &quot;java.net.ConnectException: Connection refused&quot; messages - that indicates it could reach the node&apos;s IP but the port isn&apos;t open, i.e. the process isn&apos;t running. Can someone verify that is the case?&lt;/p&gt;</comment>
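The expected-failure behaviour Tom describes follows from Raft-style leader election: a shard leader needs votes from a strict majority of cluster members, so with 2 of 3 nodes down the lone survivor can never elect a leader and reads fail with NoShardLeaderException until quorum is restored. A minimal sketch of the arithmetic (illustrative only, not ODL code; the class and method names are made up):

```java
// Sketch of Raft-style quorum arithmetic: why a lone survivor of a 3-node
// cluster cannot become shard leader. Illustrative helper, not an ODL API.
final class Quorum {
    static int majority(int clusterSize) {
        return clusterSize / 2 + 1;        // 3 nodes -> 2 votes needed
    }
    static boolean canElectLeader(int clusterSize, int reachableNodes) {
        // A candidate can only win if it can gather a strict majority of votes.
        return reachableNodes >= majority(clusterSize);
    }
}
```

With `clusterSize = 3` a single reachable node falls short of the 2-vote majority, which matches the suite's observation that taking down any two of the three ODLs makes datastore reads fail until a node rejoins.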
                            <comment id="64738" author="xcheara" created="Thu, 23 Aug 2018 12:23:14 +0000"  >&lt;p&gt;Hi All,&lt;/p&gt;

&lt;p&gt;I analysed the points below (tracing the default-config shard) and the steps performed as part of this suite; the following exception was observed.&lt;/p&gt;

&lt;p&gt;Caused by: org.opendaylight.controller.cluster.datastore.exceptions.NoShardLeaderException: Shard member-3-shard-default-config currently has no leader. Try again later.&lt;/p&gt;

&lt;p&gt;1) As part of the ha_l2 suite, ODL-1 (member-1) is first elected as the leader node for the default-config shard (per &lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt; from the member-1, member-2 and member-3 section output).&lt;/p&gt;

&lt;p&gt;2) ODL-1 is brought down first and ODL-2 (member-2) is elected as the leader node for the default-config shard (per &lt;span class=&quot;error&quot;&gt;&amp;#91;2&amp;#93;&lt;/span&gt; from the member-2 and member-3 section output).&lt;/p&gt;

&lt;p&gt;3) ODL-1 is later brought back up, and it too sees member-2 as the leader node for the default-config shard (per &lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; from the member-1 section output).&lt;/p&gt;

&lt;p&gt;4) ODL-2 is then brought down and member-1 is re-elected as the leader node (per &lt;span class=&quot;error&quot;&gt;&amp;#91;4&amp;#93;&lt;/span&gt; and &lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; from the member-1 section output).&lt;/p&gt;

&lt;p&gt;5) After this, ODL-2 is brought back and member-1 continues as the leader node (per &lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; from the member-2 section output).&lt;/p&gt;

&lt;p&gt;6) Finally, ODL-3 is brought down; at this point member-1 is still the leader node for the default-config shard.&lt;/p&gt;

&lt;p&gt;7) ODL-3 is then brought up, and at this point member-1 still continues as the leader node (per &lt;span class=&quot;error&quot;&gt;&amp;#91;4&amp;#93;&lt;/span&gt; from the member-3 section output).&lt;/p&gt;

&lt;p&gt;8) As ODL-3 comes up, all the listeners are registered and triggered; as part of this, READ operations are performed on the datastore.&lt;/p&gt;

&lt;p&gt;9) Meanwhile, the suite brought down both ODL-1 and ODL-2; during this time ODL-3 is not elected as the leader node (per &lt;span class=&quot;error&quot;&gt;&amp;#91;5&amp;#93;&lt;/span&gt; from the member-3 section output), and READ operations during this period throw the observed exception.&lt;/p&gt;

&lt;p&gt;10) These exceptions are observed until ODL-1 and ODL-2 are brought back and a new leader node is elected, which is member-3 (per &lt;span class=&quot;error&quot;&gt;&amp;#91;6&amp;#93;&lt;/span&gt; from the member-3 and member-1 section output).&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Basically, I&apos;m seeing that when two nodes are brought down, the lone remaining node is not elected as the leader node, and any READ operations during this time result in this exception.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Below are a few key logs captured from the karaf logs from all ODLs.&lt;/p&gt;

&lt;p&gt;Ref : &lt;a href=&quot;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-0cmb-1ctl-2cmp-openstack-queens-upstream-stateful-fluorine/23/robot-plugin/log_06_ha_l2.html.gz&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://logs.opendaylight.org/releng/vex-yul-odl-jenkins-1/netvirt-csit-3node-0cmb-1ctl-2cmp-openstack-queens-upstream-stateful-fluorine/23/robot-plugin/log_06_ha_l2.html.gz&lt;/a&gt;&lt;/p&gt;


&lt;p&gt;Thanks,&lt;/p&gt;

&lt;p&gt;Chetan&lt;/p&gt;


&lt;p&gt;&#160;&lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt; Line 7203: 2018-08-06T14:21:18,827 | INFO&#160; | pipe-log:log &quot;ROBOT MESSAGE: Killing ODL1 10.30.170.226&quot; | core&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 122 - org.apache.karaf.log.core - 4.1.5 | ROBOT MESSAGE: Killing ODL1 10.30.170.226&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;2&amp;#93;&lt;/span&gt; Line 7357: 2018-08-06T14:22:37,300 | INFO&#160; | pipe-log:log &quot;ROBOT MESSAGE: Starting test 06 ha l2.Bring Up ODL1&quot; | core&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 122 - org.apache.karaf.log.core - 4.1.5 | ROBOT MESSAGE: Starting test 06 ha l2.Bring Up ODL1&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; Line 7476: 2018-08-06T14:25:30,247 | INFO&#160; | pipe-log:log &quot;ROBOT MESSAGE: Starting test 06 ha l2.Take Down ODL2&quot; | core&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 122 - org.apache.karaf.log.core - 4.1.5 | ROBOT MESSAGE: Starting test 06 ha l2.Take Down ODL2&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;4&amp;#93;&lt;/span&gt; Line 7997: 2018-08-06T14:29:13,181 | INFO&#160; | pipe-log:log &quot;ROBOT MESSAGE: Starting test 06 ha l2.Bring Up ODL2&quot; | core&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 122 - org.apache.karaf.log.core - 4.1.5 | ROBOT MESSAGE: Starting test 06 ha l2.Bring Up ODL2&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;5&amp;#93;&lt;/span&gt; Line 8199: 2018-08-06T14:35:05,041 | INFO&#160; | pipe-log:log &quot;ROBOT MESSAGE: Starting test 06 ha l2.Take Down ODL3&quot; | core&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 122 - org.apache.karaf.log.core - 4.1.5 | ROBOT MESSAGE: Starting test 06 ha l2.Take Down ODL3&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;6&amp;#93;&lt;/span&gt; ODL3 is brought up back 2018-08-06T14:38:55,105&lt;/p&gt;


&lt;p&gt;member-1/ODL-1 (10.30.170.226)&lt;/p&gt;


&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt; 2018-08-06T12:39:51,446 | INFO&#160; | opendaylight-cluster-data-akka.actor.default-dispatcher-29 | RoleChangeNotifier&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 228 - org.opendaylight.controller.sal-clustering-commons - 1.8.0 | RoleChangeNotifier for member-1-shard-default-config , received role change from Candidate to Leader&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;2&amp;#93;&lt;/span&gt; 2018-08-06T14:25:51,191 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-56 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-1-shard-default-config, leaderId=member-1-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; 2018-08-06T14:23:13,749 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-53 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-1-shard-default-config, leaderId=member-2-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;4&amp;#93;&lt;/span&gt; 2018-08-06T14:25:51,191 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-56 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-1-shard-default-config, leaderId=member-1-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;5&amp;#93;&lt;/span&gt; 2018-08-06T14:49:25,702 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-56 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-1-shard-default-config, leaderId=member-3-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;6&amp;#93;&lt;/span&gt; 2018-08-06T14:49:25,702 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-56 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-1-shard-default-config, leaderId=member-3-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;member-2/ODL-2 (10.30.170.218)&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt; 2018-08-06T12:39:43,561 | INFO&#160; | opendaylight-cluster-data-akka.actor.default-dispatcher-20 | RoleChangeNotifier&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 228 - org.opendaylight.controller.sal-clustering-commons - 1.8.0 | RoleChangeNotifier for member-2-shard-default-config , received role change from Follower to Candidate&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;2&amp;#93;&lt;/span&gt; 2018-08-06T14:21:28,922 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-2-shard-default-config, leaderId=member-2-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; 2018-08-06T14:29:49,881 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-58 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-2-shard-default-config, leaderId=member-1-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;4&amp;#93;&lt;/span&gt; 2018-08-06T14:49:21,530 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-62 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-2-shard-default-config, leaderId=member-3-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;5&amp;#93;&lt;/span&gt; 2018-08-06T14:49:21,530 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-62 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-2-shard-default-config, leaderId=member-3-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;member-3/ODL-3 (10.30.170.219)&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;1&amp;#93;&lt;/span&gt; 2018-08-06T12:39:51,448 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-25 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-3-shard-default-config, leaderId=member-1-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;2&amp;#93;&lt;/span&gt; 2018-08-06T14:21:28,926 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-39 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-3-shard-default-config, leaderId=member-2-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;3&amp;#93;&lt;/span&gt; 2018-08-06T14:25:51,192 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-41 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-3-shard-default-config, leaderId=member-1-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;4&amp;#93;&lt;/span&gt; 2018-08-06T14:39:29,755 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-36 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-3-shard-default-config, leaderId=member-1-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;5&amp;#93;&lt;/span&gt; 2018-08-06T14:41:26,947 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-27 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-3-shard-default-config, leaderId=null, leaderPayloadVersion=-1&amp;#93;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;error&quot;&gt;&amp;#91;6&amp;#93;&lt;/span&gt; 2018-08-06T14:49:21,526 | INFO&#160; | opendaylight-cluster-data-shard-dispatcher-62 | ShardManager&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; | 236 - org.opendaylight.controller.sal-distributed-datastore - 1.8.0 | shard-manager-config: Received LeaderStateChanged message: LeaderStateChanged &lt;span class=&quot;error&quot;&gt;&amp;#91;memberId=member-3-shard-default-config, leaderId=member-3-shard-default-config, leaderPayloadVersion=9&amp;#93;&lt;/span&gt;&lt;/p&gt;</comment>
                            <comment id="64739" author="tpantelis" created="Thu, 23 Aug 2018 13:53:37 +0000"  >&lt;p&gt;&lt;b&gt;&amp;gt; Basically, I&apos;m seeing that when two nodes are brought down, the lone remaining node is not re-elected as the leader node, and any READ operations during this time result in this exception.&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;yes - that is the expected behavior, as I mentioned in my prior comment. At least 2 nodes in a 3-node cluster are needed for consensus.&lt;/p&gt;</comment>
                            <comment id="64740" author="jluhrsen" created="Thu, 23 Aug 2018 15:13:22 +0000"  >&lt;p&gt;ok, so we are learning and making progress here, but this raises the question: how is someone supposed to know that they&lt;br/&gt;
are looking at ugly, scary logs like this from a node that happened to be the only one up in the cluster? At this point, it feels&lt;br/&gt;
like this is just tribal knowledge.&lt;/p&gt;</comment>
                            <comment id="64750" author="vorburger" created="Thu, 23 Aug 2018 20:21:44 +0000"  >&lt;p&gt;Isn&apos;t another question one may ask here why we have a CSIT test for a totally unsupported scenario? It takes two of three nodes down, and then expects to confirm that... what, exactly? I&apos;m not trying to be funny, but isn&apos;t this simply a bad test?&lt;/p&gt;</comment>
                            <comment id="64752" author="jluhrsen" created="Thu, 23 Aug 2018 21:13:03 +0000"  >&lt;p&gt;even if two ODLs are down, the dataplane (as it was) is expected to still work. So, openstack instances should still&lt;br/&gt;
be able to have connectivity, etc. it&apos;s a valid test, but our CSIT is also wired up to be more than just&lt;br/&gt;
black box testing. Every test case teardown is looking for unexpected exceptions, logging data&lt;br/&gt;
models, etc.  Those kinds of things might have to be dialed back, or ignored if they produce failures&lt;br/&gt;
that we are supposed to ignore. I still wish we didn&apos;t have to get such ugly exceptions and logs in&lt;br/&gt;
a single node, just because it happens to be the last one standing. But, as was mentioned on the TSC today&lt;br/&gt;
it sounds like others are fine with that and people need to develop tooling/monitoring on top of&lt;br/&gt;
the cluster to let admins understand when they need to ignore these scary types of logs.&lt;/p&gt;</comment>
                            <comment id="64829" author="shague@redhat.com" created="Tue, 28 Aug 2018 16:45:42 +0000"  >&lt;p&gt;Yes, this is a valid test for dataplane connectivity. We can do better in how the test is written to reduce the errors: gracefully shut down odl3, then bring it back and wait until all the mdsal updates have finished, and log this clearly so we know when it is happening. That will reduce the errors, since odl3 won&apos;t be doing any work while 1 and 2 are down. Once odl3 is fully up and has processed all the mdsal updates, we can take down odl1 and 2 and check the dataplane.&lt;/p&gt;

&lt;p&gt;To Jamo&apos;s point, though, this is still an issue if it happens with customers. All they will see are errors in the log, and they will start to file cases. We need tooling that makes this very clear: bubble up the no-leader status and the errors it leads to. The logs are there, but it is difficult to relate them.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10003">
                    <name>Relates</name>
                                            <outwardlinks description="relates to">
                                        <issuelink>
            <issuekey id="30163">NETVIRT-1324</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                            <customfield id="customfield_11400" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10002" key="com.pyxis.greenhopper.jira:gh-epic-link">
                        <customfieldname>Epic Link</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>NETVIRT-996</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_10000" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>0|i0385j:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                </customfields>
    </item>
</channel>
</rss>