[NETVIRT-37] Single VM Instance Assigned With Two IPs Created: 28/Jun/16  Updated: 09/Mar/18  Resolved: 07/Sep/16

Status: Resolved
Project: netvirt
Component/s: None
Affects Version/s: Beryllium
Fix Version/s: None

Type: Bug
Reporter: Priya Ramasubbu Assignee: Unassigned
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Attachments: Zip Archive archive.zip    
External issue ID: 6119

 Description   

While testing the NetVirt 1-node OpenStack job in CSIT, I could see two IPs assigned to a single VM instance.

Steps to reproduce (a sketch of the equivalent CLI commands follows the list):

1. Create a network
2. Create a subnet
3. Boot three VM instances in the same subnet
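
For reference, a minimal sketch of the equivalent Mitaka-era client commands. The network name (l2_network_1), image (cirros-0.3.4-x86_64-uec), flavor (m1.nano), and the name MyThirdInstance_1 come from the nova show output further below; the subnet name, the first two instance names, and the net-id placeholder are illustrative assumptions, not taken from the job.

# subnet/instance names below are illustrative; gateway left enabled (the default)
neutron net-create l2_network_1
neutron subnet-create l2_network_1 30.0.0.0/24 --name l2_subnet_1
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.nano --nic net-id=<uuid-of-l2_network_1> MyFirstInstance_1
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.nano --nic net-id=<uuid-of-l2_network_1> MySecondInstance_1
nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.nano --nic net-id=<uuid-of-l2_network_1> MyThirdInstance_1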

Observation:

The first two VM instances were properly assigned IPs (vm1 as 30.0.0.3 and vm2 as 30.0.0.4), whereas the third VM instance was assigned two IPs (30.0.0.5 and 30.0.0.6), both within the subnet range, since the subnet is 30.0.0.0/24 and the gateway is enabled.

After creating all three VM instances, I printed each of them with nova show.
The output for the third VM instance is below.
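
The command would have been of this form (instance name taken from the output below):

nova show MyThirdInstance_1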

+---------------------------------------+----------------------------------------------------------------+
| Property                              | Value                                                          |
+---------------------------------------+----------------------------------------------------------------+
| OS-DCF:diskConfig                     | MANUAL                                                         |
| OS-EXT-AZ:availability_zone           | nova                                                           |
| OS-EXT-SRV-ATTR:host                  | centos7-devstack-721                                           |
| OS-EXT-SRV-ATTR:hostname              | mythirdinstance-1                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname   | centos7-devstack-721                                           |
| OS-EXT-SRV-ATTR:instance_name         | instance-00000003                                              |
| OS-EXT-SRV-ATTR:kernel_id             | 55ce53be-9fd0-4df7-ab55-5e78a296376c                           |
| OS-EXT-SRV-ATTR:launch_index          | 0                                                              |
| OS-EXT-SRV-ATTR:ramdisk_id            | 179666d0-e4f4-4a9b-bbf3-3e8f23fd5488                           |
| OS-EXT-SRV-ATTR:reservation_id        | r-k85fjf88                                                     |
| OS-EXT-SRV-ATTR:root_device_name      | /dev/vda                                                       |
| OS-EXT-SRV-ATTR:user_data             |                                                                |
| OS-EXT-STS:power_state                | 1                                                              |
| OS-EXT-STS:task_state                 |                                                                |
| OS-EXT-STS:vm_state                   | active                                                         |
| OS-SRV-USG:launched_at                | 2016-06-28T07:10:10.000000                                     |
| OS-SRV-USG:terminated_at              |                                                                |
| accessIPv4                            |                                                                |
| accessIPv6                            |                                                                |
| config_drive                          | True                                                           |
| created                               | 2016-06-28T07:10:05Z                                           |
| description                           |                                                                |
| flavor                                | m1.nano (42)                                                   |
| hostId                                | f7c621bc0d8cb4621e5a33dc1fdf12b1fd0adcfd96abc51eaf7c6e62       |
| host_status                           | UP                                                             |
| id                                    | f7d294d6-9b23-40f5-9151-1d7a1f8168e8                           |
| image                                 | cirros-0.3.4-x86_64-uec (05d7928d-4221-4e7f-ad27-f9d0bcf3a742) |
| key_name                              |                                                                |
| l2_network_1 network                  | 30.0.0.5, 30.0.0.6                                             |
| locked                                | False                                                          |
| metadata                              | {}                                                             |
| name                                  | MyThirdInstance_1                                              |
| os-extended-volumes:volumes_attached  | []                                                             |
| progress                              | 0                                                              |
| security_groups                       | default                                                        |
| status                                | ACTIVE                                                         |
| tenant_id                             | d99ecfb314dd4731ab47d73ccb5e6c9e                               |
| updated                               | 2016-06-28T07:10:10Z                                           |
| user_id                               | 7f36ff94d6194308b647e720382b1f33                               |
+---------------------------------------+----------------------------------------------------------------+
[jenkins@centos7-devstack-9113 devstack]>

When I checked ifconfig inside that particular VM, I could see the second IP (30.0.0.6) assigned to it.
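
Before looking at the flows, a quick way to cross-check what Neutron actually allocated to the instance (a sketch using standard client commands; the port ID must be copied from the first command's output):

nova interface-list MyThirdInstance_1
# then inspect the fixed_ips field on the port(s) listed above
neutron port-show <port-id-from-interface-list>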

Whereas in the dump-flows output there are entries for both IPs, as below:

cookie=0x0, duration=16.270s, table=20, n_packets=0, n_bytes=0, priority=1024,arp,tun_id=0x43f,arp_tpa=30.0.0.6,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:b6:13:3e->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163eb6133e->NXM_NX_ARP_SHA[],load:0x1e000006->NXM_OF_ARP_SPA[],IN_PORT

cookie=0x0, duration=16.252s, table=20, n_packets=0, n_bytes=0, priority=1024,arp,tun_id=0x43f,arp_tpa=30.0.0.5,arp_op=1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:5b:6a:6f->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e5b6a6f->NXM_NX_ARP_SHA[],load:0x1e000005->NXM_OF_ARP_SPA[],IN_PORT
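
Both entries are ARP-responder flows in table 20, and the source protocol address loaded into each reply matches the IP it answers for. A minimal sketch of how to retrieve and decode this (assumptions: br-int as the integration bridge, OpenFlow 1.3 on the compute node):

sudo ovs-ofctl -O OpenFlow13 dump-flows br-int table=20
# decode the ARP SPA values loaded by the two flows
printf '%d.%d.%d.%d\n' 0x1e 0x00 0x00 0x05   # 0x1e000005 -> 30.0.0.5
printf '%d.%d.%d.%d\n' 0x1e 0x00 0x00 0x06   # 0x1e000006 -> 30.0.0.6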

Please find the sandbox link for more details:
https://jenkins.opendaylight.org/sandbox/job/netvirt-csit-1node-openstack-mitaka-openstack-beryllium/11/

The logs are attached for reference; kindly let us know if any more details are required.



 Comments   
Comment by Priya Ramasubbu [ 28/Jun/16 ]

Attachment archive.zip has been added with description: Karaf and openstack logs

Comment by Sam Hague [ 26/Jul/16 ]

Priya,

have you seen this again, or do you have copies of the logs? The sandbox jobs are cleaned up every Saturday, so the logs are no longer there.

Sam

Comment by Priya Ramasubbu [ 07/Sep/16 ]

Sam,

The issue no longer occurs.
