[NETVIRT-544] CSIT Sporadic failures - tempest.api.network failing RoutersNegativeIpV6Test and RoutersNegativeTest Created: 16/Mar/17  Updated: 19/Oct/17  Resolved: 03/Apr/17

Status: Resolved
Project: netvirt
Component/s: General
Affects Version/s: Carbon
Fix Version/s: None

Type: Bug
Reporter: Jamo Luhrsen Assignee: Unassigned
Resolution: Cannot Reproduce Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Operating System: All
Platform: All


Attachments: File tempest_results.html.gz    
External issue ID: 8012

 Description   

This happened in back-to-back jobs:
https://logs.opendaylight.org/releng/jenkins092/netvirt-csit-1node-openstack-mitaka-upstream-stateful-boron/223/archives/log.html.gz
https://logs.opendaylight.org/releng/jenkins092/netvirt-csit-1node-openstack-mitaka-upstream-stateful-boron/224/archives/log.html.gz

A better-looking report for finding the tracebacks:

https://logs.opendaylight.org/releng/jenkins092/netvirt-csit-1node-openstack-mitaka-upstream-stateful-boron/224/archives/log.html.gz

TL;DR

tempest.lib.exceptions.ServerFault: Got server fault
Details: Request Failed: internal server error while processing your request.

neutron log:
https://logs.opendaylight.org/releng/jenkins092/netvirt-csit-1node-openstack-mitaka-upstream-stateful-boron/224/archives/control/q-svc.log.2017-03-16-202257.gz

karaf log:
https://logs.opendaylight.org/releng/jenkins092/netvirt-csit-1node-openstack-mitaka-upstream-stateful-boron/224/archives/odl1_karaf.log.gz
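
To hunt down the server-side traceback behind that 500, it can help to pull the gunzipped q-svc.log locally and scan it instead of scrolling through the archive. Below is a minimal Python sketch of that; the default file path and the heuristics for spotting traceback lines are assumptions on my part, not anything taken from the job archive itself.

import sys

LOG_PATH = sys.argv[1] if len(sys.argv) > 1 else "q-svc.log"

def extract_tracebacks(path):
    """Yield (starting line number, traceback text) for each traceback found."""
    block, start = [], None
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if "Traceback (most recent call last)" in line:
                block, start = [line.rstrip()], lineno
            elif block:
                # Assumed heuristic: continuation lines are indented or carry an
                # oslo-style " TRACE " marker; tune this for the real log format.
                if line.startswith((" ", "\t")) or " TRACE " in line:
                    block.append(line.rstrip())
                else:
                    yield start, "\n".join(block)
                    block, start = [], None
        if block:
            yield start, "\n".join(block)

if __name__ == "__main__":
    for lineno, tb in extract_tracebacks(LOG_PATH):
        print("--- traceback starting at line %d ---" % lineno)
        print(tb)
        print()

Running this against the neutron log (and the same idea, with a different marker, against the karaf log) should surface whatever exception neutron turned into the "internal server error" above.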



 Comments   
Comment by Jamo Luhrsen [ 20/Mar/17 ]

Testing in the sandbox with an older distribution that was passing these test
cases before:

https://jenkins.opendaylight.org/sandbox/job/netvirt-csit-1node-openstack-mitaka-jamo-upstream-stateful-carbon/1/

I also noticed that this started failing around the same time we lost
our devstack images in the infra. To fix that, we just spun up newer
images (based off the same packer provisioning). The images should be
similar, but they will have some updates (e.g. "yum update"). That is
one difference to consider.

Looking back at the daily CSIT email, I noticed that this failure
only appeared in our Mitaka jobs, and not in any of the Newton jobs.

https://lists.opendaylight.org/pipermail/netvirt-dev/2017-March/003783.html

Comment by Jamo Luhrsen [ 21/Mar/17 ]

Attachment tempest_results.html.gz has been added with description: tempest report

Comment by Jamo Luhrsen [ 21/Mar/17 ]

(In reply to Jamo Luhrsen from comment #1)
> Testing in the sandbox with an older distribution that was passing these test
> cases before:
>
> https://jenkins.opendaylight.org/sandbox/job/netvirt-csit-1node-openstack-mitaka-jamo-upstream-stateful-carbon/1/

This job also failed, which makes me think the focus should be on
what is new in the devstack VM since it was updated. I attached the tempest
report from this job above.

> I also noticed that this started failing around the same time we lost
> our devstack images in the infra. To fix that, we just spun up newer
> images (based off the same packer provisioning). The images should be
> similar, but they will have some updates (e.g. "yum update"). That is
> one difference to consider.
>
> Looking back at the daily CSIT email, I noticed that this failure
> only appeared in our Mitaka jobs, and not in any of the Newton jobs.
>
>
> https://lists.opendaylight.org/pipermail/netvirt-dev/2017-March/003783.html
