Test Cases: Negative test cases for Server Metadata

Bug #1010014 reported by Sapan Kona
Affects: tempest
Status: Won't Fix
Importance: Undecided
Assigned to: meenakshi m

Bug Description

The following test cases are to be added for Server Metadata (a hedged sketch of one possible shape follows the list):

1) test_delete_server_invalid_metadata_item

2) test_delete_server_empty_metadata_item

3) test_get_server_metadata_item_empty_key

4) test_get_server_metadata_item_invalid_key

5) test_set_nonexistant_server_metadata_item

6) test_set_server_metadata_item_empty_key
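
For reference, a minimal sketch of what one of these negative cases might look like in a Tempest-style test class. This is not the proposed patch; the module paths, base class, helper and client method names, and the NotFound exception are assumptions used for illustration:

    # Sketch only (not the proposed patch): the module paths, base class,
    # client/helper method names and the NotFound exception below are
    # assumptions used for illustration.
    from tempest import exceptions
    from tempest.tests.compute import base


    class ServerMetadataNegativeTest(base.BaseComputeTest):

        @classmethod
        def setUpClass(cls):
            super(ServerMetadataNegativeTest, cls).setUpClass()
            cls.client = cls.servers_client
            # Assumes the base class offers a helper that builds an ACTIVE
            # server and returns its id; the real setup may differ.
            cls.server_id = cls.create_test_server()

        def test_delete_server_invalid_metadata_item(self):
            # Deleting a metadata key that was never set should raise NotFound.
            self.assertRaises(exceptions.NotFound,
                              self.client.delete_server_metadata_item,
                              self.server_id, 'no_such_key')

        def test_get_server_metadata_item_empty_key(self):
            # Requesting a metadata item with an empty key should also fail.
            self.assertRaises(exceptions.NotFound,
                              self.client.get_server_metadata_item,
                              self.server_id, '')

Whether the empty-key case should map to NotFound or BadRequest is itself an open question; the expected status code would need to be confirmed against the API before such a test is merged.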

Sapan Kona (sapan-kona)
Changed in tempest:
assignee: nobody → Sapan Kona (sapan-kona)
OpenStack Infra (hudson-openstack) wrote: Fix proposed to tempest (master)

Fix proposed to branch: master
Review: https://review.openstack.org/8280

Changed in tempest:
status: New → In Progress
David Kranz (david-kranz) wrote:

I am still trying to understand how we are deciding which argument variants for APIs are supposed to have a Tempest test. The Nova unit test in nova/nova/tests/api/openstack/compute/test_server_metadata.py has these tests and a zillion more. I thought the point was to test things that unit tests *could not*, because the unit tests stub out real VM or DB operations in order to be fast, or because multiple components are involved. But all these test cases end up exercising the same code as the unit tests do, just more slowly.

It seems to me we should have a few cases to make sure the basic code path works, but only test many argument variants when we expect they could induce different behavior in parts of the system that the unit tests are stubbing out. What do other people think?

I don't mean to pick on this bug in particular. It applies to all classes of Tempest test cases.

Ravikumar Venkatesan (ravikumar-venkatesan) wrote:

David, I see your point.
1. We would like negative test coverage so that those tests can be run against the application server in complex deployments to check the stability of the integrated environment. We do not run unit tests in those deployments.
2. With annotations, those negative tests can be chosen to run or be skipped (see the sketch after this list).
3. I agree; we will balance and limit those negative tests.
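
For illustration, a minimal sketch of how such a test could be tagged so a run can select or skip it, assuming the nose attrib plugin that Tempest relied on at the time; the attribute name and value are assumptions, not an established project convention:

    # Sketch only, continuing the class above: assumes the nose attrib plugin;
    # the type='negative' tag is illustrative.
    from nose.plugins.attrib import attr

    from tempest.tests.compute import base


    class ServerMetadataNegativeTest(base.BaseComputeTest):

        @attr(type='negative')
        def test_set_server_metadata_item_empty_key(self):
            # Body as in the earlier sketch; the tag is what matters here.
            pass

    # Run only the tagged negative tests:
    #   nosetests -a type=negative
    # The plugin also supports negation syntax to exclude tagged tests instead.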

David Kranz (david-kranz) wrote:

Thanks, Ravi. I am just trying to extract a principle for what it means for a Tempest test of some API to be "done". It is important that we document this and I am not saying it is trivial to do so.

Sapan Kona (sapan-kona)
Changed in tempest:
assignee: Sapan Kona (sapan-kona) → meenakshi m (meenakshi-m)
Ravikumar Venkatesan (ravikumar-venkatesan) wrote:

We are going to revisit negative test cases separately using a fuzz testing tool.

Changed in tempest:
status: In Progress → Won't Fix