Migrations do not populate volume_id_mappings and instance_id_mappings completely
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
OpenStack Compute (nova) | Invalid | High | Unassigned |
Bug Description
089_add_
107_add_
These two migrations create tables that map volume and instance IDs to UUIDs. Each table has six columns: id, uuid, created_at, updated_at, deleted_at, deleted. Only the 'id' and 'uuid' columns get populated with data copied from the volumes or instances table by the migration that creates them. As far as I can tell, none of these fields is currently used by code anywhere in Folsom, but this has already introduced the nasty Bug #1061166.
If the plan is to query this information from these tables in Grizzly, they should be fully populated with data for existing instances and volumes. If there is no plan to make use of them, they should be dropped.
Additionally, uniqueness constraints should probably be added to those tables to ensure these mappings stay consistent in case similar bugs exist elsewhere in the code base.
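As a rough sketch of what the report is asking for, a migration could copy all six columns (not just id and uuid) and enforce uniqueness at creation time. This uses plain SQLAlchemy rather than Nova's actual sqlalchemy-migrate scripts; the table and column names come from the description above, and the `upgrade` helper is illustrative, not Nova's real migration code:

```python
import sqlalchemy as sa

def upgrade(engine):
    """Hypothetical migration sketch: create instance_id_mappings with a
    uniqueness constraint and backfill ALL columns, not only id/uuid."""
    meta = sa.MetaData()
    # Reflect the existing instances table so we can copy from it.
    instances = sa.Table('instances', meta, autoload_with=engine)

    mappings = sa.Table(
        'instance_id_mappings', meta,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('uuid', sa.String(36), nullable=False),
        sa.Column('created_at', sa.DateTime),
        sa.Column('updated_at', sa.DateTime),
        sa.Column('deleted_at', sa.DateTime),
        sa.Column('deleted', sa.Boolean),
        # One mapping row per UUID keeps the table internally consistent.
        sa.UniqueConstraint('uuid', name='uniq_instance_id_mappings_uuid'),
    )
    mappings.create(engine)

    # Backfill every column from the source table in a single
    # INSERT ... SELECT, instead of copying only id and uuid.
    with engine.begin() as conn:
        conn.execute(
            mappings.insert().from_select(
                ['id', 'uuid', 'created_at', 'updated_at',
                 'deleted_at', 'deleted'],
                sa.select(instances.c.id, instances.c.uuid,
                          instances.c.created_at, instances.c.updated_at,
                          instances.c.deleted_at, instances.c.deleted),
            )
        )
```

The same pattern would apply to volume_id_mappings, with the volumes table as the source of the SELECT.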
affects: | nova (Ubuntu) → nova |
Changed in nova: | |
status: | New → Confirmed |
importance: | Undecided → Critical |
Changed in nova: | |
assignee: | nobody → Diganta kumar sahoo (mail2digant) |
Changed in nova: | |
status: | Confirmed → In Progress |
Changed in nova: | |
status: | In Progress → Fix Committed |
Changed in nova: | |
milestone: | grizzly-2 → grizzly-3 |
Changed in nova: | |
assignee: | nobody → Yug Suo (yugsuo) |
Changed in nova: | |
milestone: | grizzly-3 → grizzly-rc1 |
Changed in nova: | |
status: | Triaged → In Progress |
tags: | added: db |
Changed in nova: | |
milestone: | grizzly-rc1 → havana-1 |
Changed in nova: | |
assignee: | nobody → ugvddm (271025598-9) |
Changed in nova: | |
assignee: | ugvddm (271025598-9) → nobody |
Changed in nova: | |
assignee: | nobody → Shane Wang (shane-wang) |
Changed in nova: | |
status: | Triaged → Invalid |
This is also an issue for the volume_id_mappings and snapshot_id_mappings tables. It seems the same workaround used for Bug #1061166 can be applied there, too, but the tables should probably be populated correctly if that data is intended to be used.