maas provider releases all nodes it did not allocate [does not play well with others]
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
MAAS | Invalid | Undecided | Unassigned |
juju-core | Fix Released | Low | Julian Edwards |
1.16 | Fix Released | High | Roger Peppe |
pyjuju | Invalid | Wishlist | Unassigned |
juju-core (Ubuntu) | Fix Released | Undecided | Unassigned |
Saucy | Fix Released | High | Unassigned |
Trusty | Fix Released | Undecided | Unassigned |
Bug Description
[Impact]
juju destroy-environment destroys all machines allocated to the MAAS user being used in the environment, not just the ones owned by Juju.
[Test Case]
1. Allocate machines using maas-cli
2. juju bootstrap
3. juju destroy-environment
(all machines allocated to the MAAS user are terminated and powered off, not just those Juju created)
[Regression Potential]
The fix is confined to the MAAS provider in the codebase, so any regression potential is limited to that provider.
[Original Bug Report]
juju/agents/
| """Ensure the currently running machines correspond to state.
|
| At the end of each process_machines execution, verify that all
| running machines within the provider correspond to machine_ids within
| the topology. If they don't then shut them down.
|
| Utilizes concurrent execution guard, to ensure that this is only being
| executed at most once per process
...
| # Terminate all unused juju machines running within the cluster.
This logic is fundamentally flawed: it means that a given MAAS user cannot have more than one Juju environment on the same MAAS cluster.
It also means that a user of Juju cannot deploy a node in *any* other way, or the juju bootstrap node will kill it for them.
I did not explicitly check, but I would suspect/hope that this behaviour is not shared by the EC2 provider; i.e., I do not expect that Juju kills all my running EC2 instances if I choose to type 'juju bootstrap'.
Related bugs:
* bug 1237398: "You'll need a separate MAAS key for each Juju environment" is wrong.
* bug 1229275: juju destroy-environment also destroys nodes that are not controlled by juju
* bug 1239488: Juju api client cannot distinguish between environments
Related branches
- Tim Penhey (community): Approve (conditional)
Diff: 712 lines (+186/-76), 9 files modified:
provider/maas/config.go (+16/-2)
provider/maas/config_test.go (+17/-3)
provider/maas/environ.go (+4/-2)
provider/maas/environ_test.go (+29/-34)
provider/maas/environprovider.go (+21/-1)
provider/maas/environprovider_test.go (+44/-3)
provider/maas/instance_test.go (+15/-13)
provider/maas/maas_test.go (+24/-2)
provider/maas/storage_test.go (+16/-16)
- Juju Engineering: Pending requested
Diff: 200 lines (+62/-9), 3 files modified:
provider/maas/environ_test.go (+7/-2)
provider/maas/storage.go (+22/-2)
provider/maas/storage_test.go (+33/-5)
summary: |
- maas provider terminates all unused systems
+ maas provider unallocates all nodes it did not allocate [does not play well with others]
summary: |
- maas provider unallocates all nodes it did not allocate [does not play well with others]
+ maas provider releases all nodes it did not allocate [does not play well with others]
Changed in juju: | |
importance: | Undecided → Medium |
description: | updated |
Changed in juju: | |
status: | New → Triaged |
tags: | added: maas |
Changed in juju-core: | |
status: | New → Triaged |
importance: | Undecided → Low |
description: | updated |
Changed in juju-core: | |
assignee: | nobody → Julian Edwards (julian-edwards) |
status: | Triaged → In Progress |
Changed in juju-core: | |
milestone: | none → 1.17.0 |
Changed in juju-core: | |
status: | In Progress → Fix Committed |
tags: | added: maas-provider; removed: maas |
Changed in juju: | |
status: | Triaged → Invalid |
description: | updated |
Changed in juju-core (Ubuntu Saucy): | |
importance: | Undecided → High |
Changed in juju-core: | |
status: | Fix Committed → Fix Released |
I can confirm this is not the case for EC2, as I'm able to run multiple Juju environments and non-Juju machines in the same zone.