On Mon, 2023-08-07 at 15:59 +0000, Tim Andersson wrote:
>
> ```
> rabbitmq-server/8* waiting idle 62 185.125.191.10 5672/tcp,15672/tcp
> Not reached target cluster-partition-handling mode
> ```
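Since the waiting message mentions the cluster-partition-handling target, it may be worth comparing the charm's configured value against what RabbitMQ actually ended up running with (a sketch; "rabbitmq-server/8" is the unit from the quoted status above, substitute your own):

```shell
# Sketch: compare the charm config against RabbitMQ's live setting.
# Unit name is taken from the quoted report; adjust as needed.
juju config rabbitmq-server cluster-partition-handling
juju ssh rabbitmq-server/8 \
    'sudo rabbitmqctl environment | grep cluster_partition_handling'
```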
The bundle can be found at [0]; it uses "cs:rabbitmq-server" (charm store). There was a recent change[1] that made "3.9" the default track on Charmhub[2].
When using "cs:rabbitmq-server" I get revision 118:
```
$ juju deploy --series focal cs:rabbitmq-server
Located charm "rabbitmq-server" in charm-store, revision 118
Deploying "rabbitmq-server" from charm-store charm "rabbitmq-server", revision 118 in channel stable on focal
```
Now if we take the subject of this bug and use revision 177 (via Charmhub), a single-node rabbitmq-server gets deployed.

```
$ juju deploy --series focal ch:rabbitmq-server rabbitmq-server-ch
Located charm "rabbitmq-server" in charm-hub, revision 177
Deploying "rabbitmq-server-ch" from charm-hub charm "rabbitmq-server", revision 177 in channel 3.9/stable on focal
```
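One way to avoid being moved by the default-track change is to pin the channel explicitly at deploy time (a sketch; "3.8/stable" is an assumed track name, list the tracks that actually exist first):

```shell
# Sketch: list the available channels, then pin one explicitly so a
# default-track change does not silently switch revisions.
# "3.8/stable" is an assumption; use a channel that juju info reports.
juju info rabbitmq-server
juju deploy --series focal --channel 3.8/stable ch:rabbitmq-server
```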
```
$ juju status rabbitmq-server-ch
Model   Controller   Cloud/Region             Version  SLA          Timestamp
rabbit  serverstack  serverstack/serverstack  2.9.44   unsupported  18:24:21-04:00

App                 Version  Status  Scale  Charm            Channel     Rev  Exposed  Message
rabbitmq-server-ch  3.8.2    active      1  rabbitmq-server  3.9/stable  177  no       Unit is ready

Unit                   Workload  Agent  Machine  Public address  Ports               Message
rabbitmq-server-ch/0*  active    idle   1        10.5.3.135      5672/tcp,15672/tcp  Unit is ready

Machine  State    Address     Inst id                               Series  AZ    Message
1        started  10.5.3.135  7b2c9957-acd9-455c-8dfe-fafde0ccbba1  focal   nova  ACTIVE
```
I wonder if there is a hidden race condition. If you see this issue again, please capture the /var/log/juju/ directory and attach it to this bug.
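For reference, a minimal way to grab those logs off the affected unit (a sketch, assuming the unit name from the original report; substitute whatever `juju status` shows as failing):

```shell
# Sketch: archive /var/log/juju on the affected unit and copy it back.
# "rabbitmq-server/8" is the unit from the original report.
juju ssh rabbitmq-server/8 'sudo tar czf /tmp/juju-logs.tgz /var/log/juju/'
juju scp rabbitmq-server/8:/tmp/juju-logs.tgz .
```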
[0] https://git.launchpad.net/autopkgtest-cloud/tree/mojo/service-bundle#n252
[1] https://discourse.charmhub.io/t/request-please-change-the-default-channel-for-the-following-openstack-ovn-and-ceph-charms/11245
[2] https://charmhub.io/rabbitmq-server?channel=3.9/stable