Deploying a large charm times out with 'openstack' provider

Bug #1110607 reported by Sidnei da Silva
Affects: pyjuju
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

This may or may not be environment-specific.

Deploying a charm that contains a large tarball (about 35 MB) fails with either ConnectionLost or ConnectionDone after roughly 30 seconds. It looks like the server is cutting the connection off after a timeout. Smaller charms deploy without problems, and the same charm deploys fine with the 'openstack_s3' provider.
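
For reference, here is a minimal standalone sketch of the upload pattern in question (the whole bundle read into memory, then sent in a single PUT), written against python-swiftclient rather than juju's own Twisted client; the auth URL, credentials, container and file names are placeholders, not taken from the affected cloud:

    # Not juju code: mimics the provider's upload path -- read the whole
    # bundle into memory, then issue one PUT to the object store.
    # Auth URL, credentials, container and object names are placeholders.
    import time
    from swiftclient.client import Connection

    conn = Connection(authurl='http://keystone.example.com:5000/v2.0/',
                      user='demo', key='secret', tenant_name='demo',
                      auth_version='2')

    with open('large-charm.tgz', 'rb') as bundle:   # ~35 MB charm bundle
        payload = bundle.read()                     # entire file held in memory

    start = time.time()
    try:
        etag = conn.put_object('my-control-bucket',
                               'charms/large-charm.tgz', payload)
        print('PUT finished in %.1fs, etag %s' % (time.time() - start, etag))
    except Exception as exc:
        # On the affected deployment the connection drops after about 30s.
        print('PUT failed after %.1fs: %r' % (time.time() - start, exc))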

Kapil Thangavelu (hazmat) wrote:

<hazmat> sounds like that's basically the server calling it kaput on the connection
<hazmat> we can do a retry.. but its not going to change the result
<hazmat> there isn't really any abstraction around the put file on our end.. read file (into mem) and PUT object with contents
<hazmat> the real question is whether the key exists after the server closes the connection
<sidnei> into mem? zomg :)
<sidnei> good question
<hazmat> sadly both txaws and our twisted openstack client suck like that
<sidnei> we have some code for streaming uploads in u1, it's a bit messy but maybe i could take a look at that
<hazmat> well i'm looking at that as well, but its not the issue
<hazmat> ie we're sending a single PUT with contents..
<hazmat> whether its streaming to the put request or not.. isn't really the question
<hazmat> how the server is behaving with the PUT request is
<sidnei> sure
<hazmat> it would be good to check the contents of the key
<hazmat> it might be a local swift config, or it might be the middleware cut off the request, but the backend processes it successfully
<sidnei> whats the best way to do that?
<hazmat> python-swiftclient

to be continued... recording for posterity
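
Following up on the python-swiftclient suggestion above, a sketch of the check hazmat asks for: after a PUT that died with ConnectionLost, query Swift to see whether the object landed anyway, and with what size. Credentials, container and object names are again placeholders:

    from swiftclient.client import Connection, ClientException

    conn = Connection(authurl='http://keystone.example.com:5000/v2.0/',
                      user='demo', key='secret', tenant_name='demo',
                      auth_version='2')
    try:
        # HEAD the object; returns the response headers if it exists.
        headers = conn.head_object('my-control-bucket',
                                   'charms/large-charm.tgz')
        print('object exists: %s bytes, etag %s'
              % (headers.get('content-length'), headers.get('etag')))
    except ClientException as exc:
        if exc.http_status == 404:
            print('object not found: the PUT really did fail server-side')
        else:
            raise

The same check can be done from a shell with the CLI that ships with python-swiftclient, e.g. 'swift stat <container> <object>'.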
