lmc doesn't create partition large enough

Bug #813296 reported by Paul Larson
This bug affects 2 people
Affects: Linaro Image Tools
Status: Fix Released
Importance: High
Assigned to: James Westby
Milestone: 2011.08

Bug Description

In LAVA, we use LMC to generate an image from a hwpack and rootfs, then extract the boot and root filesystems from it to push the test image to the development board. I noticed recently that ubuntu-desktop images started to fail. You can see a good serial log of this here:
http://validation.linaro.org/jenkins/job/Lava%20Daily%20Beaglexm01/imagetype=ubuntu-desktop,target=beaglexm03%20omap3/50/console

It looks like the rootfs for ubuntu-desktop images is now too large for the 2G default size.

RELEASE NOTE:
  * The Ubuntu LEB images now take more than 2G of disk space. Ensure that you are using an SD card larger than this, or, if you are writing an image file, specify --image_size with a value greater than 2G.

Revision history for this message
Ricardo Salveti (rsalveti) wrote : Re: [Bug 813296] [NEW] lmc doesn't create partition large enough, fails to deallocate loopback on failure

Do you know when this started to happen?

Revision history for this message
Paul Larson (pwlars) wrote : Re: lmc doesn't create partition large enough, fails to deallocate loopback on failure

It's not a regression in l-m-c; it seems the ubuntu-desktop images have grown, and the 2G default is no longer big enough.

Revision history for this message
Guilherme Salgado (salgado) wrote : Re: [Bug 813296] Re: lmc doesn't create partition large enough, fails to deallocate loopback on failure

Would it be reasonable to just increase the default size? If so, would
2.5G be enough or should we go to 3GB?

Revision history for this message
Paul Larson (pwlars) wrote : Re: lmc doesn't create partition large enough, fails to deallocate loopback on failure

What I'm doing to work around it in the dispatcher at the moment is just calling it with --image_size 3G. I think that's reasonable, but anyone who uses cards smaller than 3G, writes to an image file first, and then burns that image to SD will have to specify a smaller size explicitly. That's about the only case I can think of where it might inconvenience someone at the moment; I suspect most people write directly to cards rather than to image files.

Revision history for this message
Guilherme Salgado (salgado) wrote : Re: [Bug 813296] Re: lmc doesn't create partition large enough, fails to deallocate loopback on failure

It might not make sense to use 3GB as the default for all image types
(although I don't think it's a big deal, as people can easily specify a
smaller size when they need to), but since the Ubuntu LEB no longer fits
on a 2GB card we must require more than 2GB at least when creating an
Ubuntu image.

Revision history for this message
James Westby (james-w) wrote :

On Thu, 21 Jul 2011 14:22:22 -0000, Guilherme Salgado <email address hidden> wrote:
> Would it be reasonable to just increase the default size? If so, would
> 2.5G be enough or should we go to 3GB?

Given that we don't have a way to express the default size that should be
used in the image itself, we should just pick one. I would go for 3GB here.

Perhaps we could sum the sizes of the image file and hwpacks and pick a
default based on that.

There also seems to be a bug here about leaking loop mounts. I'll split
that out into a different bug report.

Thanks,

James
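
A rough sketch of that idea (not existing l-m-c code; the function name, the expansion factor, and the rounding are all assumptions) might look like this:

    import os

    # Hypothetical helper: derive a default image size from the compressed
    # sizes of the rootfs tarball and the hwpack(s), applying a generous
    # expansion factor and rounding up to the next 256M boundary so the
    # result maps onto a sensible --image_size value.
    EXPANSION_FACTOR = 4.5   # assumed; see the ratio discussion further down
    ALIGNMENT = 256 * 1024 * 1024

    def guess_image_size(rootfs_tarball, hwpack_files):
        compressed = os.path.getsize(rootfs_tarball)
        compressed += sum(os.path.getsize(h) for h in hwpack_files)
        estimate = int(compressed * EXPANSION_FACTOR)
        return ((estimate + ALIGNMENT - 1) // ALIGNMENT) * ALIGNMENT

The result would only ever be a starting point; a user-supplied --image_size would still take precedence.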

Revision history for this message
Guilherme Salgado (salgado) wrote :

On Thu, 2011-07-21 at 19:44 +0000, James Westby wrote:
> On Thu, 21 Jul 2011 14:22:22 -0000, Guilherme Salgado <email address hidden> wrote:
> > Would it be reasonable to just increase the default size? If so, would
> > 2.5G be enough or should we go to 3GB?
>
> Given we don't have a way to express the default size that should be
> used in the image we should just pick one. I would go for 3GB here.

That's true, but we could easily hard-code a check to not allow attempts
to build Ubuntu images with less than 2GB.

> Perhaps we could sum the sizes of the image file and hwpacks and pick a
> default based on that.

That's certainly a more robust alternative. We should do something
similar for SD cards as well.

Revision history for this message
James Westby (james-w) wrote :

On Thu, 21 Jul 2011 20:44:24 -0000, Guilherme Salgado <email address hidden> wrote:
> On Thu, 2011-07-21 at 19:44 +0000, James Westby wrote:
> > On Thu, 21 Jul 2011 14:22:22 -0000, Guilherme Salgado <email address hidden> wrote:
> > > Would it be reasonable to just increase the default size? If so, would
> > > 2.5G be enough or should we go to 3GB?
> >
> > Given we don't have a way to express the default size that should be
> > used in the image we should just pick one. I would go for 3GB here.
>
> That's true, but we could easily hard-code a check to not allow attempts
> to build Ubuntu images with less than 2GB.

I'm guessing we'd like to shrink that sometime, so that sort of check
will become counter-productive in time.

> > Perhaps we could sum the sizes of the image file and hwpacks and pick a
> > default based on that.
>
> That's certainly a more robust alternative. We should do something
> similar for SD cards as well

Compressed, the latest Ubuntu LEB image is 467M.

http://snapshots.linaro.org/11.05-daily/linaro-ubuntu-desktop/20110721/0/images/tar/

panda-x11-base hwpack is 86M

http://snapshots.linaro.org/11.05-daily/linaro-hwpacks/panda-x11-base/20110721/0/images/hwpack/

If the minimum size is 2.5G, that implies more than 4x compression.

nano is currently 28M compressed, with the panda hwpack being 34M. That
combination currently requires between 256M and 512M.

This suggests that we may want to warn if

     (compressed)*4.5 > (image_size)

but I don't know how reliable that would be, especially if we change
something about the compression.

I don't think there's a way to know how big a gzipped stream is without
uncompressing it.

Thanks,

James
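
To make the heuristic concrete, here is an illustrative version of that warning check (the function name and message are assumptions, not l-m-c code):

    import os

    EXPANSION_FACTOR = 4.5  # the ratio suggested above; purely empirical

    def warn_if_probably_too_small(compressed_paths, image_size_bytes):
        # Sum the compressed payloads (rootfs tarball plus hwpacks) and warn
        # when the assumed expansion would overflow the requested image size.
        compressed_total = sum(os.path.getsize(p) for p in compressed_paths)
        if compressed_total * EXPANSION_FACTOR > image_size_bytes:
            print("Warning: %dM of compressed input may not fit in a %dM "
                  "image; consider a larger --image_size."
                  % (compressed_total // (1024 * 1024),
                     image_size_bytes // (1024 * 1024)))
            return True
        return False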

Revision history for this message
Dave Martin (dave-martin-arm) wrote :

> This suggests that we may want to warn if
>
> (compressed)*4.5 > (image_size)
>
> but I don't know how reliable that would be, especially if we change
> something about the compression.
>
> I don't think there's a way to know how big a gzipped stream is without
> uncompressing it.

Amortised across the whole filesystem, I suspect that the ratio is
relatively stable for a given compression algorithm.

Providing that we allow a substantial margin for error, it could still
work quite reliably.

Revision history for this message
James Westby (james-w) wrote :

On Fri, 22 Jul 2011 08:49:41 -0000, Dave Martin <email address hidden> wrote:
> Amortised across the whole filesystem, I suspect that the ratio is
> relatively stable for a given compression algorithm.

Each algorithm also compresses to a different degree. I agree that we
would want any heuristic to be based on the algorithm used.

My concern would be that in the future we may end up warning that nano
can't be put into a 128M image when it has been slimmed down and can in
fact fit in 64M.

> Providing that we allow a substantial margin for error, it could still
> work quite reliably.

Yeah, and I think it should only be a warning, though I'm not sure that
will work for the validation lab. In reality it wouldn't make a whole
lot of difference except for failing quicker.

Thanks,

James

Revision history for this message
Dave Martin (dave-martin-arm) wrote :

On Fri, Jul 22, 2011 at 2:30 PM, James Westby <email address hidden> wrote:
> On Fri, 22 Jul 2011 08:49:41 -0000, Dave Martin <email address hidden> wrote:
>> Amortised across the whole filesystem, I suspect that the ratio is
>> relatively stable for a given compression algorithm.
>
> Each algorithm also compresses to a different degree. I agree that we
> would want any heuristic to be based on the algorithm used.
>
> My concern would be that in the future we may end up warning that nano
> can't be put into a 128M image when it has been slimmed down and can in
> fact fit in 64M.
>
>> Providing that we allow a substantial margin for error, it could still
>> work quite reliably.
>
> Yeah, and I think it should only be a warning, though I'm not sure that
> will work for the validation lab. In reality it wouldn't make a whole
> lot of difference except for failing quicker.

It's hard to come up with a single answer which is guaranteed to work
for automation ... and there's a small risk that even if building the
image succeeds, it may have so little free space that the image may
fail to boot, or the desired validation actions won't work properly.

Could we actually do an image build as part of the packaging process,
as a way of priming the tools with more realistic size estimates?

Alternatively, l-m-c or the validation lab would need to implement
some auto-size option, where the image build is done repeatedly,
starting with an initial guesstimate and growing the size each time
until an image with the desired amount of free space is created
successfully. That sounds like overkill, though ... hopefully we
wouldn't need to go for something quite that complex.

Cheers
---Dave
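
For what it's worth, the auto-size loop itself would only be a few lines; this sketch assumes a hypothetical build_image(size) callable that returns the free space left in the built image, which is not something l-m-c exposes today:

    # Illustrative only: grow the image size until a build leaves enough
    # free space, giving up after a handful of attempts.
    def auto_size_build(initial_size, required_free, build_image, max_attempts=5):
        size = initial_size
        for _ in range(max_attempts):
            free = build_image(size)  # hypothetical: returns free bytes
            if free >= required_free:
                return size
            size = int(size * 1.25)   # grow by 25% and retry
        raise RuntimeError("could not build an image with enough free space")

Each retry repeats the whole unpack-and-copy cycle, which is why it feels like overkill.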

summary:
- lmc doesn't create partition large enough, fails to deallocate loopback on failure
+ lmc doesn't create partition large enough
description: updated
Revision history for this message
James Tunnicliffe (dooferlad) wrote :

At the very least, we decompress everything to /tmp, so we can definitely stop linaro-media-create early if the target device/image is too small based on that data. We could print an early warning based on the gzipped size immediately when the command is run, and halt once the data has been decompressed if the target size really is too small. If we are writing to a file, we could ask the user whether we should change the image size to [size based on unzipped data] and continue.
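
A minimal sketch of that early check, assuming the unpacked data sits under a single directory and using a 10% headroom margin (both assumptions, not current l-m-c behaviour):

    import os

    def unpacked_size(root):
        # Total on-disk size of the decompressed rootfs/hwpack contents.
        total = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if not os.path.islink(path):
                    total += os.path.getsize(path)
        return total

    def check_target_large_enough(unpack_dir, target_size_bytes, margin=1.1):
        needed = int(unpacked_size(unpack_dir) * margin)
        if needed > target_size_bytes:
            raise SystemExit("Target is too small: need roughly %d bytes, "
                             "have %d" % (needed, target_size_bytes))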

Revision history for this message
Tom Gall (tom-gall) wrote :

As I recall, the gzip format records the size of the uncompressed stream (the ISIZE field at the end of each member). Accessing that might be of value when deciding what to do. http://www.ietf.org/rfc/rfc1952.txt
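
For reference, reading that field only needs the last four bytes of the file; the caveat is that ISIZE is the length modulo 2^32, so it wraps for streams over 4GB and is misleading for concatenated gzip members:

    import os
    import struct

    def gzip_uncompressed_size_hint(path):
        # ISIZE: uncompressed length mod 2^32, little-endian, stored in the
        # last 4 bytes of a gzip member (RFC 1952, section 2.3.1).
        with open(path, 'rb') as f:
            f.seek(-4, os.SEEK_END)
            return struct.unpack('<I', f.read(4))[0]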

Revision history for this message
Loïc Minier (lool) wrote :

I think I had suggested this in another bug report already, but can't find it: gzip -l should give the uncompressed size information we need.

The other part of the problem is computing the size of an unpacked tar in a filesystem: the fs adds overhead, and the allocation block size (4 KiB or other) alone affects the amount of space needed, depending on the number of files.
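
One way to approximate that overhead, sketched here under the assumption of a 4 KiB allocation block and a crude 10% allowance for metadata (both numbers are guesses, not measurements), is to walk the tarball and round each file up to a whole block. Note this has to read through the compressed stream, so it isn't free:

    import tarfile

    BLOCK_SIZE = 4096  # assumed filesystem block size

    def estimated_unpacked_size(tar_path):
        total = 0
        with tarfile.open(tar_path) as tar:
            for member in tar:
                if member.isreg():
                    blocks = (member.size + BLOCK_SIZE - 1) // BLOCK_SIZE
                    total += max(blocks, 1) * BLOCK_SIZE
        return int(total * 1.1)  # rough allowance for inodes, dirs, journal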

James Westby (james-w)
Changed in linaro-image-tools:
status: New → Triaged
importance: Undecided → High
milestone: none → 2011.07
description: updated
Revision history for this message
Alexander Sack (asac) wrote :

Moving unfixed bugs to the "next" milestone should have happened during the release post-mortem by the RMs/PMs. It seems this one wasn't moved, so I have moved it to the August milestone now.

Changed in linaro-image-tools:
milestone: 2011.07 → 2011.08
Revision history for this message
Alexander Sack (asac) wrote :

Also subscribed linaro-release to let them decide whether they care enough about this to track its progress.

Mattias Backman (mabac)
Changed in linaro-image-tools:
assignee: nobody → James Westby (james-w)
status: Triaged → Fix Committed
Mattias Backman (mabac)
Changed in linaro-image-tools:
status: Fix Committed → Fix Released