Michael and I played around with some different settings, and here are my notes.
1) Package kde-runtime seems to install /etc/sysctl.d/30-baloo-inotify-limits.conf, which sets max_user_watches to 512*1024.
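(For reference, a quick way to check the effective values; the sysctl keys are standard, and the grep paths are just the usual places distros put these drop-ins:)

  # Show the current per-UID inotify limits
  sysctl fs.inotify.max_queued_events fs.inotify.max_user_instances fs.inotify.max_user_watches
  # Find which drop-in file sets a value
  grep -r max_user_watches /etc/sysctl.d/ /usr/lib/sysctl.d/ 2>/dev/null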
'slabtop' says that my baseline kernel memory is 380-420MB with no containers
running.
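(If anyone wants to reproduce these numbers, slabtop can emit a one-shot snapshot instead of running interactively; this is roughly how I collected the figures below:)

  # One-shot slab snapshot, sorted by cache size, largest first
  sudo slabtop -o -s c | head -n 20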
2) With:
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 524288
go-run-lxc.sh fails to start container number 13, with "x-013 failed to boot".
Kernel memory has grown to 1076MB (up from 400MB), with:
141816K dentry
564320K btrfs_inode
as the largest items.
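(For the record, settings like these can be applied on the fly with sysctl -w, or persisted via a sysctl.d drop-in; the file name below is just an example:)

  sudo sysctl -w fs.inotify.max_queued_events=16384
  sudo sysctl -w fs.inotify.max_user_instances=128
  sudo sysctl -w fs.inotify.max_user_watches=524288
  # Or persist them across reboots (example file name):
  printf '%s\n' \
    'fs.inotify.max_queued_events = 16384' \
    'fs.inotify.max_user_instances = 128' \
    'fs.inotify.max_user_watches = 524288' \
    | sudo tee /etc/sysctl.d/60-inotify-test.conf
  sudo sysctl --system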
3) However, if I set:
fs.inotify.max_user_watches = 8192
it fails again at exactly 13 containers.
So while max_user_watches may come into play in a real-world scenario, a
"standard" desktop value of 512K is plenty (at least to get machines
provisioned). (I do believe I've seen max_user_watches come into play while
using Juju in the past; it just isn't the specific problem in the
go-lxc-run.sh script, which takes Juju out of the picture.)
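(A cross-check I didn't do here, but which would settle whether watches are anywhere near the limit: count the inotify instances and watches actually in use. A sketch that walks /proc, assuming a 3.8+ kernel that exposes inotify details in fdinfo; run as root to see all processes:)

  #!/bin/sh
  # Tally inotify instances (fds) and watches across all processes.
  instances=0 watches=0
  for fdinfo in /proc/[0-9]*/fdinfo/*; do
      # Each watch on an inotify fd shows up as an "inotify wd:..." line.
      n=$(grep -c '^inotify' "$fdinfo" 2>/dev/null)
      [ "${n:-0}" -gt 0 ] || continue
      instances=$((instances + 1))
      watches=$((watches + n))
  done
  echo "inotify instances: $instances, watches: $watches"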
4) If I then play around with 'max_user_instances' with:
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 256
fs.inotify.max_user_watches = 524288
I then fail on the 19th container, with the error "Failed to change ownership of:
<random filename here>".
Kernel memory is up to: 1363MB
Top entries are:
178872K dentry
723104K btrfs_inode
And at this point, my machine is behaving poorly. Things like "lxc delete
--force" actually fail to clean up instances, because of
"error: unable to open database file".
And Term crashed at least one time.
I even tried at one point to set:
fs.inotify.max_user_instances = 2048
But it still failed at 19 (that was when Term crashed).
But I'm pretty confident this means we're exhausting some other limit,
rather than inotify max_user_instances or max_user_watches, since changing
either of them doesn't actually increase the number of containers.
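(For whoever picks this up next, these are the other limits I'd probe first; the list is my guess at likely suspects, not confirmed culprits:)

  # Kernel-wide limits that could plausibly cap container count
  for key in kernel.pid_max kernel.threads-max fs.file-max \
             kernel.keys.maxkeys kernel.keys.maxbytes; do
      sysctl "$key"
  done
  # Per-user limits in the shell that launches the containers
  ulimit -u   # max user processes
  ulimit -n   # max open files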
I'm going to run some more tests with Juju in the loop.