Comment 45 for bug 1972159

In , mikhail.v.gavrilov (mikhail.v.gavrilov-redhat-bugs) wrote:

(In reply to Anita Zhang from comment #12)
> @Mikhail Was the system responsive and performing well at 54% pressure on
> the user service cgroup? Also can you try stopping systemd-oomd (sudo
> systemctl stop systemd-oomd) and recording what the highest tolerable
> pressure value was from
> `/sys/fs/cgroup/user.slice/user-$UID.slice/user@$UID.service/memory.
> pressure` while your container is running? We can't control for all
> workloads but it's worthwhile to see what pressure is tolerable or not.

$ cat /sys/fs/cgroup/user.slice/user-$UID.slice/user@$UID.service/memory.pressure
some avg10=0.00 avg60=0.00 avg300=0.13 total=1698253169
full avg10=0.00 avg60=0.00 avg300=0.11 total=1515028054
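For reference, the percentage systemd-oomd compares against its threshold is the avg10 value on the `full` line of memory.pressure. A minimal sketch of extracting and checking that value (the sample line below is illustrative, mirroring the format above, not taken from a live system; on a real system you would `cat` the file under /sys/fs/cgroup instead):

```shell
# Illustrative PSI line in the same format as the memory.pressure output above.
psi_line='full avg10=52.46 avg60=48.10 avg300=30.00 total=1515028054'

# Pull out the 10-second "full" stall average.
avg10=$(printf '%s\n' "$psi_line" | sed -n 's/^full avg10=\([0-9.]*\).*/\1/p')

# Compare against a 50% limit; awk handles the floating-point comparison.
if awk -v v="$avg10" 'BEGIN { exit !(v > 50) }'; then
    echo "pressure ${avg10}% exceeds 50% threshold"
else
    echo "pressure ${avg10}% within threshold"
fi
```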

$ journalctl -b -u systemd-oomd --no-pager
-- Journal begins at Thu 2021-07-29 17:02:00 +05, ends at Wed 2021-09-08 00:51:09 +05. --
Sep 04 03:16:03 primary-ws systemd[1]: Starting Userspace Out-Of-Memory (OOM) Killer...
Sep 04 03:16:03 primary-ws systemd[1]: Started Userspace Out-Of-Memory (OOM) Killer.
Sep 08 00:23:23 primary-ws systemd-oomd[1552]: Killed /user.slice/user-1000.slice/user@1000.service/app.slice/app-org.gnome.Terminal.slice/vte-spawn-887e6f17-fa6d-44cd-aa80-798d5c0c71ce.scope due to memory pressure for /user.slice/user-1000.slice/user@1000.service being 52.46% > 50.00% for > 20s with reclaim activity
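The kill in the log fired because pressure stayed above the 50.00% limit for more than 20 seconds. If a workload legitimately sustains that kind of pressure, the defaults can be loosened via oomd.conf; the option names below come from systemd's oomd.conf(5), but the values are illustrative, and a per-unit ManagedOOMMemoryPressureLimit= on the slice would override them:

```
# /etc/systemd/oomd.conf.d/limit.conf (illustrative values, not a recommendation)
[OOM]
DefaultMemoryPressureLimit=80%
DefaultMemoryPressureDurationSec=30s
```

After editing, `sudo systemctl restart systemd-oomd` applies the new defaults.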