Comment 12 for bug 1893716

Christian Ehrhardt (paelzer) wrote:

I wanted to get a better feeling for this before jumping to action.
Therefore I created three 1G/1vCPU KVM guests (Bionic, Focal, Jammy) to test and compare this on.

I do not need hot-loop analysis or anything down to the instruction level, so no debug symbols are needed.
For now I only want to know:
1. how much time a bunch of low-effort logins takes (we measure only the overhead)
2. how much CPU is utilized while doing that
3. how that work is spread across programs (disable them one by one and look at the data; a sketch for that follows the list of scripts below)

The two scripts I see standing out the most are:
- /usr/lib/update-notifier/update-motd-hwe-eol
    apt-config shell SourceList Dir::Etc::sourcelist
- /etc/update-motd.d/50-landscape-sysinfo
    /usr/bin/python3 /usr/bin/landscape-sysinfo
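
To attribute the overhead per script (point 3 above) I'll toggle them one by one between runs. A minimal sketch of that, relying on update-motd only running executable snippets in /etc/update-motd.d (the landscape path is the one listed above; which snippet invokes update-motd-hwe-eol differs per release and would need to be looked up):

$ sudo chmod -x /etc/update-motd.d/50-landscape-sysinfo   # disable for the next measurement run
$ sudo chmod +x /etc/update-motd.d/50-landscape-sysinfo   # re-enable afterwards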

Very non-pro data gathering:

$ cat tracestart.sh
#!/bin/bash
# remove artifacts of a previous run (-f so a missing file is not an error)
sudo rm -f perf.data perf.log log.vmstat
# sample system-wide CPU usage every 5s and record per-task cpu-clock samples
nohup vmstat -wt 5 &> log.vmstat &
nohup perf record --event cpu-clock --all-cpus &> perf.log &
# give both collectors a moment to start before the measured load begins
sleep 5

$ cat traceend.sh
#!/bin/bash
# stop the collectors started by tracestart.sh and dump their logs
killall perf
killall vmstat
cat perf.log
cat log.vmstat
# per-command summary of the CPU samples, run later on demand:
#perf report --sort comm --stdio
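
To condense log.vmstat into a single CPU-utilization number (point 2 above), a small awk one-liner could average the cpu columns; note that the field positions (us=$13, sy=$14, id=$15) are an assumption based on the classic procps vmstat -wt layout and would need to be verified on each release:

$ awk '$1 ~ /^[0-9]+$/ {us+=$13; sy+=$14; id+=$15; n++} END {if (n) printf "samples=%d avg us=%.1f sy=%.1f id=%.1f\n", n, us/n, sy/n, id/n}' log.vmstat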

The simplest load involving those helpers:
for sys in login-bionic login-focal login-jammy; do ssh $sys "sudo ~/tracestart.sh"; time for i in $(seq 1 100); do ssh $sys "/bin/true"; done; ssh $sys "sudo ~/traceend.sh"; done

I'll gather those numbers for Bionic/Focal/Jammy, with everything enabled/disabled and/or with individual elements toggled.
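
For the "disable it all" case, a minimal sketch, assuming everything of interest is invoked via the executable snippets in /etc/update-motd.d:

$ sudo chmod -x /etc/update-motd.d/*   # baseline run without any dynamic motd work
$ sudo chmod +x /etc/update-motd.d/*   # re-enable afterwards (assumes all snippets were executable before)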