Comment 8 for bug 1876230

Matthew Ruffell (mruffell) wrote:

> - apart from feedback given by @mruffell, to also check if any of librcu consumers are depending on a full membarrier - driven by kernel - for ** shared pages among different processes **

I agree with @ddstreet: I don't think liburcu gives that sort of guarantee when it comes to cross-process synchronisation. My understanding is that liburcu targets synchronisation across a set of threads within the current process only.
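
To illustrate the intended scope, here is a minimal sketch of the usual liburcu usage pattern (the function names are the real liburcu API; the surrounding structure is mine). Every reader thread registers itself within the current process, and synchronize_rcu() only waits for those registered threads; nothing in this model names another process:

#include <urcu.h>

void *reader(void *arg)
{
    rcu_register_thread();   /* registration is per-thread, per-process */
    rcu_read_lock();
    /* ... dereference RCU-protected data ... */
    rcu_read_unlock();
    rcu_unregister_thread();
    return NULL;
}

void writer_path(void)
{
    /* ... publish a new version of the data ... */
    synchronize_rcu();       /* waits only for readers registered above */
    /* ... reclaim the old version ... */
}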

Proof by contradiction.

Assume that a program compiles against liburcu and uses it to synchronise access to shared memory pages for IPC with a sister process.

If the program links against liburcu 0.9 or lower, support for the sys_membarrier syscall did not exist in the library yet, and liburcu will use the default compiler-based membarrier, which is only good within the current process. Synchronisation across shared memory pages fails. This is the case on Xenial, Trusty and the like.
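
For reference, that fallback is roughly a full CPU-and-compiler barrier executed by each thread itself; an illustrative sketch of what it boils down to on x86-64 (the helper name is mine, not liburcu's):

static inline void fallback_full_barrier(void)
{
#if defined(__x86_64__)
    __asm__ __volatile__("mfence" ::: "memory");  /* CPU + compiler barrier */
#else
    __sync_synchronize();                         /* GCC full-barrier builtin */
#endif
}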

If the program links against liburcu 0.11 or newer, sys_membarrier support does exist, but MEMBARRIER_CMD_SHARED is only used if the running kernel does not support MEMBARRIER_CMD_PRIVATE_EXPEDITED. There is no toggle option in the API at all, so for users on kernel 4.14 or higher, MEMBARRIER_CMD_PRIVATE_EXPEDITED will be used, and synchronisation across shared memory pages will fail. This is the case on Eoan, Focal and Groovy.
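
To make that selection concrete, here is a rough sketch of the probe-and-prefer logic described above, assuming kernel headers new enough to define the commands (the function names are mine, not liburcu's internals):

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/membarrier.h>

static int membarrier(int cmd, unsigned int flags)
{
    return syscall(__NR_membarrier, cmd, flags);
}

static int choose_membarrier_cmd(void)
{
    int mask = membarrier(MEMBARRIER_CMD_QUERY, 0);

    if (mask < 0)
        return -1;  /* no sys_membarrier: compiler-based fallback */
    if (mask & MEMBARRIER_CMD_PRIVATE_EXPEDITED) {
        /* must register before PRIVATE_EXPEDITED may be issued */
        membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0);
        return MEMBARRIER_CMD_PRIVATE_EXPEDITED;  /* current process only */
    }
    return MEMBARRIER_CMD_SHARED;  /* system wide, covers all processes */
}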

If the program links against liburcu 0.10, and uses the -qsbr, -mb or -signal variants, sys_membarrier is not used at all, and liburcu relies on the compiler-based membarrier, which is only good within the current process. Synchronisation across shared memory pages will fail.
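
Illustrative only, and from memory of the 0.10-era API: the flavour is picked with a define before the include plus the matching library at link time, and none of these three flavours ever calls sys_membarrier:

#define RCU_MB       /* full memory barriers; sys_membarrier never used */
#include <urcu.h>    /* build with: gcc app.c -lurcu-mb */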

If the program links against liburcu 0.10, and is used within a container on a kernel older than 4.3 that does not support sys_membarrier, such as a Bionic container on a Trusty 3.13 host, or on a 3.10 RHEL host, the sys_membarrier syscall fails, and liburcu falls back to the compiler-based membarrier. Synchronisation across shared memory pages will fail.
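
A quick way to check for that situation from inside a container is to probe for the syscall directly; pre-4.3 host kernels fail with ENOSYS (this standalone check is mine, not something liburcu ships):

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/membarrier.h>

int main(void)
{
    if (syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0) < 0 && errno == ENOSYS)
        printf("no sys_membarrier: liburcu falls back to compiler barriers\n");
    return 0;
}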

Now, the upstream developers made MEMBARRIER_CMD_PRIVATE_EXPEDITED the default in liburcu 0.11. They did not change the API to accommodate both MEMBARRIER_CMD_SHARED and MEMBARRIER_CMD_PRIVATE_EXPEDITED; instead, on kernel 4.14 or newer, MEMBARRIER_CMD_PRIVATE_EXPEDITED will be used. Upstream are well aware of their consumers, and they would not break everyone's usage out of the blue without adding some sort of API provision for legacy users.

Thus, our initial assumption that liburcu can be used to synchronise access to shared memory pages for IPC with a sister process is wrong, since no one would write a program that only works in one specific environment, namely Bionic on bare metal with liburcu 0.10. I'm not even sure how you would co-ordinate liburcu over multiple processes either.

So, because of the above, I don't think any liburcu consumers are depending on a full membarrier, driven by the kernel, for shared pages among different processes.

I still think this is safe to SRU.