Comment 31 for bug 24692

ruediix@gmail.com (ruedii) wrote :

Yes, but the space is also unallocated.
Allocation of physical memory is done in far smaller blocks, so the "zero space" is just empty allocation space; it is not using physical RAM. This is because memory mapping on 64-bit systems is done through sparse allocation: the virtual memory layer is mapped to the physical memory layer in small pages, and most of those pages simply reference nothing.
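
To make that concrete, here is a minimal C sketch of demand paging on Linux; the 1GB reservation and the 25 touched pages are made-up illustration values:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Reserve 1 GiB of address space. No physical RAM is committed yet. */
    size_t reserve = 1UL << 30;
    char *p = mmap(NULL, reserve, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Touch only the first 25 pages; the kernel backs just those with RAM. */
    long page = sysconf(_SC_PAGESIZE);
    for (int i = 0; i < 25; i++)
        memset(p + (size_t)i * page, 1, 1);

    printf("reserved %zu bytes, touched %ld bytes\n", reserve, 25 * page);

    /* Keep the process alive so VmSize vs. VmRSS can be compared in
     * /proc/<pid>/status from another shell. */
    pause();
    return 0;
}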

It's much like how non-existent phone numbers don't need lines or buildings behind them. Unused sparse memory allocation is pretty much the same: the program is like a business that reserves a block of 100 phone numbers and then only uses 25 of them. As long as there are enough phone numbers to go around, it doesn't cause any more lines to be in use.

Sure, it's a problem. There is no benefit, and it wastes resources to map in this manner. However, it doesn't hemorrhage memory like you would think.

More importantly, this matters far more if any 32-bit libraries are built in this manner. Virtual address space is a very precious thing for memory-demanding 32-bit legacy programs, such as games under Wine that want as much of their 4GB address space as possible as a single flat block, and wasting the shared virtual address space of a 32-bit process dramatically reduces the flat memory space available to such programs. This is probably a far bigger concern than 64-bit virtual address space, of which there is honestly plenty (64TB of allocation space in the current standard mapping, more on some large servers).
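
As a rough back-of-the-envelope illustration (the counts here are made up for the arithmetic, not measured): a 32-bit process on Linux has roughly 3GB of usable address space, and a big Wine game can easily pull in 100+ shared libraries. A small library padded out to the next 1MB boundary can waste close to 1MB of address space on its own, so on the order of 100MB of that 3GB can be gone to padding before the program has allocated a single byte of its own.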

It would be far more beneficial to align to 4K or 64K depending on the size of the library. Only insanely big libraries should be aligned to a 1MB boundary.

I can list the real consequences of 1MB alignment of small libraries, and they are pretty bad:
1. Cache prefetch performance issues.
2. Bulk swap transfers: if a program's shared libraries aren't in contiguous space, they can't be swapped in bulk as one chunk.
3. Increased VM overhead.

You are right, there is no reason for the 1MB alignment and it wastes some resources. However, VM handling, particularly demand paging, which enables sparse memory mapping and dynamic paging, should prevent it from using nearly as much physical memory as you would think.
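
If you want to check this for yourself, here is a rough Linux-specific sketch (the ".so" substring match is just a crude heuristic) that walks /proc/self/smaps and prints each shared-library mapping header next to its Rss line, i.e. how much of the padded mapping is actually resident:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/smaps", "r");
    char line[512];
    int in_lib = 0;

    if (!f)
        return 1;
    while (fgets(line, sizeof line, f)) {
        /* Mapping header lines contain an address range and, for shared
         * libraries, a path ending in ".so" or similar. */
        if (strchr(line, '-') && strstr(line, ".so")) {
            in_lib = 1;
            fputs(line, stdout);
        } else if (in_lib && strncmp(line, "Rss:", 4) == 0) {
            /* Resident size of that mapping, usually far below its span. */
            fputs(line, stdout);
            in_lib = 0;
        }
    }
    fclose(f);
    return 0;
}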

Personally, I think the alignment should be 4KB, 64KB or 2MB depending on object size on systems with 4KB pages, and 64KB or 2MB on systems with 64KB pages. The choice should come from a waste-factor calculation that weighs wasted address space against increased allocation cost.
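
Something like the following hypothetical sketch is what I mean; the max_waste_ratio threshold is an invented tuning knob, not anything the real linker or loader uses:

#include <stddef.h>
#include <stdio.h>

#define KIB(x) ((size_t)(x) << 10)
#define MIB(x) ((size_t)(x) << 20)

/* Pick 2MB, 64KB or 4KB alignment for an object, preferring the largest
 * alignment whose worst-case padding stays below max_waste_ratio of the
 * object's size. */
static size_t pick_alignment(size_t object_size, double max_waste_ratio)
{
    const size_t candidates[] = { MIB(2), KIB(64), KIB(4) };

    if (object_size == 0)
        return KIB(4);

    for (size_t i = 0; i < sizeof candidates / sizeof candidates[0]; i++) {
        size_t align  = candidates[i];
        size_t padded = (object_size + align - 1) & ~(align - 1);
        double waste  = (double)(padded - object_size) / (double)object_size;
        if (waste <= max_waste_ratio)
            return align;
    }
    return KIB(4);   /* never align smaller than a page */
}

int main(void)
{
    /* A 300KB library ends up 64KB aligned, a 500MB arena 2MB aligned. */
    printf("%zu\n", pick_alignment(300 * KIB(1), 0.25));
    printf("%zu\n", pick_alignment(500 * MIB(1), 0.25));
    return 0;
}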

The reason for the 2MB option is large page allocation. Specifically, if a program is using hundreds of megabytes of memory, it is better to allocate them in 2MB-aligned blocks so that the kernel can dynamically collapse them into huge pages. This reduces VM overhead in both page indexing and processor overhead, reduces memory map fragmentation on both the physical and virtual layers, allows faster cache transfers on modern CPUs, and speeds transfers in and out of swap.
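
For example, here is a rough sketch of a 2MB-aligned allocation that the kernel can promote to transparent huge pages; the 256MB size is arbitrary, and MADV_HUGEPAGE only has an effect when transparent huge pages are enabled:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len   = 256UL << 20;          /* 256 MiB, illustrative */
    size_t align = 2UL << 20;            /* 2 MiB, the x86-64 huge page size */

    /* Over-allocate, then round up to a 2 MiB boundary so whole huge pages fit. */
    char *raw = mmap(NULL, len + align, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    char *aligned = (char *)(((uintptr_t)raw + align - 1) & ~(uintptr_t)(align - 1));

    /* Hint that this region should be backed by 2 MiB huge pages. */
    if (madvise(aligned, len, MADV_HUGEPAGE) != 0)
        perror("madvise");

    printf("region at %p, %zu MiB, huge-page eligible\n",
           (void *)aligned, len >> 20);
    return 0;
}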

I agree that there is absolutely no reason to use a 1MB alignment. It is too big to be beneficial for small allocations, and too small to be beneficial for large allocations.

My reasons may be different, but I actually think it is an even bigger problem.