Support huge pages
Huge pages are heavier, slower to CoW, can waste memory, and would leave roughly 0.4% of memory as unused PageInfos. But they almost always reduce TLB overhead, and most importantly, they require far fewer page table mappings and flushes than small pages. That means they can, in some cases, be a huge improvement (512x fewer mappings for 2 MiB pages) in IPC latency and, to a lesser extent, throughput. 1 GiB pages may also allow recently-used physical addresses to hopefully always reside in at least the L2 DTLB in kernel mode, speeding up e.g. copying of pages. Jeremy measured that the optimal buffer size for throughput (for most schemes, including redoxfs) was 4 MiB, with larger sizes being slower due to mapping/flushing and possibly TLB overhead.
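The 512x figure falls straight out of the page-size ratio; a quick sketch of the arithmetic (the 16-byte PageInfo size here is an assumed illustrative value, not the actual size of the kernel's PageInfo struct):

```rust
const PAGE_SIZE: usize = 4096; // 4 KiB base page
const HUGE_2M: usize = 2 << 20; // 2 MiB huge page
const HUGE_1G: usize = 1 << 30; // 1 GiB huge page
const PAGE_INFO_SIZE: usize = 16; // ASSUMPTION: illustrative per-frame metadata size

fn main() {
    // One 2 MiB mapping replaces this many 4 KiB mappings:
    assert_eq!(HUGE_2M / PAGE_SIZE, 512);
    // One 1 GiB mapping replaces this many:
    assert_eq!(HUGE_1G / PAGE_SIZE, 262_144);
    // Per-frame metadata overhead under the assumed PageInfo size:
    let overhead = PAGE_INFO_SIZE as f64 / PAGE_SIZE as f64;
    println!("PageInfo overhead: {:.1}%", overhead * 100.0);
}
```

With a 16-byte per-frame record, 16/4096 comes out to about 0.4%, matching the figure above; frames covered by a huge page would not need those records, so that fraction sits unused.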
Worth noting that AArch64 supports two additional standard page sizes -- 16 KiB and 64 KiB -- which are more efficient for some if not most workloads, and 16 KiB could maybe even be a better default. Ironically, Zen 3+ AMD CPUs also effectively support a 16 KiB page size, by merging any 4 virtually-contiguous pages that are physically contiguous and naturally 16 KiB-aligned into a single 16 KiB TLB entry. Although that would use 4x as much page table memory as a native 16 KiB page size, it might also be worth looking into.
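The coalescing condition described above can be sketched as a predicate over the four physical frames backing a run of consecutive virtual pages (the helper name and representation are hypothetical; it only illustrates the hardware's merge rule, and assumes the virtual side is already consecutive and 16 KiB-aligned):

```rust
const PAGE_SIZE: u64 = 4096;
const GROUP: u64 = 4; // 4 x 4 KiB = 16 KiB

/// Hypothetical helper: could four consecutive 4 KiB mappings be merged
/// into one 16 KiB TLB entry? The first frame must be 16 KiB-aligned and
/// the remaining frames physically contiguous after it.
fn coalescible(phys_frames: &[u64; 4]) -> bool {
    phys_frames[0] % (GROUP * PAGE_SIZE) == 0
        && phys_frames.windows(2).all(|w| w[1] == w[0] + PAGE_SIZE)
}

fn main() {
    // 16 KiB-aligned, physically contiguous run: mergeable.
    assert!(coalescible(&[0x4000, 0x5000, 0x6000, 0x7000]));
    // Contiguous but misaligned start: not mergeable.
    assert!(!coalescible(&[0x5000, 0x6000, 0x7000, 0x8000]));
    // Aligned but not contiguous: not mergeable.
    assert!(!coalescible(&[0x4000, 0x5000, 0x7000, 0x8000]));
}
```

A frame allocator that hands out naturally-aligned, contiguous 16 KiB groups would satisfy this condition for free, which is where the 4x page-table-memory trade-off comes in.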