page cache
The Linux kernel's in-memory cache of filesystem page contents; the reason repeat reads of a warm file run at DRAM speed, and why O_DIRECT exists for workloads that want to bypass it.
The page cache is Linux’s unified buffer of filesystem page contents, kept in otherwise-free DRAM. When a process reads or writes a file, the data flows through page-sized (typically 4 KiB) cache entries managed by the kernel. A second read of the same offset is served from DRAM at memory speed; a write is staged as dirty pages in the cache and flushed to disk asynchronously by writeback, unless the caller forces immediate writeback with fsync or O_SYNC, or bypasses the cache entirely with O_DIRECT.
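A minimal sketch of the staged-then-flushed write path, using Python's thin wrappers over the underlying syscalls (the function name `durable_write` is illustrative, not from the entry):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write through the page cache, then force writeback to the device."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)  # data now sits as dirty pages in the page cache
        os.fsync(fd)        # block until those pages reach stable storage
    finally:
        os.close(fd)
```

Without the fsync, the write would still be visible to other processes immediately (they read the same cache pages), but a power loss before writeback could lose it.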
The page cache is why “warm” reads of a file are nearly free: cat big-file > /dev/null pulls it into the page cache, and a second run completes at DRAM speed. It’s also why benchmarks that don’t control for cache state are worthless: you’re measuring the cache, not the storage.
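One way to control cache state without root (the root-only alternative is writing to /proc/sys/vm/drop_caches, which drops the whole page cache): ask the kernel to evict a single file's pages with posix_fadvise before each “cold” timing run. A sketch; the helper name is made up, and DONTNEED is advisory, so the kernel may keep pages it considers busy:

```python
import os

def evict_from_cache(path: str) -> None:
    """Ask the kernel to drop this file's cached pages before a cold-read benchmark."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)  # DONTNEED skips dirty pages, so flush them to disk first
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # len=0: the whole file
    finally:
        os.close(fd)
```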
For low-latency storage workloads there are two paths. The through-the-cache path (read, write, mmap, io_uring without O_DIRECT) gets the kernel’s readahead, writeback, and eviction policies for free. The direct path (O_DIRECT, SPDK, ublk) bypasses the cache entirely, giving the application responsibility for block alignment, write amplification, and caching strategy — useful when your app has a better cache model than the kernel (databases, for example).
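A sketch of the direct path's alignment burden, assuming a filesystem that supports O_DIRECT (tmpfs, for one, does not). O_DIRECT requires the buffer, file offset, and length to be block-aligned; here an anonymous mmap supplies a page-aligned buffer, which satisfies the usual 512 B / 4 KiB requirement. The function name is illustrative:

```python
import mmap
import os

def direct_read(path: str, nbytes: int = 4096) -> bytes:
    """Read a file while bypassing the page cache entirely."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    try:
        buf = mmap.mmap(-1, nbytes)    # anonymous mapping: page-aligned scratch buffer
        got = os.preadv(fd, [buf], 0)  # kernel transfers straight into buf, no cache copy
        return bytes(buf[:got])
    finally:
        os.close(fd)
```

Note what the plain-read version never had to think about: a misaligned buffer or length fails with EINVAL rather than just going slower, which is exactly the responsibility shift the direct path implies.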
Related syscalls: posix_fadvise(POSIX_FADV_DONTNEED) evicts a file’s pages from the cache (madvise(MADV_DONTNEED) only drops pages from a mapping; for file-backed mappings the data stays in the page cache); posix_fadvise also hints at access patterns (POSIX_FADV_SEQUENTIAL, POSIX_FADV_WILLNEED); mmap + madvise(MADV_SEQUENTIAL) tunes readahead for a mapping.
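The hint calls above, exercised end to end via Python's wrappers (mmap.madvise needs Python ≥ 3.8 on Linux); the file contents are throwaway demo data:

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (4 * mmap.PAGESIZE))
os.fsync(fd)

# Per-descriptor hint: we will scan sequentially, so ramp up readahead.
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)

m = mmap.mmap(fd, 0, prot=mmap.PROT_READ)
m.madvise(mmap.MADV_SEQUENTIAL)  # same hint, scoped to this mapping
data = m[:]                      # stream through the mapping once
m.madvise(mmap.MADV_DONTNEED)    # done: the mapping's pages are fair game to reclaim
m.close()
os.close(fd)
os.unlink(path)
```

These are advisory: the kernel is free to ignore them, so they tune behavior rather than guarantee it.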