Coalescing is a small price to pay for large allocations since they happen less
frequently. This reduces fragmentation while also potentially improving
locality.
When allocating from dirty extents (which we always prefer if available), large
active extents can get split even when the new allocation is much smaller, and
the fragmentation this introduces causes long-term damage. This new option
controls the threshold for reusing and splitting an existing active extent: we
avoid using a large extent for much smaller sizes, in order to reduce
fragmentation. In some workloads, adding the threshold improves virtual memory
usage by more than 10x.
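A minimal sketch of such a fit check, with a hypothetical extent_fit_ok helper
and a lg-scaled threshold (in released jemalloc the knob is spelled
opt.lg_extent_max_active_fit, stated here as an assumption):

    #include <stdbool.h>
    #include <stddef.h>

    /*
     * Hypothetical fit check: reuse (and split) an active extent only when
     * its size is within 2^lg_max_fit of the requested size; otherwise
     * leave it intact so a large extent is not fragmented by a tiny
     * allocation.
     */
    static bool
    extent_fit_ok(size_t extent_size, size_t request_size,
        unsigned lg_max_fit) {
        return (extent_size >> lg_max_fit) <= request_size;
    }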
While working on #852, I noticed the prng state is atomic. This is the only
atomic use of prng in all of jemalloc. Instead, use a thread-local prng
state where possible, to avoid unnecessary cache-line contention.
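A sketch of the idea, not jemalloc's actual code: each thread owns its own
state, so the linear-congruential step needs no atomics.

    #include <stdint.h>

    /*
     * Per-thread PRNG state: no atomics needed, so threads no longer
     * contend on a shared cache line. The LCG constants are Knuth's,
     * which jemalloc's prng also uses.
     */
    static __thread uint64_t prng_state = 0x9e3779b97f4a7c15ULL;

    static inline uint64_t
    prng_next(void) {
        prng_state = prng_state * 6364136223846793005ULL +
            1442695040888963407ULL;
        return prng_state;
    }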
Added an upper bound on how many pages we can decay during the current run.
Without this, the number of stashed pages could grow without bound, since other
threads can keep adding new pages into the extents while decay runs.
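A hedged sketch of the clamp (names are assumed stand-ins):

    #include <stddef.h>

    /*
     * Clamp the pages one decay pass may stash, so the pass terminates
     * even while other threads keep adding dirty pages to the extents.
     */
    static size_t
    decay_npages_clamped(size_t npages_pending, size_t npages_limit) {
        return npages_pending < npages_limit ? npages_pending : npages_limit;
    }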
This option controls the maximum grow size used by grow_retained. It is useful
when we have customized extent hooks reserving physical memory (e.g. 1G huge
pages); without the cap, the default increasing sequence could result in
fragmented and wasted physical memory.
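A sketch of setting the cap through jemalloc's (real) malloc_conf symbol; the
option name retain_grow_limit is what this knob is called in released jemalloc,
stated here as an assumption:

    /*
     * Cap retained growth at 1G, matching 1G huge pages; jemalloc reads
     * the malloc_conf symbol at startup. Option name assumed.
     */
    const char *malloc_conf = "retain_grow_limit:1073741824";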
This attempts to use the VM_OVERCOMMIT OID - newly introduced in -CURRENT a
few days ago, specifically for this purpose - instead of querying the
sysctl by its string name. Due to how sysctlbyname(3) works, this means
we do one syscall during binary startup instead of two.
Signed-off-by: Edward Tomasz Napierala <trasz@FreeBSD.org>
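A sketch of the mechanism on FreeBSD (the header providing VM_OVERCOMMIT is an
assumption):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <vm/vm_param.h>   /* VM_OVERCOMMIT; header location assumed */

    /*
     * Query vm.overcommit by static MIB. sysctlbyname(3) resolves the
     * string to a MIB with one syscall and then queries with another;
     * using the OID directly needs only the query itself.
     */
    static int
    overcommit_read(int *overcommit) {
        int mib[2] = {CTL_VM, VM_OVERCOMMIT};
        size_t sz = sizeof(*overcommit);
        return sysctl(mib, 2, overcommit, &sz, NULL, 0);
    }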
This avoids a sysctl(2) syscall during binary startup by using the value
passed in the ELF aux vector instead.
Signed-off-by: Edward Tomasz Napierala <trasz@FreeBSD.org>
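A sketch of the mechanism via FreeBSD's elf_aux_info(3); the log does not name
the aux entry involved, so AT_PAGESZ is used below purely as an illustration:

    #include <sys/auxv.h>
    #include <elf.h>   /* AT_PAGESZ */

    /*
     * elf_aux_info(3) copies a value the kernel already placed in the ELF
     * aux vector, so no syscall is made. AT_PAGESZ is an illustration
     * only; the log does not name the aux entry actually used.
     */
    static int
    aux_pagesize(void) {
        int pagesz = 0;
        elf_aux_info(AT_PAGESZ, &pagesz, sizeof(pagesz));
        return pagesz;
    }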
We observed that arena 0 can have much more metadata allocated compared to
other arenas. Tune the auto mode to only switch to huge pages on the 5th block
(instead of the 3rd previously) for a0.
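A hypothetical sketch of the trigger (names and the non-a0 threshold are
assumed):

    #include <stdbool.h>
    #include <stddef.h>

    /*
     * Arena 0 carries extra metadata, so "auto" requires it to fill more
     * base blocks than other arenas before switching to THP. The non-a0
     * threshold here is assumed.
     */
    static bool
    base_auto_thp_switch(unsigned arena_ind, size_t n_blocks) {
        size_t threshold = (arena_ind == 0) ? 5 : 2;
        return n_blocks >= threshold;
    }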
Before this commit, extent_recycle_split intermingles the splitting of an extent
and the return of parts of that extent to a given extents_t. After it, that
logic is separated. This will enable splitting extents that don't live in any
extents_t (as the grow retained region soon will).
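A hedged sketch of the resulting shape (types and helpers are assumed
stand-ins): the split step only yields the remainders, and the caller
separately decides where, if anywhere, to return them.

    #include <stddef.h>

    typedef struct extent_s extent_t;     /* opaque stand-in */
    typedef struct extents_s extents_t;   /* opaque stand-in */

    /* Assumed helpers mirroring the described separation. */
    void extent_split_interior(extent_t *extent, size_t size,
        size_t alignment, extent_t **lead, extent_t **trail);
    void extents_insert(extents_t *container, extent_t *extent);

    /*
     * After the refactor: splitting only yields the lead/trail
     * remainders; returning them to a particular extents_t is the
     * caller's separate decision, so extents owned by no extents_t
     * (e.g. grow-retained memory) can be split the same way.
     */
    static void
    recycle_split_then_return(extents_t *container, extent_t *extent,
        size_t size, size_t alignment) {
        extent_t *lead, *trail;
        extent_split_interior(extent, size, alignment, &lead, &trail);
        if (lead != NULL) {
            extents_insert(container, lead);
        }
        if (trail != NULL) {
            extents_insert(container, trail);
        }
    }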
On x86 Linux, we define our own MADV_FREE if madvise(2) is available but
MADV_FREE is not detected. This allows the feature to be built in and enabled
with runtime detection.
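A sketch of the described guard (the value 8 matches the Linux UAPI headers):

    #include <sys/mman.h>

    /*
     * On x86 Linux, supply the stable kernel value when the build headers
     * predate MADV_FREE; whether the running kernel honors it is still
     * decided at runtime.
     */
    #if defined(__linux__) && (defined(__i386__) || defined(__x86_64__)) && \
        !defined(MADV_FREE)
    #define MADV_FREE 8   /* value from the Linux UAPI headers */
    #endif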
Since we allocate rtree nodes from a0's base, it is pushed past its first block
during initialization right away, which makes the auto THP mode less effective
on a0. We change a0 to make the switch on the 3rd block instead.
Quoting from https://github.com/jemalloc/jemalloc/issues/761 :
[...] reading the Power ISA documentation[1], the assembly in [the CPU_SPINWAIT
macro] isn't correct anyway (as @marxin points out): the setting of the
program-priority register is "sticky", and we never undo the lowering.
We could do something similar, but given that we don't have testing here in the
first place, I'm inclined to simply not try. I'll put something up reverting the
problematic commit tomorrow.
[1] Book II, chapter 3 of the 2.07B or 3.0B ISA documents.
There does not seem to be any overlap between usage of
extent_avail and extent_heap, so we can use the same hook.
The only remaining usage of rb trees is in the profiling code,
which has some 'interesting' iteration constraints.
Fixes #888
In userspace ARM on Linux, zeroing the high bits is the correct way to do this.
This doesn't fix the fact that we currently set LG_VADDR to 48 on ARM, when in
fact larger virtual address sizes are coming soon. We'll cross that bridge when
we come to it.
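A sketch of the masking, assuming LG_VADDR == 48 as the text states:

    #include <stdint.h>

    #define LG_VADDR 48   /* current setting for ARM, per the text above */

    /*
     * On LP64 Linux/ARM, user pointers fit in the low LG_VADDR bits, so
     * clearing the high bits recovers a valid pointer after metadata has
     * been packed into them.
     */
    static inline void *
    ptr_from_bits(uintptr_t bits) {
        return (void *)(bits & (((uintptr_t)1 << LG_VADDR) - 1));
    }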
It's possible to build with lazy purge enabled but deploy to systems without
such support. In that case, rely on the boot-time detection instead of
continuing to make unnecessary madvise calls (which all return EINVAL).
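A hypothetical sketch of such a boot-time probe:

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static bool lazy_purge_supported = false;

    /*
     * Probe once at boot: if the kernel rejects MADV_FREE (EINVAL on
     * pre-4.5 Linux), record that so later purges skip the call entirely.
     */
    static void
    purge_mode_boot(size_t page_size) {
        void *p = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            return;
        }
        lazy_purge_supported = (madvise(p, page_size, MADV_FREE) == 0);
        munmap(p, page_size);
    }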
We've been seeing strange errors in jemalloc_cpp.cpp since Travis upgraded from
precise to trusty as their default CI environment (seemingly stemming from the
new clang version finding the headers for an old version of libstdc++). In
the long run we'll have to deal with this "for real", but at that point we may
have a better C++ story in general, making it a moot point.
If we guarantee no malloc activity in extent hooks, it's possible to make
customized hooks work on arena 0. Remove the non-a0 assertion to enable such
use cases.
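A sketch of installing custom hooks on arena 0 through jemalloc's public
mallctl (the "arena.<i>.extent_hooks" control is real; my_hooks is a
user-defined stand-in):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    static extent_hooks_t my_hooks;   /* user-defined; must not malloc */

    /*
     * Swap in custom extent hooks on arena 0 via the public mallctl;
     * the old hooks are returned through oldp.
     */
    static int
    install_a0_hooks(void) {
        extent_hooks_t *new_hooks = &my_hooks;
        extent_hooks_t *old_hooks;
        size_t sz = sizeof(old_hooks);
        return mallctl("arena.0.extent_hooks", (void *)&old_hooks, &sz,
            (void *)&new_hooks, sizeof(new_hooks));
    }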
To avoid the high RSS caused by THP in low-usage arenas (i.e. where THP becomes
a significant percentage of RSS), added a new "auto" option which only starts
using THP after a base allocator has used up its first THP region. Starting
from the second hugepage (in a single arena), "auto" behaves the same as
"always", i.e. madvise hugepage right away.
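A sketch of selecting the mode via jemalloc's (real) malloc_conf symbol; the
option name metadata_thp is taken from released jemalloc and stated here as an
assumption:

    /*
     * Select the new mode at startup; jemalloc reads the malloc_conf
     * symbol during initialization. Option name assumed.
     */
    const char *malloc_conf = "metadata_thp:auto";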