commit 8bb3198f72
Abstract arenas access to use arena_get() (or a0get() where appropriate) rather than directly reading e.g. arenas[ind]. Prior to the addition of the arenas.extend mallctl, the worst possible outcome of directly accessing arenas was a stale read, but arenas.extend may allocate and assign a new array to arenas.

Add a tsd-based arenas_cache, which amortizes arenas reads. This introduces some subtle bootstrapping issues, with tsd_boot() now being split into tsd_boot0() and tsd_boot1() to support tsd wrapper allocation bootstrapping, as well as an arenas_cache_bypass tsd variable which dynamically terminates allocation of arenas_cache itself.

Promote a0malloc(), a0calloc(), and a0free() to be generally useful for internal allocation, and use them in several places (more may be appropriate).

Abstract arena->nthreads management and fix a missing decrement during thread destruction (recent tsd refactoring left arenas_cleanup() unused).

Change arena_choose() to propagate OOM and handle OOM in all callers. This is important for providing consistent allocation behavior when the MALLOCX_ARENA() flag is used: prior to this fix, an OOM could cause an allocation to be silently served from a different arena than the one specified (see the before/after sketch following the file list).
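To make the refactoring concrete, here is a minimal, self-contained sketch of the arenas_cache idea, written against simplified assumptions: arena_get_hard(), the pthread locking, and the struct layout below are illustrative, not jemalloc's actual internals. In particular, jemalloc allocates the real per-thread cache via a0malloc() guarded by arenas_cache_bypass to break bootstrapping recursion; plain malloc() sidesteps that concern here.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
	unsigned ind;      /* Index of this arena in the global array. */
	unsigned nthreads; /* Threads currently assigned to this arena. */
} arena_t;

static pthread_mutex_t arenas_lock = PTHREAD_MUTEX_INITIALIZER;
static arena_t **arenas;   /* Replaced wholesale when the set grows. */
static unsigned narenas;

/* Per-thread snapshot of the pointer array.  Indexing it cannot race
 * with a concurrent extend, because an extend installs a new array
 * rather than mutating the old one in place. */
static _Thread_local arena_t **arenas_cache;
static _Thread_local unsigned arenas_cache_n;

/* Slow path: re-snapshot the live array under the lock. */
static arena_t *
arena_get_hard(unsigned ind)
{
	pthread_mutex_lock(&arenas_lock);
	if (narenas > 0) {
		arena_t **snap = malloc(narenas * sizeof(arena_t *));
		if (snap != NULL) {
			memcpy(snap, arenas, narenas * sizeof(arena_t *));
			free(arenas_cache);
			arenas_cache = snap;
			arenas_cache_n = narenas;
		}
	}
	pthread_mutex_unlock(&arenas_lock);
	/* NULL on failure: callers must handle it rather than silently
	 * falling back to some other arena. */
	return (ind < arenas_cache_n) ? arenas_cache[ind] : NULL;
}

/* Fast path: amortized, lock-free lookup through the cache. */
static arena_t *
arena_get(unsigned ind)
{
	if (ind < arenas_cache_n)
		return arenas_cache[ind];
	return arena_get_hard(ind);
}

/* Grow the arena set by installing a new, larger array, as the
 * arenas.extend mallctl does. */
static arena_t *
arenas_extend(void)
{
	pthread_mutex_lock(&arenas_lock);
	arena_t **grown = malloc((narenas + 1) * sizeof(arena_t *));
	arena_t *a = malloc(sizeof(*a));
	if (grown == NULL || a == NULL) {
		free(grown);
		free(a);
		pthread_mutex_unlock(&arenas_lock);
		return NULL;
	}
	if (narenas > 0)
		memcpy(grown, arenas, narenas * sizeof(arena_t *));
	a->ind = narenas;
	a->nthreads = 0;
	grown[narenas] = a;
	free(arenas); /* This is why direct arenas[ind] reads are unsafe. */
	arenas = grown;
	narenas++;
	pthread_mutex_unlock(&arenas_lock);
	return a;
}

int
main(void)
{
	arenas_extend();
	arenas_extend();
	/* First call snapshots under the lock; later calls are lock-free. */
	arena_t *a = arena_get(1);
	printf("arena 1 -> ind %u\n", a != NULL ? a->ind : 0);
	return 0;
}
```

The key property the sketch shows: an extend installs a brand-new array instead of mutating the old one in place, so a reader either indexes its private snapshot (fast path, no lock) or re-snapshots under the lock, and never touches an array that a concurrent extend might free out from under it.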
Files changed:
arena.h
atomic.h
base.h
bitmap.h
chunk_dss.h
chunk_mmap.h
chunk.h
ckh.h
ctl.h
extent.h
hash.h
huge.h
jemalloc_internal_decls.h
jemalloc_internal_defs.h.in
jemalloc_internal_macros.h
jemalloc_internal.h.in
mb.h
mutex.h
private_namespace.sh
private_symbols.txt
private_unnamespace.sh
prng.h
prof.h
public_namespace.sh
public_unnamespace.sh
ql.h
qr.h
quarantine.h
rb.h
rtree.h
size_classes.sh
stats.h
tcache.h
tsd.h
util.h
valgrind.h
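As promised above, a small before/after sketch of the arena_choose() OOM-propagation fix. The stub arena_get() and the arena_choose_old() baseline are hypothetical illustrations of the behavior the message describes, not jemalloc's actual code.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct {
	unsigned ind;
} arena_t;

/* Stand-in lookup that can fail, e.g. because creating the arena on
 * first use hit OOM.  Here every index other than 0 "fails". */
static arena_t *
arena_get(unsigned ind)
{
	static arena_t arena0 = {0};
	return (ind == 0) ? &arena0 : NULL;
}

/* Pre-fix behavior: swallow the failure and fall back to arena 0, so a
 * request pinned with MALLOCX_ARENA(ind) could silently be served from
 * the wrong arena. */
static arena_t *
arena_choose_old(unsigned ind)
{
	arena_t *a = arena_get(ind);
	return (a != NULL) ? a : arena_get(0);
}

/* Post-fix behavior: propagate NULL so every caller turns the OOM into
 * a failed allocation rather than a mis-placed one. */
static arena_t *
arena_choose(unsigned ind)
{
	return arena_get(ind);
}

int
main(void)
{
	arena_t *old = arena_choose_old(3);
	arena_t *fixed = arena_choose(3);
	printf("old: served from arena %u (silent fallback)\n", old->ind);
	printf("new: %s\n", fixed == NULL ?
	    "NULL (OOM propagated to caller)" : "served");
	return 0;
}
```

With propagation, an allocation pinned via MALLOCX_ARENA() can fail cleanly instead of succeeding from the wrong arena, which is the consistency guarantee the message calls out.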