Move the table of size classes from jemalloc.c to the manual page. When
manually formatting the manual page, it is now necessary to use:

    nroff -man -t jemalloc.3
If multiple threads race to initialize malloc, the loser(s) busy-wait
until initialization is complete. Add a missing mutex lock so that the
loser(s) properly release the initialization mutex. Under some
race conditions, this flaw could have caused one or more threads to
become permanently blocked.
Reported by Terrell Magee.
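A minimal sketch of the pattern at issue (illustrative names only, not
jemalloc's actual initialization code): losers of the race spin until
the winner finishes, and must re-acquire the init mutex before their
final unlock so that lock operations stay balanced.

    #include <pthread.h>
    #include <sched.h>
    #include <stdbool.h>

    static bool malloc_initialized = false;
    static bool malloc_initializing = false;
    static pthread_mutex_t init_lock = PTHREAD_MUTEX_INITIALIZER;

    static void
    malloc_init_hard(void)
    {
        pthread_mutex_lock(&init_lock);
        if (malloc_initializing) {
            /* Lost the race; busy-wait until the winner finishes. */
            do {
                pthread_mutex_unlock(&init_lock);
                sched_yield();
                pthread_mutex_lock(&init_lock); /* The missing re-lock. */
            } while (malloc_initialized == false);
            pthread_mutex_unlock(&init_lock);   /* Now properly balanced. */
            return;
        }
        malloc_initializing = true;
        /* Drop the lock during initialization so the allocator can be used
         * recursively while bootstrapping. */
        pthread_mutex_unlock(&init_lock);
        /* ... one-time initialization ... */
        pthread_mutex_lock(&init_lock);
        malloc_initialized = true;
        pthread_mutex_unlock(&init_lock);
    }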
Fix the libunwind version of prof_backtrace() to set the backtrace depth
for all possible code paths. This fixes the zero-length backtrace
problem when using libunwind.
When heap profiling is enabled but deactivated, there is no need to call
isalloc(ptr) in prof_{malloc,realloc}(). Avoid these calls, so that
profiling overhead under such conditions is negligible.
If there is more than one arena, initialize next_arena so that the
first and second threads to allocate memory use arenas 0 and 1, rather
than both using arena 0.
Use the size argument to tcache_dalloc_large() to control the number of
bytes set to 0x5a when junk filling is enabled, rather than accessing a
non-existent arena bin. This bug was capable of corrupting an
arbitrarily large memory region, depending on what followed the arena
data structure in memory (typically zeroed memory, another arena_t, or a
red-black tree node for a huge object).
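In outline (a hedged sketch, with opt_junk and tcache_t reduced to
stand-ins rather than the real definitions): the fill is bounded by the
caller-supplied size instead of a length read from arena bin metadata
that large size classes do not have.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    typedef struct tcache_s tcache_t;   /* Opaque for this sketch. */
    static bool opt_junk = true;

    static void
    tcache_dalloc_large(tcache_t *tcache, void *ptr, size_t size)
    {
        if (opt_junk) {
            /* Junk-fill exactly [ptr, ptr+size). */
            memset(ptr, 0x5a, size);
        }
        /* ... insert ptr into the appropriate large-object cache bin ... */
        (void)tcache;
    }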
Properly maintain tcache_bin_t's avail pointer such that it is NULL if
no objects are cached. This only caused problems during thread cache
destruction, since cache flushing otherwise never occurs on an empty
bin.
Fix arena_chunk_dealloc() to put the new spare in a consistent state before
dropping the arena mutex to deallocate the previous spare.
Fix arena_run_dalloc() to insert a newly dirtied chunk into the
chunks_dirty list before potentially deallocating the chunk, so that dirty
page accounting is self-consistent.
Initialize bt2cnt_tsd so that cleanup at thread exit actually happens.
Associate (prof_ctx_t *) with allocated objects, rather than
(prof_thr_cnt_t *). Each thread must always operate on its own
(prof_thr_cnt_t *), and an object may outlive the thread that allocated it.
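The ownership split, roughly (types simplified and fields hypothetical,
not the actual declarations): the per-backtrace context is shared and
can safely outlive any thread, while counters remain thread-private.

    #include <stdint.h>

    typedef struct prof_ctx_s prof_ctx_t;    /* Shared; one per backtrace. */

    typedef struct prof_thr_cnt_s {
        prof_ctx_t  *ctx;       /* Context these counters apply to. */
        uint64_t    curobjs;
        uint64_t    curbytes;
    } prof_thr_cnt_t;

    /* An allocated object records only the shared prof_ctx_t *.  When the
     * object is freed -- possibly by a different thread, possibly after the
     * allocating thread has exited -- the freeing thread looks up or creates
     * its own prof_thr_cnt_t for that context and updates those counters. */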
Now that JEMALLOC_OPTIONS=P isn't the only way to cause stats_print() to
be called, opt_stats_print must actually be checked when reporting the
state of the P/p option.
Don't build with -march=native by default, because the generated code
may perform especially poorly on ABI-compatible, but internally
different, systems.
Fix a divide-by-zero error in pprof. It is possible for a sample context
to currently have no associated objects, but its cumulative statistics
are still useful, depending on how the user invokes pprof. Since
jemalloc intentionally does not filter such contexts, take care not to
divide by 0 when re-scaling for v2 heap sampling.
Install pprof as part of 'make install'.
Update pprof documentation.
Add the E/e options to control whether the application starts with
sampling active/inactive (secondary control to F/f). Add the
prof.active mallctl so that the application can activate/deactivate
sampling on the fly.
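For example, toggling sampling at runtime might look like the following
(assuming the usual <jemalloc/jemalloc.h> header and the standard
mallctl() interface; the exact public name depends on how the library's
symbol prefix is configured):

    #include <jemalloc/jemalloc.h>
    #include <stdbool.h>

    static void
    set_prof_active(bool active)
    {
        /* Write the new value; the old value is not requested. */
        mallctl("prof.active", NULL, NULL, &active, sizeof(active));
    }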
Make it possible to disable interval-triggered profile dumping, even if
profiling is enabled. This is useful if the user only wants a single
dump at exit, or if the application manually triggers profile dumps.
If the mean heap sampling interval is larger than one page, simulate
sampled small objects with large objects. This allows profiling context
pointers to be omitted for small objects. As a result, the memory
overhead for sampling decreases as the sampling interval is increased.
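As a rough, made-up illustration of the overhead claim (these numbers
are not taken from this release):

    mean sample interval:                   512 KiB allocated per sample
    sampled 16-byte object promoted to:     one page-sized (large) allocation
    extra space per sample:                 ~4 KiB
    expected overhead per allocated byte:   ~4 KiB / 512 KiB  =  ~0.8%
    doubling the interval roughly halves the expected overhead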
Fix a compilation error in the profiling code.
Properly set/clear CHUNK_MAP_ZEROED for all purged pages, according to
whether the pages are (potentially) file-backed or anonymous. This was
merely a performance pessimization for the anonymous mapping case, but
was a calloc()-related bug for the swap_enabled case.
Remove medium size classes, because concurrent dirty page purging is
no longer capable of purging inactive dirty pages inside active runs
(due to recent arena/bin locking changes).
Enhance tcache to support caching large objects, so that the same range
of size classes is still cached, despite the removal of medium size
class support.
Initialize the small run header before dropping arena->lock;
arena_chunk_purge() relies on valid small run headers during run
iteration.
Add some assertions.
For bin-related allocation, protect data structures with bin locks
rather than arena locks. Arena locks remain for run
allocation/deallocation and other miscellaneous operations.
Restructure statistics counters to maintain per-bin
allocated/nmalloc/ndalloc counts, while continuing to provide arena-wide
statistics via aggregation in the ctl code.
Use chains of cached objects, rather than arrays of pointers.
Since tcache_bin_t is no longer dynamically sized, convert tcache_t's
tbin to an array of structures, rather than an array of pointers. This
implicitly removes tcache_bin_{create,destroy}(), which further
simplifies the fast path for malloc/free.
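Schematically (a sketch with simplified names, not the real data
structures): each cached object's first word holds the pointer to the
next cached object, so the bin itself is fixed-size regardless of how
many slots it holds.

    #include <stddef.h>

    typedef struct tcache_bin_s {
        void        *avail;     /* Head of the chain, or NULL if empty. */
        unsigned    ncached;
    } tcache_bin_t;

    static inline void
    tbin_push(tcache_bin_t *tbin, void *ptr)
    {
        /* The free object's first word stores the old chain head. */
        *(void **)ptr = tbin->avail;
        tbin->avail = ptr;
        tbin->ncached++;
    }

    static inline void *
    tbin_pop(tcache_bin_t *tbin)
    {
        void *ret = tbin->avail;
        if (ret != NULL) {
            tbin->avail = *(void **)ret;
            tbin->ncached--;
        }
        return ret;
    }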
Use cacheline alignment for tcache_t allocations.
Remove runtime configuration option for number of tcache bin slots, and
replace it with a boolean option for enabling/disabling tcache.
Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX
and 2X the number of regions per run for the size class.
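That is, roughly (the TCACHE_NSLOTS_MAX value here is only a
placeholder):

    #define TCACHE_NSLOTS_MAX 128           /* Placeholder value. */

    static unsigned
    tcache_ncached_max(unsigned nregs)      /* nregs: regions per run. */
    {
        unsigned max = 2 * nregs;
        return (max < TCACHE_NSLOTS_MAX) ? max : TCACHE_NSLOTS_MAX;
    }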
For GC-triggered flush, discard 3/4 of the objects below the low water
mark, rather than 1/2.
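In other words, per GC pass the flush count changes roughly as follows
(illustrative names; low_water is the minimum number of cached objects
observed since the previous pass):

    static unsigned
    tcache_gc_nflush(unsigned low_water)
    {
        /* Objects below the low water mark went unused since the last GC
         * pass; discard roughly three quarters of them instead of half. */
        return low_water - (low_water / 4);
    }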