Do not enable lazy locking by default, because:
- It's fragile (applications can subvert detection of multi-threaded
  mode).
- Thread caching amortizes locking overhead in the default
  configuration.
tcache_get() is inlined, so do the config_tcache check inside
tcache_get() and simplify its callers.
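A minimal sketch of the resulting shape (stand-in names; config_tcache
and tcache_tls are placeholders for the real internals):
  #include <stdbool.h>
  #include <stddef.h>
  static const bool config_tcache = true;   /* fixed at configure time */
  typedef struct tcache_s tcache_t;
  extern tcache_t *tcache_tls;
  static inline tcache_t *
  tcache_get(void)
  {
      /* With the check inside the inlined getter, callers need no
       * guard of their own; when config_tcache is compile-time false,
       * the whole body folds to NULL. */
      if (!config_tcache)
          return (NULL);
      return (tcache_tls);
  }
Callers then reduce to a plain tcache = tcache_get(), with no #ifdef.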
Make arena_malloc() an inline function, since it is part of the malloc()
fast path.
Remove conditional logic that caused build issues when --disable-tcache
was specified.
Remove structure magic, because 1) it is no longer conditional, and 2)
it stopped being very effective at detecting memory corruption several
years ago.
Convert configuration-related cpp conditional logic to use static
constant variables, e.g.:
  #ifdef JEMALLOC_DEBUG
    [...]
  #endif
becomes:
  if (config_debug) {
    [...]
  }
The advantage is clearer, more concise code. The main disadvantage is
that data structures no longer have conditionally defined fields, so
they pay the cost of all fields regardless of whether they are used. In
practice, this is only a minor concern; config_stats will go away in an
upcoming change, and config_prof is the only other major feature that
depends on more than a few special-purpose fields.
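A minimal sketch of how such a constant can be defined (names mirror the
example above; not the exact internal definitions):
  #include <stdbool.h>
  static const bool config_debug =
  #ifdef JEMALLOC_DEBUG
      true
  #else
      false
  #endif
      ;
Unlike pure cpp conditionals, both branches of if (config_debug) are
always parsed and type-checked, and the optimizer discards the dead
branch at no runtime cost.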
Fix the logic in stats_print() such that if the "a" flag is passed in
without the "m" flag, merged statistics will be printed even if only one
arena is initialized.
Fix huge_ralloc() to remove the old memory region from tree of huge
allocations *before* calling mremap(2), in order to make sure that no
other thread acquires the old memory region via mmap() and encounters
stale metadata in the tree.
Reported by: Rich Prohaska
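A schematic of the corrected ordering (helper names are hypothetical;
error handling elided):
  #define _GNU_SOURCE
  #include <stddef.h>
  #include <sys/mman.h>
  /* Hypothetical helpers standing in for the huge-extent tree ops. */
  extern void extent_tree_remove_old(void *ptr);
  extern void extent_tree_insert_new(void *ptr, size_t size);
  static void *
  huge_ralloc_move(void *old_ptr, size_t old_size, size_t new_size)
  {
      /* Remove the old node first, while this thread still owns the
       * mapping... */
      extent_tree_remove_old(old_ptr);
      /* ...because once mremap(2) releases the old region, another
       * thread's mmap() may reuse those addresses immediately and
       * would otherwise find the stale node in the tree. */
      void *new_ptr = mremap(old_ptr, old_size, new_size, MREMAP_MAYMOVE);
      extent_tree_insert_new(new_ptr, new_size);
      return (new_ptr);
  }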
Refactor SO and REV so that they are set via the autoconf variables
@so@ and @rev@. Both variables are needed by the jemalloc.sh script, so
this unifies their definitions.
Fix prof_lookup() to artificially raise curobjs for all paths through
the code that create a new entry in the per-thread bt2cnt hash table.
This fixes a race condition that could corrupt memory if prof_accum were
false, and a non-default lg_prof_tcmax were used and/or threads were
destroyed.
Add a missing prof_malloc() call in allocm(). Before this fix, negative
object/byte counts could be observed in heap profiles for applications
that use allocm().
Rewrite prof_alloc_prep() as a cpp macro, PROF_ALLOC_PREP(), in order to
remove any doubt as to whether an additional stack frame is created.
Prior to this change, it was assumed that inlining would reduce the
total number of frames in the backtrace, but in practice behavior wasn't
completely predictable.
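An illustrative macro shape (not the real PROF_ALLOC_PREP() body; the
helper name is hypothetical):
  #include <stddef.h>
  typedef struct prof_thr_cnt_s prof_thr_cnt_t;
  /* Hypothetical helper; the real code captures a backtrace here,
   * skipping nignore frames. */
  extern prof_thr_cnt_t *prof_lookup_bt(size_t nignore, size_t size);
  /* A macro expands textually into its caller, so exactly nignore
   * frames separate the entry point from the backtrace capture; an
   * inline function may or may not add one more frame, depending on
   * whether the compiler actually inlines it. */
  #define PROF_ALLOC_PREP(nignore, size, ret) do {                \
      (ret) = prof_lookup_bt((nignore), (size));                  \
  } while (0)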
Create imemalign() and call it from posix_memalign(), memalign(), and
valloc(), so that all entry points require the same number of stack
frames to be ignored during backtracing.
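Sketch of the unification (signatures simplified; the page size constant
is an assumption for the sketch, and allocation itself is elided):
  #include <errno.h>
  #include <stddef.h>
  #define SKETCH_PAGE_SIZE 4096   /* assumed */
  /* Kept out of line so the backtrace frame depth is deterministic. */
  __attribute__((noinline))
  static void *
  imemalign(size_t alignment, size_t size)
  {
      /* PROF_ALLOC_PREP() runs here, so every entry point below sits
       * exactly one frame above the backtrace capture. */
      (void)alignment; (void)size;
      return (NULL);
  }
  int
  posix_memalign(void **memptr, size_t alignment, size_t size)
  {
      *memptr = imemalign(alignment, size);
      return (*memptr == NULL ? ENOMEM : 0);
  }
  void *
  memalign(size_t alignment, size_t size)
  {
      return (imemalign(alignment, size));
  }
  void *
  valloc(size_t size)
  {
      return (imemalign(SKETCH_PAGE_SIZE, size));
  }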
Properly handle boundary conditions for sampled region promotion in
rallocm(). Prior to this fix, some combinations of 'size' and 'extra'
values could cause erroneous behavior. Additionally, size class
recording for promoted regions was incorrect.
Fix assertions in arena_purge() to accurately reflect the constraints in
arena_maybe_purge(). There were two bugs here, one of which merely
weakened the assertion, and the other of which referred to an
uninitialized variable (typo; used npurgatory instead of
arena->npurgatory).
Add inline assembly implementations of atomic_{add,sub}_uint{32,64}()
for x86/x64, in order to support compilers that are missing the relevant
gcc intrinsics.
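A sketch of the x64 add/sub pair, using the usual lock; xadd idiom:
  #include <stdint.h>
  static inline uint64_t
  atomic_add_uint64(uint64_t *p, uint64_t x)
  {
      uint64_t t = x;
      /* lock; xaddq atomically adds t into *p and leaves the pre-add
       * value in t, so t + x reconstructs the post-add value. */
      __asm__ volatile (
          "lock; xaddq %0, %1;"
          : "+r" (t), "=m" (*p)   /* Outputs. */
          : "m" (*p)              /* Inputs. */
          );
      return (t + x);
  }
  static inline uint64_t
  atomic_sub_uint64(uint64_t *p, uint64_t x)
  {
      /* Subtraction is addition of the two's complement. */
      return (atomic_add_uint64(p, (uint64_t)(-(int64_t)x)));
  }
The uint32 variants are identical except that they use xaddl.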
sa2u() returns 0 on overflow, but the profiling code was blindly calling
sa2u() and allowing the error to silently propagate, ultimately
resulting in a later assertion failure. Refactor all ipalloc() callers
to call sa2u(), check for overflow before calling ipalloc(), and pass
usize rather than size. This allows ipalloc() to avoid calling sa2u()
in the common case.
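Sketch of the refactored caller pattern (prototypes are assumed
stand-ins for the real declarations):
  #include <stddef.h>
  extern size_t sa2u(size_t size, size_t alignment, size_t *run_size_p);
  extern void *ipalloc(size_t usize, size_t alignment, int zero);
  static void *
  aligned_alloc_sketch(size_t size, size_t alignment)
  {
      size_t usize = sa2u(size, alignment, NULL);
      if (usize == 0)
          return (NULL);      /* size/alignment combination overflowed */
      /* Passing the precomputed usize lets ipalloc() skip its own
       * sa2u() call on the common path. */
      return (ipalloc(usize, alignment, 0));
  }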
Fix a regression due to:
    Remove an arena_bin_run_size_calc() constraint.
    2a6f2af6e4
The removed constraint required that small run headers fit in one page,
which indirectly limited runs such that they would not cause overflow in
arena_run_regind(). Add an explicit constraint to
arena_bin_run_size_calc() based on the largest number of regions that
arena_run_regind() can handle (2^11 as currently configured).
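Sketch of the explicit bound (constant names assumed; the 2^11 limit is
from the message):
  #define LG_RUN_MAXREGS  11
  #define RUN_MAXREGS     (1U << LG_RUN_MAXREGS)
  /* Accept a candidate run geometry only if arena_run_regind() can
   * decode region indices for it; the old one-page-header limit
   * implied this bound indirectly. */
  static int
  run_nregs_ok(unsigned nregs)
  {
      return (nregs <= RUN_MAXREGS);
  }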
Dynamically adjust tcache fill count (number of objects allocated per
tcache refill) such that if GC has to flush inactive objects, the fill
count gradually decreases. Conversely, if refills occur while the fill
count is depressed, the fill count gradually increases back to its
maximum value.
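Sketch of the adaptive fill count (field names assumed; the real state
is kept per tcache bin):
  typedef struct {
      unsigned ncached_max;   /* hard cap on cached objects */
      unsigned lg_fill_div;   /* fill count = ncached_max >> lg_fill_div */
  } tcache_bin_sketch_t;
  static unsigned
  fill_count(const tcache_bin_sketch_t *tbin)
  {
      return (tbin->ncached_max >> tbin->lg_fill_div);
  }
  /* GC had to flush inactive objects: halve the fill count, but keep
   * it at least 1. */
  static void
  on_gc_flush(tcache_bin_sketch_t *tbin)
  {
      if ((tbin->ncached_max >> (tbin->lg_fill_div + 1)) >= 1)
          tbin->lg_fill_div++;
  }
  /* A refill occurred while the fill count was depressed: step back
   * toward the maximum. */
  static void
  on_refill(tcache_bin_sketch_t *tbin)
  {
      if (tbin->lg_fill_div > 1)
          tbin->lg_fill_div--;
  }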
pthread_mutex_lock() can call malloc() on OS X (!!!), causing deadlock
when the allocator's own locks are involved. Work around this by using
spinlocks built from more primitive operations.
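Sketch of the OS X workaround (OSSpinLock was the era-appropriate
primitive; it has since been deprecated in favor of os_unfair_lock):
  #include <libkern/OSAtomic.h>
  typedef OSSpinLock malloc_mutex_sketch_t; /* init: OS_SPINLOCK_INIT */
  /* OSSpinLockLock()/OSSpinLockUnlock() never allocate, so they are
   * safe to take inside malloc itself. */
  static inline void
  malloc_mutex_lock_sketch(malloc_mutex_sketch_t *mutex)
  {
      OSSpinLockLock(mutex);
  }
  static inline void
  malloc_mutex_unlock_sketch(malloc_mutex_sketch_t *mutex)
  {
      OSSpinLockUnlock(mutex);
  }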