Add JEMALLOC_INLINE_C and use it instead of JEMALLOC_INLINE in .c files,
so that the annotated functions are always static.
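For illustration, the rough intent (the exact definitions live in the
internal headers and may differ) is:

    #ifdef JEMALLOC_DEBUG
       /* Disable inlining to make debugging easier. */
    #  define JEMALLOC_INLINE
    #  define JEMALLOC_INLINE_C static   /* .c-local helpers stay static */
    #else
    #  define JEMALLOC_INLINE   static inline
    #  define JEMALLOC_INLINE_C static inline
    #endif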
Remove SFMT's inline-related macros and use jemalloc's instead, so that
there's no danger of interactions with jemalloc's definitions that
disable inlining for debug builds.
Refactor tests to use explicit testing assertions, rather than diff'ing
test output. This makes the test code a bit shorter, more explicitly
encodes testing intent, and makes test failure diagnosis more
straightforward.
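For example (includes and harness boilerplate omitted; assert_ptr_not_null()
and assert_zu_ge() stand in for whatever assertion macros the test harness
provides):

    void *p = malloc(42);
    assert_ptr_not_null(p, "Unexpected malloc() failure");
    assert_zu_ge(malloc_usable_size(p), 42,
        "Usable size should cover the request");
    free(p);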
Fix malloc_tsd_dalloc() to bypass tcache when deallocating, so that there
is no danger of causing tcache reincarnation (and hence an infinite loop)
during thread exit. Whether this occurs depends on the pthreads TSD
implementation; it is known to occur on Solaris.
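A minimal sketch of the change, assuming an internal deallocation routine
that takes a try_tcache flag (names are illustrative):

    void
    malloc_tsd_dalloc(void *wrapper)
    {
            /* Pass false for try_tcache so that tearing down TSD at
             * thread exit cannot re-create (reincarnate) the tcache. */
            idalloct(wrapper, false);
    }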
Submitted by Markus Eberspächer.
When using LinuxThreads, pthread_setspecific() triggers recursive
allocation on all threads. Work around this by creating a global linked
list of in-progress TSD initializations.
This modifies the _tsd_get_wrapper macro-generated function. When it has
to initialize a TSD object, it first pushes the item onto the linked
list. If that triggers a recursive allocation, the _get_wrapper request
is satisfied from the list. Once pthread_setspecific() returns, the item
is removed from the list.
This effectively adds a very poor substitute for real TLS used only
during pthread_setspecific allocation recursion.
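A rough sketch of the mechanism (names and details are illustrative; the
in-tree code differs):

    #include <pthread.h>

    typedef struct tsd_init_block_s {
            struct tsd_init_block_s *next;
            pthread_t                thread;
            void                    *data;   /* in-progress TSD value */
    } tsd_init_block_t;

    static tsd_init_block_t *tsd_init_list = NULL;
    static pthread_mutex_t   tsd_init_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Before pthread_setspecific(): if this thread already has an
     * initialization in progress, we recursed; return its data. */
    static void *
    tsd_init_check_recursion(tsd_init_block_t *block, void *data)
    {
            tsd_init_block_t *b;

            pthread_mutex_lock(&tsd_init_lock);
            for (b = tsd_init_list; b != NULL; b = b->next) {
                    if (pthread_equal(b->thread, pthread_self())) {
                            pthread_mutex_unlock(&tsd_init_lock);
                            return (b->data);
                    }
            }
            block->thread = pthread_self();
            block->data = data;
            block->next = tsd_init_list;
            tsd_init_list = block;
            pthread_mutex_unlock(&tsd_init_lock);
            return (NULL);
    }

    /* After pthread_setspecific() returns: unlink this thread's block. */
    static void
    tsd_init_finish(tsd_init_block_t *block)
    {
            tsd_init_block_t **bp;

            pthread_mutex_lock(&tsd_init_lock);
            for (bp = &tsd_init_list; *bp != block; bp = &(*bp)->next)
                    ;
            *bp = block->next;
            pthread_mutex_unlock(&tsd_init_lock);
    }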
Signed-off-by: Crestez Dan Leonard <lcrestez@ixiacom.com>
Add a missing mutex unlock in a malloc_init_hard() error path (failed
mutex initialization). In practice this bug was very unlikely to ever
trigger, but if it did, application deadlock would likely result.
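Schematically (simplified; the failing mutex and surrounding code are
stand-ins):

    malloc_mutex_lock(&init_lock);
    /* ... */
    if (malloc_mutex_init(&some_internal_mutex)) {
            malloc_mutex_unlock(&init_lock);   /* previously missing */
            return (true);
    }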
Reported by Pat Lynch.
Fix a compiler warning in chunk_record() that was due to reading node
rather than xnode. In practice this did not cause any correctness
issue, but dataflow analysis in some compilers cannot tell that node and
xnode are always equal in cases where the read is reached.
Fix a race condition in the "arenas.extend" mallctl that could lead to
internal data structure corruption. The race could be hit if one
thread called the "arenas.extend" mallctl while another thread
concurrently triggered initialization of one of the lazily created
arenas.
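For reference, this is the mallctl that creates a new arena and returns
its index, used roughly as:

    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);

    if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0) {
            /* handle error */
    }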
Fix a Valgrind integration flaw that caused Valgrind warnings about
reads of uninitialized memory in internal zero-initialized data
structures (relevant to tcache and prof code).
Add the JEMALLOC_ALWAYS_INLINE_C macro and use it for always-inlined
functions declared in .c files. This fixes a function attribute
inconsistency for debug builds that resulted in (harmless) compiler
warnings about functions not being inlinable.
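As with JEMALLOC_INLINE_C above, the rough intent (exact definitions may
differ) is:

    #ifdef JEMALLOC_DEBUG
    #  define JEMALLOC_ALWAYS_INLINE_C static
    #else
    #  define JEMALLOC_ALWAYS_INLINE_C \
            static inline __attribute__((always_inline))
    #endif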
Reported by Ricardo Nabinger Sanchez.
Fix chunk_record() to unlock chunks_mtx before deallocating a base
node, in order to avoid potential deadlock. This fix addresses the
second of two similar bugs.
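Schematically (simplified; variable names are illustrative):

    /* ... chunks_mtx held while updating the chunk trees ... */
    malloc_mutex_unlock(&chunks_mtx);
    if (unused_node != NULL) {
            /* base_node_dealloc() can re-enter other allocator paths,
             * so it must not be called while chunks_mtx is held. */
            base_node_dealloc(unused_node);
    }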
Fix a chunk recycling bug that could cause the allocator to lose track
of whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could
cause corruption if allocating via sbrk(2) (unlikely unless running with
the "dss:primary" option specified). This was completely harmless on
Linux unless using mlockall(2) (and unlikely even then, unless the
--disable-munmap configure option or the "dss:primary" option was
specified). This regression was introduced in 3.1.0 by the
mlockall(2)/madvise(2) interaction fix.
Internal reallocation of the quarantined object array leaked the old array.
A failed internal reallocation of the quarantined object array (very
unlikely) resulted in memory corruption.
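The intended grow pattern, as a sketch (names are hypothetical, not the
in-tree code, which manages a ring buffer):

    obj_t *new_objs = internal_alloc(new_count * sizeof(*new_objs));
    if (new_objs == NULL) {
            /* Keep the old array rather than corrupting state. */
            return;
    }
    memcpy(new_objs, quarantine->objs, old_count * sizeof(*new_objs));
    internal_dalloc(quarantine->objs);   /* previously leaked */
    quarantine->objs = new_objs;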
Avoid writing to uninitialized TLS as a side effect of deallocation.
Initializing TLS during deallocation is unsafe because it is possible
that a thread never did any allocation, and that TLS has already been
deallocated by the threads library, resulting in write-after-free
corruption. These fixes affect prof_tdata and quarantine; all other
uses of TLS are already safe, whether intentionally (as for tcache) or
unintentionally (as for arenas).
Revert refactoring of opt_abort and opt_junk declarations. clang
accepts the config_*-based declarations (and generates correct code),
but gcc complains with:
error: initializer element is not constant
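The construct in question reduces to something like:

    #include <stdbool.h>

    static const bool config_fill = true;    /* stand-ins for the real */
    static const bool config_debug = false;  /* config_* constants     */

    /* In C (unlike C++), a const-qualified variable is not a constant
     * expression, so this file-scope initializer is invalid; gcc rejects
     * it, while clang folds the value and accepts it. */
    bool opt_junk = (config_fill && config_debug) ? true : false;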
Update hash from MurmurHash2 to MurmurHash3, primarily because the
latter generates 128 bits in a single call for no extra cost, which
simplifies integration with cuckoo hashing.
Tighten Valgrind integration such that immediately after memory is
validated or zeroed, Valgrind is told to forget the memory's 'defined'
state. The only place newly allocated memory should be left marked as
'defined' is in the public functions (e.g. calloc() and realloc()).
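The pattern, shown here with the raw memcheck client request (the
allocator's internal wrappers and conditionals are omitted):

    #include <string.h>
    #include <valgrind/memcheck.h>

    /* Zero a region for internal bookkeeping, then tell Valgrind to
     * forget that it is defined; only the public entry points (e.g.
     * calloc()) should leave memory marked defined. */
    static void
    zero_and_mark_undefined(void *addr, size_t size)
    {
            memset(addr, 0, size);
            VALGRIND_MAKE_MEM_UNDEFINED(addr, size);
    }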
Move validation of supposedly zeroed pages from chunk_alloc() to
chunk_recycle(). There is little point in validating newly mapped memory
returned by chunk_alloc_mmap(), and memory that comes from sbrk() is
explicitly zeroed, so there is little risk in assuming that
chunk_alloc_dss() actually does the zeroing properly.
This relaxation of validation can make a big difference to application
startup time and overall system usage on platforms that use jemalloc as
the system allocator (namely FreeBSD).
Submitted by Ian Lepore <ian@FreeBSD.org>.
This ensures POLA (principle of least astonishment) on FreeBSD (at
least), as free(3) is generally assumed not to fiddle around with errno.
Signed-off-by: Garrett Cooper <yanegomi@gmail.com>
Modify processing of the lg_chunk option so that it clips an
out-of-range input to the edge of the valid range. This makes it
possible to request the minimum possible chunk size without intimate
knowledge of allocator internals.
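Illustratively (the bound names are placeholders, not the internal
constants):

    /* Clip rather than reject: e.g. MALLOC_CONF="lg_chunk:0" now yields
     * the smallest supported chunk size. */
    if (lg_chunk < LG_CHUNK_MIN)
            lg_chunk = LG_CHUNK_MIN;
    else if (lg_chunk > LG_CHUNK_MAX)
            lg_chunk = LG_CHUNK_MAX;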
Submitted by Ian Lepore (see FreeBSD PR bin/174641).
Fix chunk_recycle() to unconditionally inform Valgrind that returned
memory is undefined. This fixes Valgrind warnings that would result from
a huge allocation being freed, then recycled for use as an arena chunk.
The arena code would write metadata to the chunk header, and Valgrind
would flag these as invalid writes.
Purge unused dirty pages in an order that first performs clean/dirty run
defragmentation, in order to mitigate available run fragmentation.
Remove the limitation that prevented purging unless at least one chunk's
worth of dirty pages had accumulated in an arena. This limitation was
intended to avoid excessive purging for small applications, but the
threshold was arbitrary and its effect was of questionable utility.
Relax opt_lg_dirty_mult from 5 to 3. This compensates for increased
likelihood of allocating clean runs, given the same ratio of clean:dirty
runs, and reduces the potential for repeated purging in pathological
large malloc/free loops that push the active:dirty page ratio just over
the purge threshold.
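Concretely, purging triggers (roughly) once dirty pages exceed
active >> lg_dirty_mult, so the default threshold moves from nactive/32
to nactive/8:

    /* Simplified trigger condition; the in-tree check differs in
     * detail.  With opt_lg_dirty_mult == 3, purge once
     * ndirty > nactive / 8 (previously nactive / 32 at the default
     * of 5). */
    if (arena->ndirty > (arena->nactive >> opt_lg_dirty_mult)) {
            /* purge dirty runs */
    }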