Treat prof_tdata_t's bt2cnt as a comprehensive map of the thread's
extant allocation samples (do not limit the total number of entries).
This helps prepare the way for per thread heap profiling.
Fix runs_dirty-based purging to also purge dirty pages in the spare
chunk.
Refactor runs_dirty manipulation into arena_dirty_{insert,remove}(), and
move the arena->ndirty accounting into those functions.
Remove the u.ql_link field from arena_chunk_map_t, and get rid of the
enclosing union for u.rb_link, since only rb_link remains.
Remove the ndirty field from arena_chunk_t.
Fix the cactive statistic to decrease (rather than increase) when active
memory decreases. This regression was introduced by
aa5113b1fd (Refactor overly large/complex
functions) and first released in 3.5.0.
Some platforms, such as Google's Portable Native Client, use Newlib and
thus lack access to madvise(2). In those instances, pages_purge() is
transformed into a no-op.
Native Client doesn't allow forking, so there is no need to catch
fork() events for Native Client.
Additionally, without this change, jemalloc introduces an unresolved
pthread_atfork() reference in PNaCl Rust binaries.
Some platforms (like those using Newlib) don't have ffs/ffsl. This
commit adds a check to configure.ac for __builtin_ffsl if ffsl isn't
found. __builtin_ffsl performs the same function as ffsl, and has the
added benefit of being available on any platform that uses a
GCC-compatible compiler.
This change does not address the use of ffs in the MALLOCX_ARENA()
macro.
Add size class computation capability, currently used only as validation
of the size class lookup tables. Generalize the size class spacing used
for bins, for eventual use throughout the full range of allocation
sizes.
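A minimal sketch of the kind of spacing this generalization enables, assuming four size classes per doubling (an illustrative assumption, not a statement of jemalloc internals):

    /* Illustrative only: for each group (2^lg_grp, 2^(lg_grp+1)], emit four
     * classes spaced 2^(lg_grp-2) apart, e.g. (128, 256] --> 160, 192, 224, 256. */
    #include <stddef.h>
    #include <stdio.h>

    int
    main(void)
    {
        unsigned lg_grp;

        for (lg_grp = 7; lg_grp <= 9; lg_grp++) {
            size_t base = (size_t)1 << lg_grp;   /* group floor */
            size_t delta = base >> 2;            /* spacing within the group */
            unsigned i;

            for (i = 1; i <= 4; i++)
                printf("%zu ", base + i * delta);
            printf("\n");
        }
        return (0);
    }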
Refactor huge allocation to be managed by arenas (though the global
red-black tree of huge allocations remains for lookup during
deallocation). This is the logical conclusion of recent changes that 1)
made per arena dss precedence apply to huge allocation, and 2) made it
possible to replace the per arena chunk allocation/deallocation
functions.
Remove the top level huge stats, and replace them with per arena huge
stats.
Normalize function names and types to *dalloc* (some were *dealloc*).
Remove the --enable-mremap option. As jemalloc currently operates, this
is a performance regression for some applications, but planned work to
logarithmically space huge size classes should provide similar amortized
performance. The motivation for this change was that mremap-based huge
reallocation forced leaky abstractions that prevented refactoring.
Add new mallctl endpoints "arena.<i>.chunk.alloc" and
"arena.<i>.chunk.dealloc" to allow userspace to configure
jemalloc's chunk allocator and deallocator on a per-arena
basis.
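A hedged sketch of installing a custom allocator through the new endpoint, assuming an unprefixed build; the callback signature used here is an assumption about this release, and the typedef is local to the example:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* Local typedef for illustration; the exact callback signature is assumed. */
    typedef void *(my_chunk_alloc_t)(size_t size, size_t alignment, bool *zero,
        unsigned arena_ind);

    static void *
    my_chunk_alloc(size_t size, size_t alignment, bool *zero, unsigned arena_ind)
    {
        (void)size; (void)alignment; (void)zero; (void)arena_ind;
        return (NULL);  /* stub: NULL reports failure; a real callback returns memory */
    }

    int
    main(void)
    {
        my_chunk_alloc_t *alloc = my_chunk_alloc;
        unsigned arena_ind = 0;
        char cmd[64];

        snprintf(cmd, sizeof(cmd), "arena.%u.chunk.alloc", arena_ind);
        if (mallctl(cmd, NULL, NULL, &alloc, sizeof(alloc)) != 0)
            fprintf(stderr, "failed to install custom chunk allocator\n");
        return (0);
    }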
Simplify backtracing to not ignore any frames, and compensate for this
in pprof in order to increase flexibility with respect to function-based
refactoring even in the presence of non-deterministic inlining. Modify
pprof to blacklist all jemalloc allocation entry points including
non-standard ones like mallocx(), and ignore all allocator-internal
frames. Prior to this change, pprof excluded the specifically
blacklisted functions from backtraces, but it left allocator-internal
frames intact.
Forcefully disable tcache if running inside Valgrind, and remove
Valgrind calls in tcache-specific code.
Restructure Valgrind-related code to move most Valgrind calls out of the
fast path functions.
Take advantage of static knowledge to elide some branches in
JEMALLOC_VALGRIND_REALLOC().
Make dss non-optional on all platforms which support sbrk(2).
Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
"secondary" precedence is specified, but sbrk(2) is not supported.
Make promotion of sampled small objects to large objects mandatory, so
that profiling metadata can always be stored in the chunk map, rather
than requiring one pointer per small region in each small-region page
run. In practice the non-prof-promote code was only useful when using
jemalloc to track all objects and report them as leaks at program exit.
However, Valgrind is at least as good a tool for this particular use
case.
Furthermore, the non-prof-promote code is getting in the way of
some optimizations that will make heap profiling much cheaper for the
predominant use case (sampling a small representative proportion of all
allocations).
When you call free() we load chunk->arena even though that
data isn't used on the tcache hot path.
In profiling some FB applications, I found that ~30% of the
dTLB misses in the free() function come from this line. With
4 MB chunks, the arena_chunk_t->map is ~32 KB (1024 pages
in the chunk, four 8-byte pointers per arena_chunk_map_t). This
means there's only a 1/8 chance of the page containing
chunk->arena also containing the map bits.
arena_chunk_init_hard() may return NULL when it fails to allocate a new
chunk, and arena_chunk_alloc() then passed that NULL into
arena_avail_insert() without any check, causing a crash when
arena_avail_insert() tried to read chunk->ndirty.
This was introduced by the refactoring of arena_chunk_alloc(), which
previously would have returned NULL immediately after calling
chunk_alloc(). That return now comes from arena_chunk_init_hard(), so
check it and do not continue if it is NULL.
If mremap(2) is used for huge reallocation, physical pages are mapped to
new virtual addresses rather than data being copied to new pages. This
bypasses the normal junk filling that would happen during allocation, so
add junk filling that is specific to this case.
Refactor prof_dump() to use a two pass algorithm, and prof_leave() prior
to the second pass. This avoids write(2) system calls while holding
critical prof resources.
Fix prof_dump() to close the dump file descriptor for all relevant error
paths.
Minimize the size of prof-related static buffers when prof is disabled.
This saves roughly 65 KiB of application memory for non-prof builds.
Refactor prof_ctx_init() out of prof_lookup_global().
Refactor overly large functions by breaking out helper functions.
Refactor overly complex multi-purpose functions into separate more
specific functions.
Extract profiling code from malloc(), imemalign(), calloc(), realloc(),
mallocx(), rallocx(), and xallocx(). This slightly reduces the amount
of code compiled into the fast paths, but the primary benefit is the
combinatorial complexity reduction.
Simplify iralloc[t]() by creating a separate ixalloc() that handles the
no-move cases.
Further simplify [mrxn]allocx() (and by implication [mrn]allocm()) to
make request size overflows due to size class and/or alignment
constraints trigger undefined behavior (detected by debug-only
assertions).
Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling
backtrace creation in imemalign(). This bug impacted posix_memalign()
and aligned_alloc().
Add unit tests for pow2_ceil(), malloc_strtoumax(), and
malloc_snprintf().
Fix numerous bugs in malloc_strtoumax() error handling/reporting. These
bugs could have caused application-visible issues for some seldom-used
(0X... and 0... prefixes) or malformed MALLOC_CONF or mallctl() argument
strings, but otherwise they had no impact.
Fix numerous bugs in malloc_snprintf(). These bugs were not exercised
by existing malloc_*printf() calls, so they had no impact.
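As a rough illustration of the behavior the pow2_ceil() tests above pin down, here is an illustrative reimplementation (not jemalloc's) with the expected results asserted:

    #include <assert.h>
    #include <stddef.h>

    /* Illustrative reimplementation: round up to the next power of two. */
    static size_t
    pow2_ceil(size_t x)
    {
        size_t r = 1;

        while (r < x)
            r <<= 1;
        return (r);
    }

    int
    main(void)
    {
        assert(pow2_ceil(1) == 1);
        assert(pow2_ceil(3) == 4);
        assert(pow2_ceil(4096) == 4096);
        assert(pow2_ceil(4097) == 8192);
        return (0);
    }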
Reduce rtree memory usage by storing booleans (1 byte each) rather than
pointers. The rtree code is only used to record whether jemalloc manages
a chunk of memory, so there's no need to store pointers in the rtree.
Increase rtree node size to 64 KiB in order to reduce tree depth from 13
to 3 on 64-bit systems. The conversion to more compact leaf nodes was
enough by itself to make the rtree depth 1 on 32-bit systems; because
root nodes are smaller than the specified node size when possible, the
node size change has no impact on 32-bit systems (assuming the default
chunk size).
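The depth figure follows from simple arithmetic; a back-of-envelope check, assuming 64-bit pointers and the default 4 MiB chunks (lg_chunk = 22):

    #include <stdio.h>

    int
    main(void)
    {
        /* Keys have 64 - 22 = 42 significant bits (address bits above the
         * chunk size), and a 64 KiB node of 1-byte entries consumes 16 key
         * bits per level, so ceil(42 / 16) = 3 levels. */
        unsigned key_bits = 64 - 22;
        unsigned bits_per_level = 16;   /* 64 KiB / 1-byte entries = 2^16 */
        unsigned depth = (key_bits + bits_per_level - 1) / bits_per_level;

        printf("depth = %u\n", depth);  /* prints 3 */
        return (0);
    }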
Verify that freed regions are quarantined, and that redzone corruption
is detected.
Introduce a testing idiom for intercepting/replacing internal functions.
In this case the replaced function is ordinarily a static function, but
the idiom should work similarly for library-private functions.
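A simplified sketch of the idiom, with hypothetical names rather than the actual jemalloc symbols: the build exposes an indirection pointer that defaults to the real implementation, and a test temporarily repoints it.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef void (redzone_corruption_t)(void *ptr, size_t offset);

    static void
    redzone_corruption_impl(void *ptr, size_t offset)
    {
        fprintf(stderr, "redzone corruption at %p+%zu\n", ptr, offset);
    }
    /* The library calls through this pointer instead of calling the static
     * implementation directly. */
    static redzone_corruption_t *redzone_corruption_hook = redzone_corruption_impl;

    /* Test-side replacement records the call instead of reporting it. */
    static bool intercepted;
    static void
    test_redzone_corruption(void *ptr, size_t offset)
    {
        (void)ptr; (void)offset;
        intercepted = true;
    }

    int
    main(void)
    {
        redzone_corruption_t *orig = redzone_corruption_hook;

        redzone_corruption_hook = test_redzone_corruption;
        redzone_corruption_hook(NULL, 0);   /* stands in for the code under test */
        redzone_corruption_hook = orig;
        return (intercepted ? 0 : 1);
    }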
Don't junk fill reallocations for which the request size is less than
the current usable size, but not enough smaller to cause a size class
change. Unlike malloc()/calloc()/realloc(), *allocx() contractually
treats the full usize as the allocation, so a caller can ask for zeroed
memory via mallocx() and a series of rallocx() calls that all specify
MALLOCX_ZERO, and be assured that all newly allocated bytes will be
zeroed and made available to the application without danger of allocator
mutation until the size class decreases enough to cause usize reduction.
Refactor such that arena_prof_ctx_set() receives usize as an argument,
and use it to determine whether to handle ptr as a small region, rather
than reading the chunk page map.
Implement the *allocx() API, which is a successor to the *allocm() API.
The *allocx() functions are slightly simpler to use because they have
fewer parameters, they directly return the results of primary interest,
and mallocx()/rallocx() avoid the strict aliasing pitfall that
allocm()/rallocm() share with posix_memalign(). The following code
violates strict aliasing rules:

    foo_t *foo;
    allocm((void **)&foo, NULL, 42, 0);

whereas the following is safe:

    foo_t *foo;
    void *p;
    allocm(&p, NULL, 42, 0);
    foo = (foo_t *)p;

mallocx() does not have this problem:

    foo_t *foo = (foo_t *)mallocx(42, 0);
Add JEMALLOC_INLINE_C and use it instead of JEMALLOC_INLINE in .c files,
so that the annotated functions are always static.
Remove SFMT's inline-related macros and use jemalloc's instead, so that
there's no danger of interactions with jemalloc's definitions that
disable inlining for debug builds.
Refactor tests to use explicit testing assertions, rather than diff'ing
test output. This makes the test code a bit shorter, more explicitly
encodes testing intent, and makes test failure diagnosis more
straightforward.
Fix malloc_tsd_dalloc() to bypass tcache when dallocating, so that there
is no danger of causing tcache reincarnation during thread exit.
Whether this infinite loop occurs depends on the pthreads TSD
implementation; it is known to occur on Solaris.
Submitted by Markus Eberspächer.
When using LinuxThreads pthread_setspecific triggers recursive
allocation on all threads. Work around this by creating a global linked
list of in-progress tsd initializations.
This modifies the _tsd_get_wrapper macro-generated function. When it has
to initialize a TSD object, it first pushes the item onto the linked
list. If this causes a recursive allocation, the _get_wrapper request is
satisfied from the list. When pthread_setspecific() returns, the item is
removed from the list.
This effectively adds a very poor substitute for real TLS used only
during pthread_setspecific allocation recursion.
Signed-off-by: Crestez Dan Leonard <lcrestez@ixiacom.com>
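A minimal sketch of the shape of this workaround, using hypothetical names (the real logic lives in the _tsd_get_wrapper macro and serializes list access with a mutex, omitted here):

    #include <pthread.h>
    #include <stddef.h>

    typedef struct tsd_init_block_s tsd_init_block_t;
    struct tsd_init_block_s {
        tsd_init_block_t    *next;
        pthread_t           thread;
        void                *data;
    };
    static tsd_init_block_t *tsd_init_head;

    void *
    tsd_get_wrapper(pthread_key_t key, tsd_init_block_t *block,
        void *(*tsd_init)(void))
    {
        void *data = pthread_getspecific(key);
        tsd_init_block_t *b;

        if (data != NULL)
            return (data);
        /* A recursive lookup during initialization is satisfied from the
         * in-progress list instead of recursing into initialization again. */
        for (b = tsd_init_head; b != NULL; b = b->next) {
            if (pthread_equal(b->thread, pthread_self()))
                return (b->data);
        }
        /* Register this block before initializing, so that allocation
         * triggered inside pthread_setspecific() finds it on the list. */
        block->thread = pthread_self();
        block->data = NULL;
        block->next = tsd_init_head;
        tsd_init_head = block;
        block->data = tsd_init();
        pthread_setspecific(key, block->data);
        /* Initialization finished; unlink the block. */
        if (tsd_init_head == block)
            tsd_init_head = block->next;
        else {
            for (b = tsd_init_head; b->next != block; b = b->next)
                ;
            b->next = block->next;
        }
        return (block->data);
    }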
Add a missing mutex unlock in a malloc_init_hard() error path (failed
mutex initialization). In practice this bug was very unlikely to ever
trigger, but if it did, application deadlock would likely result.
Reported by Pat Lynch.
Fix a compiler warning in chunk_record() that was due to reading node
rather than xnode. In practice this did not cause any correctness
issue, but dataflow analysis in some compilers cannot tell that node and
xnode are always equal in cases that the read is reached.
Fix a race condition in the "arenas.extend" mallctl that could lead to
internal data structure corruption. The race could be hit if one
thread called the "arenas.extend" mallctl while another thread
concurrently triggered initialization of one of the lazily created
arenas.
Fix a Valgrind integration flaw that caused Valgrind warnings about
reads of uninitialized memory in internal zero-initialized data
structures (relevant to tcache and prof code).
Add the JEMALLOC_ALWAYS_INLINE_C macro and use it for always-inlined
functions declared in .c files. This fixes a function attribute
inconsistency for debug builds that resulted in (harmless) compiler
warnings about functions not being inlinable.
Reported by Ricardo Nabinger Sanchez.
Fix chunk_record() to unlock chunks_mtx before deallocating a base
node, in order to avoid potential deadlock. This fix addresses the
second of two similar bugs.
Fix a chunk recycling bug that could cause the allocator to lose track
of whether a chunk was zeroed. On FreeBSD, NetBSD, and OS X, it could
cause corruption if allocating via sbrk(2) (unlikely unless running with
the "dss:primary" option specified). This was completely harmless on
Linux unless using mlockall(2) (and unlikely even then, unless the
--disable-munmap configure option or the "dss:primary" option was
specified). This regression was introduced in 3.1.0 by the
mlockall(2)/madvise(2) interaction fix.
Internal reallocation of the quarantined object array leaked the old array.
Reallocation failure for internal reallocation of the quarantined object
array (very unlikely) resulted in memory corruption.
Avoid writing to uninitialized TLS as a side effect of deallocation.
Initializing TLS during deallocation is unsafe because it is possible
that a thread never did any allocation, and that TLS has already been
deallocated by the threads library, resulting in write-after-free
corruption. These fixes affect prof_tdata and quarantine; all other
uses of TLS are already safe, whether intentionally (as for tcache) or
unintentionally (as for arenas).
Revert refactoring of opt_abort and opt_junk declarations. clang
accepts the config_*-based declarations (and generates correct code),
but gcc complains with:
error: initializer element is not constant
Update hash from MurmurHash2 to MurmurHash3, primarily because the
latter generates 128 bits in a single call for no extra cost, which
simplifies integration with cuckoo hashing.
Tighten valgrind integration such that immediately after memory is
validated or zeroed, valgrind is told to forget the memory's 'defined'
state. The only place newly allocated memory should be left marked as
'defined' is in the public functions (e.g. calloc() and realloc()).
Move validation of supposedly zeroed pages from chunk_alloc() to
chunk_recycle(). There is little point to validating newly mapped
memory returned by chunk_alloc_mmap(), and memory that comes from sbrk()
is explicitly zeroed, so there is little risk to assuming that
chunk_alloc_dss() actually does the zeroing properly.
This relaxation of validation can make a big difference to application
startup time and overall system usage on platforms that use jemalloc as
the system allocator (namely FreeBSD).
Submitted by Ian Lepore <ian@FreeBSD.org>.
This preserves the principle of least astonishment (POLA) on FreeBSD (at
least), since free(3) is generally assumed not to fiddle with errno.
Signed-off-by: Garrett Cooper <yanegomi@gmail.com>
Modify processing of the lg_chunk option so that it clips an
out-of-range input to the edge of the valid range. This makes it
possible to request the minimum possible chunk size without intimate
knowledge of allocator internals.
Submitted by Ian Lepore (see FreeBSD PR bin/174641).
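For example (exact bounds depend on the build), an application can now simply under-shoot and rely on the clipping via the documented malloc_conf string:

    /* Request an absurdly small chunk size; it is clipped up to the minimum
     * the allocator supports rather than rejected. */
    const char *malloc_conf = "lg_chunk:0";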
Fix chunk_recycle() to unconditionally inform Valgrind that returned
memory is undefined. This fixes Valgrind warnings that would result
from a huge allocation being freed, then recycled for use as an arena
chunk. The arena code would write metadata to the chunk header, and
Valgrind would consider these invalid writes.
Purge unused dirty pages in an order that first performs clean/dirty run
defragmentation, in order to mitigate available run fragmentation.
Remove the limitation that prevented purging unless at least one chunk
worth of dirty pages had accumulated in an arena. This limitation was
intended to avoid excessive purging for small applications, but the
threshold was arbitrary and its effect was of questionable utility.
Relax opt_lg_dirty_mult from 5 to 3. This compensates for increased
likelihood of allocating clean runs, given the same ratio of clean:dirty
runs, and reduces the potential for repeated purging in pathological
large malloc/free loops that push the active:dirty page ratio just over
the purge threshold.
Add the "arenas.extend" mallctl, so that it is possible to create new
arenas that are outside the set that jemalloc automatically multiplexes
threads onto.
Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
to explicitly allocate from a particular arena.
Add the "opt.dss" mallctl, which controls the default precedence of dss
allocation relative to mmap allocation.
Add the "arena.<i>.dss" mallctl, which makes it possible to set the
default dss precedence on a per arena or global basis.
Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".
Add the "stats.arenas.<i>.dss" mallctl.
The Mozilla build hides everything by default using a visibility pragma
and unhides only explicitly listed headers. But this doesn't work on
FreeBSD, because _pthread_mutex_init_calloc_cb() is neither documented
nor exposed via any header.
Fix mutex acquisition order inversion for the chunks rtree and the base
mutex. Chunks rtree acquisition was introduced by the previous commit,
so this bug was short-lived.
Add a library constructor for jemalloc that initializes the allocator.
This fixes a race that could occur if threads were created by the main
thread prior to any memory allocation, followed by fork(2), and then
memory allocation in the child process.
Fix the prefork/postfork functions to acquire/release the ctl, prof, and
rtree mutexes. This fixes various fork() child process deadlocks, but
one possible deadlock remains (intentionally) unaddressed: prof
backtracing can acquire runtime library mutexes, so deadlock is still
possible if heap profiling is enabled during fork(). This deadlock is
known to be a real issue in at least the case of libgcc-based
backtracing.
Reported by tfengjun.
mlockall(2) can cause purging via madvise(2) to fail. Fix purging code
to check whether madvise() succeeded, and base zeroed page metadata on
the result.
Reported by Olivier Lecomte.
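A hedged sketch of the kind of check involved (not the actual pages_purge(); whether successfully purged pages read back as zero also depends on which madvise() flavor the platform provides):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Return whether the pages may be treated as zeroed afterward. */
    static bool
    purge_reports_zeroed(void *addr, size_t length)
    {
    #ifdef MADV_DONTNEED
        /* Linux MADV_DONTNEED zeroes on success; a failure (e.g. under
         * mlockall(2)) means the old contents remain. */
        return (madvise(addr, length, MADV_DONTNEED) == 0);
    #else
        return (false);
    #endif
    }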
Refactor code such that arena_mapbits_{large,small}_set() always
preserves the unzeroed flag, and manually manipulate the unzeroed flag
in the one case where it actually gets reset (in arena_chunk_purge()).
This fixes unzeroed preservation bugs in arena_run_split() and
arena_ralloc_large_grow(). These bugs caused large calloc() to return
non-zeroed memory under some circumstances.
Add the --enable-mremap option, and disable the use of mremap(2) by
default, for the same reason that freeing chunks via munmap(2) is
disabled by default on Linux: semi-permanent VM map fragmentation.
Fix chunk_recycle() to correctly compute trailsize and re-insert
trailing chunks. This fixes a major virtual memory leak.
Simplify chunk_record() to avoid dropping/re-acquiring chunks_mtx.
Simplify chunk_alloc_mmap() to no longer attempt map extension. The
extra complexity isn't warranted, because although in the success case
it saves one system call as compared to immediately falling back to
chunk_alloc_mmap_slow(), it also makes the failure case even more
expensive. This simplification removes two bugs:
- For Windows platforms, pages_unmap() wasn't being called for unaligned
mappings prior to falling back to chunk_alloc_mmap_slow(). This
caused permanent virtual memory leaks.
- For non-Windows platforms, alignment greater than chunksize caused
pages_map() to be called with size 0 when attempting map extension.
This always resulted in an mmap() error, and subsequent fallback to
chunk_alloc_mmap_slow().
If an application wants to override je_malloc_message, it is better to define
the symbol locally than to change its value in main(), which might be too late
for various reasons.
Due to je_malloc_message being initialized in util.c, statically linking
jemalloc with an application defining je_malloc_message fails due to
"multiple definition of" the symbol.
Defining it without a value (like je_malloc_conf) makes it more easily
overridable.
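A hedged example of overriding it locally in an application (the callback signature shown is an assumption about this release):

    #include <string.h>
    #include <unistd.h>

    static void
    my_malloc_message(void *cbopaque, const char *s)
    {
        (void)cbopaque;
        (void)write(STDERR_FILENO, s, strlen(s));   /* write(2) avoids allocation */
    }
    /* Defining the symbol locally (it now has no value inside jemalloc, like
     * je_malloc_conf) overrides the default without any code in main(). */
    void (*je_malloc_message)(void *cbopaque, const char *s) = my_malloc_message;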
Further optimize arena_salloc() to only look at the binind chunk map
bits in the common case.
Add more sanity checks to arena_salloc() that detect chunk map
inconsistencies for large allocations (whether due to allocator bugs or
application bugs).
Embed the bin index for small page runs into the chunk page map, in
order to omit [...] in the following dependent load sequence:
ptr-->mapelm-->[run-->bin-->]bin_info
Move various non-critical code out of the inlined function chain into
helper functions (tcache_event_hard(), arena_dalloc_small(), and
locking).
These newly added macros will be used to implement the equivalent under
MSVC. Also, move the definitions to headers, where they make more sense
and where some are even more useful (e.g. malloc).
Using errno on win32 doesn't quite work, because a value set in a shared
library can't be read from, e.g., an executable calling the function that
sets errno.
At the same time, since buferror() always uses errno/GetLastError(),
don't pass it explicitly.
MSVC doesn't support C99, and building as C++ to be able to use them is
dangerous, as C++ and C99 are incompatible.
Introduce a VARIABLE_ARRAY macro that either uses VLA when supported,
or alloca() otherwise. Note that using alloca() inside loops doesn't
quite work like VLAs, thus the use of VARIABLE_ARRAY there is discouraged.
It might be worth investigating ways to check whether VARIABLE_ARRAY is
used in such context at runtime in debug builds and bail out if that
happens.
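A hedged sketch of what such a macro can look like (not necessarily jemalloc's exact definition), followed by a small usage example:

    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #define VARIABLE_ARRAY(type, name, count)   type name[(count)]
    #else
    #include <alloca.h>
    #define VARIABLE_ARRAY(type, name, count)                           \
        type *name = alloca(sizeof(type) * (count))
    #endif

    /* Usage: a dynamically sized scratch array with automatic lifetime
     * (but see the caveat above about alloca() inside loops). */
    void
    fill_scratch(unsigned n)
    {
        VARIABLE_ARRAY(int, scratch, n);
        unsigned i;

        for (i = 0; i < n; i++)
            scratch[i] = 0;
    }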
MSVC doesn't support C99, and as such doesn't support designated
initialization of structs and unions. As there is never a mix of
indexed and named nodes, it is pretty straightforward to use a
different type for each.
Fix a potential deadlock that could occur during interval- and
growth-triggered heap profile dumps.
Fix an off-by-one heap profile statistics bug that could be observed in
interval- and growth-triggered heap profiles.
Fix heap profile dump filename sequence numbers (regression during
conversion to malloc_snprintf()).
Remove mmap_unaligned, which was used to heuristically decide whether to
optimistically call mmap() in such a way that could reduce the total
number of system calls. If I remember correctly, the intention of
mmap_unaligned was to avoid always executing the slow path in the
presence of ASLR. However, that reasoning seems to have been based on a
flawed understanding of how ASLR actually works. Although ASLR
apparently causes mmap() to ignore address requests, it does not cause
total placement randomness, so there is a reasonable expectation that
iterative mmap() calls will start returning chunk-aligned mappings once
the first chunk has been properly aligned.
Fix chunk_alloc_dss() to zero memory when requested.
Fix chunk_dealloc() to avoid chunk_dealloc_mmap() for dss-allocated
memory.
Fix huge_palloc() to always junk fill when requested.
Improve chunk_recycle() to report that memory is zeroed as a side effect
of pages_purge().
Fix a memory corruption bug in chunk_alloc_dss() that was due to
claiming newly allocated memory is zeroed.
Reverse order of preference between mmap() and sbrk() to prefer mmap().
Clean up management of 'zero' parameter in chunk_alloc*().
These flags take unsigned values, but they were fed with signed values
obtained via va_arg(), which led to sign extension in cases where the
corresponding value had its most significant bit set.
Clean up a few config-related conditionals to avoid unnecessary
dependencies on prof symbols. Use cassert() rather than assert()
everywhere that it's appropriate.
Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).
Change the "opt.prof_accum" default from true to false.
Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be
abused to disable final profile dumping.
Add the --disable-munmap option, remove the configure test that
attempted to detect the VM allocation quirk known to exist on Linux
x86[_64], and make --disable-munmap implicit on Linux.
Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.
Unify the chunk caching for mmap and dss.
Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
Always disable redzone by default, even when --enable-debug is
specified. The memory overhead for redzones can be substantial, which
makes this feature something that should only be opted into.
Normalize arena_palloc(), chunk_alloc_mmap_slow(), and
chunk_recycle_dss() to use the same algorithm for trimming
over-allocation.
Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and
ALIGNMENT_CEILING() macros, and use them where appropriate.
Remove the run_size_p parameter from sa2u().
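These macros amount to the usual power-of-two mask arithmetic; a hedged sketch of plausible definitions (the real ones may differ in detail):

    #include <stddef.h>
    #include <stdint.h>

    /* Round an address down to, get its offset within, and round a size up
     * to an alignment boundary (alignment must be a power of two). */
    #define ALIGNMENT_ADDR2BASE(a, alignment)                           \
        ((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))
    #define ALIGNMENT_ADDR2OFFSET(a, alignment)                         \
        ((size_t)((uintptr_t)(a) & ((uintptr_t)(alignment) - 1)))
    #define ALIGNMENT_CEILING(s, alignment)                             \
        (((s) + ((alignment) - 1)) & ~((size_t)(alignment) - 1))

    /* e.g. ALIGNMENT_CEILING(10, 8) == 16, and ALIGNMENT_ADDR2OFFSET(p, 4096)
     * yields p's offset within its page. */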
Fix a potential deadlock in chunk_recycle_dss() that was introduced by
eae269036c (Add alignment support to
chunk_alloc()).
Implement Valgrind support, as well as the redzone and quarantine
features, which help Valgrind detect memory errors. Redzones are only
implemented for small objects because the changes necessary to support
redzones around large and huge objects are complicated by in-place
reallocation, to the point that it isn't clear that the maintenance
burden is worth the incremental improvement to Valgrind support.
Merge arena_salloc() and arena_salloc_demote().
Refactor i[v]salloc() to expose the 'demote' option.
Always initialize tcache data structures if the tcache configuration
option is enabled, regardless of opt_tcache. This fixes
"thread.tcache.enabled" mallctl manipulation in the case when opt_tcache
is false.
s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.
Remove remnants of the dynamic-page-shift code.
Rename the "arenas.pagesize" mallctl to "arenas.page".
Remove the "arenas.chunksize" mallctl, which is redundant with
"opt.lg_chunk".
This reverts commit 96d4120ac0.
ivsalloc() depends on chunks_rtree being initialized. This can be
worked around via a NULL pointer check. However,
thread_allocated_tsd_get() also depends on initialization having
occurred, and there is no way to guard its call in free() that is
cheaper than checking whether ptr is NULL.
Generalize isalloc() to handle NULL pointers in such a way that the NULL
checking overhead is only paid when introspecting huge allocations (or
NULL). This allows free() and malloc_usable_size() to no longer check
for NULL.
Submitted by Igor Bukanov and Mike Hommey.
Remove code that validates malloc_vsnprintf() and malloc_strtoumax()
against their namesakes. The validation code has adequately served its
usefulness at this point, and it isn't worth dealing with the different
formatting for %p with glibc versus other implementations for NULL
pointers ("(nil)" vs. "0x0").
Reported by Mike Hommey.
Check for NULL ptr in malloc_usable_size(), rather than just asserting
that ptr is non-NULL. This matches behavior of other implementations
(e.g., glibc and tcmalloc).
It turns out some OS X system libraries (like CoreGraphics on 10.6) like
to call malloc_zone_* functions with pointers that weren't allocated from
the zone they are using.
Possibly, they do malloc_zone_malloc(malloc_default_zone()) before we
register the jemalloc zone, and malloc_zone_realloc(malloc_default_zone())
after, with malloc_default_zone() returning a different value in each case.
Use FreeBSD-specific functions (_pthread_mutex_init_calloc_cb(),
_malloc_{pre,post}fork()) to avoid bootstrapping issues due to
allocation in libc and libthr.
Add malloc_strtoumax() and use it instead of strtoul(). Disable
validation code in malloc_vsnprintf() and malloc_strtoumax() until
jemalloc is initialized. This is necessary because locale
initialization causes allocation for both vsnprintf() and strtoumax().
Force the lazy-lock feature on in order to avoid pthread_self(),
because it causes allocation.
Use syscall(SYS_write, ...) rather than write(...), because libthr wraps
write() and causes allocation. Without this workaround, it would not be
possible to print error messages in malloc_conf_init() without
substantially reworking bootstrapping.
Fix choose_arena_hard() to look at how many threads are assigned to the
candidate choice, rather than checking whether the arena is
uninitialized. This bug potentially caused more arenas to be
initialized than necessary.
Remove ephemeral mutexes from the prof machinery, and remove
malloc_mutex_destroy(). This simplifies mutex management on systems
that call malloc()/free() inside pthread_mutex_{init,destroy}().
Add atomic_*_u() for operation on unsigned values.
Fix prof_printf() to call malloc_vsnprintf() rather than
malloc_snprintf().
Implement tsd, which is a TLS/TSD abstraction that uses one or both
internally. Modify bootstrapping such that no tsd's are utilized until
allocation is safe.
Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.
Fix %p argument size handling in malloc_vsnprintf().
Fix a long-standing statistics-related bug in the "thread.arena"
mallctl that could cause crashes due to linked list corruption.
I tested a build from 10.7 run on 10.7 and 10.6, and a build from 10.6
run on 10.6. The AC_COMPILE_IFELSE limbo is to avoid running a program
during configure, which presumably makes it work when cross compiling
for iOS.
Acquire/release arena bin locks as part of the prefork/postfork. This
bug made deadlock in the child between fork and exec a possibility.
Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so
that the child can reinitialize mutexes rather than unlocking them. In
practice, this bug tended not to cause problems.
Modify malloc_vsnprintf() validation code to verify that output is
identical to vsnprintf() output, even if both outputs are truncated due
to buffer exhaustion.
Implement aligned_alloc(), which was added in the C11 standard. The
function is weakly specified to the point that a minimally compliant
implementation would be painful to use (size must be an integral
multiple of alignment!), which in practice makes posix_memalign() a
safer choice.
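For example, under a strict reading of C11 the first call below is well-defined while a size that is not a multiple of the alignment is not; posix_memalign() has no such restriction:

    #include <stdlib.h>

    int
    main(void)
    {
        void *p = aligned_alloc(64, 128);   /* 128 is a multiple of 64: well-defined */
        void *q;

        /* aligned_alloc(64, 100) would be non-conforming under a strict
         * reading of C11, since 100 is not a multiple of 64; the same
         * request via posix_memalign() is fine: */
        if (posix_memalign(&q, 64, 100) == 0)
            free(q);
        free(p);
        return (0);
    }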
Revert JE_COMPILABLE() so that it detects link errors. Cross-compiling
should still work as long as a valid configure cache is provided.
Clean up some comments/whitespace.
Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as
several other printing functions based on it, so that formatted printing
can be relied upon without concern for inducing a dependency on floating
point runtime support. Replace malloc_write() calls with
malloc_*printf() where doing so simplifies the code.
Add name mangling for library-private symbols in the data and BSS
sections. Adjust CONF_HANDLE_*() macros in malloc_conf_init() to expose
all opt_* variable use to cpp so that proper mangling occurs.
Remove the lg_tcache_gc_sweep option, because it is no longer
very useful. Prior to the addition of dynamic adjustment of tcache fill
count, it was possible for fill/flush overhead to be a problem, but this
problem no longer occurs.
Add the --with-mangling configure option, which can be used to specify
name mangling on a per public symbol basis that takes precedence over
--with-jemalloc-prefix.
Expose the memalign() and valloc() overrides even if
--with-jemalloc-prefix is specified. This change does no real harm, and
simplifies the code.
Add nallocm(), which computes the real allocation size that would result
from the corresponding allocm() call. nallocm() is a functional
superset of OS X's malloc_good_size(), in that it takes alignment
constraints into account.
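A hedged usage sketch (assumes the experimental *allocm() API is available in the build):

    #include <stddef.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void)
    {
        size_t rsize;

        /* Ask what usable size a 16-byte-aligned 100-byte allocm() request
         * would actually produce, without allocating anything. */
        if (nallocm(&rsize, 100, ALLOCM_ALIGN(16)) == ALLOCM_SUCCESS)
            printf("real size: %zu\n", rsize);
        return (0);
    }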
When jemalloc is used as a libc malloc replacement (i.e. not prefixed),
some particular setups may end up inconsistently calling malloc from
libc and free from jemalloc, or the other way around.
glibc provides hooks to make its functions use alternative
implementations. Use them.
Submitted by Karl Tomlinson and Mike Hommey.
Do not enforce minimum alignment in memalign(). This is a non-standard
function, and there is disagreement over whether to enforce minimum
alignment. Solaris documentation (whence memalign() originated) says
that minimum alignment is required:
The value of alignment must be a power of two and must be greater than
or equal to the size of a word.
However, Linux's manual page says in its NOTES section:
memalign() may not check that the boundary parameter is correct.
This is descriptive rather than prescriptive, but applications with
bad assumptions about memalign() exist, so be as forgiving as possible.
Reported by Mike Hommey.
Program-generate small size class tables for all valid combinations of
LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT. Use the appropriate table to generate
all relevant data structures, and remove the distinction between
tiny/quantum/cacheline/subpage bins.
Remove --enable-dynamic-page-shift. This option didn't prove useful in
practice, and it prevented optimizations.
Add Tilera architecture support.
Remove opt.lg_prof_bt_max, and hard code it to 7. The original
intention of this option was to enable faster backtracing by limiting
backtrace depth. However, this makes graphical pprof output very
difficult to interpret. In practice, decreasing sampling frequency is a
better mechanism for limiting profiling overhead.
Remove the opt.lg_prof_tcmax option and hard-code a cache size of 1024.
This setting is something that users just shouldn't have to worry about.
If lock contention actually ends up being a problem, the simple solution
available to the user is to reduce sampling frequency.
Fix an interaction between arena_dissociate_bin_run() and
arena_bin_lower_run() that made it possible for bin->runcur to point to
a run other than the lowest non-full run. This bug violated jemalloc's
layout policy, but did not affect correctness.
When tiny size class support was first added, it was intended to support
truly tiny size classes (even 2 bytes). However, this wasn't very
useful in practice, so the minimum tiny size class has been limited to
sizeof(void *) for a long time now. This is too small to be standards
compliant, but other commonly used malloc implementations do not even
bother using a 16-byte quantum on systems with vector units (SSE2+,
AltiVec, etc.). As such, it is safe in practice to support an 8-byte
tiny size class on 64-bit systems that support 16-byte types.
tcache_get() is inlined, so do the config_tcache check inside
tcache_get() and simplify its callers.
Make arena_malloc() an inline function, since it is part of the malloc()
fast path.
Remove conditional logic that cause build issues if --disable-tcache was
specified.
Remove structure magic, because 1) it is no longer conditional, and 2)
it stopped being very effective at detecting memory corruption several
years ago.
Convert configuration-related cpp conditional logic to use static
constant variables, e.g.:
    #ifdef JEMALLOC_DEBUG
        [...]
    #endif

becomes:

    if (config_debug) {
        [...]
    }
The advantage is clearer, more concise code. The main disadvantage is
that data structures no longer have conditionally defined fields, so
they pay the cost of all fields regardless of whether they are used. In
practice, this is only a minor concern; config_stats will go away in an
upcoming change, and config_prof is the only other major feature that
depends on more than a few special-purpose fields.
Fix the logic in stats_print() such that if the "a" flag is passed in
without the "m" flag, merged statistics will be printed even if only one
arena is initialized.
Fix huge_ralloc() to remove the old memory region from tree of huge
allocations *before* calling mremap(2), in order to make sure that no
other thread acquires the old memory region via mmap() and encounters
stale metadata in the tree.
Reported by: Rich Prohaska
Fix prof_lookup() to artificially raise curobjs for all paths through
the code that creates a new entry in the per thread bt2cnt hash table.
This fixes a race condition that could corrupt memory if prof_accum were
false, and a non-default lg_prof_tcmax were used and/or threads were
destroyed.
Add a missing prof_malloc() call in allocm(). Before this fix, negative
object/byte counts could be observed in heap profiles for applications
that use allocm().
Rewrite prof_alloc_prep() as a cpp macro, PROF_ALLOC_PREP(), in order to
remove any doubt as to whether an additional stack frame is created.
Prior to this change, it was assumed that inlining would reduce the
total number of frames in the backtrace, but in practice behavior wasn't
completely predictable.
Create imemalign() and call it from posix_memalign(), memalign(), and
valloc(), so that all entry points require the same number of stack
frames to be ignored during backtracing.
Properly handle boundary conditions for sampled region promotion in
rallocm(). Prior to this fix, some combinations of 'size' and 'extra'
values could cause erroneous behavior. Additionally, size class
recording for promoted regions was incorrect.
Fix assertions in arena_purge() to accurately reflect the constraints in
arena_maybe_purge(). There were two bugs here, one of which merely
weakened the assertion, and the other of which referred to an
uninitialized variable (typo; used npurgatory instead of
arena->npurgatory).