Commit Graph

801 Commits

Author SHA1 Message Date
Jason Evans
64e458f5cd Implement two-phase decay-based purging.
Split decay-based purging into two phases, the first of which uses lazy
purging to convert dirty pages to "muzzy", and the second of which uses
forced purging, decommit, or unmapping to convert pages to clean or
destroy them altogether.  Not all operating systems support lazy
purging, yet the application may provide extent hooks that implement
lazy purging, so care must be taken to dynamically omit the first phase
when necessary.

The mallctl interfaces change as follows:
- opt.decay_time --> opt.{dirty,muzzy}_decay_time
- arena.<i>.decay_time --> arena.<i>.{dirty,muzzy}_decay_time
- arenas.decay_time --> arenas.{dirty,muzzy}_decay_time
- stats.arenas.<i>.pdirty --> stats.arenas.<i>.p{dirty,muzzy}
- stats.arenas.<i>.{npurge,nmadvise,purged} -->
  stats.arenas.<i>.{dirty,muzzy}_{npurge,nmadvise,purged}

This resolves #521.
2017-03-15 13:13:47 -07:00
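
A minimal sketch of the dirty -> muzzy -> clean transition described above, using hypothetical names (page_state_t, decay_next_state, have_lazy_purge); the real code operates on extents and decay schedules rather than a single state value.

    #include <stdbool.h>

    typedef enum { PAGES_DIRTY, PAGES_MUZZY, PAGES_CLEAN } page_state_t;

    /*
     * Phase 1 (lazy purge, e.g. madvise(MADV_FREE)) applies only when the OS
     * or the application's extent hooks provide a lazy-purge primitive;
     * otherwise pages go straight from dirty to clean via forced purge,
     * decommit, or unmap.
     */
    static page_state_t
    decay_next_state(page_state_t cur, bool have_lazy_purge) {
        if (cur == PAGES_DIRTY && have_lazy_purge) {
            return PAGES_MUZZY;
        }
        return PAGES_CLEAN;
    }
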
Jason Evans
38a5bfc816 Move arena_t's purging field into arena_decay_t. 2017-03-15 13:13:47 -07:00
Jason Evans
765edd67b4 Refactor decay-related function parametrization.
Refactor most of the decay-related functions to take as parameters the
decay_t and associated extents_t structures to operate on.  This
prepares for supporting both lazy and forced purging on different decay
schedules.
2017-03-15 13:13:47 -07:00
David Goldblatt
ee202efc79 Convert remaining arena_stats_t fields to atomics
These were all size_ts, for which we have atomics support on all platforms, so
the conversion is straightforward.

Left non-atomic is curlextents, which AFAICT is not used atomically anywhere.
2017-03-13 18:22:33 -07:00
David Goldblatt
4fc2acf5ae Switch atomic uint64_ts in arena_stats_t to C11 atomics
I expect this to be the trickiest conversion we will see, since we want atomics
on 64-bit platforms, but on non-64-bit platforms we are also always able to
piggyback on some sort of external synchronization.
2017-03-13 18:22:33 -07:00
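
A sketch of the split this commit describes, with hypothetical names (stats_sketch_t, stats_inc) and a crude pointer-width check standing in for "this platform has native 64-bit atomics"; mutex initialization is omitted.

    #include <stdint.h>
    #include <stdatomic.h>
    #include <pthread.h>

    typedef struct {
        pthread_mutex_t mtx;      /* external synchronization on 32-bit targets */
        _Atomic uint64_t nmalloc; /* 64-bit stats counter */
    } stats_sketch_t;

    static void
    stats_inc(stats_sketch_t *s, uint64_t n) {
    #if UINTPTR_MAX == UINT64_MAX
        /* 64-bit platform: a single relaxed fetch-add suffices. */
        atomic_fetch_add_explicit(&s->nmalloc, n, memory_order_relaxed);
    #else
        /* 32-bit platform: piggyback on the stats mutex; the load/store pair
         * is safe only because every writer holds mtx. */
        pthread_mutex_lock(&s->mtx);
        atomic_store_explicit(&s->nmalloc,
            atomic_load_explicit(&s->nmalloc, memory_order_relaxed) + n,
            memory_order_relaxed);
        pthread_mutex_unlock(&s->mtx);
    #endif
    }
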
Jason Evans
26d23da6cd Prefer pages_purge_forced() over memset().
This has the dual advantages of allowing for sparsely used large
allocations, and relying on the kernel to supply zeroed pages, which
tends to be very fast on modern systems.
2017-03-13 18:19:57 -07:00
Jason Evans
28078274c4 Add alignment/size assertions to pages_*().
These sanity checks prevent what otherwise might result in failed system
calls and unintended fallback execution paths.
2017-03-13 18:19:57 -07:00
Jason Evans
7cbcd2e2b7 Fix pages_purge_forced() to discard pages on non-Linux systems.
madvise(..., MADV_DONTNEED) only causes demand-zeroing on Linux, so fall
back to overlaying a new mapping.
2017-03-13 18:19:57 -07:00
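
A sketch of the fallback described above: overlay the range with a fresh anonymous mapping so subsequent reads see zeroed pages (error handling simplified; MAP_ANONYMOUS may be spelled MAP_ANON on some systems).

    #include <stddef.h>
    #include <sys/mman.h>

    static int
    purge_forced_fallback(void *addr, size_t size) {
        /* MAP_FIXED atomically replaces the old pages with demand-zeroed ones. */
        void *p = mmap(addr, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
        return (p == MAP_FAILED) ? -1 : 0;
    }
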
David Goldblatt
21a68e2d22 Convert rtree code to use C11 atomics
In the process, I changed the implementation of rtree_elm_acquire so that it
won't even try to CAS if its initial read (getting the extent + lock bit)
indicates that the CAS is doomed to fail.  This can significantly improve
performance under contention.
2017-03-13 12:05:27 -07:00
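
A sketch of the "check before CAS" idea, assuming the element's low pointer bit serves as the lock flag (hypothetical helper, not the actual rtree code).

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdatomic.h>

    #define LOCK_BIT ((uintptr_t)1)

    static bool
    elm_try_acquire(_Atomic uintptr_t *elm) {
        uintptr_t cur = atomic_load_explicit(elm, memory_order_relaxed);
        if (cur & LOCK_BIT) {
            /* Lock bit already set: the CAS is doomed to fail, so skip it and
             * avoid the extra cache-line contention. */
            return false;
        }
        return atomic_compare_exchange_weak_explicit(elm, &cur, cur | LOCK_BIT,
            memory_order_acquire, memory_order_relaxed);
    }
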
Jason Evans
3a2b183d5f Convert arena_t's purging field to non-atomic bool.
The decay mutex already protects all accesses.
2017-03-10 10:14:30 -08:00
Qi Wang
ec532e2c5c Implement per-CPU arena.
The new feature, opt.percpu_arena, determines thread-arena association
dynamically based on CPU id. Three modes are supported: "percpu", "phycpu"
and disabled.

"percpu" uses the current core id (with help from sched_getcpu())
directly as the arena index, while "phycpu" assigns threads on the
same physical CPU to the same arena. In other words, "percpu" means # of
arenas == # of CPUs, while "phycpu" has # of arenas == 1/2 * (# of
CPUs). Note that no runtime check for whether hyperthreading is enabled
has been added yet.

When enabled, threads are migrated between arenas when a CPU change is
detected. In the current design, to reduce the overhead of reading the
CPU id, each arena tracks the thread that most recently accessed it.
When a different thread comes in, we read the CPU id and update the
arena association if necessary.
2017-03-08 23:19:01 -08:00
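
A sketch of the CPU-to-arena mapping the modes imply (hypothetical function; the "phycpu" halving assumes hyperthread siblings are numbered ncpus/2 apart, which the commit notes is not verified at runtime).

    #define _GNU_SOURCE
    #include <sched.h>

    typedef enum {
        PERCPU_ARENA_DISABLED,
        PERCPU_ARENA_PERCPU,
        PERCPU_ARENA_PHYCPU
    } percpu_mode_t;

    static unsigned
    arena_index_for_cpu(percpu_mode_t mode, unsigned ncpus) {
        int cpu = sched_getcpu();
        if (mode == PERCPU_ARENA_DISABLED || cpu < 0) {
            return 0; /* fall back to a default arena */
        }
        if (mode == PERCPU_ARENA_PERCPU) {
            return (unsigned)cpu;               /* # arenas == # CPUs */
        }
        return (unsigned)cpu % (ncpus / 2);     /* # arenas == # CPUs / 2 */
    }
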
Qi Wang
8721e19c04 Fix arena_prefork lock rank order for witness.
When witness is enabled, lock rank order needs to be preserved during
prefork, not only for each arena, but also across arenas. This change
breaks arena_prefork into further stages to ensure valid rank order
across arenas. Also changed test/unit/fork to use a manual arena to
catch this case.
2017-03-08 23:07:27 -08:00
David Goldblatt
8adab26972 Convert extents_t's npages field to use C11-style atomics
In the process, we can do some strength reduction, changing the fetch-adds and
fetch-subs to be simple loads followed by stores, since the modifications all
occur while holding the mutex.
2017-03-08 21:27:09 -08:00
Qi Wang
01f47f11a6 Store associated arena in tcache.
This fixes tcache_flush for manual tcaches, which was unable to find the
arena the tcache was associated with. Also changed the decay test to
cover this case (by using manually created arenas).
2017-03-07 12:58:11 -08:00
Jason Evans
cdce93e4a3 Use any-best-fit for cached extent allocation.
This simplifies what would be pairing heap operations to the equivalent
of LIFO queue operations.  This is a complementary optimization in the
context of delayed coalescing for cached extents.
2017-03-07 10:25:33 -08:00
Jason Evans
e201e24904 Perform delayed coalescing prior to purging.
Rather than purging uncoalesced extents, perform just enough incremental
coalescing to purge only fully coalesced extents.  In the absence of
cached extent reuse, the immediate versus delayed incremental purging
algorithms result in the same purge order.

This resolves #655.
2017-03-07 10:25:12 -08:00
David Goldblatt
4f1e94658a Change arena to use the atomic functions for ssize_t instead of the union strategy 2017-03-06 18:49:19 -08:00
David Goldblatt
e9852b5776 Disentangle assert and util
This is the first header refactoring diff, #533.  It splits the assert and util
components into separate, hermetic, header files.  In the process, it splits out
two of the large sub-components of util (the stdio.h replacement, and bit
manipulation routines) into their own components (malloc_io.h and bit_util.h).
This is mostly to break up cyclic dependencies, but it also breaks off a good
chunk of the catch-all-ness of util, which is nice.
2017-03-06 15:08:43 -08:00
Jason Evans
04d8fcb745 Optimize malloc_large_stats_t maintenance.
Convert the nrequests field to be partially derived, and the curlextents
to be fully derived, in order to reduce the number of stats updates
needed during common operations.

This change affects ndalloc stats during arena reset, because it is no
longer possible to cancel out ndalloc effects (curlextents would become
negative).
2017-03-04 08:18:31 -08:00
David Goldblatt
d4ac7582f3 Introduce a backport of C11 atomics
This introduces a backport of C11 atomics.  It has four implementations; ranked
in order of preference, they are:
- GCC/Clang __atomic builtins
- GCC/Clang __sync builtins
- MSVC _Interlocked builtins
- C11 atomics, from <stdatomic.h>

The primary advantages are:
- Close adherence to the standard API gives us a defined memory model.
- Type safety: atomic objects are now separate types from non-atomic ones, so
  that it's impossible to mix up atomic and non-atomic updates (which is
  undefined behavior that compilers are starting to take advantage of).
- Efficiency: we can specify ordering for operations, avoiding fences and
  atomic operations on strongly ordered architectures (example:
  `atomic_write_u32(ptr, val);` involves a CAS loop, whereas
  `atomic_store(ptr, val, ATOMIC_RELEASE);` is a plain store).

This diff leaves in the current atomics API (implementing them in terms of the
backport).  This lets us transition uses over piecemeal.

Testing:
This is by nature hard to test. I've manually tested the first three options on
Linux on gcc by futzing with the #defines manually, on freebsd with gcc and
clang, on MSVC, and on OS X with clang.  All of these were x86 machines though,
and we don't have any test infrastructure set up for non-x86 platforms.
2017-03-03 13:40:59 -08:00
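
A sketch of the ordering advantage mentioned above, written against standard <stdatomic.h> (the backport exposes an equivalent jemalloc-internal API).

    #include <stdint.h>
    #include <stdatomic.h>

    static _Atomic uint32_t flag;

    static void
    publish(uint32_t val) {
        /* Explicit ordering: on strongly ordered CPUs (e.g. x86) this is a
         * plain store, with no fence or locked instruction. */
        atomic_store_explicit(&flag, val, memory_order_release);
    }

    static void
    publish_via_cas(uint32_t val) {
        /* The old "atomic write" emulation: loop until a CAS succeeds. */
        uint32_t cur = atomic_load_explicit(&flag, memory_order_relaxed);
        while (!atomic_compare_exchange_weak_explicit(&flag, &cur, val,
            memory_order_release, memory_order_relaxed)) {
            /* cur was reloaded by the failed CAS; retry. */
        }
    }
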
Jason Evans
fd058f572b Immediately purge cached extents if decay_time is 0.
This fixes a regression caused by
54269dc0ed (Remove obsolete
arena_maybe_purge() call.), as well as providing a general fix.

This resolves #665.
2017-03-02 19:43:06 -08:00
Jason Evans
d61a5f76b2 Convert arena_decay_t's time to be atomically synchronized. 2017-03-02 19:43:06 -08:00
Qi Wang
aa1de06e3a Small style fix in ctl.c 2017-03-01 15:21:39 -08:00
Jason Evans
379dd44c57 Add casts to CONF_HANDLE_T_U().
This avoids signed/unsigned comparison warnings when specifying integer
constants as inputs.
2017-02-28 17:18:25 -08:00
Jason Evans
472fef2e12 Fix {allocated,nmalloc,ndalloc,nrequests}_large stats regression.
This fixes a regression introduced by
d433471f58 (Derive
{allocated,nmalloc,ndalloc,nrequests}_large stats.).
2017-02-27 11:18:07 -08:00
Jason Evans
079b8bee37 Tidy up extent quantization.
Remove obsolete unit test scaffolding for extent quantization.  Remove
redundant assertions.  Add an assertion to
extents_first_best_fit_locked() that should help prevent aligned
allocation regressions.
2017-02-27 11:17:47 -08:00
Jason Evans
8ac7937eb5 Remove remainder of mb (memory barrier).
This complements 94c5d22a4d (Remove mb.h,
which is unused).
2017-02-22 00:24:14 -08:00
Jason Evans
54269dc0ed Remove obsolete arena_maybe_purge() call.
Remove a call to arena_maybe_purge() that was necessary for ratio-based
purging, but is obsolete in the context of decay-based purging.
2017-02-21 12:46:41 -08:00
Jason Evans
2dfc5b5aac Disable coalescing of cached extents.
Extent splitting and coalescing is a major component of large allocation
overhead, and disabling coalescing of cached extents provides a simple
and effective hysteresis mechanism.  Once two-phase purging is
implemented, it will probably make sense to leave coalescing disabled
for the first phase, but coalesce during the second phase.
2017-02-16 20:11:50 -08:00
Jason Evans
c1ebfaa673 Optimize extent coalescing.
Refactor extent_can_coalesce(), extent_coalesce(), and extent_record()
to avoid needlessly repeating extent [de]activation operations.
2017-02-16 20:11:50 -08:00
Jason Evans
b0654b95ed Fix arena->stats.mapped accounting.
Mapped memory increases when extent_alloc_wrapper() succeeds, and
decreases when extent_dalloc_wrapper() is called (during purging).
2017-02-16 15:52:11 -08:00
Jason Evans
f8fee6908d Synchronize arena->decay with arena->decay.mtx.
This removes the last use of arena->lock.
2017-02-16 09:39:46 -08:00
Jason Evans
d433471f58 Derive {allocated,nmalloc,ndalloc,nrequests}_large stats.
This mildly reduces stats update overhead during normal operation.
2017-02-16 09:39:46 -08:00
Jason Evans
ab25d3c987 Synchronize arena->tcache_ql with arena->tcache_ql_mtx.
This replaces arena->lock synchronization.
2017-02-16 09:39:46 -08:00
Jason Evans
6b5cba4191 Convert arena->stats synchronization to atomics. 2017-02-16 09:39:46 -08:00
Jason Evans
fa2d64c94b Convert arena->prof_accumbytes synchronization to atomics. 2017-02-16 09:39:46 -08:00
Jason Evans
b779522b9b Convert arena->dss_prec synchronization to atomics. 2017-02-16 09:39:46 -08:00
Jason Evans
0721b895ff Do not generate unused tsd_*_[gs]et() functions.
This avoids a gcc diagnostic note:
    note: The ABI for passing parameters with 64-byte alignment has
    changed in GCC 4.6
This note related to the cacheline alignment of rtree_ctx_t, which was
introduced by 4a346f5593 (Replace rtree
path cache with LRU cache.).
2017-02-13 10:47:16 -08:00
Jason Evans
cd2501efd6 Fix extent_alloc_dss() regression.
Fix extent_alloc_dss() to account for bytes that are not a multiple of
the page size.  This regression was introduced by
577d4572b0 (Make dss operations
lockless.), which was first released in 4.3.0.
2017-02-10 14:06:31 -08:00
Jason Evans
5f11830754 Replace spin_init() with SPIN_INITIALIZER. 2017-02-08 18:50:03 -08:00
Jason Evans
650c070e10 Remove rtree support for 0 (NULL) keys.
NULL can never actually be inserted in practice, and removing support
allows a branch to be removed from the fast path.
2017-02-08 18:50:03 -08:00
Jason Evans
f5cf9b19c8 Determine rtree levels at compile time.
Rather than dynamically building a table to aid per level computations,
define a constant table at compile time.  Omit both high and low
insignificant bits.  Use one to three tree levels, depending on the
number of significant bits.
2017-02-08 18:50:03 -08:00
Jason Evans
ff4db5014e Remove rtree leading 0 bit optimization.
A subsequent change instead ignores insignificant high bits.
2017-02-08 18:50:03 -08:00
Jason Evans
cdc240d501 Make non-essential inline rtree functions static functions. 2017-02-08 18:50:03 -08:00
Jason Evans
c511a44e99 Split rtree_elm_lookup_hard() out of rtree_elm_lookup().
Anything but a hit in the first element of the lookup cache is
expensive enough to negate the benefits of inlining.
2017-02-08 18:50:03 -08:00
Jason Evans
5177995530 Fix extent_record().
Read adjacent rtree elements while holding element locks, since the
extents mutex only protects against relevant like-state extent mutation.

Fix management of the 'coalesced' loop state variable to merge
forward/backward results, rather than overwriting the result of forward
coalescing if attempting to coalesce backward.  In practice this caused
no correctness issues, but could cause extra iterations in rare cases.

These regressions were introduced by
d27f29b468 (Disentangle arena and extent
locking.).
2017-02-06 20:05:49 -08:00
Jason Evans
6737d5f61e Fix a race in extent_grow_retained().
Set extent as active prior to registration so that other threads can't
modify it in the absence of locking.

This regression was introduced by
d27f29b468 (Disentangle arena and extent
locking.), via non-obvious means.  Removal of extents_mtx protection
during extent_grow_retained() execution opened up the race, but in the
presence of that locking, the code was safe.

This resolves #599.
2017-02-04 12:15:13 -08:00
Jason Evans
1bac516aaa Optimize compute_size_with_overflow().
Do not check for overflow unless it is actually a possibility.
2017-02-03 19:13:05 -08:00
Jason Evans
767ffa2b5f Fix compute_size_with_overflow().
Fix compute_size_with_overflow() to use a high_bits mask that has the
high bits set, rather than the low bits.  This regression was introduced
by 5154ff32ee (Unify the allocation
paths).
2017-02-03 19:13:05 -08:00
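
A sketch of the mask-based check both commits above refer to (hypothetical helper): the division is only performed when at least one operand has bits in the high half, i.e. when overflow is actually possible.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <limits.h>

    static bool
    size_mul_overflows(size_t num, size_t size, size_t *result) {
        /* Mask with the HIGH half of the bits set (the regression used the
         * low half).  If neither operand touches it, overflow is impossible. */
        const size_t high_bits = SIZE_MAX << (sizeof(size_t) * CHAR_BIT / 2);
        *result = num * size;
        if (((num | size) & high_bits) == 0) {
            return false;
        }
        return size != 0 && *result / size != num;
    }
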
Jason Evans
d27f29b468 Disentangle arena and extent locking.
Refactor arena and extent locking protocols such that arena and
extent locks are never held when calling into the extent_*_wrapper()
API.  This requires extra care during purging since the arena lock no
longer protects the inner purging logic.  It also requires extra care to
protect extents from being merged with adjacent extents.

Convert extent_t's 'active' flag to an enumerated 'state', so that
retained extents are explicitly marked as such, rather than depending on
ring linkage state.

Refactor the extent collections (and their synchronization) for cached
and retained extents into extents_t.  Incorporate LRU functionality to
support purging.  Incorporate page count accounting, which replaces
arena->ndirty and arena->stats.retained.

Assert that no core locks are held when entering any internal
[de]allocation functions.  This is in addition to existing assertions
that no locks are held when entering external [de]allocation functions.

Audit and document synchronization protocols for all arena_t fields.

This fixes a potential deadlock due to recursive allocation during
gdump, in a similar fashion to b49c649bc1
(Fix lock order reversal during gdump.), but with a necessarily much
broader code impact.
2017-02-01 16:43:46 -08:00
Jason Evans
1b6e43507e Fix/refactor tcaches synchronization.
Synchronize tcaches with tcaches_mtx rather than ctl_mtx.  Add missing
synchronization for tcache flushing.  This bug was introduced by
1cb181ed63 (Implement explicit tcache
support.), which was first released in 4.0.0.
2017-02-01 16:43:46 -08:00
Jason Evans
d0e93ada51 Add witness_assert_depth[_to_rank]().
This makes it possible to make lock state assertions about precisely
which locks are held.
2017-02-01 16:43:46 -08:00
Jason Evans
ace679ce74 Synchronize extent_grow_next accesses.
This should have been part of 411697adcd
(Use exponential series to size extents.), which introduced
extent_grow_next.
2017-02-01 16:43:46 -08:00
Jason Evans
5033a9176a Call prof_gctx_create() without owning bt2gctx_mtx.
This reduces the probability of allocating (and thereby indirectly
making a system call) while owning bt2gctx_mtx.  Unfortunately it is an
incomplete solution, because ckh insertion/deletion can also
allocate/deallocate, which requires more extensive changes to address.
2017-02-01 16:43:46 -08:00
Jason Evans
397f54aa46 Conditionalize prof fork handling on config_prof.
This allows the compiler to completely remove dead code.
2017-02-01 16:43:46 -08:00
Qi Wang
bbff6ca674 Handle race in stats_arena_bins_print
When multiple threads call stats_print, a race can happen because we read the
counters in separate mallctl calls, and the removed assertion could fail when
other operations happen in between those calls. For simplicity, output
"race" in the utilization field in this case.
2017-02-01 15:17:39 -08:00
David Goldblatt
85d2841818 Fix a bug in which a potentially invalid usize replaced size
In the refactoring that unified the allocation paths, usize was substituted for
size. This worked fine under the default test configuration, but triggered
asserts when we started beefing up our CI testing.

This change fixes the issue, and clarifies the comment describing the argument
selection that it got wrong.
2017-01-25 15:50:59 -08:00
Tamir Duberstein
0874b648e0 Avoid redeclaring glibc's secure_getenv
Avoid using the name secure_getenv, so that glibc's secure_getenv is not
redeclared when it is present but its use has been manually disabled via
ac_cv_func_secure_getenv=no.
2017-01-25 11:24:32 -08:00
Jason Evans
c0cc5db871 Replace tabs following #define with spaces.
This resolves #564.
2017-01-20 21:45:53 -08:00
Jason Evans
f408643a4c Remove extraneous parens around return arguments.
This resolves #540.
2017-01-20 21:43:07 -08:00
Jason Evans
c4c2592c83 Update brace style.
Add braces around single-line blocks, and remove line breaks before
function-opening braces.

This resolves #537.
2017-01-20 21:43:07 -08:00
David Goldblatt
5154ff32ee Unify the allocation paths
This unifies the allocation paths for malloc, posix_memalign, aligned_alloc,
calloc, memalign, valloc, and mallocx, so that they all share common code where
they can.

There's more work that could be done here, but I think this is the smallest
discrete change in this direction.
2017-01-20 12:15:53 -08:00
Jason Evans
9eb1b1c881 Fix --disable-stats support.
Fix numerous regressions that were exposed by --disable-stats, both in
the core library and in the tests.
2017-01-19 18:31:07 -08:00
Jason Evans
66bf773ef2 Test JSON output of malloc_stats_print() and fix bugs.
Implement and test a JSON validation parser.  Use the parser to validate
JSON output from malloc_stats_print(), with a significant subset of
supported output options.

This resolves #551.
2017-01-19 14:05:00 -08:00
Qi Wang
58424e679d Added stats for the number of bytes currently cached in tcaches. 2017-01-18 10:55:21 -08:00
Mike Hommey
12ab4383e9 Add dummy implementations for most remaining OSX zone allocator functions
Some system libraries use malloc_default_zone() and then call some of the
malloc_zone_* API. Under normal conditions, those functions check the
malloc_zone_t/malloc_introspection_t struct for the values that are
allowed to be NULL, so that a NULL deref doesn't happen.

As of OSX 10.12, malloc_default_zone() doesn't return the actual default
zone anymore, but returns a fake, wrapper zone. The wrapper zone defines
(almost) all the possible functions in the
malloc_zone_t/malloc_introspection_t struct, and calls the function from
the registered default zone (jemalloc in our case) on its own, without
checking whether the pointers are NULL.

This means that a system library that calls e.g.
malloc_zone_batch_malloc(malloc_default_zone(), ...) ends up trying to
call jemalloc_zone.batch_malloc, which is NULL, and a crash follows.

So as of OSX 10.12, the default zone is required to have all the
functions available (really, the same as the wrapper zone), even if they
do nothing.

This is arguably a bug in libsystem_malloc in OSX 10.12, but jemalloc
still needs to work in that case.
2017-01-17 20:13:28 -08:00
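
A generic illustration of the workaround (not the actual Darwin zone code): give each optional entry point a harmless stub instead of leaving it NULL, so the 10.12 wrapper zone can call it blindly.

    #include <stddef.h>

    /* Hypothetical stand-in for a malloc_zone_t batch_malloc slot. */
    static unsigned
    zone_batch_malloc_stub(void *zone, size_t size, void **results,
        unsigned count) {
        (void)zone; (void)size; (void)results; (void)count;
        return 0; /* "none of the requested objects were allocated" */
    }
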
Mike Hommey
0f7376eb62 Don't rely on OSX SDK malloc/malloc.h for malloc_zone struct definitions
The SDK jemalloc is built against might not be the latest for various
reasons, but the resulting binary ought to work on newer versions of
OSX.

In order to ensure this, we need the fullest definitions possible, so
copy what we need from the latest version of malloc/malloc.h available
on opensource.apple.com.
2017-01-17 20:13:28 -08:00
Jason Evans
1ff09534b5 Fix prof_realloc() regression.
Mostly revert the prof_realloc() changes in
498856f44a (Move slabs out of chunks.) so
that prof_free_sampled_object() is called when appropriate.  Leave the
prof_tctx_[re]set() optimization in place, but add an assertion to
verify that all eight cases are correctly handled.  Add a comment to
make clear the code ordering, so that the regression originally fixed by
ea8d97b897 (Fix
prof_{malloc,free}_sample_object() call order in prof_realloc().) is not
repeated.

This resolves #499.
2017-01-17 15:16:37 -08:00
Jason Evans
de5e1aff2a Formatting/comment fixes. 2017-01-17 15:16:37 -08:00
Jason Evans
8115f05b26 Add nullptr support to sized delete operators. 2017-01-17 14:30:15 -08:00
Jason Evans
41aa41853c Fix style nits. 2017-01-17 14:30:15 -08:00
Qi Wang
e8990dc7c7 Remove redundant stats-merging logic when destroying tcache.
The removed stats merging logic is already taken care of by tcache_flush.
2017-01-17 09:42:39 -08:00
Jason Evans
ffbb7dac3d Remove leading blank lines from function bodies.
This resolves #535.
2017-01-13 14:49:24 -08:00
Jason Evans
87e81e609b Fix indentation. 2017-01-13 14:49:24 -08:00
Jason Evans
edf1bafb2b Implement arena.<i>.destroy .
Add MALLCTL_ARENAS_DESTROYED for accessing destroyed arena stats as an
analogue to MALLCTL_ARENAS_ALL.

This resolves #382.
2017-01-06 18:58:46 -08:00
Jason Evans
dc2125cf95 Replace the arenas.initialized mallctl with arena.<i>.initialized . 2017-01-06 18:58:46 -08:00
Jason Evans
6edbedd916 Range-check mib[1] --> arena_ind casts. 2017-01-06 18:58:46 -08:00
Jason Evans
c0a05e6aba Move static ctl_epoch variable into ctl_stats_t (as epoch). 2017-01-06 18:58:45 -08:00
Jason Evans
d778dd2afc Refactor ctl_stats_t.
Refactor ctl_stats_t to be a demand-zeroed non-growing data structure.
To keep the size from being onerous (~60 MiB) on 32-bit systems, convert
the arenas field to contain pointers rather than directly embedded
ctl_arena_stats_t elements.
2017-01-06 18:58:45 -08:00
Jason Evans
0f04bb1d6f Rename the arenas.extend mallctl to arenas.create. 2017-01-06 18:58:45 -08:00
Jason Evans
3dc4e83ccb Add MALLCTL_ARENAS_ALL.
Add the MALLCTL_ARENAS_ALL cpp macro as a fixed index for use
in accessing the arena.<i>.{purge,decay,dss} and stats.arenas.<i>.*
mallctls, and deprecate access via the arenas.narenas index (to be
removed in 6.0.0).
2017-01-06 18:58:45 -08:00
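
A sketch of how the fixed index is meant to be used from application code (error handling omitted; assumes <jemalloc/jemalloc.h> exports mallctl and the new macro).

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    static void
    purge_all_arenas(void) {
        char cmd[64];
        /* Substitute the fixed "all arenas" index for <i> in arena.<i>.purge. */
        snprintf(cmd, sizeof(cmd), "arena.%u.purge",
            (unsigned)MALLCTL_ARENAS_ALL);
        mallctl(cmd, NULL, NULL, NULL, 0);
    }
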
Jason Evans
d0a3129b88 Fix locking in arena_dirty_count().
This was a latent bug, since the function is (intentionally) not used.
2017-01-06 18:58:45 -08:00
Jason Evans
363629df88 Fix allocated_large stats with respect to sampled small allocations. 2017-01-06 18:58:45 -08:00
Jason Evans
5c5ff8d121 Fix arena_large_reset_stats_cancel().
Decrement ndalloc_large rather than incrementing, in order to cancel out
the increment in arena_large_dalloc_stats_update().
2017-01-04 20:26:30 -08:00
Jason Evans
a0dd3a4483 Implement per arena base allocators.
Add/rename related mallctls:
- Add stats.arenas.<i>.base .
- Rename stats.arenas.<i>.metadata to stats.arenas.<i>.internal .
- Add stats.arenas.<i>.resident .

Modify the arenas.extend mallctl to take an optional (extent_hooks_t *)
argument so that it is possible for all base allocations to be serviced
by the specified extent hooks.

This resolves #463.
2016-12-26 18:08:28 -08:00
Jason Evans
a6e86810d8 Refactor purging and splitting/merging.
Split purging into lazy and forced variants.  Use the forced variant for
zeroing dss.

Add support for NULL function pointers as an opt-out mechanism for the
dalloc, commit, decommit, purge_lazy, purge_forced, split, and merge
fields of extent_hooks_t.

Add short-circuiting checks in large_ralloc_no_move_{shrink,expand}() so
that no attempt is made if splitting/merging is not supported.

This resolves #268.
2016-12-26 18:08:16 -08:00
Jason Evans
884fa22b8c Rename arena_decay_t's ndirty to nunpurged. 2016-12-26 17:59:43 -08:00
Jason Evans
411697adcd Use exponential series to size extents.
If virtual memory is retained, allocate extents such that their sizes
form an exponentially growing series.  This limits the number of
disjoint virtual memory ranges so that extent merging can be effective
even if multiple arenas' extent allocation requests are highly
interleaved.

This resolves #462.
2016-12-26 17:59:42 -08:00
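
A sketch of an exponentially growing size series, assuming simple doubling from one 4 KiB page; the real series rounds to size classes and is tracked per arena via extent_grow_next.

    #include <stddef.h>

    #define PAGE_SKETCH ((size_t)4096)

    /*
     * Size of the i'th virtual memory request when growing retained extents.
     * After n requests only n disjoint ranges exist, yet they cover ~2^n
     * pages, which keeps extent merging effective.
     */
    static size_t
    extent_grow_size(unsigned grow_next) {
        return PAGE_SKETCH << grow_next;
    }
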
Jason Evans
c1baa0a9b7 Add huge page configuration and pages_[no]huge().
Add the --with-lg-hugepage configure option, but automatically configure
LG_HUGEPAGE even if it isn't specified.

Add the pages_[no]huge() functions, which toggle huge page state via
madvise(..., MADV_[NO]HUGEPAGE) calls.
2016-12-26 17:59:34 -08:00
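
A sketch of the toggles described above, assuming the Linux transparent-huge-page madvise flags are available (the configure-time detection is elided).

    #include <stddef.h>
    #include <stdbool.h>
    #include <sys/mman.h>

    static bool
    pages_huge_sketch(void *addr, size_t size) {
    #ifdef MADV_HUGEPAGE
        return madvise(addr, size, MADV_HUGEPAGE) == 0;
    #else
        (void)addr; (void)size;
        return false;
    #endif
    }

    static bool
    pages_nohuge_sketch(void *addr, size_t size) {
    #ifdef MADV_NOHUGEPAGE
        return madvise(addr, size, MADV_NOHUGEPAGE) == 0;
    #else
        (void)addr; (void)size;
        return false;
    #endif
    }
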
Jason Evans
eab3b180e5 Fix JSON-mode output for !config_stats and/or !config_prof cases.
These bugs were introduced by 0ba5b9b618
(Add "J" (JSON) support to malloc_stats_print().), which was backported
as b599b32280 (with the same bugs except
the inapplicable "metatata" misspelling) and first released in 4.3.0.
2016-12-23 11:15:44 -08:00
Jason Evans
bacb6afc6c Simplify arena_slab_regind().
Rewrite arena_slab_regind() to provide sufficient constant data for
the compiler to perform division strength reduction.  This replaces
more general manual strength reduction that was implemented before
arena_bin_info was compile-time-constant.  It would be possible to
slightly improve on the compiler-generated division code by taking
advantage of range limits that the compiler doesn't know about.
2016-12-23 10:34:34 -08:00
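
A single-bin sketch of why a compile-time-constant region size helps: the compiler can strength-reduce the division into a multiply/shift sequence (constants here are illustrative).

    #include <stddef.h>
    #include <stdint.h>

    #define REGION_SIZE ((size_t)48) /* e.g. the 48-byte size class */

    static inline size_t
    slab_regind_sketch(uintptr_t slab_base, uintptr_t ptr) {
        size_t diff = (size_t)(ptr - slab_base);
        /* Constant divisor: no runtime div instruction is needed. */
        return diff / REGION_SIZE;
    }
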
Dave Watson
2319152d9f jemalloc cpp new/delete bindings
Adds C++ bindings for jemalloc, along with the necessary autoconf settings.
This is mostly to add sized deallocation support, which can't be added
from C directly.  Sized deallocation gives a ~10% microbenchmark improvement.

* Import ax_cxx_compile_stdcxx.m4 from the autoconf repo, seems like the
  easiest way to get c++14 detection.
* Adds various other changes, like CXXFLAGS, to configure.ac.
* Adds new rules to Makefile.in for src/jemalloc-cpp.cpp, and a basic
  unittest.
* Both new and delete are overridden, to ensure jemalloc is used for
  both.
* TODO: a future enhancement is to avoid extra PLT thunks for new and
  delete - sdallocx and malloc are publicly exported jemalloc symbols, so
  using an alias would link them directly.  Unfortunately, I was having
  trouble getting that to play nicely with jemalloc's namespace support.

Testing:
Tested gcc 4.8, gcc 5, gcc 5.2, clang 4.0.  Only gcc >= 5 has sized
deallocation support, verified that the rest build correctly.

Tested Mac OS X and CentOS.

Tested --with-jemalloc-prefix and --without-export.

This resolves #202.
2016-12-12 18:36:06 -08:00
Jason Evans
acb7b1f53e Add --disable-syscall.
This resolves #517.
2016-12-03 16:50:58 -08:00
Jason Evans
5234be2133 Add pthread_atfork(3) feature test.
Some versions of Android provide a pthreads library without providing
pthread_atfork(), so in practice a separate feature test is necessary
for the latter.
2016-11-17 15:14:57 -08:00
Jason Evans
a64123ce13 Refactor madvise(2) configuration.
Add feature tests for the MADV_FREE and MADV_DONTNEED flags to
madvise(2), so that MADV_FREE is detected and used for Linux kernel
versions 4.5 and newer.  Refactor pages_purge() so that on systems which
support both flags, MADV_FREE is preferred over MADV_DONTNEED.

This resolves #387.
2016-11-17 10:31:57 -08:00
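
A sketch of the resulting preference order (using the raw madvise(2) flags directly; the real code keys off configure-detected macros).

    #include <stddef.h>
    #include <sys/mman.h>

    static int
    pages_purge_sketch(void *addr, size_t size) {
    #if defined(MADV_FREE)
        return madvise(addr, size, MADV_FREE);      /* lazy, preferred */
    #elif defined(MADV_DONTNEED)
        return madvise(addr, size, MADV_DONTNEED);  /* forced fallback */
    #else
        (void)addr; (void)size;
        return -1; /* no madvise-based purging available */
    #endif
    }
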
Jason Evans
aec5a051e8 Avoid gcc type-limits warnings. 2016-11-16 18:28:38 -08:00
Maks Naumov
95974c0440 Remove size_t -> unsigned -> size_t conversion. 2016-11-16 11:23:31 -08:00
Jason Evans
8a4528bdd1 Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *).
This avoids warnings in some cases, and is otherwise generally good
hygiene.
2016-11-15 15:01:03 -08:00
Jason Evans
a38acf716e Add extent serial numbers.
Add extent serial numbers and use them where appropriate as a sort key
that is higher priority than address, so that the allocation policy
prefers older extents.

This resolves #147.
2016-11-15 13:08:33 -08:00
Jason Evans
c0a667112c Fix arena_reset() crashing bug.
This regression was caused by 498856f44a
(Move slabs out of chunks.).
2016-11-15 10:34:02 -08:00