Commit Graph

142 Commits

Author SHA1 Message Date
Jason Evans
1ad060584f Add/use chunk_merge_wrapper(). 2016-06-03 12:27:41 -07:00
Jason Evans
fc0372a15e Replace extent_tree_szad_* with extent_heap_*. 2016-06-03 12:27:41 -07:00
Jason Evans
ffa45a5331 Use rtree rather than [sz]ad trees for chunk split/coalesce operations. 2016-06-03 12:27:41 -07:00
Jason Evans
d78846c989 Replace extent_achunk_[gs]et() with extent_slab_[gs]et(). 2016-06-03 12:27:41 -07:00
Jason Evans
8c9be3e837 Refactor rtree to always use base_alloc() for node allocation. 2016-06-03 12:27:41 -07:00
Jason Evans
db72272bef Use rtree-based chunk lookups rather than pointer bit twiddling.
Look up chunk metadata via the radix tree, rather than using
CHUNK_ADDR2BASE().

Propagate pointer's containing extent.

Minimize extent lookups by doing a single lookup (e.g. in free()) and
propagating the pointer's extent into nearly all the functions that may
need it.
2016-06-03 12:27:41 -07:00
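As a rough illustration of the change above (names and signatures are assumptions, not jemalloc's actual API): the old path masked low pointer bits to recover the containing chunk, while the new path keys a radix-tree lookup off the pointer's page number, so metadata no longer has to sit at a chunk-aligned address.

    /* Old scheme: chunk base recovered by masking pointer bits. */
    #define CHUNK_ADDR2BASE(a)  ((void *)((uintptr_t)(a) & ~chunksize_mask))

    /* New scheme (illustrative): a radix-tree lookup keyed by page number
     * yields the extent containing ptr; the result is then propagated to
     * callees instead of being looked up repeatedly. */
    static extent_t *
    extent_lookup(rtree_t *rtree, const void *ptr)
    {
        return rtree_read(rtree, (uintptr_t)ptr >> LG_PAGE);
    }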
Jason Evans
a7a6f5bc96 Rename extent_node_t to extent_t. 2016-05-16 12:21:28 -07:00
Jason Evans
3aea827f5e Simplify run quantization. 2016-05-16 12:21:27 -07:00
Jason Evans
7bb00ae9d6 Refactor runs_avail.
Use pszind_t size classes rather than szind_t size classes, and always
reserve space for NPSIZES elements.  This removes unused heaps that are
not multiples of the page size, and adds (currently) unused heaps for
all huge size classes, with the immediate benefit that the size of
arena_t allocations is constant (no longer dependent on chunk size).
2016-05-16 12:21:21 -07:00
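A minimal layout sketch of what the constant-size arena_t implies, assuming one run heap per page-size class (type and field names are illustrative):

    /* NPSIZES is a compile-time constant, so sizeof(arena_t) no longer
     * depends on the runtime chunk size. */
    struct arena_s {
        /* ... other fields ... */
        arena_run_heap_t runs_avail[NPSIZES];   /* indexed by pszind_t */
    };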
Jason Evans
627372b459 Initialize arena_bin_info at compile time rather than at boot time.
This resolves #370.
2016-05-13 10:31:30 -07:00
Jason Evans
17c021c177 Remove redzone support.
This resolves #369.
2016-05-13 10:27:33 -07:00
Jason Evans
ba5c709517 Remove quarantine support. 2016-05-13 10:25:05 -07:00
Jason Evans
c1e00ef2a6 Resolve bootstrapping issues when embedded in FreeBSD libc.
b2c0d6322d (Add witness, a simple online
locking validator.) caused a broad propagation of tsd throughout the
internal API, but tsd_fetch() was designed to fail prior to tsd
bootstrapping.  Fix this by splitting tsd_t into non-nullable tsd_t and
nullable tsdn_t, and modifying all internal APIs that do not critically
rely on tsd to take nullable pointers.  Furthermore, add the
tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
bootstrapping is complete and return NULL if not.  All dangerous
conversions of nullable pointers are tsdn_tsd() calls that assert-fail
on invalid conversion.
2016-05-10 22:51:33 -07:00
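A sketch of the split described above, using the names from the commit message (tsd_booted_get(), tsdn_fetch(), tsdn_tsd()); the bodies are illustrative:

    /* tsdn_t is the nullable counterpart of tsd_t.  tsdn_fetch() may
     * return NULL before tsd has bootstrapped; tsdn_tsd() is the only way
     * back to a non-nullable tsd_t and assert-fails on NULL. */
    tsdn_t *
    tsdn_fetch(void)
    {
        if (!tsd_booted_get())
            return NULL;
        return tsd_tsdn(tsd_fetch());
    }

    tsd_t *
    tsdn_tsd(tsdn_t *tsdn)
    {
        assert(tsdn != NULL);
        return &tsdn->tsd;
    }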
Jason Evans
3ef51d7f73 Optimize the fast paths of calloc() and [m,d,sd]allocx().
This is a broader application of optimizations to malloc() and free() in
f4a0f32d34 (Fast-path improvement:
reduce # of branches and unnecessary operations.).

This resolves #321.
2016-05-06 14:37:39 -07:00
Jason Evans
174c0c3a9c Fix fork()-related lock rank ordering reversals. 2016-04-25 23:16:20 -07:00
Jason Evans
19ff2cefba Implement the arena.<i>.reset mallctl.
This makes it possible to discard all of an arena's allocations in a
single operation.

This resolves #146.
2016-04-22 15:20:06 -07:00
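For example, a caller that knows arena 0 holds no live allocations could discard everything that arena has allocated in one call (error handling elided; depending on configuration the symbol may be je_mallctl):

    #include <jemalloc/jemalloc.h>

    /* Discard all allocations backed by arena 0.  The caller must
     * guarantee that none of those allocations are still in use. */
    void
    reset_arena0(void)
    {
        if (mallctl("arena.0.reset", NULL, NULL, NULL, 0) != 0) {
            /* handle error */
        }
    }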
Jason Evans
66cd953514 Do not allocate metadata via non-auto arenas, nor tcaches.
This ensures that all internally allocated metadata come from the
first opt_narenas arenas, i.e. the automatically multiplexed arenas.
2016-04-22 15:19:59 -07:00
Jason Evans
b2c0d6322d Add witness, a simple online locking validator.
This resolves #358.
2016-04-14 02:09:28 -07:00
Jason Evans
c6a2c39404 Refactor/fix ph.
Refactor ph to support configurable comparison functions.  Use a cpp
macro code generation form equivalent to the rb macros so that pairing
heaps can be used for both run heaps and chunk heaps.

Remove per node parent pointers, and instead use leftmost siblings' prev
pointers to track parents.

Fix multi-pass sibling merging to iterate over intermediate results
using a FIFO, rather than a LIFO.  Use this fixed sibling merging
implementation for both merge phases of the auxiliary twopass algorithm
(first merging the aux list, then replacing the root with its merged
children).  This fixes both degenerate merge behavior and the potential
for deep recursion.

This regression was introduced by
6bafa6678f (Pairing heap).

This resolves #371.
2016-04-11 02:15:42 -07:00
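A minimal, self-contained sketch of FIFO multi-pass sibling merging (not jemalloc's ph code): each pass melds adjacent pairs of sibling heaps and appends the results to the tail of a work list, so the number of passes is logarithmic and no recursion is needed.

    #include <stddef.h>

    typedef struct node_s node_t;
    struct node_s {
        int     key;
        node_t  *child;   /* leftmost child */
        node_t  *next;    /* right sibling / work-list link */
    };

    /* Standard pairing-heap meld: the smaller key becomes the parent. */
    static node_t *
    heap_meld(node_t *a, node_t *b)
    {
        if (b->key < a->key) {
            node_t *t = a; a = b; b = t;
        }
        b->next = a->child;
        a->child = b;
        return a;
    }

    /* Merge a NULL-terminated sibling list down to a single heap. */
    static node_t *
    siblings_merge(node_t *head)
    {
        while (head != NULL && head->next != NULL) {
            node_t *out_head = NULL, *out_tail = NULL;
            while (head != NULL) {
                node_t *a = head;
                node_t *b = a->next;
                node_t *m;

                head = (b != NULL) ? b->next : NULL;
                m = (b != NULL) ? heap_meld(a, b) : a;
                m->next = NULL;
                if (out_tail != NULL)
                    out_tail->next = m;
                else
                    out_head = m;
                out_tail = m;
            }
            head = out_head;   /* next pass consumes results in FIFO order */
        }
        return head;
    }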
Jason Evans
61a6dfcd5f Constify various internal arena APIs. 2016-03-23 16:15:42 -07:00
Jason Evans
613cdc80f6 Convert arena_bin_t's runs from a tree to a heap. 2016-03-08 13:48:27 -08:00
Dave Watson
4a0dbb5ac8 Use pairing heap for arena->runs_avail
Use pairing heap instead of red black tree in arena runs_avail.  The
extra links are unioned with the bitmap_t, so this change doesn't use
any extra memory.

Canaries show this change to be a 1% cpu win, and 2% latency win.  In
particular, large free()s, and small bin frees are now O(1) (barring
coalescing).

I also tested changing bin->runs to be a pairing heap, but saw a much
smaller win, and it would mean increasing the size of arena_run_s by two
pointers, so I left that as an rb-tree for now.
2016-03-08 13:48:27 -08:00
Jason Evans
3c07f803aa Fix stats.arenas.<i>.[...] for --disable-stats case.
Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
initialization.

Fix stats.arenas.<i>.{pactive,pdirty} to read under the protection of
the arena mutex.
2016-02-27 20:40:13 -08:00
Jason Evans
42ce80e15a Silence miscellaneous 64-to-32-bit data loss warnings.
This resolves #341.
2016-02-25 20:51:00 -08:00
Jason Evans
0c516a00c4 Make *allocx() size class overflow behavior defined.
Limit supported size and alignment to HUGE_MAXCLASS, which in turn is
now limited to be less than PTRDIFF_MAX.

This resolves #278 and #295.
2016-02-25 15:29:49 -08:00
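A hedged sketch of the resulting check (s2u()/sa2u() are jemalloc's internal size-rounding helpers of that era; treat the exact spelling as an assumption):

    /* Because HUGE_MAXCLASS < PTRDIFF_MAX, any request whose rounded size
     * exceeds it is rejected up front instead of silently wrapping. */
    size_t usize = (alignment == 0) ? s2u(size) : sa2u(size, alignment);
    if (usize == 0 || usize > HUGE_MAXCLASS)
        return NULL;   /* report OOM per the entry point's contract */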
Jason Evans
767d85061a Refactor arenas array (fixes deadlock).
Refactor the arenas array, which contains pointers to all extant arenas,
such that it starts out as a sparse array of maximum size, and use
double-checked atomics-based reads as the basis for fast and simple
arena_get().  Additionally, reduce arenas_lock's role such that it only
protects against arena initialization races.  These changes remove the
possibility for arena lookups to trigger locking, which resolves at
least one known (fork-related) deadlock.

This resolves #315.
2016-02-24 23:58:10 -08:00
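The double-checked pattern, sketched with stand-in helper names (the atomics and locking wrappers are illustrative, not jemalloc's exact API):

    /* Fast path: a lock-free read of the sparse arenas[] slot.  Only when
     * the slot is empty and initialization was requested is arenas_lock
     * taken, the slot re-checked, and the arena created. */
    static arena_t *
    arena_get(unsigned ind, bool init_if_missing)
    {
        arena_t *arena = atomic_load_ptr(&arenas[ind]);   /* acquire */

        if (arena != NULL || !init_if_missing)
            return arena;

        malloc_mutex_lock(&arenas_lock);
        arena = atomic_load_ptr(&arenas[ind]);            /* re-check */
        if (arena == NULL)
            arena = arena_init_locked(ind);               /* publishes the slot */
        malloc_mutex_unlock(&arenas_lock);
        return arena;
    }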
Jason Evans
9e1810ca9d Silence miscellaneous 64-to-32-bit data loss warnings. 2016-02-24 13:03:48 -08:00
Jason Evans
9f4ee6034c Refactor jemalloc_ffs*() into ffs_*().
Use appropriate versions to resolve 64-to-32-bit data loss warnings.
2016-02-24 13:03:48 -08:00
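For instance, width-specific wrappers along these lines (names illustrative; jemalloc's actual wrappers sit on top of the ffs/ffsl/ffsll builtins) keep operand widths matched and avoid implicit 64-to-32-bit truncation:

    #include <stdint.h>

    /* Return one plus the index of the least significant set bit, or 0 if
     * the argument is zero, using the builtin of matching width. */
    static inline unsigned
    ffs_u32(uint32_t x)
    {
        return (unsigned)__builtin_ffs((int)x);
    }

    static inline unsigned
    ffs_u64(uint64_t x)
    {
        return (unsigned)__builtin_ffsll((long long)x);
    }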
Jason Evans
ae45142adc Collapse arena_avail_tree_* into arena_run_tree_*.
These tree types converged to become identical, yet they still had
independently generated red-black tree implementations.
2016-02-23 18:27:24 -08:00
Dave Watson
3417a304cc Separate arena_avail trees
Separate run trees by index, replacing the previous quantize logic.
Quantization by index is now performed only on insertion / removal from
the tree, and not on node comparison, saving some cpu.  This also means
we don't have to dereference the miscelm* pointers, saving half of the
memory loads from miscelms/mapbits that have fallen out of cache.  A
linear scan of the indices appears to be fast enough.

The only cost of this is an extra tree array in each arena.
2016-02-23 18:09:36 -08:00
Jason Evans
0da8ce1e96 Use table lookup for run_quantize_{floor,ceil}().
Reduce run quantization overhead by generating lookup tables during
bootstrapping, and using the tables for all subsequent run quantization.
2016-02-22 16:47:34 -08:00
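A sketch of the table-driven form, assuming page-multiple run sizes (table and constant names are illustrative):

    /* Filled once during bootstrapping by the exact (slow) computation;
     * afterwards quantization is a single array index by page count. */
    static size_t run_quantize_floor_tab[RUN_QUANTIZE_MAX_PAGES];

    static size_t
    run_quantize_floor(size_t size)
    {
        assert(size > 0 && (size & PAGE_MASK) == 0);
        return run_quantize_floor_tab[(size >> LG_PAGE) - 1];
    }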
Jason Evans
a9a4684792 Test run quantization.
Also rename run_quantize_*() to improve clarity.  These tests
demonstrate that run_quantize_ceil() is flawed.
2016-02-22 14:58:05 -08:00
Jason Evans
817d9030a5 Indentation style cleanup. 2016-02-22 10:44:58 -08:00
Jason Evans
9bad079039 Refactor time_* into nstime_*.
Use a single uint64_t in nstime_t to store nanoseconds rather than using
struct timespec.  This reduces fragility around conversions between long
and uint64_t, especially missing casts that only cause problems on
32-bit platforms.
2016-02-21 21:39:05 -08:00
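A minimal sketch of the single-integer representation (a simplification of the idea, not necessarily jemalloc's exact code):

    #include <stdint.h>

    #define NS_PER_SEC  UINT64_C(1000000000)

    /* All arithmetic happens on one uint64_t of nanoseconds, so there are
     * no long/uint64_t conversions to get wrong on 32-bit platforms. */
    typedef struct {
        uint64_t ns;
    } nstime_t;

    static void
    nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec)
    {
        time->ns = sec * NS_PER_SEC + nsec;
    }

    static uint64_t
    nstime_sec(const nstime_t *time)
    {
        return time->ns / NS_PER_SEC;
    }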
Jason Evans
243f7a0508 Implement decay-based unused dirty page purging.
This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.

Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time

This resolves #325.
2016-02-19 20:56:21 -08:00
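Example of exercising the new knobs through mallctl() (error handling elided; symbols may carry a je_ prefix depending on how jemalloc was configured):

    #include <jemalloc/jemalloc.h>

    void
    configure_decay(void)
    {
        ssize_t decay_time = 10;   /* seconds before unused dirty pages may be purged */

        /* Default decay time applied to subsequently created arenas. */
        mallctl("arenas.decay_time", NULL, NULL, &decay_time, sizeof(decay_time));

        /* Trigger decay-based purging on arena 0 now. */
        mallctl("arena.0.decay", NULL, NULL, NULL, 0);
    }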
Jason Evans
db927b6727 Refactor arenas_cache tsd.
Refactor arenas_cache tsd into arenas_tdata, which is a structure of
type arena_tdata_t.
2016-02-19 20:32:37 -08:00
Jason Evans
578cd16581 Refactor arena_malloc_hard() out of arena_malloc(). 2016-02-19 20:32:32 -08:00
Jason Evans
ef349f3f94 Fix arena_sdalloc() line wrapping. 2016-02-19 20:29:06 -08:00
Qi Wang
f4a0f32d34 Fast-path improvement: reduce # of branches and unnecessary operations.
- Combine multiple runtime branches into a single malloc_slow check.
- Avoid calling arena_choose / size2index / index2size on fast path.
- A few micro optimizations.
2015-11-10 14:28:34 -08:00
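A hedged illustration of the single malloc_slow check (all names here are stand-ins): every option that forces the slow path is folded into one flag at boot, so the hot path tests a single branch.

    /* Computed once during initialization: true if any feature (junk
     * filling, profiling, utrace, ...) requires the slow path. */
    static bool malloc_slow;

    void *
    malloc(size_t size)
    {
        if (malloc_slow)
            return malloc_slow_path(size);
        return malloc_fast_path(size);   /* no per-option branches here */
    }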
Joshua Kahn
13b4015531 Allow const keys for lookup
Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>

This resolves #281.
2015-11-09 15:48:05 -08:00
Jason Evans
708ed79834 Resolve an unsupported special case in arena_prof_tctx_set().
Add arena_prof_tctx_reset() and use it instead of arena_prof_tctx_set()
when resetting the tctx pointer during reallocation, which happens
whenever an originally sampled reallocated object is not sampled during
reallocation.

This regression was introduced by
594c759f37 (Optimize
arena_prof_tctx_set().)
2015-09-14 23:57:58 -07:00
Jason Evans
676df88e48 Rename arena_maxclass to large_maxclass.
arena_maxclass is no longer an appropriate name, because arenas also
manage huge allocations.
2015-09-11 20:50:20 -07:00
Jason Evans
560a4e1e01 Fix xallocx() bugs.
Fix xallocx() bugs related to the 'extra' parameter when specified as
non-zero.
2015-09-11 20:40:34 -07:00
Jason Evans
b4330b02a8 Fix pointer comparison with undefined behavior.
This didn't cause bad code generation in the one case spot-checked (gcc
4.8.1), but had the potential to do so.  This bug was introduced by
594c759f37 (Optimize
arena_prof_tctx_set().).
2015-09-04 10:31:41 -07:00
Jason Evans
594c759f37 Optimize arena_prof_tctx_set().
Optimize arena_prof_tctx_set() to avoid reading run metadata when
deciding whether it's actually necessary to write.
2015-09-02 14:52:24 -07:00
Jason Evans
b5c2a347d7 Silence compiler warnings for unreachable code.
Reported by Ingvar Hagelund.
2015-08-19 23:28:34 -07:00
Jason Evans
d01fd19755 Rename index_t to szind_t to avoid an existing type on Solaris.
This resolves #256.
2015-08-19 15:21:32 -07:00
Jason Evans
5ef33a9f2b Don't bitshift by negative amounts.
Don't bitshift by negative amounts when encoding/decoding run sizes in
chunk header maps.  This affected systems with page sizes greater than 8
KiB.

Reported by Ingvar Hagelund <ingvar@redpill-linpro.com>.
2015-08-19 14:16:30 -07:00
Jason Evans
85ae064e96 Fix a comment. 2015-08-13 14:54:06 -07:00
Jason Evans
1f27abc1b1 Refactor arena_mapbits_{small,large}_set() to not preserve unzeroed.
Fix arena_run_split_large_helper() to treat newly committed memory as
zeroed.
2015-08-11 16:45:47 -07:00