Commit Graph

388 Commits

Author SHA1 Message Date
Jason Evans
c7a9a6c86b Attempt mmap-based in-place huge reallocation.
Attempt mmap-based in-place huge reallocation by plumbing new_addr into
chunk_alloc_mmap().  This can dramatically speed up incremental huge
reallocation.

This resolves #335.
2016-02-24 17:23:18 -08:00
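
A minimal sketch of the in-place extension idea, assuming a hypothetical
chunk_extend_mmap() helper rather than jemalloc's actual chunk_mmap.c code:
request the pages at a specific address (the end of the existing mapping) and
treat any other placement as failure.

    #include <stddef.h>
    #include <sys/mman.h>

    /* Hypothetical helper; jemalloc's real logic differs in detail. */
    static void *
    chunk_extend_mmap(void *new_addr, size_t size)
    {
        void *ret = mmap(new_addr, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (ret == MAP_FAILED)
            return (NULL);
        if (new_addr != NULL && ret != new_addr) {
            /* Kernel placed the mapping elsewhere; not extendable in place. */
            munmap(ret, size);
            return (NULL);
        }
        return (ret);
    }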
Jason Evans
aa63d5d377 Fix ffs_zu() compilation error on MinGW.
This regression was caused by 9f4ee6034c
(Refactor jemalloc_ffs*() into ffs_*().).
2016-02-24 14:01:47 -08:00
Jason Evans
9e1810ca9d Silence miscellaneous 64-to-32-bit data loss warnings. 2016-02-24 13:03:48 -08:00
Jason Evans
1c42a04cc6 Change lg_floor() return type from size_t to unsigned. 2016-02-24 13:03:48 -08:00
Jason Evans
8f683b94a7 Make opt_narenas unsigned rather than size_t. 2016-02-24 13:03:48 -08:00
Jason Evans
603b3bd413 Make nhbins unsigned rather than size_t. 2016-02-24 13:03:48 -08:00
Jason Evans
9f4ee6034c Refactor jemalloc_ffs*() into ffs_*().
Use appropriate versions to resolve 64-to-32-bit data loss warnings.
2016-02-24 13:03:48 -08:00
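
A hedged sketch of what such width-specific wrappers look like (assumed
shapes, not verbatim jemalloc code); ffs_zu() is the size_t variant whose
MinGW regression is fixed two entries above.

    #include <stddef.h>
    #include <stdint.h>

    static inline unsigned
    ffs_u32(uint32_t x)
    {
        return ((unsigned)__builtin_ffs((int)x));
    }

    static inline unsigned
    ffs_u64(uint64_t x)
    {
        return ((unsigned)__builtin_ffsll((long long)x));
    }

    static inline unsigned
    ffs_zu(size_t x)
    {
    #if SIZE_MAX > UINT32_MAX
        return (ffs_u64((uint64_t)x));
    #else
        return (ffs_u32((uint32_t)x));
    #endif
    }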
Dmitri Smirnov
b41a07c31a Fix Windows build issues
This resolves #333.
2016-02-23 18:55:45 -08:00
Jason Evans
ae45142adc Collapse arena_avail_tree_* into arena_run_tree_*.
These tree types converged to become identical, yet they still had
independently generated red-black tree implementations.
2016-02-23 18:27:24 -08:00
Dave Watson
3417a304cc Separate arena_avail trees
Separate run trees by index, replacing the previous quantize logic.
Quantization by index is now performed only on insertion / removal from
the tree, and not on node comparison, saving some CPU.  This also means
we don't have to dereference the miscelm* pointers, saving half of the
memory loads from miscelms/mapbits that have fallen out of cache.  A
linear scan of the indices appears to be fast enough.

The only cost of this is an extra tree array in each arena.
2016-02-23 18:09:36 -08:00
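
An illustrative sketch of the resulting structure (the tree count is an
assumption, and the arena_run_tree_* names follow the rb_gen-generated API):
one tree per quantized size index, chosen at insert/remove time, with
first-fit doing a linear scan across indices.

    #define QIND_MAX 64 /* hypothetical number of quantized run sizes */

    static arena_run_tree_t runs_avail[QIND_MAX];

    static void
    avail_insert(size_t qind, arena_chunk_map_misc_t *miscelm)
    {
        /* Quantize once here rather than in every comparator call. */
        arena_run_tree_insert(&runs_avail[qind], miscelm);
    }

    static arena_chunk_map_misc_t *
    avail_first_fit(size_t qind)
    {
        size_t i;

        for (i = qind; i < QIND_MAX; i++) {
            arena_chunk_map_misc_t *miscelm =
                arena_run_tree_first(&runs_avail[i]);
            if (miscelm != NULL)
                return (miscelm);
        }
        return (NULL);
    }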
Dave Watson
2b1fc90b7b Remove rbt_nil
Since this is an intrusive tree, rbt_nil is a full node in its own right
and can be quite large.  For example, miscelm is ~100 bytes.
2016-02-23 18:09:25 -08:00
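
A sketch of the space issue with a hypothetical node type; rb_node() and the
struct layouts below are simplified from jemalloc's rb.h.

    struct node_s {
        rb_node(struct node_s) link; /* intrusive left/right/color links */
        char payload[88];            /* large payload, as with miscelm */
    };

    /* Before: each tree header embedded a full sentinel node. */
    struct tree_before_s {
        struct node_s *root;
        struct node_s rbt_nil; /* ~sizeof(struct node_s) spent per tree */
    };

    /* After: NULL terminates the tree; the header is one pointer. */
    struct tree_after_s {
        struct node_s *root;
    };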
Jason Evans
0da8ce1e96 Use table lookup for run_quantize_{floor,ceil}().
Reduce run quantization overhead by generating lookup tables during
bootstrapping, and using the tables for all subsequent run quantization.
2016-02-22 16:47:34 -08:00
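
A minimal sketch of the lookup-table approach (array names, bounds, and the
*_compute() helpers are assumptions): fill the tables once during bootstrap,
then answer every later query with an index.

    static size_t run_quantize_floor_tab[RUN_QUANTIZE_MAX_PAGES];
    static size_t run_quantize_ceil_tab[RUN_QUANTIZE_MAX_PAGES];

    static void
    run_quantize_boot(void)
    {
        size_t i;

        for (i = 0; i < RUN_QUANTIZE_MAX_PAGES; i++) {
            size_t run_size = (i + 1) << LG_PAGE;
            run_quantize_floor_tab[i] = run_quantize_floor_compute(run_size);
            run_quantize_ceil_tab[i] = run_quantize_ceil_compute(run_size);
        }
    }

    static size_t
    run_quantize_floor(size_t size)
    {
        /* size is a nonzero multiple of the page size. */
        return (run_quantize_floor_tab[(size >> LG_PAGE) - 1]);
    }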
Jason Evans
a9a4684792 Test run quantization.
Also rename run_quantize_*() to improve clarity.  These tests
demonstrate that run_quantize_ceil() is flawed.
2016-02-22 14:58:05 -08:00
Jason Evans
817d9030a5 Indentation style cleanup. 2016-02-22 10:44:58 -08:00
Jason Evans
9bad079039 Refactor time_* into nstime_*.
Use a single uint64_t in nstime_t to store nanoseconds rather than using
struct timespec.  This reduces fragility around conversions between long
and uint64_t, especially missing casts that only cause problems on
32-bit platforms.
2016-02-21 21:39:05 -08:00
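
A sketch of the representation described above (helper names follow the
nstime_* convention, but the details are assumed):

    #include <stdint.h>

    #define BILLION UINT64_C(1000000000)

    typedef struct {
        uint64_t ns; /* all time state in one uint64_t, not a timespec */
    } nstime_t;

    static inline uint64_t
    nstime_sec(const nstime_t *time)
    {
        return (time->ns / BILLION);
    }

    static inline uint64_t
    nstime_nsec(const nstime_t *time)
    {
        return (time->ns % BILLION);
    }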
Jason Evans
56139dc403 Remove _WIN32-specific struct timespec declaration.
struct timespec is already defined by the system (at least on MinGW).
2016-02-20 23:43:17 -08:00
Jason Evans
ecae12323d Fix overflow in prng_range().
Add jemalloc_ffs64() and use it instead of jemalloc_ffsl() in
prng_range(), since long is not guaranteed to be a 64-bit type.
2016-02-20 23:41:33 -08:00
Jason Evans
aac93f414e Add symbol mangling for prng_[lg_]range(). 2016-02-20 11:26:00 -08:00
rustyx
3c2c5a5071 Fix warning in ipalloc 2016-02-20 10:55:18 -08:00
rustyx
46e0b2301c Detect LG_SIZEOF_PTR depending on MSVC platform target 2016-02-20 10:50:24 -08:00
Christopher Ferris
effaf7d40f Fix a typo in the ckh_search() prototype. 2016-02-20 10:26:17 -08:00
Jason Evans
a0aaad1afa Handle unaligned keys in hash().
Reported by Christopher Ferris <cferris@google.com>.
2016-02-20 10:23:48 -08:00
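
The usual fix, sketched (jemalloc's hash() wraps MurmurHash3; the helper
below is illustrative): load key blocks through memcpy() instead of
dereferencing a cast pointer, which is undefined for misaligned addresses on
strict-alignment targets.

    #include <stdint.h>
    #include <string.h>

    static inline uint32_t
    hash_get_block_32(const uint8_t *p, size_t i)
    {
        uint32_t block;

        /* memcpy() compiles down to a plain load where alignment allows. */
        memcpy(&block, p + i * sizeof(uint32_t), sizeof(uint32_t));
        return (block);
    }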
Jason Evans
243f7a0508 Implement decay-based unused dirty page purging.
This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.

Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time

This resolves #325.
2016-02-19 20:56:21 -08:00
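
A usage sketch against the new mallctls (error handling elided; the helper
name is hypothetical): adjust one arena's decay time at runtime via
mallctl().

    #include <stdio.h>
    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    static int
    set_arena_decay_time(unsigned arena_ind, ssize_t decay_time)
    {
        char ctl[64];

        /* e.g. "arena.0.decay_time"; "arenas.decay_time" sets the default. */
        snprintf(ctl, sizeof(ctl), "arena.%u.decay_time", arena_ind);
        return (mallctl(ctl, NULL, NULL, &decay_time, sizeof(decay_time)));
    }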
Jason Evans
8e82af1166 Implement smoothstep table generation.
Check in a generated smootherstep table as smoothstep.h rather than
generating it at configure time, since not all systems (e.g. Windows)
have dc.
2016-02-19 20:56:15 -08:00
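
For reference, the smootherstep polynomial the table encodes (a standard
formula; the fixed-point encoding of the checked-in smoothstep.h is
jemalloc's own):

    /* smootherstep(x) = 6x^5 - 15x^4 + 10x^3 for x in [0, 1]. */
    static inline double
    smootherstep(double x)
    {
        return (x * x * x * (x * (x * 6.0 - 15.0) + 10.0));
    }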
Jason Evans
db927b6727 Refactor arenas_cache tsd.
Refactor arenas_cache tsd into arenas_tdata, which is a structure of
type arena_tdata_t.
2016-02-19 20:32:37 -08:00
Jason Evans
578cd16581 Refactor arena_malloc_hard() out of arena_malloc(). 2016-02-19 20:32:32 -08:00
Jason Evans
34676d3369 Refactor prng* from cpp macros into inline functions.
Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
prng_range().
2016-02-19 20:29:06 -08:00
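
A hedged sketch of the resulting inline pair (the LCG constants are Knuth's
widely used 64-bit multiplier and increment; names and details may differ
from jemalloc's prng.h):

    #include <assert.h>
    #include <stdint.h>

    static inline uint64_t
    prng_lg_range(uint64_t *state, unsigned lg_range)
    {
        assert(lg_range > 0 && lg_range <= 64);

        /* Advance a 64-bit linear congruential generator. */
        *state = (*state * UINT64_C(6364136223846793005)) +
            UINT64_C(1442695040888963407);
        /* The high bits are the strongest; keep lg_range of them. */
        return (*state >> (64 - lg_range));
    }

    /* Uniform draw in [0, range) via rejection sampling. */
    static inline uint64_t
    prng_range(uint64_t *state, uint64_t range)
    {
        uint64_t ret;
        unsigned lg_range;

        assert(range > 1);
        /* Smallest lg_range such that 2^lg_range >= range. */
        lg_range = 64 - (unsigned)__builtin_clzll(range - 1);
        do {
            ret = prng_lg_range(state, lg_range);
        } while (ret >= range);
        return (ret);
    }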
Jason Evans
c87ab25d18 Use ticker for incremental tcache GC. 2016-02-19 20:29:06 -08:00
Jason Evans
9998000b2b Implement ticker.
Implement ticker, which provides a simple API for ticking off some
number of events before indicating that the ticker has hit its limit.
2016-02-19 20:29:06 -08:00
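
A sketch matching that description (field names assumed; the real type lives
in jemalloc's internal headers):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        int32_t tick;
        int32_t nticks;
    } ticker_t;

    static inline void
    ticker_init(ticker_t *ticker, int32_t nticks)
    {
        ticker->tick = nticks;
        ticker->nticks = nticks;
    }

    /* Returns true once every nticks calls, then rearms itself. */
    static inline bool
    ticker_tick(ticker_t *ticker)
    {
        if (--ticker->tick == 0) {
            ticker->tick = ticker->nticks;
            return (true);
        }
        return (false);
    }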
Jason Evans
94451d184b Flesh out time_*() API. 2016-02-19 20:29:06 -08:00
Cameron Evans
e5d5a4a517 Add time_update(). 2016-02-19 20:29:06 -08:00
Jason Evans
f829009929 Add --with-malloc-conf.
Add --with-malloc-conf, which makes it possible to embed a default
options string during configuration.
2016-02-19 20:29:06 -08:00
Jason Evans
ef349f3f94 Fix arena_sdalloc() line wrapping. 2016-02-19 20:29:06 -08:00
Jason Evans
f9e3459f75 Tweak code to allow compilation of concatenated src/*.c sources.
This resolves #294.
2015-11-12 11:06:41 -08:00
Jason Evans
a6ec1c869e Fix a comment. 2015-11-12 10:51:32 -08:00
Qi Wang
f4a0f32d34 Fast-path improvement: reduce # of branches and unnecessary operations.
- Combine multiple runtime branches into a single malloc_slow check.
- Avoid calling arena_choose / size2index / index2size on fast path.
- A few micro-optimizations.
2015-11-10 14:28:34 -08:00
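
An illustrative sketch, with hypothetical helpers, of folding several
rarely-true conditions into one flag tested on the fast path:

    #include <stdbool.h>
    #include <stddef.h>

    /* Computed once at init from prof/junk/quarantine/... options. */
    static bool malloc_slow;

    void *fast_path_alloc(size_t size);  /* hypothetical */
    void *slow_path_alloc(size_t size);  /* hypothetical */

    void *
    malloc_sketch(size_t size)
    {
        if (__builtin_expect(!malloc_slow, 1))
            return (fast_path_alloc(size)); /* one branch, common case */
        return (slow_path_alloc(size));
    }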
Joshua Kahn
e8ab0ab9c0 Add function to destroy tree
ex_destroy iterates over the tree using post-order traversal so nodes
can be removed and processed by the callback function without paying the
cost to rebalance the tree. The destruction process cannot be stopped
once started.
2015-11-09 15:56:18 -08:00
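
A sketch of the post-order idea with a hypothetical node type (ex_destroy is
the generated instance named in the commit): children are visited before the
parent, so the callback may free each node and no rebalancing ever runs.

    typedef struct node_s node_t;
    struct node_s {
        node_t *left;
        node_t *right;
        /* ... payload ... */
    };

    static void
    destroy_recurse(node_t *node, void (*cb)(node_t *, void *), void *arg)
    {
        if (node == NULL)
            return;
        destroy_recurse(node->left, cb, arg);
        destroy_recurse(node->right, cb, arg);
        /* No descendants remain, so cb() may safely free the node. */
        cb(node, arg);
    }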
Joshua Kahn
13b4015531 Allow const keys for lookup
Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>

This resolves #281.
2015-11-09 15:48:05 -08:00
Steve Dougherty
bd418ce11e Assert compact color bit is unused
Signed-off-by: Joshua Kahn <jkahn@barracuda.com>

This resolves #280.
2015-11-09 15:44:30 -08:00
Nathan Froyd
566d4c0240 use correct macro definitions for clang-cl
clang-cl, an MSVC-compatible frontend built on top of clang, defines
_MSC_VER *and* supports __attribute__ syntax.  The ordering of the
checks in jemalloc_macros.h.in, however, does the wrong thing for
clang-cl, as we want the Windows-specific macro definitions for
clang-cl.  To support this use case, we reorder the checks so that
_MSC_VER is checked first (which includes clang-cl), and then
JEMALLOC_HAVE_ATTR is checked.  No functionality change intended.
2015-11-09 15:27:14 -08:00
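
A simplified sketch of the reordered checks (macro bodies abbreviated from
jemalloc_macros.h.in):

    /* _MSC_VER first, so clang-cl (which defines both _MSC_VER and
     * __attribute__ support) gets the Windows-specific definitions. */
    #ifdef _MSC_VER
    #  define JEMALLOC_ATTR(s)
    #  define JEMALLOC_NOINLINE __declspec(noinline)
    #elif defined(JEMALLOC_HAVE_ATTR)
    #  define JEMALLOC_ATTR(s) __attribute__((s))
    #  define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline)
    #else
    #  define JEMALLOC_ATTR(s)
    #  define JEMALLOC_NOINLINE
    #endif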
Jason Evans
a784e411f2 Fix a xallocx(..., MALLOCX_ZERO) bug.
Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of
large allocations that have been randomly assigned an offset of 0 when
the --enable-cache-oblivious configure option is enabled.  This
addresses a special case missed in d260f442ce (Fix
xallocx(..., MALLOCX_ZERO) bugs.).
2015-09-24 22:21:55 -07:00
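
For context, a minimal use of the affected API (per the jemalloc manual):
xallocx() resizes in place where possible, and MALLOCX_ZERO guarantees any
grown byte range is zeroed.

    #include <jemalloc/jemalloc.h>

    int
    main(void)
    {
        void *ptr = mallocx(4096, 0);
        /* Grow in place if possible; MALLOCX_ZERO zeroes grown bytes. */
        size_t usize = xallocx(ptr, 8192, 0, MALLOCX_ZERO);
        dallocx(ptr, 0);
        return (usize >= 8192 ? 0 : 1);
    }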
Craig Rodrigues
66814c1a52 Fix tsd_boot1() to use explicit 'void' parameter list. 2015-09-20 21:57:32 -07:00
Jason Evans
6d91929e52 Address portability issues on Solaris.
Don't assume the Bourne shell is in /bin/sh when running size_classes.sh.

Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.

This resolves #275.
2015-09-15 10:42:36 -07:00
Jason Evans
708ed79834 Resolve an unsupported special case in arena_prof_tctx_set().
Add arena_prof_tctx_reset() and use it instead of arena_prof_tctx_set()
when resetting the tctx pointer during reallocation, which happens
whenever an originally sampled object is not sampled again during
reallocation.

This regression was introduced by
594c759f37 (Optimize
arena_prof_tctx_set().).
2015-09-14 23:57:58 -07:00
Jason Evans
ea8d97b897 Fix prof_{malloc,free}_sample_object() call order in prof_realloc().
Fix prof_realloc() to call prof_free_sampled_object() after calling
prof_malloc_sample_object().  Prior to this fix, if tctx and old_tctx
were the same, the tctx could have been prematurely destroyed.
2015-09-14 23:57:52 -07:00
Jason Evans
cec0d63d8b Make one call to prof_active_get_unlocked() per allocation event.
Make one call to prof_active_get_unlocked() per allocation event, and
use the result throughout the relevant functions that handle an
allocation event.  Also add a missing check in prof_realloc().  These
fixes protect allocation events against concurrent prof_active changes.
2015-09-14 23:55:48 -07:00
Jason Evans
676df88e48 Rename arena_maxclass to large_maxclass.
arena_maxclass is no longer an appropriate name, because arenas also
manage huge allocations.
2015-09-11 20:50:20 -07:00
Jason Evans
560a4e1e01 Fix xallocx() bugs.
Fix xallocx() bugs related to the 'extra' parameter when specified as
non-zero.
2015-09-11 20:40:34 -07:00
Jason Evans
a00b10735a Fix "prof.reset" mallctl-related corruption.
Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl).  This
bug could cause data structure corruption that would most likely result
in a segfault.
2015-09-09 23:16:10 -07:00
Jason Evans
b4330b02a8 Fix pointer comparison with undefined behavior.
This didn't cause bad code generation in the one case spot-checked (gcc
4.8.1), but had the potential to do so.  This bug was introduced by
594c759f37 (Optimize
arena_prof_tctx_set().).
2015-09-04 10:31:41 -07:00
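
An illustration of the hazard and the usual remedy (not the exact code from
the commit): relational comparison of pointers into different objects is
undefined in C, while comparison of uintptr_t values is well defined.

    #include <stdbool.h>
    #include <stdint.h>

    static inline bool
    addr_precedes(const void *a, const void *b)
    {
        /* UB when a and b point into different objects:
         *     return ((const char *)a < (const char *)b);
         * Well defined after casting to integers: */
        return ((uintptr_t)a < (uintptr_t)b);
    }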