Commit Graph

813 Commits

Author SHA1 Message Date
Jason Evans
c1baa0a9b7 Add huge page configuration and pages_[no]huge().
Add the --with-lg-hugepage configure option, but automatically configure
LG_HUGEPAGE even if it isn't specified.

Add the pages_[no]huge() functions, which toggle huge page state via
madvise(..., MADV_[NO]HUGEPAGE) calls.
2016-12-26 17:59:34 -08:00
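
As a rough illustration of the mechanism described above (a minimal sketch with an assumed helper name, not jemalloc's actual pages_[no]huge() code; MADV_HUGEPAGE/MADV_NOHUGEPAGE are Linux-specific):

/* Toggle transparent huge page state for a page-aligned region. */
#include <stddef.h>
#include <stdbool.h>
#include <sys/mman.h>

static bool
pages_huge_sketch(void *addr, size_t size, bool huge) {
#if defined(MADV_HUGEPAGE) && defined(MADV_NOHUGEPAGE)
	int advice = huge ? MADV_HUGEPAGE : MADV_NOHUGEPAGE;
	/* Return false on success, true on error (jemalloc's convention). */
	return (madvise(addr, size, advice) != 0);
#else
	(void)addr; (void)size; (void)huge;
	return true; /* Unsupported on this platform. */
#endif
}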
Jason Evans
eab3b180e5 Fix JSON-mode output for !config_stats and/or !config_prof cases.
These bugs were introduced by 0ba5b9b618
(Add "J" (JSON) support to malloc_stats_print().), which was backported
as b599b32280 (with the same bugs except
the inapplicable "metatata" misspelling) and first released in 4.3.0.
2016-12-23 11:15:44 -08:00
Jason Evans
bacb6afc6c Simplify arena_slab_regind().
Rewrite arena_slab_regind() to provide sufficient constant data for
the compiler to perform division strength reduction.  This replaces
more general manual strength reduction that was implemented before
arena_bin_info was compile-time-constant.  It would be possible to
slightly improve on the compiler-generated division code by taking
advantage of range limits that the compiler doesn't know about.
2016-12-23 10:34:34 -08:00
Dave Watson
2319152d9f jemalloc cpp new/delete bindings
Adds cpp bindings for jemalloc, along with necessary autoconf settings.
This is mostly to add sized deallocation support, which can't be added
from C directly.  Sized deallocation yields a ~10% microbenchmark improvement.

* Import ax_cxx_compile_stdcxx.m4 from the autoconf repo; this seems like
  the easiest way to get C++14 detection.
* Adds various other changes, like CXXFLAGS, to configure.ac.
* Adds new rules to Makefile.in for src/jemalloc-cpp.cpp, and a basic
  unittest.
* Both new and delete are overridden, to ensure jemalloc is used for
  both.
* TODO future enhancement of avoiding extra PLT thunks for new and
  delete - sdallocx and malloc are publicly exported jemalloc symbols,
  using an alias would link them directly.  Unfortunately, was having
  trouble getting it to play nice with jemalloc's namespace support.

Testing:
Tested gcc 4.8, gcc 5, gcc 5.2, and clang 4.0.  Only gcc >= 5 has sized
deallocation support; verified that the rest build correctly.

Tested Mac OS X and CentOS.

Tested --with-jemalloc-prefix and --without-export.

This resolves #202.
2016-12-12 18:36:06 -08:00
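
For context, the benefit of sized deallocation can be illustrated from C through jemalloc's existing non-standard API; the commit itself wires the C++ sized operator delete to the same path.  A minimal usage sketch:

#include <jemalloc/jemalloc.h>

void
sized_dealloc_example(void) {
	size_t sz = 128;
	void *p = mallocx(sz, 0);         /* allocate 128 bytes */
	if (p != NULL) {
		/* Sized free: the caller supplies sz, so the allocator can
		 * skip the size-class lookup on the deallocation path. */
		sdallocx(p, sz, 0);
	}
}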
Jason Evans
acb7b1f53e Add --disable-syscall.
This resolves #517.
2016-12-03 16:50:58 -08:00
Jason Evans
5234be2133 Add pthread_atfork(3) feature test.
Some versions of Android provide a pthreads library without providing
pthread_atfork(), so in practice a separate feature test is necessary
for the latter.
2016-11-17 15:14:57 -08:00
Jason Evans
a64123ce13 Refactor madvise(2) configuration.
Add feature tests for the MADV_FREE and MADV_DONTNEED flags to
madvise(2), so that MADV_FREE is detected and used for Linux kernel
versions 4.5 and newer.  Refactor pages_purge() so that on systems which
support both flags, MADV_FREE is preferred over MADV_DONTNEED.

This resolves #387.
2016-11-17 10:31:57 -08:00
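
A hedged sketch of the preference order described above (assumed function name, not jemalloc's exact pages_purge() code):

#include <stddef.h>
#include <stdbool.h>
#include <sys/mman.h>

static bool
pages_purge_sketch(void *addr, size_t size) {
#if defined(MADV_FREE)
	/* Preferred: lazily reclaimed, cheaper than MADV_DONTNEED. */
	return (madvise(addr, size, MADV_FREE) != 0);
#elif defined(MADV_DONTNEED)
	return (madvise(addr, size, MADV_DONTNEED) != 0);
#else
	(void)addr; (void)size;
	return true; /* No purging facility available. */
#endif
}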
Jason Evans
aec5a051e8 Avoid gcc type-limits warnings. 2016-11-16 18:28:38 -08:00
Maks Naumov
95974c0440 Remove size_t -> unsigned -> size_t conversion. 2016-11-16 11:23:31 -08:00
Jason Evans
8a4528bdd1 Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *).
This avoids warnings in some cases, and is otherwise generally good
hygiene.
2016-11-15 15:01:03 -08:00
Jason Evans
a38acf716e Add extent serial numbers.
Add extent serial numbers and use them where appropriate as a sort key
that is higher priority than address, so that the allocation policy
prefers older extents.

This resolves #147.
2016-11-15 13:08:33 -08:00
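
A sketch of the sort key described above, with hypothetical struct and field names (the real extent_t layout differs): serial number first, address as the tie breaker, so older extents sort earlier and are preferred.

#include <stddef.h>
#include <stdint.h>

typedef struct {
	size_t    sn;   /* serial number, assigned at extent creation */
	uintptr_t addr; /* base address */
} extent_sketch_t;

static int
extent_sn_addr_comp(const extent_sketch_t *a, const extent_sketch_t *b) {
	if (a->sn != b->sn) {
		return (a->sn < b->sn) ? -1 : 1;
	}
	if (a->addr != b->addr) {
		return (a->addr < b->addr) ? -1 : 1;
	}
	return 0;
}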
Jason Evans
c0a667112c Fix arena_reset() crashing bug.
This regression was caused by 498856f44a
(Move slabs out of chunks.).
2016-11-15 10:34:02 -08:00
Jason Evans
cda59f9970 Rename atomic_*_{uint32,uint64,u}() to atomic_*_{u32,u64,zu}().
This change conforms to naming conventions throughout the codebase.
2016-11-07 11:27:48 -08:00
Jason Evans
04b463546e Refactor prng to not use 64-bit atomics on 32-bit platforms.
This resolves #495.
2016-11-07 10:52:44 -08:00
Jason Evans
a967fae362 Fix/simplify extent_recycle() allocation size computations.
Do not call s2u() during alloc_size computation, since any necessary
ceiling increase is taken care of later by extent_first_best_fit() -->
extent_size_quantize_ceil(), and the s2u() call may erroneously cause a
higher quantization result.

Remove an overly strict overflow check that was added in
4a7852137d (Fix extent_recycle()'s
cache-oblivious padding support.).
2016-11-03 23:49:21 -07:00
Jason Evans
4a7852137d Fix extent_recycle()'s cache-oblivious padding support.
Add padding *after* computing the size class, so that the optimal size
class isn't skipped during search for a usable extent.  This regression
was caused by b46261d58b (Implement
cache-oblivious support for huge size classes.).
2016-11-03 22:33:35 -07:00
Jason Evans
ea9961acdb Fix psz/pind edge cases.
Add an "over-size" extent heap in which to store extents which exceed
the maximum size class (plus cache-oblivious padding, if enabled).
Remove psz2ind_clamp() and use psz2ind() instead so that trying to
allocate the maximum size class can in principle succeed.  In practice,
this allows assertions to hold so that OOM errors can be successfully
generated.
2016-11-03 22:33:34 -07:00
Jason Evans
8dd5ea87ca Fix extent_alloc_cache[_locked]() to support decommitted allocation.
Fix extent_alloc_cache[_locked]() to support decommitted allocation, and
use this ability in arena_stash_dirty(), so that decommitted extents are
not needlessly committed during purging.  In practice this does not
happen on any currently supported systems, because both extent merging
and decommit must be implemented; all supported systems implement one
xor the other.
2016-11-03 22:33:23 -07:00
Dave Watson
25f7bbcf28 Fix long spinning in rtree_node_init
rtree_node_init spinlocks the node, allocates, and then sets the node.
This is under heavy contention at the top of the tree if many threads
start to allocate at the same time.

Instead, take a per-rtree sleeping mutex to reduce spinning.  Tested
both pthreads and OS X OSSpinLock; both reduce spinning adequately.

Previous benchmark time:
./ttest1 500 100
~15s

New benchmark time:
./ttest1 500 100
.57s
2016-11-02 20:30:53 -07:00
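
A sketch of the pattern described above, with hypothetical names and simplified synchronization: take a per-tree sleeping mutex and re-check under it, so only one thread allocates while the rest block instead of spinning.

#include <stdlib.h>
#include <pthread.h>

typedef struct {
	pthread_mutex_t init_lock; /* assumed to be initialized elsewhere */
	void *volatile  root;
} rtree_sketch_t;

static void *
rtree_node_init_sketch(rtree_sketch_t *rtree, size_t nbytes) {
	void *node = rtree->root;
	if (node == NULL) {
		pthread_mutex_lock(&rtree->init_lock);
		node = rtree->root;          /* Re-check under the mutex. */
		if (node == NULL) {
			node = calloc(1, nbytes);
			rtree->root = node;  /* Publish the initialized node. */
		}
		pthread_mutex_unlock(&rtree->init_lock);
	}
	return node;
}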
Dave Watson
712fde79fd Check for existence of CPU_COUNT macro before using it.
This resolves #485.
2016-11-02 20:05:40 -07:00
Jason Evans
d82f2b3473 Do not use syscall(2) on OS X 10.12 (deprecated). 2016-11-02 19:18:33 -07:00
Jason Evans
795f6689de Add os_unfair_lock support.
OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended
replacement.
2016-11-02 18:09:45 -07:00
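
Minimal usage sketch of the replacement API (macOS 10.12+ only; not jemalloc's wrapper code):

#include <os/lock.h>

static os_unfair_lock lock_sketch = OS_UNFAIR_LOCK_INIT;

static void
run_locked(void (*fn)(void)) {
	os_unfair_lock_lock(&lock_sketch);
	fn();
	os_unfair_lock_unlock(&lock_sketch);
}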
Jason Evans
d9f7b2a430 Fix/refactor zone allocator integration code.
Fix zone_force_unlock() to reinitialize, rather than unlocking mutexes,
since OS X 10.12 cannot tolerate a child unlocking mutexes that were
locked by its parent.

Refactor; this was a side effect of experimenting with zone
{de,re}registration during fork(2).
2016-11-02 18:06:40 -07:00
Jason Evans
7b0a8b74f0 malloc_stats_print() fixes/cleanups.
Fix and clean up various malloc_stats_print() issues caused by
0ba5b9b618 (Add "J" (JSON) support to
malloc_stats_print().).
2016-11-01 15:26:35 -07:00
Jason Evans
0ba5b9b618 Add "J" (JSON) support to malloc_stats_print().
This resolves #474.
2016-10-31 22:30:49 -07:00
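
Usage sketch: the "J" character in the opts string selects the JSON output added by this commit.

#include <stdio.h>
#include <jemalloc/jemalloc.h>

static void
write_cb(void *opaque, const char *s) {
	fputs(s, (FILE *)opaque);
}

static void
dump_stats_json(void) {
	/* Emit allocator statistics as JSON to stdout. */
	malloc_stats_print(write_cb, stdout, "J");
}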
Jason Evans
b93f63b3eb Fix extent_rtree acquire() to release element on error.
This resolves #480.
2016-10-31 16:32:33 -07:00
Jason Evans
6c80321aed Use CLOCK_MONOTONIC_COARSE rather than CLOCK_MONOTONIC_RAW.
The raw clock variant is slow (even relative to plain CLOCK_MONOTONIC),
whereas the coarse clock variant is faster than CLOCK_MONOTONIC, but
still has resolution (~1ms) that is adequate for our purposes.

This resolves #479.
2016-10-29 22:58:18 -07:00
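
A sketch of the clock preference described above (CLOCK_MONOTONIC_COARSE is Linux-specific; the helper name is assumed):

#include <time.h>

static void
get_coarse_time(struct timespec *ts) {
#if defined(CLOCK_MONOTONIC_COARSE)
	/* ~1ms resolution, but cheaper than CLOCK_MONOTONIC. */
	clock_gettime(CLOCK_MONOTONIC_COARSE, ts);
#else
	clock_gettime(CLOCK_MONOTONIC, ts);
#endif
}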
Jason Evans
d87037a62c Use syscall(2) rather than {open,read,close}(2) during boot.
Some applications wrap various system calls, and if they call the
allocator in their wrappers, unexpected reentry can result.  This is not
a general solution (many other syscalls are spread throughout the code),
but this resolves a bootstrapping issue that is apparently common.

This resolves #443.
2016-10-29 22:41:04 -07:00
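
A sketch of the idea (Linux-specific; SYS_open is absent on some newer architectures, hence the fallback; helper name assumed, not jemalloc's actual boot code): issue raw syscalls during bootstrap so that possibly-wrapped libc open/read/close are bypassed.

#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

static long
boot_read_file(const char *path, char *buf, size_t len) {
#if defined(SYS_open) && defined(SYS_read) && defined(SYS_close)
	long fd = syscall(SYS_open, path, O_RDONLY);
	if (fd < 0) return -1;
	long nread = syscall(SYS_read, fd, buf, len);
	syscall(SYS_close, fd);
	return nread;
#else
	int fd = open(path, O_RDONLY);
	if (fd < 0) return -1;
	ssize_t nread = read(fd, buf, len);
	close(fd);
	return (long)nread;
#endif
}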
Jason Evans
1dcd0aa07f Do not mark malloc_conf as weak on Windows.
This works around malloc_conf not being properly initialized by at least
the cygwin toolchain.  Prior build system changes to use
-Wl,--[no-]whole-archive may be necessary for malloc_conf resolution to
work properly as a non-weak symbol (not tested).
2016-10-29 00:13:11 -07:00
Jason Evans
6ec2d8e279 Do not mark malloc_conf as weak for unit tests.
This is generally correct (no need for weak symbols since no jemalloc
library is involved in the link phase), and avoids linking problems
(apparently uninitialized non-NULL malloc_conf) when using cygwin with
gcc.
2016-10-28 23:03:25 -07:00
Dave Watson
8309388408 Support static linking of jemalloc with glibc
glibc defines its malloc implementation with several weak and strong
symbols:

strong_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
strong_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
strong_alias (__libc_free, __free) strong_alias (__libc_free, free)
strong_alias (__libc_malloc, __malloc) strong_alias (__libc_malloc, malloc)

The issue is not with the weak symbols, but that other parts of glibc
depend on __libc_malloc explicitly.  Defining them in terms of jemalloc
API's allows the linker to drop glibc's malloc.o completely from the link,
and static linking no longer results in symbol collisions.

Another wrinkle: jemalloc during initialization calls sysconf to
get the number of CPUs.  GLIBC allocates for the first time before
setting up isspace (and other related) tables, which are used by
sysconf.  Instead, use the pthread API to get the number of
CPUs with GLIBC, which seems to work.

This resolves #442.
2016-10-28 15:08:19 -07:00
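
A sketch of the glibc-friendly CPU count mentioned in the last paragraph, assuming the GNU-specific affinity API (not necessarily the exact call jemalloc uses):

#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>

static unsigned
ncpus_sketch(void) {
	cpu_set_t set;
	CPU_ZERO(&set);
	/* Avoids sysconf(_SC_NPROCESSORS_ONLN), which allocates early in
	 * glibc's initialization. */
	if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set) != 0) {
		return 1; /* Conservative fallback. */
	}
	return (unsigned)CPU_COUNT(&set);
}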
Jason Evans
68e14c9884 Fix over-sized allocation of rtree leaf nodes.
Use the correct level metadata when allocating child nodes so that leaf
nodes don't end up over-sized (2^16 elements vs 2^4 elements).
2016-10-28 00:16:55 -07:00
Jason Evans
977103c897 Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *).
This avoids warnings in some cases, and is otherwise generally good
hygiene.
2016-10-27 21:31:25 -07:00
Jason Evans
b54d160dc4 Do not (recursively) allocate within tsd_fetch().
Refactor tsd so that tsdn_fetch() does not trigger allocation, since
allocation could cause infinite recursion.

This resolves #458.
2016-10-20 23:59:12 -07:00
Jason Evans
577d4572b0 Make dss operations lockless.
Rather than protecting dss operations with a mutex, use atomic
operations.  This has negligible impact on synchronization overhead
during typical dss allocation, but is a substantial improvement for
extent_in_dss() and the newly added extent_dss_mergeable(), which can be
called multiple times during extent deallocations.

This change also has the advantage of avoiding tsd in deallocation paths
associated with purging, which resolves potential deadlocks during
thread exit due to attempted tsd resurrection.

This resolves #425.
2016-10-13 15:37:00 -07:00
Jason Evans
e5effef428 Add/use adaptive spinning.
Add spin_t and spin_{init,adaptive}(), which provide a simple
abstraction for adaptive spinning.

Adaptively spin during busy waits in bootstrapping and rtree node
initialization.
2016-10-13 14:55:39 -07:00
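
A sketch of the adaptive-spin shape described above, with assumed names and an assumed tuning constant: spin briefly for the first few iterations, then start yielding the CPU.

#include <sched.h>

typedef struct {
	unsigned iteration;
} spin_sketch_t;

#define SPIN_LIMIT 6U /* assumed tuning constant */

static void
spin_adaptive_sketch(spin_sketch_t *spin) {
	if (spin->iteration < SPIN_LIMIT) {
		volatile unsigned i;
		for (i = 0; i < (1U << spin->iteration); i++) {
			/* Busy-wait briefly; a real implementation would
			 * issue a CPU pause instruction here. */
		}
		spin->iteration++;
	} else {
		sched_yield(); /* Give up the CPU once spinning gets long. */
	}
}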
Jason Evans
9acd5cf178 Remove all vestiges of chunks.
Remove mallctls:
- opt.lg_chunk
- stats.cactive

This resolves #464.
2016-10-12 11:55:43 -07:00
Jason Evans
63b5657aa5 Remove ratio-based purging.
Make decay-based purging the default (and only) mode.

Remove associated mallctls:
- opt.purge
- opt.lg_dirty_mult
- arena.<i>.lg_dirty_mult
- arenas.lg_dirty_mult
- stats.arenas.<i>.lg_dirty_mult

This resolves #385.
2016-10-12 10:40:27 -07:00
Jason Evans
b4b4a77848 Fix and simplify decay-based purging.
Simplify decay-based purging attempts to only be triggered when the
epoch is advanced, rather than every time purgeable memory increases.
In a correctly functioning system (not previously the case; see below),
this only causes a behavior difference if during subsequent purge
attempts the least recently used (LRU) purgeable memory extent is
initially too large to be purged, but that memory is reused between
attempts and one or more of the next LRU purgeable memory extents are
small enough to be purged.  In practice this is an arbitrary behavior
change that is within the set of acceptable behaviors.

As for the purging fix, assure that arena->decay.ndirty is recorded
*after* the epoch advance and associated purging occurs.  Prior to this
fix, it was possible for purging during epoch advance to cause a
substantially underrepresentative (arena->ndirty - arena->decay.ndirty),
i.e. the number of dirty pages attributed to the current epoch was too
low, and a series of unintended purges could result.  This fix is also
relevant in the context of the simplification described above, but the
bug's impact would be limited to over-purging at epoch advances.
2016-10-11 15:30:01 -07:00
Jason Evans
5f11fb7d43 Do not advance decay epoch when time goes backwards.
Instead, move the epoch backward in time.  Additionally, add
nstime_monotonic() and use it in debug builds to assert that time only
goes backward if nstime_update() is using a non-monotonic time source.
2016-10-10 22:15:10 -07:00
Jason Evans
ee0c74b77a Refactor arena->decay_* into arena->decay.* (arena_decay_t). 2016-10-10 20:32:19 -07:00
Jason Evans
e0164bc63c Refine nstime_update().
Add missing #include <time.h>.  The critical time facilities appear to
have been transitively included via unistd.h and sys/time.h, but in
principle this omission was capable of having caused
clock_gettime(CLOCK_MONOTONIC, ...) to have been overlooked in favor of
gettimeofday(), which in turn could cause spurious non-monotonic time
updates.

Refactor nstime_get() out of nstime_update() and add configure tests for
all variants.

Add CLOCK_MONOTONIC_RAW support (Linux-specific) and
mach_absolute_time() support (OS X-specific).

Do not fall back to clock_gettime(CLOCK_REALTIME, ...).  This was a
fragile Linux-specific workaround, which we're unlikely to use at all
now that clock_gettime(CLOCK_MONOTONIC_RAW, ...) is supported, and if we
have no choice besides non-monotonic clocks, gettimeofday() is only
incrementally worse.
2016-10-10 10:33:59 -07:00
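
A simplified sketch of the clock selection (Linux/POSIX portion only; the OS X mach_absolute_time() branch is omitted and the helper name is assumed):

#include <stdint.h>
#include <time.h>
#include <sys/time.h>

static uint64_t
nstime_get_sketch(void) {
#if defined(CLOCK_MONOTONIC_RAW)
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
#elif defined(CLOCK_MONOTONIC)
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
#else
	/* Last resort: non-monotonic wall clock. */
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return (uint64_t)tv.tv_sec * 1000000000ULL + (uint64_t)tv.tv_usec * 1000ULL;
#endif
}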
Jason Evans
b6c0867142 Reduce "thread.arena" mallctl contention.
This resolves #460.
2016-10-04 09:54:18 -07:00
Jason Evans
a5a8d7ae8d Remove a size class assertion from extent_size_quantize_floor().
Extent coalescence can result in legitimate calls to
extent_size_quantize_floor() with size larger than LARGE_MAXCLASS.
2016-10-03 14:45:27 -07:00
Jason Evans
871a9498e1 Fix size class overflow bugs.
Avoid calling s2u() on raw extent sizes in extent_recycle().

Clamp psz2ind() (implemented as psz2ind_clamp()) when inserting/removing
into/from size-segregated extent heaps.
2016-10-03 14:18:55 -07:00
Jason Evans
3c8c3e9e9b Close file descriptor after reading "/proc/sys/vm/overcommit_memory".
This bug was introduced by c2f970c32b
(Modify pages_map() to support mapping uncommitted virtual memory.).

This resolves #399.
2016-09-26 15:55:40 -07:00
Jason Evans
5ff1839133 Formatting fixes. 2016-09-26 11:00:32 -07:00
Jason Evans
0222fb41d1 Add various mutex ownership assertions. 2016-09-23 12:21:34 -07:00
Jason Evans
e3187ec6b6 Fix large_dalloc_impl() to always lock large_mtx. 2016-09-23 12:21:34 -07:00
Jason Evans
fd96974040 Add new_addr validation in extent_recycle(). 2016-09-23 12:21:25 -07:00
Jason Evans
f6d01ff4b7 Protect extents_dirty access with extents_mtx.
This fixes race conditions during purging.
2016-09-22 11:57:28 -07:00
Jason Evans
bc49157d21 Fix extent_recycle() to exclude other arenas' extents.
When attempting to recycle an extent at a specified address, check that
the extent belongs to the correct arena.
2016-09-22 11:53:19 -07:00
Qi Wang
1cb399b630 Fix arena_bind().
When tsd is not in nominal state (e.g. during thread termination), we
should not increment nthreads.
2016-09-22 09:13:45 -07:00
Mike Hommey
19c9a3e828 Change how the default zone is found
On OSX 10.12, malloc_default_zone returns a special zone that is not
present in the list of registered zones. That zone uses a "lite zone"
if one is present (apparently enabled when malloc stack logging is
enabled), or the first registered zone otherwise. In practice this
means unless malloc stack logging is enabled, the first registered
zone is the default.

So get the list of zones to get the first one, instead of relying on
malloc_default_zone.
2016-07-08 13:35:35 +09:00
Mike Hommey
4abaee5d13 Avoid getting the same default zone twice in a row.
847ff22 added a call to malloc_default_zone() before the main loop in
register_zone, effectively causing malloc_default_zone() to be called
twice with no difference expected in the returned result.

It is also called once at the beginning, and a second time at the end
of the loop block.

Instead, call it only once per iteration.
2016-07-08 13:28:16 +09:00
Jason Evans
dd752c1ffd Fix potential VM map fragmentation regression.
Revert 245ae6036c (Support --with-lg-page
values larger than actual page size.), because it could cause VM map
fragmentation if the kernel grows mmap()ed memory downward.

This resolves #391.
2016-06-07 14:15:49 -07:00
Elliot Ronaghan
de23f6fce7 Fix mixed decl in nstime.c
Fix mixed decl in the gettimeofday() branch of nstime_update()
2016-06-07 14:03:27 -07:00
Jason Evans
cc289f40b6 Propagate tsdn to default extent hooks.
This avoids bootstrapping issues for configurations that require
allocation during tsd initialization.

This resolves #390.
2016-06-07 13:37:22 -07:00
Jason Evans
02a475d89a Use extent_commit_wrapper() rather than directly calling commit hook.
As a side effect this causes the extent's 'committed' flag to be
updated.
2016-06-06 15:32:01 -07:00
Jason Evans
10b9087b14 Set 'committed' in extent_[de]commit_wrapper(). 2016-06-05 23:24:52 -07:00
Jason Evans
487093d999 Fix regressions related to extent splitting failures.
Fix a fundamental extent_split_wrapper() bug in an error path.

Fix extent_recycle() to deregister unsplittable extents before leaking
them.

Relax xallocx() test assertions so that unsplittable extents don't cause
test failures.
2016-06-05 22:08:20 -07:00
Jason Evans
9a645c612f Fix an extent [de]allocation/[de]registration race.
Deregister extents before deallocation, so that subsequent
reallocation/registration doesn't race with deregistration.
2016-06-05 21:00:02 -07:00
Jason Evans
4e910fc958 Fix extent_alloc_dss() regressions.
Page-align the gap, if any, and add/use extent_dalloc_gap(), which
registers the gap extent before deallocation.
2016-06-05 21:00:02 -07:00
Jason Evans
c4bb17f891 Fix gdump triggering regression.
Now that extents are not multiples of chunksize, it's necessary to track
pages rather than chunks.
2016-06-05 21:00:02 -07:00
Jason Evans
04942c3d90 Remove a stray memset(), and fix a junk filling test regression. 2016-06-05 21:00:02 -07:00
Jason Evans
f02fec8839 Silence a bogus compiler warning. 2016-06-05 21:00:02 -07:00
Jason Evans
8835cf3bed Fix locking order reversal in arena_reset(). 2016-06-05 21:00:02 -07:00
Jason Evans
f8f0542194 Modify extent hook functions to take an (extent_t *) argument.
This facilitates the application accessing its own extent allocator
metadata during hook invocations.

This resolves #259.
2016-06-05 21:00:02 -07:00
Jason Evans
6f29a83924 Add rtree lookup path caching.
rtree-based extent lookups remain more expensive than chunk-based run
lookups, but with this optimization the fast path slowdown is ~3 CPU
cycles per metadata lookup (on Intel Core i7-4980HQ), versus ~11 cycles
prior.  The path caching speedup tends to degrade gracefully unless
allocated memory is spread far apart (as is the case when using a
mixture of sbrk() and mmap()).
2016-06-05 20:59:57 -07:00
Jason Evans
7be2ebc23f Make tsd cleanup functions optional, remove noop cleanup functions. 2016-06-05 20:42:24 -07:00
Jason Evans
e28b43a739 Remove some unnecessary locking. 2016-06-05 20:42:24 -07:00
Jason Evans
819417580e Fix rallocx() sampling code to not eagerly commit sampler update.
rallocx() for an alignment-constrained request may end up with a
smaller-than-worst-case size if in-place reallocation succeeds due to
serendipitous alignment.  In such cases, sampling may not happen.
2016-06-05 20:42:24 -07:00
Jason Evans
c8c3cbdf47 Miscellaneous s/chunk/extent/ updates. 2016-06-05 20:42:24 -07:00
Jason Evans
a83a31c1c5 Relax opt_lg_chunk clamping constraints. 2016-06-05 20:42:24 -07:00
Jason Evans
751f2c332d Remove obsolete stats.arenas.<i>.metadata.mapped mallctl.
Rename stats.arenas.<i>.metadata.allocated mallctl to
stats.arenas.<i>.metadata.
2016-06-05 20:42:24 -07:00
Jason Evans
22588dda6e Rename most remaining *chunk* APIs to *extent*. 2016-06-05 20:42:23 -07:00
Jason Evans
0c4932eb1e s/chunk_lookup/extent_lookup/g, s/chunks_rtree/extents_rtree/g 2016-06-05 20:42:23 -07:00
Jason Evans
4a55daa363 s/CHUNK_HOOKS_INITIALIZER/EXTENT_HOOKS_INITIALIZER/g 2016-06-05 20:42:23 -07:00
Jason Evans
c9a76481d8 Rename chunks_{cached,retained,mtx} to extents_{cached,retained,mtx}. 2016-06-05 20:42:23 -07:00
Jason Evans
127026ad98 Rename chunk_*_t hooks to extent_*_t. 2016-06-05 20:42:23 -07:00
Jason Evans
9c305c9e5c s/chunk_hook/extent_hook/g 2016-06-05 20:42:23 -07:00
Jason Evans
7d63fed0fd Rename huge to large. 2016-06-05 20:42:23 -07:00
Jason Evans
714d1640f3 Update private symbols. 2016-06-05 20:42:23 -07:00
Jason Evans
498856f44a Move slabs out of chunks. 2016-06-05 20:42:23 -07:00
Jason Evans
d28e5a6696 Improve interval-based profile dump triggering.
When an allocation is large enough to trigger multiple dumps, use
modular math rather than subtraction to reset the interval counter.
Prior to this change, it was possible for a single allocation to cause
many subsequent allocations to all trigger profile dumps.

When updating usable size for a sampled object, try to cancel out
the difference between LARGE_MINCLASS and usable size from the interval
counter.
2016-06-05 20:42:23 -07:00
Jason Evans
ed2c2427a7 Use huge size class infrastructure for large size classes. 2016-06-05 20:42:18 -07:00
Jason Evans
b46261d58b Implement cache-oblivious support for huge size classes. 2016-06-03 12:27:41 -07:00
Jason Evans
4731cd47f7 Allow chunks to not be naturally aligned.
Precisely size extents for huge size classes that aren't multiples of
chunksize.
2016-06-03 12:27:41 -07:00
Jason Evans
741967e79d Remove CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET(). 2016-06-03 12:27:41 -07:00
Jason Evans
760bf11b23 Add extent_dirty_[gs]et(). 2016-06-03 12:27:41 -07:00
Jason Evans
47613afc34 Convert rtree from per chunk to per page.
Refactor [de]registration to maintain interior rtree entries for slabs.
2016-06-03 12:27:41 -07:00
Jason Evans
5c6be2bdd3 Refactor chunk_purge_wrapper() to take extent argument. 2016-06-03 12:27:41 -07:00
Jason Evans
0eb6f08959 Refactor chunk_[de]commit_wrapper() to take extent arguments. 2016-06-03 12:27:41 -07:00
Jason Evans
6c94470822 Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments.
Rename arena_extent_[d]alloc() to extent_[d]alloc().

Move all chunk [de]registration responsibility into chunk.c.
2016-06-03 12:27:41 -07:00
Jason Evans
de0305a7f3 Add/use chunk_split_wrapper().
Remove redundant ptr/oldsize args from huge_*().

Refactor huge/chunk/arena code boundaries.
2016-06-03 12:27:41 -07:00
Jason Evans
1ad060584f Add/use chunk_merge_wrapper(). 2016-06-03 12:27:41 -07:00
Jason Evans
384e88f451 Add/use chunk_commit_wrapper(). 2016-06-03 12:27:41 -07:00
Jason Evans
56e0031d7d Add/use chunk_decommit_wrapper(). 2016-06-03 12:27:41 -07:00
Jason Evans
4d2d9cec5a Merge chunk_alloc_base() into its only caller. 2016-06-03 12:27:41 -07:00
Jason Evans
fc0372a15e Replace extent_tree_szad_* with extent_heap_*. 2016-06-03 12:27:41 -07:00
Jason Evans
ffa45a5331 Use rtree rather than [sz]ad trees for chunk split/coalesce operations. 2016-06-03 12:27:41 -07:00
Jason Evans
93e79c5c3f Remove redundant chunk argument from chunk_{,de,re}register(). 2016-06-03 12:27:41 -07:00
Jason Evans
f442254bdf Fix opt_zero-triggered in-place huge reallocation zeroing.
Fix huge_ralloc_no_move_expand() to update the extent's zeroed attribute
based on the intersection of the previous value and that of the newly
merged trailing extent.
2016-06-03 12:27:41 -07:00
Jason Evans
d78846c989 Replace extent_achunk_[gs]et() with extent_slab_[gs]et(). 2016-06-03 12:27:41 -07:00
Jason Evans
fae8344098 Add extent_active_[gs]et().
Always initialize extents' runs_dirty and chunks_cache linkage.
2016-06-03 12:27:41 -07:00
Jason Evans
b2a9fae886 Set/unset rtree node for last chunk of extents.
Set/unset rtree node for last chunk of extents, so that the rtree can be
used for chunk coalescing.
2016-06-03 12:27:41 -07:00
Jason Evans
e75e9be130 Add rtree element witnesses. 2016-06-03 12:27:41 -07:00
Jason Evans
8c9be3e837 Refactor rtree to always use base_alloc() for node allocation. 2016-06-03 12:27:41 -07:00
Jason Evans
db72272bef Use rtree-based chunk lookups rather than pointer bit twiddling.
Look up chunk metadata via the radix tree, rather than using
CHUNK_ADDR2BASE().

Propagate pointer's containing extent.

Minimize extent lookups by doing a single lookup (e.g. in free()) and
propagating the pointer's extent into nearly all the functions that may
need it.
2016-06-03 12:27:41 -07:00
Jason Evans
2d2b4e98c9 Add element acquire/release capabilities to rtree.
This makes it possible to acquire short-term "ownership" of rtree
elements so that it is possible to read an extent pointer *and* read the
extent's contents with a guarantee that the element will not be modified
until the ownership is released.  This is intended as a mechanism for
resolving rtree read/write races rather than as a way to lock extents.
2016-06-03 12:27:33 -07:00
Jason Evans
a7a6f5bc96 Rename extent_node_t to extent_t. 2016-05-16 12:21:28 -07:00
Jason Evans
3aea827f5e Simplify run quantization. 2016-05-16 12:21:27 -07:00
Jason Evans
7bb00ae9d6 Refactor runs_avail.
Use pszind_t size classes rather than szind_t size classes, and always
reserve space for NPSIZES elements.  This removes unused heaps that are
not multiples of the page size, and adds (currently) unused heaps for
all huge size classes, with the immediate benefit that the size of
arena_t allocations is constant (no longer dependent on chunk size).
2016-05-16 12:21:21 -07:00
Jason Evans
226c446979 Implement pz2ind(), pind2sz(), and psz2u().
These compute size classes and indices similarly to size2index(),
index2size() and s2u(), respectively, but using the subset of size
classes that are multiples of the page size.  Note that pszind_t and
szind_t are not interchangeable.
2016-05-13 10:31:54 -07:00
Jason Evans
627372b459 Initialize arena_bin_info at compile time rather than at boot time.
This resolves #370.
2016-05-13 10:31:30 -07:00
Jason Evans
b683734b43 Implement BITMAP_INFO_INITIALIZER(nbits).
This allows static initialization of bitmap_info_t structures.
2016-05-13 10:27:48 -07:00
Jason Evans
17c021c177 Remove redzone support.
This resolves #369.
2016-05-13 10:27:33 -07:00
Jason Evans
ba5c709517 Remove quarantine support. 2016-05-13 10:25:05 -07:00
Jason Evans
9a8add1510 Remove Valgrind support. 2016-05-13 09:56:18 -07:00
Jason Evans
a397045323 Use TSDN_NULL rather than NULL as appropriate. 2016-05-12 21:07:08 -07:00
Jason Evans
1c35f63797 Guard tsdn_tsd() call with tsdn_null() check. 2016-05-11 16:52:58 -07:00
Jason Evans
0fc1317fc6 Mangle tested functions as n_witness_* rather than witness_*_impl. 2016-05-11 16:14:20 -07:00
Jason Evans
73d3d58dc2 Optimize witness fast path.
Short-circuit commonly called witness functions so that they only
execute in debug builds, and remove equivalent guards from mutex
functions.  This avoids pointless code execution in
witness_assert_lockless(), which is typically called twice per
allocation/deallocation function invocation.

Inline commonly called witness functions so that optimized builds can
completely remove calls as dead code.
2016-05-11 15:38:06 -07:00
Jason Evans
7790a0ba40 Fix chunk accounting related to triggering gdump profiles.
Fix in place huge reallocation to update the chunk counters that are
used for triggering gdump profiles.
2016-05-11 00:56:30 -07:00
Jason Evans
c1e00ef2a6 Resolve bootstrapping issues when embedded in FreeBSD libc.
b2c0d6322d (Add witness, a simple online
locking validator.) caused a broad propagation of tsd throughout the
internal API, but tsd_fetch() was designed to fail prior to tsd
bootstrapping.  Fix this by splitting tsd_t into non-nullable tsd_t and
nullable tsdn_t, and modifying all internal APIs that do not critically
rely on tsd to take nullable pointers.  Furthermore, add the
tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
bootstrapping is complete and return NULL if not.  All dangerous
conversions of nullable pointers are tsdn_tsd() calls that assert-fail
on invalid conversion.
2016-05-10 22:51:33 -07:00
Jason Evans
0c12dcabc5 Fix tsd bootstrapping for a0malloc(). 2016-05-07 16:55:36 -07:00
Jason Evans
3ef51d7f73 Optimize the fast paths of calloc() and [m,d,sd]allocx().
This is a broader application of optimizations to malloc() and free() in
f4a0f32d34 (Fast-path improvement:
reduce # of branches and unnecessary operations.).

This resolves #321.
2016-05-06 14:37:39 -07:00
Jason Evans
c2f970c32b Modify pages_map() to support mapping uncommitted virtual memory.
If the OS overcommits:
- Commit all mappings in pages_map() regardless of whether the caller
  requested committed memory.
- Linux-specific: Specify MAP_NORESERVE to avoid
  unfortunate interactions with heuristic overcommit mode during
  fork(2).

This resolves #193.
2016-05-05 18:56:17 -07:00
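
A sketch of the overcommit-friendly mapping described above (Linux-oriented; assumed helper name):

#include <stddef.h>
#include <sys/mman.h>

static void *
pages_map_sketch(size_t size) {
	int flags = MAP_PRIVATE | MAP_ANONYMOUS;
#ifdef MAP_NORESERVE
	/* On overcommitting kernels, avoid swap-space reservation and the
	 * associated heuristic-overcommit problems across fork(2). */
	flags |= MAP_NORESERVE;
#endif
	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
	return (p == MAP_FAILED) ? NULL : p;
}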
Jason Evans
dc391adc65 Scale leak report summary according to sampling probability.
This makes the numbers reported in the leak report summary closely match
those reported by jeprof.

This resolves #356.
2016-05-04 12:14:36 -07:00
Jason Evans
04c3c0f9a0 Add the stats.retained and stats.arenas.<i>.retained statistics.
This resolves #367.
2016-05-03 22:11:35 -07:00
Jason Evans
90827a3f3e Fix huge_palloc() regression.
Split arena_choose() into arena_[i]choose() and use arena_ichoose() for
arena lookup during internal allocation.  This fixes huge_palloc() so
that it always succeeds during extent node allocation.

This regression was introduced by
66cd953514 (Do not allocate metadata via
non-auto arenas, nor tcaches.).
2016-05-03 17:19:15 -07:00
Jason Evans
108c4a11e9 Fix witness/fork() interactions.
Fix witness to clear its list of owned mutexes in the child if
platform-specific malloc_mutex code re-initializes mutexes rather than
unlocking them.
2016-04-26 10:47:22 -07:00
Jason Evans
174c0c3a9c Fix fork()-related lock rank ordering reversals. 2016-04-25 23:16:20 -07:00
Jason Evans
7e6749595a Fix arena reset effects on large/huge stats.
Reset large curruns to 0 during arena reset.

Do not increase huge ndalloc stats during arena reset.
2016-04-25 13:26:54 -07:00
Jason Evans
259f8ebbfc Fix arena_choose_hard() regression.
This regression was caused by 66cd953514
(Do not allocate metadata via non-auto arenas, nor tcaches.).
2016-04-22 22:21:31 -07:00
Jason Evans
19ff2cefba Implement the arena.<i>.reset mallctl.
This makes it possible to discard all of an arena's allocations in a
single operation.

This resolves #146.
2016-04-22 15:20:06 -07:00
Jason Evans
66cd953514 Do not allocate metadata via non-auto arenas, nor tcaches.
This assures that all internally allocated metadata come from the
first opt_narenas arenas, i.e. the automatically multiplexed arenas.
2016-04-22 15:19:59 -07:00
Jason Evans
c9a4bf9170 Reduce a variable scope. 2016-04-22 14:56:58 -07:00
Jason Evans
ab0cfe01fa Update private_symbols.txt.
Change test-related mangling to simplify symbol filtering.

The following commands can be used to detect missing/obsolete symbol
mangling, with the caveat that the full set of symbols is based on the
union of symbols generated by all configurations, some of which are
platform-specific:

./autogen.sh --enable-debug --enable-prof --enable-lazy-lock
make all tests
nm -a lib/libjemalloc.a src/*.jet.o \
  |grep " [TDBCR] " \
  |awk '{print $3}' \
  |sed -e 's/^\(je_\|jet_\(n_\)\?\)\([a-zA-Z0-9_]*\)/\3/g' \
  |LC_COLLATE=C sort -u \
  |grep -v \
   -e '^\(malloc\|calloc\|posix_memalign\|aligned_alloc\|realloc\|free\)$' \
   -e '^\(m\|r\|x\|s\|d\|sd\|n\)allocx$' \
   -e '^mallctl\(\|nametomib\|bymib\)$' \
   -e '^malloc_\(stats_print\|usable_size\|message\)$' \
   -e '^\(memalign\|valloc\)$' \
   -e '^__\(malloc\|memalign\|realloc\|free\)_hook$' \
   -e '^pthread_create$' \
  > /tmp/private_symbols.txt
2016-04-18 15:23:35 -07:00
Jason Evans
1423ee9016 Fix style nits. 2016-04-17 13:44:59 -07:00
Jason Evans
1b5830178f Fix malloc_mutex_[un]lock() to conditionally check witness.
Also remove tautological cassert(config_debug) calls.
2016-04-17 13:44:59 -07:00
Jason Evans
d9394d0ca8 Convert base_mtx locking protocol comments to assertions. 2016-04-17 13:44:58 -07:00
Jason Evans
b2c0d6322d Add witness, a simple online locking validator.
This resolves #358.
2016-04-14 02:09:28 -07:00
rustyx
00432331b8 Fix 64-to-32 conversion warnings in 32-bit mode 2016-04-12 09:34:09 -07:00
Jason Evans
e7642715ac Fix malloc_stats_print() to print correct opt.narenas value.
This regression was caused by 8f683b94a7
(Make opt_narenas unsigned rather than size_t.).
2016-04-11 18:47:18 -07:00
Jason Evans
245ae6036c Support --with-lg-page values larger than actual page size.
During over-allocation in preparation for creating aligned mappings,
allocate one more page than necessary if PAGE is the actual page size,
so that trimming still succeeds even if the system returns a mapping
that has less than PAGE alignment.  This allows compiling with e.g. 64
KiB "pages" on systems that actually use 4 KiB pages.

Note that for e.g. --with-lg-page=21, it is also necessary to increase
the chunk size (e.g. --with-malloc-conf=lg_chunk:22) so that there are
at least two "pages" per chunk.  In practice this isn't a particularly
compelling configuration because so much (unusable) virtual memory is
dedicated to chunk headers.
2016-04-11 02:35:00 -07:00
Jason Evans
c6a2c39404 Refactor/fix ph.
Refactor ph to support configurable comparison functions.  Use a cpp
macro code generation form equivalent to the rb macros so that pairing
heaps can be used for both run heaps and chunk heaps.

Remove per node parent pointers, and instead use leftmost siblings' prev
pointers to track parents.

Fix multi-pass sibling merging to iterate over intermediate results
using a FIFO, rather than a LIFO.  Use this fixed sibling merging
implementation for both merge phases of the auxiliary twopass algorithm
(first merging the aux list, then replacing the root with its merged
children).  This fixes both degenerate merge behavior and the potential
for deep recursion.

This regression was introduced by
6bafa6678f (Pairing heap).

This resolves #371.
2016-04-11 02:15:42 -07:00
Jason Evans
2ee2f1ec57 Reduce differences between alternative bitmap implementations. 2016-04-06 10:38:47 -07:00
Chris Peterson
a82070ef5f Add JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros
Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and
JEMALLOC_FREE_JUNK macros, respectively.
2016-03-31 11:23:29 -07:00
Jason Evans
f86bc081d6 Update a comment. 2016-03-31 11:19:46 -07:00
Jason Evans
ce7c0f999b Fix potential chunk leaks.
Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(),
so that if the dalloc hook fails, proper decommit/purge/retain cascading
occurs.  This fixes three potential chunk leaks on OOM paths, one during
dss-based chunk allocation, one during chunk header commit (currently
relevant only on Windows), and one during rtree write (e.g. if rtree
node allocation fails).

Merge chunk_purge_arena() into chunk_purge_default() (refactor, no
change to functionality).
2016-03-30 18:36:04 -07:00
Chris Peterson
0bc716ae27 Fix -Wunreachable-code warning in malloc_vsnprintf().
Variables s and slen are declared inside a switch statement, but outside
a case scope. clang reports these variable definitions as "unreachable",
though this is not really meaningful in this case. This is the only
-Wunreachable-code warning in jemalloc.

src/util.c:501:5 [-Wunreachable-code] code will never be executed

This resolves #364.
2016-03-26 23:24:33 -07:00
Jason Evans
61a6dfcd5f Constify various internal arena APIs. 2016-03-23 16:15:42 -07:00
Jason Evans
f6bd2e5a17 Code formatting fixes. 2016-03-23 16:15:42 -07:00
Jason Evans
6c460ad91b Optimize rtree_get().
Specialize fast path to avoid code that cannot execute for dependent
loads.

Manually unroll.
2016-03-22 17:54:35 -07:00
Jason Evans
22af74e106 Refactor out signed/unsigned comparisons. 2016-03-15 09:40:02 -07:00
Jason Evans
613cdc80f6 Convert arena_bin_t's runs from a tree to a heap. 2016-03-08 13:48:27 -08:00
Dave Watson
4a0dbb5ac8 Use pairing heap for arena->runs_avail
Use pairing heap instead of red black tree in arena runs_avail.  The
extra links are unioned with the bitmap_t, so this change doesn't use
any extra memory.

Canaries show this change to be a 1% cpu win, and 2% latency win.  In
particular, large free()s, and small bin frees are now O(1) (barring
coalescing).

I also tested changing bin->runs to be a pairing heap, but saw a much
smaller win, and it would mean increasing the size of arena_run_s by two
pointers, so I left that as an rb-tree for now.
2016-03-08 13:48:27 -08:00
Dave Watson
6bafa6678f Pairing heap
Initial implementation of a twopass pairing heap with aux list.
Research papers linked in comments.

Where search/nsearch/last aren't needed, this gives much faster first(),
delete(), and insert().  Insert is O(1), and first/delete don't have to
walk the whole tree.

Also tested rb_old with parent pointers - it was better than the current
rb.h for memory loads, but still much worse than a pairing heap.

An array-based heap would be much faster if everything fits in memory,
but on a cold cache it has many more memory loads for most operations.
2016-03-08 13:46:19 -08:00
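
For reference, the core pairing-heap operations are small; a minimal sketch with an assumed node layout (the aux list and the twopass delete described above are omitted):

#include <stdint.h>

typedef struct ph_node_s ph_node_t;
struct ph_node_s {
	ph_node_t *child;   /* leftmost child */
	ph_node_t *sibling; /* next sibling */
	uintptr_t  key;
};

/* Merge two heaps; the root with the smaller key adopts the other. */
static ph_node_t *
ph_merge(ph_node_t *a, ph_node_t *b) {
	if (a == NULL) return b;
	if (b == NULL) return a;
	if (b->key < a->key) { ph_node_t *t = a; a = b; b = t; }
	b->sibling = a->child;
	a->child = b;
	return a;
}

/* Insert is a single merge, hence O(1); first() is just the root.
 * Delete-min merges the root's children pairwise (the "twopass" part). */
static ph_node_t *
ph_insert(ph_node_t *root, ph_node_t *n) {
	n->child = NULL;
	n->sibling = NULL;
	return ph_merge(root, n);
}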
Jason Evans
022f6891fa Avoid a potential innocuous compiler warning.
Add a cast to avoid comparing a ssize_t value to a uint64_t value that
is always larger than a 32-bit ssize_t.  This silences an innocuous
compiler warning from e.g. gcc 4.2.1 about the comparison always having
the same result.
2016-03-02 22:45:37 -08:00
Dmitri Smirnov
33184bf698 Fix stack corruption and uninitialized var warning
Stack corruption happens in 64-bit (x64) builds.

This resolves #347.
2016-02-29 15:22:53 -08:00
Jason Evans
39f58755a7 Fix a potential tsd cleanup leak.
Prior to 767d85061a (Refactor arenas array
(fixes deadlock).), it was possible under some circumstances for
arena_get() to trigger recreation of the arenas cache during tsd
cleanup, and the arenas cache would then be leaked.  In principle a
similar issue could still occur as a side effect of decay-based purging,
which calls arena_tdata_get().  Fix arenas_tdata_cleanup() by setting
tsd->arenas_tdata_bypass to true, so that arena_tdata_get() will
gracefully fail (an expected behavior) rather than recreating
tsd->arena_tdata.

Reported by Christopher Ferris <cferris@google.com>.
2016-02-27 21:18:15 -08:00
Jason Evans
3c07f803aa Fix stats.arenas.<i>.[...] for --disable-stats case.
Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
initialization.

Fix stats.arenas.<i>.{pactive,pdirty} to read under the protection of
the arena mutex.
2016-02-27 20:40:13 -08:00
Jason Evans
40ee9aa957 Fix stats.cactive accounting regression.
Fix stats.cactive accounting to always increase/decrease by multiples of
the chunk size, even for huge size classes that are not multiples of the
chunk size, e.g. {2.5, 3, 3.5, 5, 7} MiB with 2 MiB chunk size.  This
regression was introduced by 155bfa7da1
(Normalize size classes.) and first released in 4.0.0.

This resolves #336.
2016-02-27 15:35:52 -08:00
Jason Evans
3763d3b5f9 Refactor arena_cactive_update() into arena_cactive_{add,sub}().
This removes an implicit conversion from size_t to ssize_t.  For cactive
decreases, the size_t value was intentionally underflowed to generate
"negative" values (actually positive values above the positive range of
ssize_t), and the conversion to ssize_t was undefined according to C
language semantics.

This regression was perpetuated by
1522937e9c (Fix the cactive statistic.)
and first released in 4.0.0, which in retrospect only fixed one of two
problems introduced by aa5113b1fd
(Refactor overly large/complex functions) and first released in 3.5.0.
2016-02-26 17:29:35 -08:00
buchgr
d412624b25 Move retaining out of default chunk hooks
This fixes chunk allocation to reuse retained memory even if an
application-provided chunk allocation function is in use.

This resolves #307.
2016-02-26 15:24:13 -08:00
Dave Watson
b8823ab026 Use linear scan for small bitmaps
For small bitmaps, a linear scan of the bitmap is slightly faster than
a tree search - bitmap_t is more compact, and there are fewer writes
since we don't have to propagate state transitions up the tree.
On x86_64 with the current settings, I'm seeing ~.5%-1% CPU improvement
in production canaries with this change.

The old tree code is left in place since 32-bit sizes are much larger (and ffsl
smaller), and maybe the run sizes will change in the future.

This resolves #339.
2016-02-26 14:21:10 -08:00
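
A sketch of the linear scan (generic bit layout and assumed helper name; uses a GCC/Clang find-first-set builtin rather than descending a multi-level tree):

#include <stddef.h>

static long
bitmap_ffs_sketch(const unsigned long *bits, size_t nwords) {
	for (size_t i = 0; i < nwords; i++) {
		if (bits[i] != 0) {
			int bit = __builtin_ffsl((long)bits[i]); /* 1-based */
			return (long)(i * sizeof(unsigned long) * 8) + (bit - 1);
		}
	}
	return -1; /* No set bit found. */
}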
Jason Evans
01ecdf32d6 Miscellaneous bitmap refactoring. 2016-02-26 14:21:10 -08:00
Jason Evans
42ce80e15a Silence miscellaneous 64-to-32-bit data loss warnings.
This resolves #341.
2016-02-25 20:51:00 -08:00
Jason Evans
8282a2ad97 Remove a superfluous comment. 2016-02-25 16:44:48 -08:00
Jason Evans
9d2c10f2e8 Add more HUGE_MAXCLASS overflow checks.
Add HUGE_MAXCLASS overflow checks that are specific to heap profiling
code paths.  This fixes test failures that were introduced by
0c516a00c4 (Make *allocx() size class
overflow behavior defined.).
2016-02-25 16:42:15 -08:00
Jason Evans
0c516a00c4 Make *allocx() size class overflow behavior defined.
Limit supported size and alignment to HUGE_MAXCLASS, which in turn is
now limited to be less than PTRDIFF_MAX.

This resolves #278 and #295.
2016-02-25 15:29:49 -08:00
Jason Evans
767d85061a Refactor arenas array (fixes deadlock).
Refactor the arenas array, which contains pointers to all extant arenas,
such that it starts out as a sparse array of maximum size, and use
double-checked atomics-based reads as the basis for fast and simple
arena_get().  Additionally, reduce arenas_lock's role such that it only
protects against arena initialization races.  These changes remove the
possibility for arena lookups to trigger locking, which resolves at
least one known (fork-related) deadlock.

This resolves #315.
2016-02-24 23:58:10 -08:00
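
A sketch of the double-checked read described above, using C11 atomics and hypothetical names (jemalloc uses its own atomics and arena bootstrap logic):

#include <stdatomic.h>
#include <pthread.h>

typedef struct arena_s arena_t;

#define NARENAS_SKETCH 256
static _Atomic(arena_t *) arenas_sketch[NARENAS_SKETCH];
static pthread_mutex_t arenas_lock_sketch = PTHREAD_MUTEX_INITIALIZER;

arena_t *arena_init_locked_sketch(unsigned ind); /* assumed initializer */

static arena_t *
arena_get_sketch(unsigned ind) {
	/* Fast path: lock-free read of the sparse array slot. */
	arena_t *arena = atomic_load_explicit(&arenas_sketch[ind],
	    memory_order_acquire);
	if (arena == NULL) {
		/* Slow path: initialize under the lock, then re-check. */
		pthread_mutex_lock(&arenas_lock_sketch);
		arena = atomic_load_explicit(&arenas_sketch[ind],
		    memory_order_acquire);
		if (arena == NULL) {
			arena = arena_init_locked_sketch(ind);
			atomic_store_explicit(&arenas_sketch[ind], arena,
			    memory_order_release);
		}
		pthread_mutex_unlock(&arenas_lock_sketch);
	}
	return arena;
}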
Dave Watson
3812729167 Fix arena_size computation.
Fix the arena_size computation in arena_new() to incorporate
runs_avail_nclasses elements for runs_avail, rather than
(runs_avail_nclasses - 1) elements.  Since offsetof(arena_t, runs_avail)
is used rather than sizeof(arena_t) for the first term of the
computation, all of the runs_avail elements must be added into the
second term.

This bug was introduced (by Jason Evans) while merging pull request #330
as 3417a304cc (Separate arena_avail
trees).
2016-02-24 20:10:02 -08:00
Dave Watson
cd86c1481a Fix arena_run_first_best_fit
The merge of 3417a304cc appears to have introduced a small
bug: first_best_fit doesn't scan through all the classes, since ind is
offset from runs_avail_nclasses by run_avail_bias.
2016-02-24 17:50:02 -08:00
Jason Evans
c7a9a6c86b Attempt mmap-based in-place huge reallocation.
Attempt mmap-based in-place huge reallocation by plumbing new_addr into
chunk_alloc_mmap().  This can dramatically speed up incremental huge
reallocation.

This resolves #335.
2016-02-24 17:23:18 -08:00
Jason Evans
ca8fffb5c1 Silence miscellaneous 64-to-32-bit data loss warnings. 2016-02-24 13:16:51 -08:00
Jason Evans
9e1810ca9d Silence miscellaneous 64-to-32-bit data loss warnings. 2016-02-24 13:03:48 -08:00
Jason Evans
0931cecbfa Use ssize_t for readlink() rather than int. 2016-02-24 13:03:48 -08:00
Jason Evans
8f683b94a7 Make opt_narenas unsigned rather than size_t. 2016-02-24 13:03:48 -08:00
Jason Evans
603b3bd413 Make nhbins unsigned rather than size_t. 2016-02-24 13:03:48 -08:00
Jason Evans
8dd5115ede Explicitly cast mib[] elements to unsigned where appropriate. 2016-02-24 13:03:48 -08:00
Jason Evans
9f4ee6034c Refactor jemalloc_ffs*() into ffs_*().
Use appropriate versions to resolve 64-to-32-bit data loss warnings.
2016-02-24 13:03:48 -08:00
Jason Evans
ae45142adc Collapse arena_avail_tree_* into arena_run_tree_*.
These tree types converged to become identical, yet they still had
independently generated red-black tree implementations.
2016-02-23 18:27:24 -08:00
Dave Watson
3417a304cc Separate arena_avail trees
Separate run trees by index, replacing the previous quantize logic.
Quantization by index is now performed only on insertion / removal from
the tree, and not on node comparison, saving some cpu.  This also means
we don't have to dereference the miscelm* pointers, saving half of the
memory loads from miscelms/mapbits that have fallen out of cache.  A
linear scan of the indices appears to be fast enough.

The only cost of this is an extra tree array in each arena.
2016-02-23 18:09:36 -08:00
Jason Evans
0da8ce1e96 Use table lookup for run_quantize_{floor,ceil}().
Reduce run quantization overhead by generating lookup tables during
bootstrapping, and using the tables for all subsequent run quantization.
2016-02-22 16:47:34 -08:00
Jason Evans
08551eee58 Fix run_quantize_ceil().
In practice this bug had limited impact (and then only by increasing
chunk fragmentation) because run_quantize_ceil() returned correct
results except for inputs that could only arise from aligned allocation
requests that required more than page alignment.

This bug existed in the original run quantization implementation, which
was introduced by 8a03cf039c (Implement
cache index randomization for large allocations.).
2016-02-22 16:28:00 -08:00
Jason Evans
a9a4684792 Test run quantization.
Also rename run_quantize_*() to improve clarity.  These tests
demonstrate that run_quantize_ceil() is flawed.
2016-02-22 14:58:05 -08:00
Jason Evans
9bad079039 Refactor time_* into nstime_*.
Use a single uint64_t in nstime_t to store nanoseconds rather than using
struct timespec.  This reduces fragility around conversions between long
and uint64_t, especially missing casts that only cause problems on
32-bit platforms.
2016-02-21 21:39:05 -08:00
Jason Evans
788d29d397 Fix Windows-specific prof-related compilation portability issues. 2016-02-20 23:46:14 -08:00
Jason Evans
fd9cd7a6cc Fix time_update() to compile and work on MinGW. 2016-02-20 23:45:22 -08:00
rustyx
efbee86278 Prevent MSVC from optimizing away tls_callback (resolves #318) 2016-02-20 10:52:53 -08:00
rustyx
7f283980f0 getpid() fix for Win32 2016-02-20 10:52:53 -08:00
Jason Evans
243f7a0508 Implement decay-based unused dirty page purging.
This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.

Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time

This resolves #325.
2016-02-19 20:56:21 -08:00
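
Usage sketch for the new knobs (mallctl names as listed above; values are in seconds, and -1 disables decay-based purging):

#include <stdio.h>
#include <jemalloc/jemalloc.h>

static void
decay_time_example(void) {
	ssize_t decay_time;
	size_t sz = sizeof(decay_time);
	/* Read the compiled-in/configured default. */
	if (mallctl("opt.decay_time", &decay_time, &sz, NULL, 0) == 0) {
		printf("opt.decay_time: %zd\n", decay_time);
	}
	/* Set the default decay time for newly created arenas. */
	ssize_t new_time = 30;
	mallctl("arenas.decay_time", NULL, NULL, &new_time, sizeof(new_time));
}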
Jason Evans
1a4ad3c0fa Refactor out arena_compute_npurge().
Refactor out arena_compute_npurge() by integrating its logic into
arena_stash_dirty() as an incremental computation.
2016-02-19 20:32:37 -08:00
Jason Evans
db927b6727 Refactor arenas_cache tsd.
Refactor arenas_cache tsd into arenas_tdata, which is a structure of
type arena_tdata_t.
2016-02-19 20:32:37 -08:00
Jason Evans
4985dc681e Refactor arena_ralloc_no_move().
Refactor early return logic in arena_ralloc_no_move() to return early on
failure rather than on success.
2016-02-19 20:32:37 -08:00
Jason Evans
578cd16581 Refactor arena_malloc_hard() out of arena_malloc(). 2016-02-19 20:32:32 -08:00
Jason Evans
34676d3369 Refactor prng* from cpp macros into inline functions.
Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
prng_range().
2016-02-19 20:29:06 -08:00
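
A sketch of the prng_lg_range() idea with assumed LCG constants (not necessarily jemalloc's): advance 64-bit state and return the top lg_range bits.

#include <stdint.h>
#include <assert.h>

static uint64_t
prng_lg_range_sketch(uint64_t *state, unsigned lg_range) {
	assert(lg_range > 0 && lg_range <= 64);
	/* Knuth-style 64-bit linear congruential step (assumed constants). */
	*state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
	/* Return the high lg_range bits, i.e. a value in [0, 2^lg_range). */
	return *state >> (64 - lg_range);
}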
Jason Evans
c87ab25d18 Use ticker for incremental tcache GC. 2016-02-19 20:29:06 -08:00