Commit Graph

414 Commits

Author SHA1 Message Date
Jason Evans
977103c897 Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *).
This avoids warnings in some cases, and is otherwise generally good
hygiene.
2016-10-27 21:31:25 -07:00
Jason Evans
44df4a45cf Explicitly cast negative constants meant for use as unsigned. 2016-10-27 21:29:59 -07:00
Jason Evans
17aa187f6b Add cast to silence (harmless) conversion warning. 2016-10-27 21:29:00 -07:00
Jason Evans
b54d160dc4 Do not (recursively) allocate within tsd_fetch().
Refactor tsd so that tsdn_fetch() does not trigger allocation, since
allocation could cause infinite recursion.

This resolves #458.
2016-10-20 23:59:12 -07:00
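
A minimal sketch of the shape of this fix, with hypothetical names (jemalloc's tsd machinery is considerably more involved): the fetch path performs plain initialization only, so it can never re-enter the allocator.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-ins for jemalloc's tsd state. */
typedef struct { bool booted; long counters[8]; } tsd_t;

static __thread tsd_t tsd_tls;          /* assumed TLS slot */
static __thread bool tsd_initialized;

static tsd_t *
tsd_fetch_sketch(void) {
    if (!tsd_initialized) {
        /* Plain stores only: nothing here may call into malloc(),
         * which is what made the infinite recursion possible. */
        memset(&tsd_tls, 0, sizeof(tsd_tls));
        tsd_tls.booted = true;
        tsd_initialized = true;
    }
    return &tsd_tls;
}

int main(void) { return tsd_fetch_sketch()->booted ? 0 : 1; }
```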
Jason Evans
577d4572b0 Make dss operations lockless.
Rather than protecting dss operations with a mutex, use atomic
operations.  This has negligible impact on synchronization overhead
during typical dss allocation, but is a substantial improvement for
extent_in_dss() and the newly added extent_dss_mergeable(), which can be
called multiple times during extent deallocations.

This change also has the advantage of avoiding tsd in deallocation paths
associated with purging, which resolves potential deadlocks during
thread exit due to attempted tsd resurrection.

This resolves #425.
2016-10-13 15:37:00 -07:00
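
A sketch of the queried-without-a-lock idea using C11 atomics; the bounds variables are illustrative, not jemalloc's actual bookkeeping.

```c
#include <stdatomic.h>
#include <stdbool.h>

static _Atomic(char *) dss_base;  /* hypothetical published bounds */
static _Atomic(char *) dss_max;

static bool
extent_in_dss_sketch(const void *addr) {
    /* Lock-free query: two acquire loads, no mutex, so it stays cheap
     * even when called several times per extent deallocation. */
    char *base = atomic_load_explicit(&dss_base, memory_order_acquire);
    char *max = atomic_load_explicit(&dss_max, memory_order_acquire);
    return ((const char *)addr >= base && (const char *)addr < max);
}

int main(void) {
    char probe;
    return extent_in_dss_sketch(&probe) ? 1 : 0;
}
```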
Jason Evans
9acd5cf178 Remove all vestiges of chunks.
Remove mallctls:
- opt.lg_chunk
- stats.cactive

This resolves #464.
2016-10-12 11:55:43 -07:00
Jason Evans
63b5657aa5 Remove ratio-based purging.
Make decay-based purging the default (and only) mode.

Remove associated mallctls:
- opt.purge
- opt.lg_dirty_mult
- arena.<i>.lg_dirty_mult
- arenas.lg_dirty_mult
- stats.arenas.<i>.lg_dirty_mult

This resolves #385.
2016-10-12 10:40:27 -07:00
Jason Evans
48993ed536 Fix all decay tests to adapt to nstime_monotonic(). 2016-10-11 15:28:43 -07:00
Jason Evans
5f11fb7d43 Do not advance decay epoch when time goes backwards.
Instead, move the epoch backward in time.  Additionally, add
nstime_monotonic() and use it in debug builds to assert that time only
goes backward if nstime_update() is using a non-monotonic time source.
2016-10-10 22:15:10 -07:00
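
A sketch of the epoch handling described above, with a bare nanosecond counter standing in for jemalloc's decay state:

```c
#include <stdint.h>

static uint64_t decay_epoch_ns;  /* hypothetical epoch bookkeeping */

static void
decay_epoch_advance_sketch(uint64_t now_ns) {
    if (now_ns < decay_epoch_ns) {
        /* Time went backwards (non-monotonic source): move the epoch
         * back to "now" instead of advancing it. */
        decay_epoch_ns = now_ns;
        return;
    }
    /* ... normal epoch advance and dirty-page purging ... */
    decay_epoch_ns = now_ns;
}

int main(void) { decay_epoch_advance_sketch(0); return 0; }
```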
Jason Evans
d51139c33c Verify extent hook functions receive correct extent_hooks pointer. 2016-09-29 09:50:35 -07:00
Elliot Ronaghan
fbd7956d45 Work around a weird pgi bug in test/unit/math.c
pgi fails to compile math.c, reporting that `-INFINITY` in `pt_norm_expected[]`
is a "Non-constant" expression. A simplified version of this failure is:

```c
#include <math.h>

static double inf1, inf2 = INFINITY;  // no complaints
static double inf3 = INFINITY;        // suddenly INFINITY is "Non-constant"

int main() { }
```

```sh
PGC-S-0074-Non-constant expression in initializer (t.c: 4)
```

pgi errors on the declaration of inf3, and will compile fine if that line is
removed. I've reported this bug to pgi, but in the meantime I just switched to
using (DBL_MAX + DBL_MAX) to work around this bug.
2016-06-08 14:20:32 -07:00
Jason Evans
0c5cec833f Relax extent hook tests to work with unsplittable extents. 2016-06-05 22:30:31 -07:00
Jason Evans
487093d999 Fix regressions related to extent splitting failures.
Fix a fundamental extent_split_wrapper() bug in an error path.

Fix extent_recycle() to deregister unsplittable extents before leaking
them.

Relax xallocx() test assertions so that unsplittable extents don't cause
test failures.
2016-06-05 22:08:20 -07:00
Jason Evans
42faa9e3e0 Work around legitimate xallocx() failures during testing.
With the removal of subchunk size class infrastructure, there are no
large size classes that are guaranteed to be re-expandable in place
unless munmap() is disabled.  Work around these legitimate failures with
rallocx() fallback calls.  If there were no test configuration for which
the xallocx() calls succeeded, it would be important to override the
extent hooks for testing purposes, but by default these tests don't use
the rallocx() fallbacks on Linux, so test coverage is still sufficient.
2016-06-05 21:00:02 -07:00
Jason Evans
04942c3d90 Remove a stray memset(), and fix a junk filling test regression. 2016-06-05 21:00:02 -07:00
Jason Evans
f8f0542194 Modify extent hook functions to take an (extent_t *) argument.
This facilitates the application accessing its own extent allocator
metadata during hook invocations.

This resolves #259.
2016-06-05 21:00:02 -07:00
Jason Evans
6f29a83924 Add rtree lookup path caching.
rtree-based extent lookups remain more expensive than chunk-based run
lookups, but with this optimization the fast path slowdown is ~3 CPU
cycles per metadata lookup (on Intel Core i7-4980HQ), versus ~11 cycles
prior.  The path caching speedup tends to degrade gracefully unless
allocated memory is spread far apart (as is the case when using a
mixture of sbrk() and mmap()).
2016-06-05 20:59:57 -07:00
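
A sketch of what a lookup-path cache buys; the key layout and slot math here are invented for illustration, not jemalloc's:

```c
#include <stdint.h>
#include <stddef.h>

/* Cache of the last leaf visited. */
typedef struct {
    uintptr_t leafkey;  /* which leaf the cache holds */
    void **leaf;        /* direct pointer to that leaf's slots */
} rtree_ctx_sketch_t;

#define LEAF_BITS 8
#define SLOT_SHIFT 4    /* assumed low bits below the slot index */

static void *
rtree_lookup_sketch(rtree_ctx_sketch_t *ctx, uintptr_t key) {
    uintptr_t leafkey = key >> (SLOT_SHIFT + LEAF_BITS);
    if (ctx->leaf != NULL && ctx->leafkey == leafkey) {
        /* Fast path: one compare plus one indexed load. */
        return ctx->leaf[(key >> SLOT_SHIFT) & ((1u << LEAF_BITS) - 1)];
    }
    /* Slow path: full radix walk, then refill ctx (elided). */
    return NULL;
}

int main(void) {
    rtree_ctx_sketch_t ctx = {0, NULL};
    return rtree_lookup_sketch(&ctx, (uintptr_t)0x12345) != NULL;
}
```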
Jason Evans
37f0e34606 Reduce NSZS, since NSIZES (was nsizes) cannot be so large. 2016-06-05 20:42:24 -07:00
Jason Evans
c8c3cbdf47 Miscellaneous s/chunk/extent/ updates. 2016-06-05 20:42:24 -07:00
Jason Evans
22588dda6e Rename most remaining *chunk* APIs to *extent*. 2016-06-05 20:42:23 -07:00
Jason Evans
0c4932eb1e s/chunk_lookup/extent_lookup/g, s/chunks_rtree/extents_rtree/g 2016-06-05 20:42:23 -07:00
Jason Evans
9c305c9e5c s/chunk_hook/extent_hook/g 2016-06-05 20:42:23 -07:00
Jason Evans
7d63fed0fd Rename huge to large. 2016-06-05 20:42:23 -07:00
Jason Evans
498856f44a Move slabs out of chunks. 2016-06-05 20:42:23 -07:00
Jason Evans
ed2c2427a7 Use huge size class infrastructure for large size classes. 2016-06-05 20:42:18 -07:00
Jason Evans
b46261d58b Implement cache-oblivious support for huge size classes. 2016-06-03 12:27:41 -07:00
Jason Evans
4731cd47f7 Allow chunks to not be naturally aligned.
Precisely size extents for huge size classes that aren't multiples of
chunksize.
2016-06-03 12:27:41 -07:00
Jason Evans
fc0372a15e Replace extent_tree_szad_* with extent_heap_*. 2016-06-03 12:27:41 -07:00
Jason Evans
25845db7c9 Dodge ivsalloc() assertion in test code. 2016-06-03 12:27:41 -07:00
Jason Evans
e75e9be130 Add rtree element witnesses. 2016-06-03 12:27:41 -07:00
Jason Evans
8c9be3e837 Refactor rtree to always use base_alloc() for node allocation. 2016-06-03 12:27:41 -07:00
Jason Evans
2d2b4e98c9 Add element acquire/release capabilities to rtree.
This makes it possible to acquire short-term "ownership" of rtree
elements so that it is possible to read an extent pointer *and* read the
extent's contents with a guarantee that the element will not be modified
until the ownership is released.  This is intended as a mechanism for
resolving rtree read/write races rather than as a way to lock extents.
2016-06-03 12:27:33 -07:00
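
One common way to implement such acquire/release is a tag bit on the stored pointer; the sketch below uses bit 0 and C11 atomics, and is illustrative rather than jemalloc's exact code:

```c
#include <stdatomic.h>
#include <stdint.h>

static void *
rtree_elm_acquire_sketch(_Atomic(uintptr_t) *elm) {
    uintptr_t v;
    do {
        /* Expected value always has the lock bit clear; spin while
         * another thread owns the element. */
        v = atomic_load_explicit(elm, memory_order_relaxed)
            & ~(uintptr_t)1;
    } while (!atomic_compare_exchange_weak_explicit(elm, &v, v | 1,
        memory_order_acquire, memory_order_relaxed));
    return (void *)v;  /* safe to read until released */
}

static void
rtree_elm_release_sketch(_Atomic(uintptr_t) *elm, void *ptr) {
    atomic_store_explicit(elm, (uintptr_t)ptr, memory_order_release);
}

int main(void) {
    static _Atomic(uintptr_t) slot;  /* starts empty (NULL) */
    void *p = rtree_elm_acquire_sketch(&slot);
    rtree_elm_release_sketch(&slot, p);
    return 0;
}
```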
Jason Evans
a7a6f5bc96 Rename extent_node_t to extent_t. 2016-05-16 12:21:28 -07:00
Jason Evans
7bb00ae9d6 Refactor runs_avail.
Use pszind_t size classes rather than szind_t size classes, and always
reserve space for NPSIZES elements.  This removes unused heaps for size
classes that are not multiples of the page size, and adds (currently)
unused heaps for
all huge size classes, with the immediate benefit that the size of
arena_t allocations is constant (no longer dependent on chunk size).
2016-05-16 12:21:21 -07:00
Jason Evans
226c446979 Implement pz2ind(), pind2sz(), and psz2u().
These compute size classes and indices similarly to size2index(),
index2size() and s2u(), respectively, but using the subset of size
classes that are multiples of the page size.  Note that pszind_t and
szind_t are not interchangeable.
2016-05-13 10:31:54 -07:00
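
An illustrative psz2u()-style rounding function; the constants and branch structure are assumptions, not jemalloc's implementation:

```c
#include <stdio.h>
#include <stddef.h>

#define LG_PAGE 12  /* assumed 4 KiB pages */

/* Round up to the next page-multiple size class, with four classes
 * per size doubling. */
static size_t
psz2u_sketch(size_t psz) {
    if (psz <= ((size_t)1 << (LG_PAGE + 2))) {
        /* Up to 4 pages: every page multiple is a class. */
        size_t page = (size_t)1 << LG_PAGE;
        return (psz + page - 1) & ~(page - 1);
    }
    /* Class spacing doubles every 4 classes. */
    unsigned lg = 63 - (unsigned)__builtin_clzll(psz - 1);
    size_t delta = (size_t)1 << (lg - 2);
    return (psz + delta - 1) & ~(delta - 1);
}

int main(void) {
    printf("%zu\n", psz2u_sketch(21000));  /* 24576: six pages */
    return 0;
}
```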
Jason Evans
627372b459 Initialize arena_bin_info at compile time rather than at boot time.
This resolves #370.
2016-05-13 10:31:30 -07:00
Jason Evans
b683734b43 Implement BITMAP_INFO_INITIALIZER(nbits).
This allows static initialization of bitmap_info_t structures.
2016-05-13 10:27:48 -07:00
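
A sketch of why a macro initializer works here: the group arithmetic is all compile-time constant (fields simplified relative to the real bitmap_info_t):

```c
#include <stddef.h>

#define BITMAP_GROUP_NBITS (sizeof(unsigned long) * 8)

typedef struct {
    size_t nbits;
    size_t ngroups;
} bitmap_info_sketch_t;

/* All-constant arithmetic, so it is usable in static initializers. */
#define BITMAP_INFO_INITIALIZER(nbits) \
    { (nbits), ((nbits) + BITMAP_GROUP_NBITS - 1) / BITMAP_GROUP_NBITS }

static const bitmap_info_sketch_t binfo = BITMAP_INFO_INITIALIZER(512);

int main(void) { return binfo.ngroups == 0; }
```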
Jason Evans
17c021c177 Remove redzone support.
This resolves #369.
2016-05-13 10:27:33 -07:00
Jason Evans
ba5c709517 Remove quarantine support. 2016-05-13 10:25:05 -07:00
Jason Evans
9a8add1510 Remove Valgrind support. 2016-05-13 09:56:18 -07:00
Jason Evans
3a9ec67626 Disable junk filling for tests that could otherwise easily OOM. 2016-05-11 00:52:16 -07:00
Jason Evans
c1e00ef2a6 Resolve bootstrapping issues when embedded in FreeBSD libc.
b2c0d6322d (Add witness, a simple online
locking validator.) caused a broad propagation of tsd throughout the
internal API, but tsd_fetch() was designed to fail prior to tsd
bootstrapping.  Fix this by splitting tsd_t into non-nullable tsd_t and
nullable tsdn_t, and modifying all internal APIs that do not critically
rely on tsd to take nullable pointers.  Furthermore, add the
tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
bootstrapping is complete and return NULL if not.  All dangerous
conversions of nullable pointers are tsdn_tsd() calls that assert-fail
on invalid conversion.
2016-05-10 22:51:33 -07:00
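
The split in miniature, with stub types and state (the real functions carry far more):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { int dummy; } tsd_t;   /* non-nullable once booted */
typedef tsd_t tsdn_t;                  /* nullable */

static bool tsd_booted = false;        /* set at end of bootstrap */
static tsd_t tsd_instance;

static bool tsd_booted_get(void) { return tsd_booted; }
static tsd_t *tsd_fetch(void) { return &tsd_instance; }

static tsdn_t *
tsdn_fetch(void) {
    if (!tsd_booted_get()) {
        return NULL;  /* pre-bootstrap callers must tolerate NULL */
    }
    return tsd_fetch();
}

int main(void) { return tsdn_fetch() != NULL; }
```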
Jason Evans
0c12dcabc5 Fix tsd bootstrapping for a0malloc(). 2016-05-07 16:55:36 -07:00
Jason Evans
1eb46ab6e7 Don't test fork() on Windows. 2016-05-03 17:18:34 -07:00
Jason Evans
9aa1543e9c Update mallocx() OOM test to deal with smaller hugemax.
Depending on virtual memory resource limits, it is necessary to attempt
allocating three maximally sized objects to trigger OOM rather than just
two, since the maximum supported size is slightly less than half the
total virtual memory address space.

This fixes a test failure that was introduced by
0c516a00c4 (Make *allocx() size class
overflow behavior defined.).

This resolves #379.
2016-05-03 09:37:54 -07:00
Jason Evans
108c4a11e9 Fix witness/fork() interactions.
Fix witness to clear its list of owned mutexes in the child if
platform-specific malloc_mutex code re-initializes mutexes rather than
unlocking them.
2016-04-26 10:47:22 -07:00
Jason Evans
174c0c3a9c Fix fork()-related lock rank ordering reversals. 2016-04-25 23:16:20 -07:00
Jason Evans
0d970a054e Use separate arena for chunk tests.
This assures that side effects of internal allocation don't impact
tests.
2016-04-25 20:26:03 -07:00
Jason Evans
2fe64d237c Fix arena_reset() test to avoid tcache. 2016-04-25 12:51:17 -07:00
Jason Evans
19ff2cefba Implement the arena.<i>.reset mallctl.
This makes it possible to discard all of an arena's allocations in a
single operation.

This resolves #146.
2016-04-22 15:20:06 -07:00
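
A hypothetical usage sketch (the arena-creation mallctl was named arenas.extend at the time):

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    unsigned arena;
    size_t sz = sizeof(arena);
    char name[64];
    /* Create an arena, allocate from it explicitly, then discard
     * everything it owns in one shot. */
    if (mallctl("arenas.extend", &arena, &sz, NULL, 0) != 0)
        return 1;
    void *p = mallocx(4096, MALLOCX_ARENA(arena));
    (void)p;  /* freed implicitly by the reset below */
    snprintf(name, sizeof(name), "arena.%u.reset", arena);
    return mallctl(name, NULL, NULL, NULL, 0);
}
```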
Jason Evans
1423ee9016 Fix style nits. 2016-04-17 13:44:59 -07:00
Jason Evans
b2c0d6322d Add witness, a simple online locking validator.
This resolves #358.
2016-04-14 02:09:28 -07:00
Jason Evans
bab58ef401 Fix more 64-to-32 conversion warnings. 2016-04-12 12:39:02 -07:00
Jason Evans
96aa67aca8 Clean up char vs. uint8_t in junk filling code.
Consistently use uint8_t rather than char for junk filling code.
2016-04-11 02:26:35 -07:00
Jason Evans
c6a2c39404 Refactor/fix ph.
Refactor ph to support configurable comparison functions.  Use a cpp
macro code generation form equivalent to the rb macros so that pairing
heaps can be used for both run heaps and chunk heaps.

Remove per node parent pointers, and instead use leftmost siblings' prev
pointers to track parents.

Fix multi-pass sibling merging to iterate over intermediate results
using a FIFO, rather than a LIFO.  Use this fixed sibling merging
implementation for both merge phases of the auxiliary twopass algorithm
(first merging the aux list, then replacing the root with its merged
children).  This fixes both degenerate merge behavior and the potential
for deep recursion.

This regression was introduced by
6bafa6678f (Pairing heap).

This resolves #371.
2016-04-11 02:15:42 -07:00
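
A self-contained sketch of FIFO multi-pass sibling merging (node layout simplified relative to jemalloc's ph):

```c
#include <stddef.h>
#include <stdio.h>

typedef struct node_s node_t;
struct node_s {
    int key;
    node_t *child;    /* leftmost child */
    node_t *sibling;  /* next sibling / work-queue link */
};

/* Standard pairing-heap link: the larger root becomes a child. */
static node_t *
ph_merge(node_t *a, node_t *b) {
    if (b == NULL) return a;
    if (a->key > b->key) { node_t *t = a; a = b; b = t; }
    b->sibling = a->child;
    a->child = b;
    return a;
}

/* FIFO multi-pass merging, per the fix above: each pass merges
 * adjacent pairs in order and appends results to a new list, so the
 * work stays iterative and balanced (no LIFO blowup, no deep
 * recursion). */
static node_t *
ph_merge_siblings_fifo(node_t *head) {
    while (head != NULL && head->sibling != NULL) {
        node_t *out = NULL, **tailp = &out;
        while (head != NULL) {
            node_t *a = head, *b = a->sibling;
            head = (b == NULL) ? NULL : b->sibling;
            a->sibling = NULL;
            if (b != NULL) b->sibling = NULL;
            *tailp = ph_merge(a, b);
            tailp = &(*tailp)->sibling;
        }
        head = out;
    }
    return head;
}

int main(void) {
    node_t n[4] = {{3,0,0},{1,0,0},{4,0,0},{2,0,0}};
    for (int i = 0; i < 3; i++) n[i].sibling = &n[i + 1];
    printf("min=%d\n", ph_merge_siblings_fifo(n)->key);  /* min=1 */
    return 0;
}
```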
Jason Evans
a3c4193280 Fix a compilation warning in the ph test code. 2016-04-05 16:32:59 -07:00
Chris Peterson
a82070ef5f Add JEMALLOC_ALLOC_JUNK and JEMALLOC_FREE_JUNK macros
Replace hardcoded 0xa5 and 0x5a junk values with JEMALLOC_ALLOC_JUNK and
JEMALLOC_FREE_JUNK macros, respectively.
2016-03-31 11:23:29 -07:00
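
The macros themselves, with the values named in the commit:

```c
#include <stdint.h>

#define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5)
#define JEMALLOC_FREE_JUNK  ((uint8_t)0x5a)
```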
Jason Evans
ff63dca363 Avoid blindly enabling assertions for header code when testing.
Restructure the test program master header to avoid blindly enabling
assertions.  Prior to this change, assertion code in e.g. arena.h was
always enabled for tests, which could skew performance-related testing.
2016-03-23 16:15:42 -07:00
Jason Evans
f6bd2e5a17 Code formatting fixes. 2016-03-23 16:15:42 -07:00
Jason Evans
22af74e106 Refactor out signed/unsigned comparisons. 2016-03-15 09:40:02 -07:00
Jason Evans
824b947be0 Add (size_t) casts to MALLOCX_ALIGN().
Add (size_t) casts to MALLOCX_ALIGN() macros so that passing the integer
constant 0x80000000 does not cause a compiler warning about invalid
shift amount.

This resolves #354.
2016-03-11 10:11:56 -08:00
Dave Watson
34dca5671f Unittest for pairing heap 2016-03-08 13:48:27 -08:00
Dmitri Smirnov
33184bf698 Fix stack corruption and uninitialized var warning
Stack corruption happens in x64 builds.

This resolves #347.
2016-02-29 15:22:53 -08:00
Jason Evans
7d3055432d Fix decay tests for --disable-tcache case. 2016-02-27 23:40:31 -08:00
Jason Evans
3c07f803aa Fix stats.arenas.<i>.[...] for --disable-stats case.
Add missing stats.arenas.<i>.{dss,lg_dirty_mult,decay_time}
initialization.

Fix stats.arenas.<i>.{pactive,pdirty} to read under the protection of
the arena mutex.
2016-02-27 20:40:13 -08:00
Jason Evans
fd4858225b Fix decay tests for --disable-stats case. 2016-02-27 20:38:29 -08:00
Jason Evans
a62e94cabb Remove invalid tests.
Remove invalid tests that were intended to be tests of (hugemax+1) OOM,
for which tests already exist.
2016-02-26 16:27:52 -08:00
Jason Evans
01ecdf32d6 Miscellaneous bitmap refactoring. 2016-02-26 14:21:10 -08:00
Jason Evans
e3195fa4a5 Cast PTRDIFF_MAX to size_t before adding 1.
This fixes compilation warnings regarding integer overflow that were
introduced by 0c516a00c4 (Make *allocx()
size class overflow behavior defined.).
2016-02-25 16:40:24 -08:00
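
The pattern in question, reduced to one line: "PTRDIFF_MAX + 1" overflows as a signed expression, while casting first keeps the arithmetic in size_t.

```c
#include <stdint.h>
#include <stddef.h>

static const size_t huge_limit = (size_t)PTRDIFF_MAX + 1;

int main(void) { return huge_limit == 0; }
```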
Jason Evans
0c516a00c4 Make *allocx() size class overflow behavior defined.
Limit supported size and alignment to HUGE_MAXCLASS, which in turn is
now limited to be less than PTRDIFF_MAX.

This resolves #278 and #295.
2016-02-25 15:29:49 -08:00
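
A sketch of the guarantee from the caller's side (assuming a jemalloc-linked program):

```c
#include <assert.h>
#include <stdint.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* A request above HUGE_MAXCLASS (itself < PTRDIFF_MAX) must now
     * fail cleanly instead of wrapping in internal size arithmetic. */
    void *p = mallocx((size_t)PTRDIFF_MAX + 1, 0);
    assert(p == NULL);
    return 0;
}
```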
Jason Evans
9e1810ca9d Silence miscellaneous 64-to-32-bit data loss warnings. 2016-02-24 13:03:48 -08:00
Jason Evans
8f683b94a7 Make opt_narenas unsigned rather than size_t. 2016-02-24 13:03:48 -08:00
Dmitri Smirnov
b41a07c31a Fix Windows build issues
This resolves #333.
2016-02-23 18:55:45 -08:00
Dave Watson
2b1fc90b7b Remove rbt_nil
Since this is an intrusive tree, rbt_nil is the whole size of the node
and can be quite large.  For example, miscelm is ~100 bytes.
2016-02-23 18:09:25 -08:00
Jason Evans
0da8ce1e96 Use table lookup for run_quantize_{floor,ceil}().
Reduce run quantization overhead by generating lookup tables during
bootstrapping, and using the tables for all subsequent run quantization.
2016-02-22 16:47:34 -08:00
Jason Evans
a9a4684792 Test run quantization.
Also rename run_quantize_*() to improve clarity.  These tests
demonstrate that run_quantize_ceil() is flawed.
2016-02-22 14:58:05 -08:00
Jason Evans
9bad079039 Refactor time_* into nstime_*.
Use a single uint64_t in nstime_t to store nanoseconds rather than using
struct timespec.  This reduces fragility around conversions between long
and uint64_t, especially missing casts that only cause problems on
32-bit platforms.
2016-02-21 21:39:05 -08:00
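
The simplification in miniature (names suffixed _sketch; jemalloc's nstime_t carries the same idea):

```c
#include <stdint.h>

/* A single nanosecond counter, so there are no long <-> uint64_t
 * timespec conversions to get wrong. */
typedef struct { uint64_t ns; } nstime_sketch_t;

#define SEC UINT64_C(1000000000)

static void
nstime_init2_sketch(nstime_sketch_t *t, uint64_t sec, uint64_t nsec) {
    t->ns = sec * SEC + nsec;
}

static uint64_t
nstime_sec_sketch(const nstime_sketch_t *t) { return t->ns / SEC; }

int main(void) {
    nstime_sketch_t t;
    nstime_init2_sketch(&t, 2, 500000000);
    return nstime_sec_sketch(&t) != 2;
}
```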
Jason Evans
a0aaad1afa Handle unaligned keys in hash().
Reported by Christopher Ferris <cferris@google.com>.
2016-02-20 10:23:48 -08:00
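
The standard portable fix for unaligned reads is a memcpy-based load; whether hash() does exactly this is an assumption:

```c
#include <stdint.h>
#include <string.h>

static uint32_t
load_u32(const void *p) {
    uint32_t v;
    memcpy(&v, p, sizeof(v));  /* becomes a plain load where legal */
    return v;
}

int main(void) {
    unsigned char buf[8] = {0};
    return (int)load_u32(buf + 1);  /* misaligned, but well defined */
}
```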
Jason Evans
9f24c94474 Increase test coverage in test_decay_ticks. 2016-02-20 09:02:49 -08:00
Jason Evans
243f7a0508 Implement decay-based unused dirty page purging.
This is an alternative to the existing ratio-based unused dirty page
purging, and is intended to eventually become the sole purging
mechanism.

Add mallctls:
- opt.purge
- opt.decay_time
- arena.<i>.decay
- arena.<i>.decay_time
- arenas.decay_time
- stats.arenas.<i>.decay_time

This resolves #325.
2016-02-19 20:56:21 -08:00
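
A usage sketch of one of the new mallctls (assuming the option's ssize_t type; -1 disables purging):

```c
#include <stddef.h>
#include <sys/types.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Give unused dirty pages a ~10 s decay horizon in new arenas. */
    ssize_t decay_time = 10;
    return mallctl("arenas.decay_time", NULL, NULL, &decay_time,
        sizeof(decay_time));
}
```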
Jason Evans
8e82af1166 Implement smoothstep table generation.
Check in a generated smootherstep table as smoothstep.h rather than
generating it at configure time, since not all systems (e.g. Windows)
have dc.
2016-02-19 20:56:15 -08:00
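
A sketch of the generator in C rather than dc; the table size is assumed, and the checked-in header is what actually ships. smootherstep(x) = 6x^5 - 15x^4 + 10x^3 on [0, 1].

```c
#include <stdio.h>

int main(void) {
    enum { NSTEPS = 200 };  /* assumed table size */
    for (int i = 1; i <= NSTEPS; i++) {
        double x = (double)i / NSTEPS;
        double y = x * x * x * (x * (x * 6.0 - 15.0) + 10.0);
        printf("%.17g\n", y);
    }
    return 0;
}
```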
Jason Evans
34676d3369 Refactor prng* from cpp macros into inline functions.
Remove 32-bit variant, convert prng64() to prng_lg_range(), and add
prng_range().
2016-02-19 20:29:06 -08:00
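
A sketch of a prng_lg_range()-style step: one 64-bit LCG update with the result drawn from the high bits (the constants are Knuth's LCG constants, assumed here; requires 1 <= lg_range <= 64):

```c
#include <stdint.h>
#include <stdio.h>

static uint64_t
prng_lg_range_sketch(uint64_t *state, unsigned lg_range) {
    *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
    return *state >> (64 - lg_range);  /* high bits are the good bits */
}

int main(void) {
    uint64_t s = 42;
    printf("%llu\n", (unsigned long long)prng_lg_range_sketch(&s, 10));
    return 0;
}
```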
Jason Evans
9998000b2b Implement ticker.
Implement ticker, which provides a simple API for ticking off some
number of events before indicating that the ticker has hit its limit.
2016-02-19 20:29:06 -08:00
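
The ticker in miniature (field shapes assumed):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int32_t tick;
    int32_t nticks;
} ticker_sketch_t;

static void
ticker_init_sketch(ticker_sketch_t *t, int32_t nticks) {
    t->tick = t->nticks = nticks;
}

/* Returns true once per nticks calls; callers hang periodic work off
 * the true return. */
static bool
ticker_tick_sketch(ticker_sketch_t *t) {
    if (--t->tick == 0) {
        t->tick = t->nticks;  /* rearm */
        return true;
    }
    return false;
}

int main(void) {
    ticker_sketch_t t;
    int hits = 0;
    ticker_init_sketch(&t, 8);
    for (int i = 0; i < 64; i++)
        hits += ticker_tick_sketch(&t) ? 1 : 0;
    return hits != 8;  /* fires exactly once per 8 ticks */
}
```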
Jason Evans
94451d184b Flesh out time_*() API. 2016-02-19 20:29:06 -08:00
Cameron Evans
e5d5a4a517 Add time_update(). 2016-02-19 20:29:06 -08:00
Jason Evans
f829009929 Add --with-malloc-conf.
Add --with-malloc-conf, which makes it possible to embed a default
options string during configuration.
2016-02-19 20:29:06 -08:00
Jason Evans
49931bd8ff Fix test_stats_arenas_summary fragility.
Fix test_stats_arenas_summary to deallocate before asserting that
purging must have happened.
2016-02-19 20:29:06 -08:00
Jason Evans
b24f74b862 Don't rely on unpurged chunks in xallocx() test. 2016-02-19 17:02:25 -08:00
Joshua Kahn
710ca112e3 Add test for tree destruction 2015-11-09 15:56:24 -08:00
Joshua Kahn
13b4015531 Allow const keys for lookup
Signed-off-by: Steve Dougherty <sdougherty@barracuda.com>

This resolves #281.
2015-11-09 15:48:05 -08:00
Jason Evans
fed1f9f367 Fix intermittent xallocx() test failures.
Modify xallocx() tests that expect to expand in place to use a separate
arena.  This avoids the potential for interposed internal allocations
from e.g. heap profile sampling to disrupt the tests.

This resolves #286.
2015-10-01 13:48:09 -07:00
Jason Evans
044047fae1 Remove fragile xallocx() test case.
In addition to depending on map coalescing, the test depended on
munmap() being disabled so that chunk recycling would always succeed.
2015-09-24 19:52:28 -07:00
Jason Evans
03eb37e8fd Make mallocx() OOM test more robust.
Make mallocx() OOM testing work correctly even on systems that can
allocate the majority of virtual address space in a single contiguous
region.
2015-09-24 16:44:16 -07:00
Jason Evans
d260f442ce Fix xallocx(..., MALLOCX_ZERO) bugs.
Zero all trailing bytes of large allocations when
--enable-cache-oblivious configure option is enabled.  This regression
was introduced by 8a03cf039c (Implement
cache index randomization for large allocations.).

Zero trailing bytes of huge allocations when resizing from/to a size
class that is not a multiple of the chunk size.
2015-09-24 16:38:45 -07:00
Jason Evans
21523297fc Add mallocx() OOM tests. 2015-09-17 15:27:28 -07:00
Jason Evans
65b940a3c5 Loosen expected xallocx() results.
Systems that do not support chunk split/merge cannot shrink/grow huge
allocations in place.
2015-09-15 15:48:42 -07:00
Jason Evans
aca490f004 Add more xallocx() overflow tests. 2015-09-15 14:39:29 -07:00
Jason Evans
676df88e48 Rename arena_maxclass to large_maxclass.
arena_maxclass is no longer an appropriate name, because arenas also
manage huge allocations.
2015-09-11 20:50:20 -07:00
Jason Evans
560a4e1e01 Fix xallocx() bugs.
Fix xallocx() bugs related to the 'extra' parameter when specified as
non-zero.
2015-09-11 20:40:34 -07:00
Jason Evans
a00b10735a Fix "prof.reset" mallctl-related corruption.
Fix heap profiling to distinguish among otherwise identical sample sites
with interposed resets (triggered via the "prof.reset" mallctl).  This
bug could cause data structure corruption that would most likely result
in a segfault.
2015-09-09 23:16:10 -07:00
Jason Evans
594c759f37 Optimize arena_prof_tctx_set().
Optimize arena_prof_tctx_set() to avoid reading run metadata when
deciding whether it's actually necessary to write.
2015-09-02 14:52:24 -07:00
Mike Hommey
4a2a3c9a6e Don't purge junk filled chunks when shrinking huge allocations
When junk filling is enabled, shrinking an allocation fills the bytes
that were previously allocated but now aren't. Purging the chunk before
doing that is just a waste of time.

This resolves #260.
2015-08-27 22:00:09 -07:00
Christopher Ferris
45e9f66c28 Fix arenas_cache_cleanup().
Fix arenas_cache_cleanup() to handle allocation/deallocation within the
application's thread-specific data cleanup functions even after
arenas_cache is torn down.
2015-08-21 12:33:17 -07:00
Jason Evans
d01fd19755 Rename index_t to szind_t to avoid an existing type on Solaris.
This resolves #256.
2015-08-19 15:21:32 -07:00
Jason Evans
828d919b5e Fix test for MinGW. 2015-08-12 15:21:07 -07:00
Jason Evans
38d8f50144 Fix assertion in test. 2015-08-12 12:06:38 -07:00
Jason Evans
03bf5b67be Try to decommit new chunks.
Always leave decommit disabled on non-Windows systems.
2015-08-12 10:26:54 -07:00
Jason Evans
4be0c3ca42 Add no-OOM assertions to test. 2015-08-07 00:51:11 -07:00
Jason Evans
8fadb1a8c2 Implement chunk hook support for page run commit/decommit.
Cascade from decommit to purge when purging unused dirty pages, so that
it is possible to decommit cleaned memory rather than just purging.  For
non-Windows debug builds, decommit runs rather than purging them, since
this causes access of deallocated runs to segfault.

This resolves #251.
2015-08-07 00:50:58 -07:00
Jason Evans
b49a334a64 Generalize chunk management hooks.
Add the "arena.<i>.chunk_hooks" mallctl, which replaces and expands on
the "arena.<i>.chunk.{alloc,dalloc,purge}" mallctls.  The chunk hooks
allow control over chunk allocation/deallocation, decommit/commit,
purging, and splitting/merging, such that the application can rely on
jemalloc's internal chunk caching and retaining functionality, yet
implement a variety of chunk management mechanisms and policies.

Merge the chunks_[sz]ad_{mmap,dss} red-black trees into
chunks_[sz]ad_retained.  This slightly reduces how hard jemalloc tries
to honor the dss precedence setting; prior to this change the precedence
setting was also consulted when recycling chunks.

Fix chunk purging.  Don't purge chunks in arena_purge_stashed(); instead
deallocate them in arena_unstash_purged(), so that the dirty memory
linkage remains valid until after the last time it is used.

This resolves #176 and #201.
2015-08-03 21:49:02 -07:00
Jason Evans
d059b9d6a1 Implement support for non-coalescing maps on MinGW.
- Do not reallocate huge objects in place if the number of backing
  chunks would change.
- Do not cache multi-chunk mappings.

This resolves #213.
2015-07-24 18:39:14 -07:00
Jason Evans
40cbd30d50 Fix huge_ralloc_no_move() to succeed more often.
Fix huge_ralloc_no_move() to succeed if an allocation request results in
the same usable size as the existing allocation, even if the request
size is smaller than the usable size.  This bug did not cause
correctness issues, but it could cause unnecessary moves during
reallocation.
2015-07-24 18:20:48 -07:00
Jason Evans
5fae7dc1b3 Fix MinGW-related portability issues.
Create and use FMT* macros that are equivalent to the PRI* macros that
inttypes.h defines.  This allows uniform use of the Unix-specific format
specifiers, e.g. "%zu", as well as avoiding Windows-specific definitions
of e.g. PRIu64.

Add ffs()/ffsl() support for compiling with gcc.

Extract compatibility definitions of ENOENT, EINVAL, EAGAIN, EPERM,
ENOMEM, and ENORANGE into include/msvc_compat/windows_extra.h and
use the file for tests as well as for core jemalloc code.
2015-07-23 13:56:25 -07:00
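
A sketch of the FMT* scheme; the real macros are configure-driven and cover more types, and the exact spellings here are assumptions:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#ifdef _WIN32
#  define FMTu64 "I64u"   /* MSVC's I-prefixed forms */
#  define FMTzu  "Iu"
#else
#  define FMTu64 PRIu64   /* inttypes.h where it is reliable */
#  define FMTzu  "zu"
#endif

int main(void) {
    uint64_t v = 42;
    size_t n = 7;
    printf("%" FMTu64 " %" FMTzu "\n", v, n);
    return 0;
}
```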
Jason Evans
e475ff1600 Fix a compilation error.
This regression was introduced by
1b0e4abbfd (Port mq_get() to MinGW.).
2015-07-22 15:49:34 -07:00
Jason Evans
e42c309eba Add JEMALLOC_FORMAT_PRINTF().
Replace JEMALLOC_ATTR(format(printf, ...)) with
JEMALLOC_FORMAT_PRINTF(), so that configuration feature tests can
omit the attribute if it would cause extraneous compilation warnings.
2015-07-22 15:44:47 -07:00
Jason Evans
1b0e4abbfd Port mq_get() to MinGW. 2015-07-21 16:45:35 -07:00
Jason Evans
218b15cc29 Fix more MinGW build warnings. 2015-07-17 18:12:44 -07:00
Jason Evans
f2bc85298c Add the config.cache_oblivious mallctl. 2015-07-17 16:38:25 -07:00
Jason Evans
8693a9ea05 Add timer support for Windows. 2015-07-13 14:35:15 -07:00
Jason Evans
ae93d6bf36 Avoid function prototype incompatibilities.
Add various function attributes to the exported functions to give the
compiler more information to work with during optimization, and also
specify throw() when compiling with C++ on Linux, in order to adequately
match what __THROW does in glibc.

This resolves #237.
2015-07-10 16:09:40 -07:00
Jason Evans
dde067264d Fix an integer overflow bug in {size2index,s2u}_compute().
This {bug,regression} was introduced by
155bfa7da1 (Normalize size classes.).

This resolves #241.
2015-07-09 21:36:33 -07:00
Jason Evans
7ae1239177 Fix indentation. 2015-07-09 16:32:49 -07:00
Jason Evans
241abc601b Fix size class overflow handling when profiling is enabled.
Fix size class overflow handling for malloc(), posix_memalign(),
memalign(), calloc(), and realloc() when profiling is enabled.

Remove an assertion that erroneously caused arena_sdalloc() to fail when
profiling was enabled.

This resolves #232.
2015-06-23 18:56:14 -07:00
Jason Evans
56048baeb4 Clarify relationship between stats.resident and stats.mapped. 2015-05-29 19:21:10 -07:00
Jason Evans
fd5f9e43c3 Avoid atomic operations for dependent rtree reads. 2015-05-15 17:02:30 -07:00
Jason Evans
bd16ea49c3 Fix signed/unsigned comparison in arena_lg_dirty_mult_valid(). 2015-03-24 15:59:28 -07:00
Jason Evans
8d6a3e8321 Implement dynamic per arena control over dirty page purging.
Add mallctls:
- arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
  modified to change the initial lg_dirty_mult setting for newly created
  arenas.
- arena.<i>.lg_dirty_mult controls an individual arena's dirty page
  purging threshold, and synchronously triggers any purging that may be
  necessary to maintain the constraint.
- arena.<i>.chunk.purge allows the per arena dirty page purging function
  to be replaced.

This resolves #93.
2015-03-18 18:55:33 -07:00
Daniel Micay
d6384b09e1 use CLOCK_MONOTONIC in the timer if it's available
Linux sets _POSIX_MONOTONIC_CLOCK to 0 meaning it *might* be available,
so a sysconf check is necessary at runtime with a fallback to the
mandatory CLOCK_REALTIME clock.
2015-03-13 14:07:35 -07:00
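
The probe described above, as a self-contained helper: a value of 0 for _POSIX_MONOTONIC_CLOCK means "ask at runtime", so consult sysconf() and fall back to the mandatory CLOCK_REALTIME.

```c
#include <time.h>
#include <unistd.h>

static clockid_t
timer_clockid(void) {
#if defined(_POSIX_MONOTONIC_CLOCK) && _POSIX_MONOTONIC_CLOCK > 0
    return CLOCK_MONOTONIC;
#elif defined(_POSIX_MONOTONIC_CLOCK)
    if (sysconf(_SC_MONOTONIC_CLOCK) > 0)
        return CLOCK_MONOTONIC;
    return CLOCK_REALTIME;
#else
    return CLOCK_REALTIME;
#endif
}

int main(void) {
    struct timespec ts;
    return clock_gettime(timer_clockid(), &ts);
}
```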
Jason Evans
cb9b44914e Remove obsolete (incorrect) assertions.
This regression was introduced by
88fef7ceda (Refactor huge_*() calls into
arena internals.), and went undetected because of the --enable-debug
regression.
2015-02-15 20:13:28 -08:00
Jason Evans
cbf3a6d703 Move centralized chunk management into arenas.
Migrate all centralized data structures related to huge allocations and
recyclable chunks into arena_t, so that each arena can manage huge
allocations and recyclable virtual memory completely independently of
other arenas.

Add chunk node caching to arenas, in order to avoid contention on the
base allocator.

Use chunks_rtree to look up huge allocations rather than a red-black
tree.  Maintain a per arena unsorted list of huge allocations (which
will be needed to enumerate huge allocations during arena reset).

Remove the --enable-ivsalloc option, make ivsalloc() always available,
and use it for size queries if --enable-debug is enabled.  The only
practical implications to this removal are that 1) ivsalloc() is now
always available during live debugging (and the underlying radix tree is
available during core-based debugging), and 2) size query validation can
no longer be enabled independent of --enable-debug.

Remove the stats.chunks.{current,total,high} mallctls, and replace their
underlying statistics with simpler atomically updated counters used
exclusively for gdump triggering.  These statistics are no longer very
useful because each arena manages chunks independently, and per arena
statistics provide similar information.

Simplify chunk synchronization code, now that base chunk allocation
cannot cause recursive lock acquisition.
2015-02-12 00:15:56 -08:00
Jason Evans
9e561e8d3f Test and fix tcache ID recycling. 2015-02-10 09:03:48 -08:00
Jason Evans
1cb181ed63 Implement explicit tcache support.
Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be
used in conjunction with the *allocx() API.

Add the tcache.create, tcache.flush, and tcache.destroy mallctls.

This resolves #145.
2015-02-09 17:44:48 -08:00
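
A usage sketch of the new interfaces named in the commit:

```c
#include <stddef.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    unsigned tc;
    size_t sz = sizeof(tc);
    /* Create an explicit cache, route an allocation through it, then
     * tear it down. */
    if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
        return 1;
    void *p = mallocx(64, MALLOCX_TCACHE(tc));
    if (p != NULL)
        dallocx(p, MALLOCX_TCACHE(tc));
    return mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
}
```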
Jason Evans
8d0e04d42f Refactor rtree to be lock-free.
Recent huge allocation refactoring associates huge allocations with
arenas, but it remains necessary to quickly look up huge allocation
metadata during reallocation/deallocation.  A global radix tree remains
a good solution to this problem, but locking would have become the
primary bottleneck after (upcoming) migration of chunk management from
global to per arena data structures.

This lock-free implementation uses double-checked reads to traverse the
tree, so that in the steady state, each read or write requires only a
single atomic operation.

This implementation also assures that no more than two tree levels
actually exist, through a combination of careful virtual memory
allocation which makes large sparse nodes cheap, and skipping the root
node on x64 (possible because the top 16 bits are all 0 in practice).
2015-02-04 16:51:53 -08:00
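
The double-checked read in isolation (slot layout simplified): try the cheap relaxed load first; only a NULL result forces the synchronized re-read.

```c
#include <stdatomic.h>
#include <stddef.h>

static void *
rtree_child_read_sketch(_Atomic(void *) *slot) {
    void *child = atomic_load_explicit(slot, memory_order_relaxed);
    if (child == NULL) {
        /* It may have just been published by a writer. */
        child = atomic_load_explicit(slot, memory_order_acquire);
    }
    return child;  /* still NULL => genuinely absent */
}

int main(void) {
    static _Atomic(void *) slot;
    return rtree_child_read_sketch(&slot) != NULL;
}
```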
Jason Evans
a55dfa4b0a Implement more atomic operations.
- atomic_*_p().
- atomic_cas_*().
- atomic_write_*().
2015-02-04 16:50:05 -08:00
Jason Evans
5b8ed5b7c9 Implement the prof.gdump mallctl.
This feature makes it possible to toggle the gdump feature on/off during
program execution, whereas the opt.prof_gdump option can only be set
during program startup.

This resolves #72.
2015-01-25 21:21:35 -08:00
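
A usage sketch (only meaningful in --enable-prof builds with profiling active):

```c
#include <stdbool.h>
#include <stddef.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Toggle gdump on at runtime. */
    bool enable = true;
    return mallctl("prof.gdump", NULL, NULL, &enable, sizeof(enable));
}
```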
Guilherme Goncalves
2c5cb613df Introduce two new modes of junk filling: "alloc" and "free".
In addition to true/false, opt.junk can now be either "alloc" or "free",
giving applications the possibility of junking memory only on allocation
or deallocation.

This resolves #172.
2014-12-14 17:07:26 -08:00
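
A usage sketch reading the new string-valued option back; it is set at startup, e.g. by running the program with MALLOC_CONF=junk:free in the environment.

```c
#include <stdio.h>
#include <stddef.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    const char *junk;
    size_t sz = sizeof(junk);
    if (mallctl("opt.junk", &junk, &sz, NULL, 0) == 0)
        printf("opt.junk = %s\n", junk);
    return 0;
}
```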
Jason Evans
e12eaf93dc Style and spelling fixes. 2014-12-08 16:34:04 -08:00
Yuriy Kaminskiy
f79e01f75b Fix test_stats_arenas_bins for 32-bit builds. 2014-12-02 16:27:15 -08:00
Jason Evans
acbcbad1e1 Thwart compiler optimizations. 2014-10-15 14:49:14 -07:00
Jason Evans
3c4d92e82a Add per size class huge allocation statistics.
Add per size class huge allocation statistics, and normalize various
stats:
- Change the arenas.nlruns type from size_t to unsigned.
- Add the arenas.nhchunks and arenas.hchunks.<i>.size mallctl's.
- Replace the stats.arenas.<i>.bins.<j>.allocated mallctl with
  stats.arenas.<i>.bins.<j>.curregs .
- Add the stats.arenas.<i>.hchunks.<j>.nmalloc,
  stats.arenas.<i>.hchunks.<j>.ndalloc,
  stats.arenas.<i>.hchunks.<j>.nrequests, and
  stats.arenas.<i>.hchunks.<j>.curhchunks mallctl's.
2014-10-12 23:02:10 -07:00
Jason Evans
9b75677e53 Don't fetch tsd in a0{d,}alloc().
Don't fetch tsd in a0{d,}alloc(), because doing so can cause infinite
recursion on systems that require an allocated tsd wrapper.
2014-10-10 18:19:20 -07:00
Jason Evans
fc0b3b7383 Add configure options.
Add:
  --with-lg-page
  --with-lg-page-sizes
  --with-lg-size-class-group
  --with-lg-quantum

Get rid of STATIC_PAGE_SHIFT, in favor of directly setting LG_PAGE.

Fix various edge conditions exposed by the configure options.
2014-10-09 22:44:37 -07:00
Jason Evans
b123ddc760 Don't configure HAVE_SSE2.
Don't configure HAVE_SSE2 (on behalf of SFMT), because its dependencies
are notoriously unportable in practice.

This resolves #119.
2014-10-08 18:18:03 -07:00
Jason Evans
57efa7bb0e Avoid atexit(3) when possible, disable prof_final by default.
atexit(3) can deadlock internally during its own initialization if
jemalloc calls atexit() during jemalloc initialization.  Mitigate the
impact by restructuring prof initialization to avoid calling atexit()
unless the registered function will actually dump a final heap profile.

Additionally, disable prof_final by default so that this land mine is
opt-in rather than opt-out.

This resolves #144.
2014-10-08 18:08:00 -07:00
Daniel Micay
f22214a29d Use regular arena allocation for huge tree nodes.
This avoids grabbing the base mutex, as a step towards fine-grained
locking for huge allocations. The thread cache also provides a tiny
(~3%) improvement for serial huge allocations.
2014-10-07 23:57:09 -07:00
Jason Evans
8bb3198f72 Refactor/fix arenas manipulation.
Abstract arenas access to use arena_get() (or a0get() where appropriate)
rather than directly reading e.g. arenas[ind].  Prior to the addition of
the arenas.extend mallctl, the worst possible outcome of directly
accessing arenas was a stale read, but arenas.extend may allocate and
assign a new array to arenas.

Add a tsd-based arenas_cache, which amortizes arenas reads.  This
introduces some subtle bootstrapping issues, with tsd_boot() now being
split into tsd_boot[01]() to support tsd wrapper allocation
bootstrapping, as well as an arenas_cache_bypass tsd variable which
dynamically terminates allocation of arenas_cache itself.

Promote a0malloc(), a0calloc(), and a0free() to be generally useful for
internal allocation, and use them in several places (more may be
appropriate).

Abstract arena->nthreads management and fix a missing decrement during
thread destruction (recent tsd refactoring left arenas_cleanup()
unused).

Change arena_choose() to propagate OOM, and handle OOM in all callers.
This is important for providing consistent allocation behavior when the
MALLOCX_ARENA() flag is being used.  Prior to this fix, it was possible
for an OOM to silently result in allocation from a different arena than
the one specified.
2014-10-07 23:14:57 -07:00
Jason Evans
155bfa7da1 Normalize size classes.
Normalize size classes to use the same number of size classes per size
doubling (currently hard coded to 4), across the entire range of size
classes.  Small size classes already used this spacing, but in order to
support this change, additional small size classes now fill [4 KiB .. 16
KiB).  Large size classes range from [16 KiB .. 4 MiB).  Huge size
classes now support non-multiples of the chunk size in order to fill (4
MiB .. 16 MiB).
2014-10-06 01:45:13 -07:00
Daniel Micay
a95018ee81 Attempt to expand huge allocations in-place.
This adds support for expanding huge allocations in-place by requesting
memory at a specific address from the chunk allocator.

It's currently only implemented for the chunk recycling path, although
in theory it could also be done by optimistically allocating new chunks.
On Linux, it could attempt an in-place mremap. However, that won't work
in practice since the heap is grown downwards and memory is not unmapped
(in a normal build, at least).

Repeated vector reallocation micro-benchmark:

    #include <string.h>
    #include <stdlib.h>

    int main(void) {
        for (size_t i = 0; i < 100; i++) {
            void *ptr = NULL;
            size_t old_size = 0;
            for (size_t size = 4; size < (1 << 30); size *= 2) {
                ptr = realloc(ptr, size);
                if (!ptr) return 1;
                memset((char *)ptr + old_size, 0xff, size - old_size);
                old_size = size;
            }
            free(ptr);
        }
    }

The glibc allocator fails to do any in-place reallocations on this
benchmark once it passes the M_MMAP_THRESHOLD (default 128k) but it
elides the cost of copies via mremap, which is currently not something
that jemalloc can use.

With this improvement, jemalloc still fails to do any in-place huge
reallocations for the first outer loop, but then succeeds 100% of the
time for the remaining 99 iterations. The time spent doing allocations
and copies drops down to under 5%, with nearly all of it spent doing
purging + faulting (when huge pages are disabled) and the array memset.

An improved mremap API (MREMAP_RETAIN - #138) would be far more general
but this is a portable optimization and would still be useful on Linux
for xallocx.

Numbers with transparent huge pages enabled:

glibc (copies elided via MREMAP_MAYMOVE): 8.471s

jemalloc: 17.816s
jemalloc + no-op madvise: 13.236s

jemalloc + this commit: 6.787s
jemalloc + this commit + no-op madvise: 6.144s

Numbers with transparent huge pages disabled:

glibc (copies elided via MREMAP_MAYMOVE): 15.403s

jemalloc: 39.456s
jemalloc + no-op madvise: 12.768s

jemalloc + this commit: 15.534s
jemalloc + this commit + no-op madvise: 6.354s

Closes #137
2014-10-05 14:47:01 -07:00
Jason Evans
47395a1b4c Avoid purging in microbench when lazy-lock is enabled. 2014-10-04 14:59:38 -07:00
Jason Evans
029d44cf8b Fix tsd cleanup regressions.
Fix tsd cleanup regressions that were introduced in
5460aa6f66 (Convert all tsd variables to
reside in a single tsd structure.).  These regressions were twofold:

1) tsd_tryget() should never (and need never) return NULL.  Rename it to
   tsd_fetch() and simplify all callers.
2) tsd_*_set() must only be called when tsd is in the nominal state,
   because cleanup happens during the nominal-->purgatory transition,
   and re-initialization must not happen while in the purgatory state.
   Add tsd_nominal() and use it as needed.  Note that tsd_*{p,}_get()
   can still be used as long as no re-initialization that would require
   cleanup occurs.  This means that e.g. the thread_allocated counter
   can be updated unconditionally.
2014-10-04 11:22:55 -07:00
Jason Evans
b72d4abc5f Skip test_prof_thread_name_validation if !config_prof. 2014-10-03 23:41:53 -07:00
Jason Evans
fc12c0b8bc Implement/test/fix prof-related mallctl's.
Implement/test/fix the opt.prof_thread_active_init,
prof.thread_active_init, and thread.prof.active mallctl's.

Test/fix the thread.prof.name mallctl.

Refactor opt_prof_active to be read-only and move mutable state into the
prof_active variable.  Stop leaning on ctl-related locking for
protection.
2014-10-03 23:25:30 -07:00
Jason Evans
551ebc4364 Convert to uniform style: cond == false --> !cond 2014-10-03 10:16:09 -07:00
Jason Evans
ebbd0c91f0 Remove obsolete comment. 2014-10-02 23:05:23 -07:00
Jason Evans
20c31deaae Test prof.reset mallctl and fix numerous discovered bugs. 2014-10-02 23:01:10 -07:00
Jason Evans
cc9e626ea9 Refactor permuted backtrace test allocation.
Refactor permuted backtrace test allocation that was originally used
only by the prof_accum test, so that it can be used by other heap
profiling test binaries.
2014-10-01 22:28:23 -07:00
Jason Evans
f97e5ac4ec Implement compile-time bitmap size computation. 2014-09-28 14:43:11 -07:00
Jason Evans
5460aa6f66 Convert all tsd variables to reside in a single tsd structure. 2014-09-23 02:36:08 -07:00
Daniel Micay
4cfe55166e Add support for sized deallocation.
This adds a new `sdallocx` function to the external API, allowing the
size to be passed by the caller.  It avoids some extra reads in the
thread cache fast path.  In the case where stats are enabled, this
avoids the work of calculating the size from the pointer.

An assertion validates the size that's passed in, so enabling debugging
will allow users of the API to debug cases where an incorrect size is
passed in.

The performance win for a contrived microbenchmark doing an allocation
and immediately freeing it is ~10%.  It may have a different impact on a
real workload.

Closes #28
2014-09-08 17:34:24 -07:00
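
A usage sketch of the new entry point:

```c
#include <jemalloc/jemalloc.h>

int main(void) {
    void *p = mallocx(128, 0);
    if (p != NULL) {
        /* The caller already knows the size; handing it back lets the
         * fast path skip the size lookup entirely. */
        sdallocx(p, 128, 0);
    }
    return 0;
}
```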
Jason Evans
c3f8650749 Add relevant function attributes to [msn]allocx(). 2014-09-08 16:47:51 -07:00
Jason Evans
a1f3929ffd Thwart optimization of free(malloc(1)) in microbench. 2014-09-08 16:23:48 -07:00
Daniel Micay
c3bfe9569a avoid conflict with the POSIX timer_t type
It hits a compilation error with glibc 2.19 without a rename.
2014-09-08 01:20:44 -04:00
Jason Evans
423d78a21b Add microbench tests. 2014-09-07 19:58:04 -07:00
Jason Evans
b67ec3c497 Add a simple timer implementation for use in benchmarking. 2014-09-07 19:57:24 -07:00
Jason Evans
c21b05ea09 Whitespace cleanups. 2014-09-04 22:27:26 -07:00
Jason Evans
1628e8615e Add rb_empty(). 2014-08-19 21:05:54 -07:00
Jason Evans
586c8ede42 Fix arena.<i>.dss mallctl to handle read-only calls. 2014-08-15 12:20:20 -07:00
Jason Evans
a2ea54c986 Add atomic operations tests and fix latent bugs. 2014-08-06 23:36:19 -07:00
Mike Hommey
999e1b5cc7 Fix thd_join on win64 2014-06-01 20:50:24 -07:00
Jason Evans
1f6d77e1f6 Use KQU() rather than QU() where applicable.
Fix KZI() and KQI() to append LL rather than ULL.
2014-05-28 21:17:42 -07:00
Jason Evans
99118622ff Use nallocx() rather than mallctl() to trigger initialization.
Use nallocx() rather than mallctl() to trigger initialization, because
nallocx() has no side effects other than initialization, whereas
mallctl() does a bunch of internal memory allocation.
2014-05-28 11:23:01 -07:00
Jason Evans
26f44df742 Make sure initialization occurs prior to running tests. 2014-05-28 11:08:17 -07:00
Mike Hommey
b54aef1d8c Fixup after 3a730df (Avoid pointer arithmetic on void*[...]) 2014-05-28 09:46:09 -07:00
Mike Hommey
17767b5f2b Correctly return exit code from thd_join on Windows 2014-05-28 09:43:30 -07:00
Mike Hommey
26246af977 Define INFINITY when it's not defined 2014-05-28 09:41:28 -07:00
Mike Hommey
12f74e680c Move platform headers and tricks from jemalloc_internal.h.in to a new jemalloc_internal_decls.h header 2014-05-28 09:38:10 -07:00
Mike Hommey
a9df1ae622 Use the ULL suffix instead of LLU for unsigned long longs
MSVC only supports the former.
2014-05-27 15:45:14 -07:00
Mike Hommey
3a730dfd50 Avoid pointer arithmetic on void* in test/integration/rallocx.c 2014-05-27 15:26:28 -07:00
Mike Hommey
86e2e703ff Rename "small" local variable, because windows headers #define it 2014-05-27 15:20:31 -07:00
Mike Hommey
7330c3770a Use C99 variadic macros instead of GCC ones 2014-05-27 15:17:00 -07:00
Mike Hommey
f41f143668 Replace variable arrays in tests with VARIABLE_ARRAY 2014-05-27 15:10:38 -07:00
Mike Hommey
47d58a01ff Define _CRT_SPINCOUNT in test/src/mtx.c like in src/mutex.c 2014-05-27 15:05:05 -07:00
Jason Evans
e2deab7a75 Refactor huge allocation to be managed by arenas.
Refactor huge allocation to be managed by arenas (though the global
red-black tree of huge allocations remains for lookup during
deallocation).  This is the logical conclusion of recent changes that 1)
made per arena dss precedence apply to huge allocation, and 2) made it
possible to replace the per arena chunk allocation/deallocation
functions.

Remove the top level huge stats, and replace them with per arena huge
stats.

Normalize function names and types to *dalloc* (some were *dealloc*).

Remove the --enable-mremap option.  As jemalloc currently operates, this
is a performance regression for some applications, but planned work to
logarithmically space huge size classes should provide similar amortized
performance.  The motivation for this change was that mremap-based huge
reallocation forced leaky abstractions that prevented refactoring.
2014-05-15 22:36:41 -07:00
aravind
fb7fe50a88 Add support for user-specified chunk allocators/deallocators.
Add new mallctl endpoints "arena.<i>.chunk.alloc" and
"arena.<i>.chunk.dealloc" to allow userspace to configure
jemalloc's chunk allocator and deallocator on a per-arena
basis.
2014-05-12 10:46:03 -07:00
Jason Evans
a344dd01c7 Fix coding style nits. 2014-05-01 15:51:30 -07:00
Jason Evans
ecd3e59ca3 Remove the "opt.valgrind" mallctl.
Remove the "opt.valgrind" mallctl because it is unnecessary -- jemalloc
automatically detects whether it is running inside valgrind.
2014-04-15 14:33:50 -07:00
Jason Evans
a2c719b374 Remove the "arenas.purge" mallctl.
Remove the "arenas.purge" mallctl, which was obsoleted by the
"arena.<i>.purge" mallctl in 3.1.0.
2014-04-15 12:46:28 -07:00
Jason Evans
4d434adb14 Make dss non-optional, and fix an "arena.<i>.dss" mallctl bug.
Make dss non-optional on all platforms which support sbrk(2).

Fix the "arena.<i>.dss" mallctl to return an error if "primary" or
"secondary" precedence is specified, but sbrk(2) is not supported.
2014-04-15 12:09:48 -07:00
Jason Evans
9790b9667f Remove the *allocm() API, which is superseded by the *allocx() API. 2014-04-14 22:32:31 -07:00
Jason Evans
e64b1b7be9 Enable big-endian mode for SFMT.
Add cpp logic to enable big-endian mode in SFMT.  This should fix SFMT
tests on e.g. MIPS and SPARC.
2014-03-30 17:24:24 -07:00
Jason Evans
df3f27024f Adapt hash tests to big-endian systems.
The hash code, which has MurmurHash3 at its core, generates different
output depending on system endianness, so adapt the expected output on
big-endian systems.  MurmurHash3 code also makes the assumption that
unaligned access is okay (not true on all systems), but jemalloc only
hashes data structures that have sufficient alignment to dodge this
limitation.
2014-03-30 16:27:08 -07:00
Jason Evans
ada8447cf6 Reduce maximum tested alignment.
Reduce maximum tested alignment from 2^29 to 2^25.  Some systems may not
have enough contiguous virtual memory to satisfy the larger alignment,
but the smaller alignment is still adequate to test multi-chunk
alignment.
2014-03-30 11:22:23 -07:00
Jason Evans
ab8c79fdaf Fix message formatting errors uncovered by p_test_fail() refactoring. 2014-03-30 11:21:09 -07:00
Jason Evans
e3f27cfced Fix p_test_fail()'s va_list abuse.
p_test_fail() was passing a va_list to two separate functions with the
expectation that no reset would occur.  Refactor p_test_fail()'s callers
to instead format two strings and pass them to p_test_fail().

Add a missing parameter to an assert_u64_eq() call, which the compiler
warned about after the assertion macro refactoring.
2014-03-29 23:14:32 -07:00
Jason Evans
c2dcfd8ded Convert ALLOCM_ARENA() test to MALLOCX_ARENA() test. 2014-03-28 10:40:03 -07:00
Jason Evans
9e20df163c Remove duplicate 'static' keyword.
Reported by İsmail Dönmez.
2014-02-26 10:19:18 -08:00
Jason Evans
b037a55f36 Restore tail call optimization subversion.
Restore the essence of 898960247a, which
sabotages tail call optimization.  This is necessary even when the
mutually recursive functions are in separate compilation units.
2014-02-25 16:11:15 -08:00
Jason Evans
940fdfd5ee Fix junk filling for mremap(2)-based huge reallocation.
If mremap(2) is used for huge reallocation, physical pages are mapped to
new virtual addresses rather than data being copied to new pages.  This
bypasses the normal junk filling that would happen during allocation, so
add junk filling that is specific to this case.
2014-02-25 12:37:25 -08:00
Jason Evans
ad47e8996e Break prof_accum into multiple compilation units.
Break prof_accum into multiple compilation units, in order to thwart
compiler optimizations such as inlining and tail call optimization that
would alter backtraces.
2014-02-24 22:00:10 -08:00
Jason Evans
526e4a59a2 Prevent inlining of backtraced test functions.
Inlining of alloc_0() and alloc_1() would prevent generation of unique
backtraces, upon which the test code relies.
2014-01-29 10:58:32 -08:00