Commit Graph

3368 Commits

Author SHA1 Message Date
Jason Evans
8c9be3e837 Refactor rtree to always use base_alloc() for node allocation. 2016-06-03 12:27:41 -07:00
Jason Evans
db72272bef Use rtree-based chunk lookups rather than pointer bit twiddling.
Look up chunk metadata via the radix tree, rather than using
CHUNK_ADDR2BASE().

Propagate pointer's containing extent.

Minimize extent lookups by doing a single lookup (e.g. in free()) and
propagating the pointer's extent into nearly all the functions that may
need it.
2016-06-03 12:27:41 -07:00
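
For context, CHUNK_ADDR2BASE()-style lookups recover a chunk's base purely by masking pointer bits, which only works while all metadata hangs off chunk-aligned addresses; an rtree lookup instead maps an arbitrary pointer to its extent through a radix tree. A minimal sketch of the contrast, with a toy one-level table standing in for jemalloc's rtree and an assumed 2 MiB chunk size:

```c
#include <stdint.h>
#include <stdio.h>

#define LG_CHUNK 21u	/* assumed 2 MiB chunks, for illustration only */
#define CHUNK_MASK ((((uintptr_t)1) << LG_CHUNK) - 1)

/* Old style: derive the chunk base from the pointer's bits alone. */
static void *
chunk_addr2base(const void *ptr) {
	return (void *)((uintptr_t)ptr & ~CHUNK_MASK);
}

/* New style: map the pointer's leading address bits through a radix tree.
 * A toy one-level table stands in for jemalloc's multi-level rtree. */
typedef struct { const char *name; } extent_t;

#define TOY_KEYS 16
static extent_t *toy_rtree[TOY_KEYS];

static extent_t *
rtree_lookup_toy(const void *ptr) {
	return toy_rtree[((uintptr_t)ptr >> LG_CHUNK) % TOY_KEYS];
}

int
main(void) {
	static extent_t ext = {"demo extent"};
	void *p = (void *)(((uintptr_t)7 << LG_CHUNK) | 0x1234);

	toy_rtree[((uintptr_t)p >> LG_CHUNK) % TOY_KEYS] = &ext;
	printf("bit-twiddled base: %p\n", chunk_addr2base(p));
	printf("rtree lookup:      %s\n", rtree_lookup_toy(p)->name);
	return 0;
}
```
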
Jason Evans
2d2b4e98c9 Add element acquire/release capabilities to rtree.
This makes it possible to acquire short-term "ownership" of rtree
elements so that it is possible to read an extent pointer *and* read the
extent's contents with a guarantee that the element will not be modified
until the ownership is released.  This is intended as a mechanism for
resolving rtree read/write races rather than as a way to lock extents.
2016-06-03 12:27:33 -07:00
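
A sketch of the acquire/release idea in terms of a per-element lock bit: acquiring an element lets the caller read the extent pointer and then the extent's contents without the element changing underneath it. The names, the spin lock, and the layout here are illustrative, not jemalloc's actual implementation:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct extent_s extent_t;

/* Hypothetical rtree element: an extent pointer plus a lock bit granting
 * short-term ownership to one reader or writer at a time. */
typedef struct {
	_Atomic(extent_t *) extent;
	atomic_flag         locked;
} rtree_elm_t;

static extent_t *
rtree_elm_acquire(rtree_elm_t *elm) {
	while (atomic_flag_test_and_set_explicit(&elm->locked,
	    memory_order_acquire)) {
		/* Spin; a real implementation would back off or yield. */
	}
	return atomic_load_explicit(&elm->extent, memory_order_relaxed);
}

static void
rtree_elm_release(rtree_elm_t *elm) {
	atomic_flag_clear_explicit(&elm->locked, memory_order_release);
}

int
main(void) {
	rtree_elm_t elm = { NULL, ATOMIC_FLAG_INIT };
	extent_t *ext = rtree_elm_acquire(&elm);
	/* ... safe to read the extent's contents here ... */
	(void)ext;
	rtree_elm_release(&elm);
	return 0;
}
```
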
Jason Evans
f4a58847d3 Remove obsolete reference to Valgrind and quarantine. 2016-06-03 12:20:02 -07:00
Jason Evans
a7a6f5bc96 Rename extent_node_t to extent_t. 2016-05-16 12:21:28 -07:00
Jason Evans
3aea827f5e Simplify run quantization. 2016-05-16 12:21:27 -07:00
Jason Evans
7bb00ae9d6 Refactor runs_avail.
Use pszind_t size classes rather than szind_t size classes, and always
reserve space for NPSIZES elements.  This removes unused heaps whose size
classes are not multiples of the page size, and adds (currently) unused heaps for
all huge size classes, with the immediate benefit that the size of
arena_t allocations is constant (no longer dependent on chunk size).
2016-05-16 12:21:21 -07:00
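
The structural effect described above is that the per-arena runs_avail heaps become a fixed-size array indexed by page-size-class index. A rough sketch of that shape, with a stand-in heap type and an illustrative NPSIZES; the real arena_t has many more fields:

```c
#include <stddef.h>

typedef unsigned pszind_t;
typedef struct { int dummy; } arena_run_heap_t;  /* stand-in for the real heap */

#define NPSIZES 64  /* illustrative count of page-multiple size classes */

/* One heap per page size class, so sizeof(arena_t) is a compile-time
 * constant and no longer depends on the configured chunk size. */
typedef struct {
	/* ... other arena fields ... */
	arena_run_heap_t runs_avail[NPSIZES];
} arena_t;

arena_run_heap_t *
arena_runs_avail_get(arena_t *arena, pszind_t pind) {
	return &arena->runs_avail[pind];
}

int
main(void) {
	static arena_t arena;
	return arena_runs_avail_get(&arena, 0) == &arena.runs_avail[0] ? 0 : 1;
}
```
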
Jason Evans
226c446979 Implement pz2ind(), pind2sz(), and psz2u().
These compute size classes and indices similarly to size2index(),
index2size() and s2u(), respectively, but using the subset of size
classes that are multiples of the page size.  Note that pszind_t and
szind_t are not interchangeable.
2016-05-13 10:31:54 -07:00
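
A toy model of the three functions' contracts, assuming (purely for illustration) that the page size classes are plain multiples of a 4 KiB page rather than jemalloc's actual grouped classes. The invariants that matter are pind2sz(pz2ind(s)) >= s and psz2u(s) == pind2sz(pz2ind(s)):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE ((size_t)4096)	/* assumed page size, for illustration */

typedef unsigned pszind_t;

/* Toy pz2ind()/pind2sz()/psz2u() in which the page size classes are simply
 * 1, 2, 3, ... pages; only the contracts are modeled here. */
static pszind_t
pz2ind(size_t psz) {
	return (pszind_t)((psz + PAGE - 1) / PAGE - 1);
}

static size_t
pind2sz(pszind_t pind) {
	return ((size_t)pind + 1) * PAGE;
}

static size_t
psz2u(size_t psz) {
	return pind2sz(pz2ind(psz));
}

int
main(void) {
	assert(pz2ind(1) == 0 && pind2sz(0) == PAGE);
	assert(psz2u(PAGE + 1) == 2 * PAGE);	/* rounds up to a class */
	assert(pind2sz(pz2ind(123456)) >= 123456);
	return 0;
}
```
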
Jason Evans
627372b459 Initialize arena_bin_info at compile time rather than at boot time.
This resolves #370.
2016-05-13 10:31:30 -07:00
Jason Evans
b683734b43 Implement BITMAP_INFO_INITIALIZER(nbits).
This allows static initialization of bitmap_info_t structures.
2016-05-13 10:27:48 -07:00
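
The point of such an initializer is that a bitmap descriptor can be a compile-time constant instead of being computed at boot by a bitmap_info_init()-style call. A sketch of the idea with a simplified descriptor, not jemalloc's actual bitmap_info_t layout:

```c
#include <stddef.h>

#define LG_BITMAP_GROUP_NBITS 6
#define BITMAP_GROUP_NBITS (1u << LG_BITMAP_GROUP_NBITS)

/* Simplified descriptor: number of bits and number of 64-bit groups. */
typedef struct {
	size_t nbits;
	size_t ngroups;
} bitmap_info_t;

/* Constant expression, so static/const descriptors need no boot-time init. */
#define BITMAP_INFO_INITIALIZER(nbits)					\
	{ (nbits), ((nbits) + BITMAP_GROUP_NBITS - 1) / BITMAP_GROUP_NBITS }

/* e.g. describe a 200-bit bitmap entirely at compile time: */
static const bitmap_info_t demo_binfo = BITMAP_INFO_INITIALIZER(200);

int
main(void) {
	return (demo_binfo.nbits == 200 && demo_binfo.ngroups == 4) ? 0 : 1;
}
```
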
Jason Evans
17c021c177 Remove redzone support.
This resolves #369.
2016-05-13 10:27:33 -07:00
Jason Evans
ba5c709517 Remove quarantine support. 2016-05-13 10:25:05 -07:00
Jason Evans
9a8add1510 Remove Valgrind support. 2016-05-13 09:56:18 -07:00
Jason Evans
a397045323 Use TSDN_NULL rather than NULL as appropriate. 2016-05-12 21:07:08 -07:00
Jason Evans
dc7ff6306d Fix a typo. 2016-05-12 15:06:50 -07:00
Jason Evans
f70a254d44 Merge branch 'dev' 2016-05-12 14:53:25 -07:00
Jason Evans
09f8585ce8 Update ChangeLog for 4.2.0. 2016-05-12 14:23:50 -07:00
Jason Evans
1c35f63797 Guard tsdn_tsd() call with tsdn_null() check. 2016-05-11 16:52:58 -07:00
Jason Evans
0fc1317fc6 Mangle tested functions as n_witness_* rather than witness_*_impl. 2016-05-11 16:14:20 -07:00
Jason Evans
73d3d58dc2 Optimize witness fast path.
Short-circuit commonly called witness functions so that they only
execute in debug builds, and remove equivalent guards from mutex
functions.  This avoids pointless code execution in
witness_assert_lockless(), which is typically called twice per
allocation/deallocation function invocation.

Inline commonly called witness functions so that optimized builds can
completely remove calls as dead code.
2016-05-11 15:38:06 -07:00
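
A sketch of the short-circuit pattern described above: gate the body on a compile-time constant so that, combined with inlining, optimized builds drop the call entirely as dead code. config_debug and the function body are simplified stand-ins for the real witness code:

```c
#include <stdbool.h>

/* Compile-time constant: false in release builds, so the inlined call
 * below reduces to nothing; only debug builds pay for the checks. */
#ifdef JEMALLOC_DEBUG
static const bool config_debug = true;
#else
static const bool config_debug = false;
#endif

static inline void
witness_assert_lockless(void) {
	if (!config_debug) {
		return;
	}
	/* ... walk the thread's witness list and assert nothing is held ... */
}

int
main(void) {
	witness_assert_lockless();	/* compiles away in non-debug builds */
	return 0;
}
```
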
Jason Evans
7790a0ba40 Fix chunk accounting related to triggering gdump profiles.
Fix in place huge reallocation to update the chunk counters that are
used for triggering gdump profiles.
2016-05-11 00:56:30 -07:00
Jason Evans
3a9ec67626 Disable junk filling for tests that could otherwise easily OOM. 2016-05-11 00:52:16 -07:00
Jason Evans
c1e00ef2a6 Resolve bootstrapping issues when embedded in FreeBSD libc.
b2c0d6322d (Add witness, a simple online
locking validator.) caused a broad propagation of tsd throughout the
internal API, but tsd_fetch() was designed to fail prior to tsd
bootstrapping.  Fix this by splitting tsd_t into non-nullable tsd_t and
nullable tsdn_t, and modifying all internal APIs that do not critically
rely on tsd to take nullable pointers.  Furthermore, add the
tsd_booted_get() function so that tsdn_fetch() can probe whether tsd
bootstrapping is complete and return NULL if not.  All dangerous
conversions of nullable pointers are tsdn_tsd() calls that assert-fail
on invalid conversion.
2016-05-10 22:51:33 -07:00
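
A sketch of the nullable/non-nullable split described above, with simplified stand-ins for tsd_t, tsdn_t, and the fetch/convert helpers; internal functions that do not critically need tsd take the nullable form and check tsdn_null() before converting:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct { int dummy; } tsd_t;	/* never NULL once bootstrapped */
typedef tsd_t tsdn_t;			/* same object, but may be NULL */

static bool tsd_booted = false;
static tsd_t tsd_instance;

static bool
tsd_booted_get(void) {
	return tsd_booted;
}

/* Nullable fetch: safe to call before tsd bootstrapping completes. */
static tsdn_t *
tsdn_fetch(void) {
	if (!tsd_booted_get()) {
		return NULL;
	}
	return &tsd_instance;
}

static bool
tsdn_null(const tsdn_t *tsdn) {
	return tsdn == NULL;
}

/* The only conversion back to a non-nullable tsd_t; asserts on misuse. */
static tsd_t *
tsdn_tsd(tsdn_t *tsdn) {
	assert(!tsdn_null(tsdn));
	return tsdn;
}

int
main(void) {
	assert(tsdn_null(tsdn_fetch()));	/* before bootstrapping */
	tsd_booted = true;
	assert(tsdn_tsd(tsdn_fetch()) == &tsd_instance);
	return 0;
}
```
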
Jason Evans
0c12dcabc5 Fix tsd bootstrapping for a0malloc(). 2016-05-07 16:55:36 -07:00
Jason Evans
919e4a0ea9 Add LG_QUANTUM definition for the RISC-V architecture. 2016-05-06 17:15:32 -07:00
Jason Evans
62c217e613 Update ChangeLog. 2016-05-06 15:22:32 -07:00
Jason Evans
1326010cf4 Update private_symbols.txt. 2016-05-06 14:50:58 -07:00
Jason Evans
3ef51d7f73 Optimize the fast paths of calloc() and [m,d,sd]allocx().
This is a broader application of optimizations to malloc() and free() in
f4a0f32d34 (Fast-path improvement:
reduce # of branches and unnecessary operations.).

This resolves #321.
2016-05-06 14:37:39 -07:00
Jason Evans
c2f970c32b Modify pages_map() to support mapping uncommitted virtual memory.
If the OS overcommits:
- Commit all mappings in pages_map() regardless of whether the caller
  requested committed memory.
- Linux-specific: Specify MAP_NORESERVE to avoid
  unfortunate interactions with heuristic overcommit mode during
  fork(2).

This resolves #193.
2016-05-05 18:56:17 -07:00
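
A sketch of the Linux-specific piece: request the mapping with MAP_NORESERVE so that, under heuristic overcommit, the kernel does not charge the full reservation and the unfortunate interactions with fork(2) are avoided. The flag handling is simplified relative to jemalloc's pages_map():

```c
#include <stddef.h>
#include <sys/mman.h>

void *
pages_map_sketch(size_t size) {
	int flags = MAP_PRIVATE | MAP_ANONYMOUS;
#ifdef MAP_NORESERVE
	/* Linux: don't reserve swap/commit charge for the whole mapping. */
	flags |= MAP_NORESERVE;
#endif
	void *ret = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
	return (ret == MAP_FAILED) ? NULL : ret;
}

int
main(void) {
	return pages_map_sketch((size_t)1 << 20) == NULL;
}
```
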
Jason Evans
dc391adc65 Scale leak report summary according to sampling probability.
This makes the numbers reported in the leak report summary closely match
those reported by jeprof.

This resolves #356.
2016-05-04 12:14:36 -07:00
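
For reference, with byte-based sampling at mean interval R, an object of size s is sampled with probability 1 - exp(-s/R), so an unbiased summary scales each sampled object by the reciprocal of that probability. A hedged sketch of that correction (the commit message itself does not restate the exact formula jemalloc and jeprof use):

```c
#include <math.h>
#include <stdio.h>

/* Weight for one sampled object of size `usize` bytes, given a mean
 * sample interval of `sample_interval` bytes.  Model: an object of size s
 * is sampled with probability 1 - exp(-s / R). */
double
leak_scale(double usize, double sample_interval) {
	return 1.0 / (1.0 - exp(-usize / sample_interval));
}

int
main(void) {
	/* A 4 KiB object under a 512 KiB sample interval is sampled rarely,
	 * so each sampled instance stands for roughly 128 objects. */
	printf("%.1f\n", leak_scale(4096.0, 512.0 * 1024.0));
	return 0;	/* link with -lm */
}
```
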
Jason Evans
04c3c0f9a0 Add the stats.retained and stats.arenas.<i>.retained statistics.
This resolves #367.
2016-05-03 22:11:35 -07:00
Jason Evans
c1e9cf47f9 Link against librt for clock_gettime(2) if glibc < 2.17.
Link libjemalloc against librt if clock_gettime(2) is in librt rather
than libc, as for versions of glibc prior to 2.17.

This resolves #349.
2016-05-03 21:28:20 -07:00
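
The symbol in question is clock_gettime(); a program like the minimal one below links cleanly against plain libc on glibc >= 2.17 but needs -lrt on older glibc, which is what the configure-time check has to detect:

```c
#include <stdio.h>
#include <time.h>

int
main(void) {
	struct timespec ts;
	/* glibc < 2.17: clock_gettime() lives in librt, so link with -lrt. */
	if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0) {
		perror("clock_gettime");
		return 1;
	}
	printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
	return 0;
}
```
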
Jason Evans
7ba6e74233 Fix a typo. 2016-05-03 17:46:07 -07:00
Jason Evans
e02b83cc5e Merge branch. 2016-05-03 17:34:40 -07:00
Jason Evans
2e5eb21184 Update ChangeLog for 4.1.1. 2016-05-03 17:31:59 -07:00
Jason Evans
417c0c9ef1 Add private symbols. 2016-05-03 17:31:59 -07:00
Jason Evans
44d12d435a Update mallocx() OOM test to deal with smaller hugemax.
Depending on virtual memory resource limits, it is necessary to attempt
allocating three maximally sized objects to trigger OOM rather than just
two, since the maximum supported size is slightly less than half the
total virtual memory address space.

This fixes a test failure that was introduced by
0c516a00c4 (Make *allocx() size class
overflow behavior defined.).

This resolves #379.
2016-05-03 17:31:59 -07:00
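
The arithmetic behind "three rather than two": the largest supported size class is slightly less than half the virtual address space, so two such allocations may both fit, but a third cannot. A hedged sketch of the test shape using the public mallocx()/dallocx() API, with hugemax approximated rather than queried:

```c
#include <jemalloc/jemalloc.h>
#include <stddef.h>
#include <stdio.h>

int
main(void) {
	/* Illustrative hugemax: just under half the address space.  Two such
	 * allocations may both fit; by the third, OOM must have occurred. */
	size_t hugemax = ((size_t)1 << (sizeof(void *) * 8 - 1)) -
	    ((size_t)1 << 20);
	void *ptrs[3] = {NULL, NULL, NULL};
	int oom = 0;

	for (int i = 0; i < 3; i++) {
		ptrs[i] = mallocx(hugemax, 0);
		if (ptrs[i] == NULL) {
			oom = 1;
			break;
		}
	}
	for (int i = 0; i < 3; i++) {
		if (ptrs[i] != NULL) {
			dallocx(ptrs[i], 0);
		}
	}
	printf("%s\n", oom ? "OOM as expected" : "unexpectedly no OOM");
	return oom ? 0 : 1;
}
```
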
Jason Evans
21e33ed317 Don't test fork() on Windows. 2016-05-03 17:31:59 -07:00
Jason Evans
90827a3f3e Fix huge_palloc() regression.
Split arena_choose() into arena_[i]choose() and use arena_ichoose() for
arena lookup during internal allocation.  This fixes huge_palloc() so
that it always succeeds during extent node allocation.

This regression was introduced by
66cd953514 (Do not allocate metadata via
non-auto arenas, nor tcaches.).
2016-05-03 17:19:15 -07:00
Jason Evans
21cda0dc42 Update ChangeLog for 4.1.1. 2016-05-03 17:19:15 -07:00
Jason Evans
1eb46ab6e7 Don't test fork() on Windows. 2016-05-03 17:18:34 -07:00
Jason Evans
2687a72087 Fix fork()-related lock rank ordering reversals. 2016-05-03 10:28:25 -07:00
Jason Evans
de35328a10 Use separate arena for chunk tests.
This assures that side effects of internal allocation don't impact
tests.
2016-05-03 10:09:04 -07:00
hitstergtd
c3b008ec39 Doc typo fixes. 2016-05-03 10:08:05 -07:00
Jason Evans
d65db0e402 Fix malloc_stats_print() to print correct opt.narenas value.
This regression was caused by 8f683b94a7
(Make opt_narenas unsigned rather than size_t.).
2016-05-03 10:06:38 -07:00
Jason Evans
8c83c021b0 Fix bitmap_sfu() regression.
Fix bitmap_sfu() to shift by LG_BITMAP_GROUP_NBITS rather than
hard-coded 6 when using linear (non-USE_TREE) bitmap search.  In
practice this affects only 64-bit systems for which sizeof(long) is not
8 (i.e. Windows), since USE_TREE is defined for 32-bit systems.

This regression was caused by b8823ab026
(Use linear scan for small bitmaps).

This resolves #368.
2016-05-03 10:05:30 -07:00
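
The bug in miniature: the linear-scan path advanced by a hard-coded 6 bits per bitmap group, which is only correct when a group (an unsigned long) is 64 bits wide; on LLP64 Windows a group holds 2^5 bits. A sketch with names mirroring the description rather than the exact source:

```c
#include <stddef.h>
#include <stdio.h>

typedef unsigned long bitmap_t;

/* Log2 of bits per group, derived from the platform's long width:
 * 6 where long is 64-bit, 5 where long is 32-bit (e.g. 64-bit Windows). */
#define LG_SIZEOF_BITMAP (sizeof(bitmap_t) == 8 ? 3u : 2u)
#define LG_BITMAP_GROUP_NBITS (LG_SIZEOF_BITMAP + 3u)

size_t
group_base_bit(size_t group_index) {
	/* Buggy form: group_index << 6, regardless of sizeof(long). */
	/* Fixed form: shift by the group's actual log2 width. */
	return group_index << LG_BITMAP_GROUP_NBITS;
}

int
main(void) {
	printf("bits per group: %u, group 1 starts at bit %zu\n",
	    1u << LG_BITMAP_GROUP_NBITS, group_base_bit(1));
	return 0;
}
```
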
Jason Evans
8d8960f635 Fix potential chunk leaks.
Move chunk_dalloc_arena()'s implementation into chunk_dalloc_wrapper(),
so that if the dalloc hook fails, proper decommit/purge/retain cascading
occurs.  This fixes three potential chunk leaks on OOM paths, one during
dss-based chunk allocation, one during chunk header commit (currently
relevant only on Windows), and one during rtree write (e.g. if rtree
node allocation fails).

Merge chunk_purge_arena() into chunk_purge_default() (refactor, no
change to functionality).
2016-05-03 10:04:32 -07:00
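
A sketch of the cascading fallback described above: if the dalloc hook declines the chunk, fall back to decommit, then purge, and finally record the chunk as retained rather than leaking it. Hook signatures and names are simplified relative to jemalloc's chunk_hooks_t:

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified hooks; returning true means "failed / declined". */
typedef bool (*chunk_dalloc_hook_t)(void *chunk, size_t size);
typedef bool (*chunk_decommit_hook_t)(void *chunk, size_t size);
typedef bool (*chunk_purge_hook_t)(void *chunk, size_t size);

struct chunk_hooks {
	chunk_dalloc_hook_t   dalloc;
	chunk_decommit_hook_t decommit;
	chunk_purge_hook_t    purge;
};

static void
chunk_record_retained(void *chunk, size_t size, bool committed) {
	/* Stub: a real implementation records the chunk for later reuse. */
	(void)chunk; (void)size; (void)committed;
}

/* Never drop the chunk on the floor: try each weaker release in turn. */
void
chunk_dalloc_wrapper_sketch(const struct chunk_hooks *hooks, void *chunk,
    size_t size, bool committed) {
	if (!hooks->dalloc(chunk, size)) {
		return;			/* fully returned to the system */
	}
	if (committed && !hooks->decommit(chunk, size)) {
		committed = false;	/* at least drop the commit charge */
	}
	if (committed) {
		hooks->purge(chunk, size);	/* best effort: discard pages */
	}
	chunk_record_retained(chunk, size, committed);
}
```
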
Rajeev Misra
b40253a93e Typecast address to pointer to byte to avoid unaligned memory access error. 2016-05-03 10:02:26 -07:00
Dmitri Smirnov
c3ab90483f Fix stack corruption and uninitialized var warning
Stack corruption happens in x64 builds.

This resolves #347.
2016-05-03 10:01:47 -07:00
rustyx
7798c7ac1d Fix MSVC project and improve MSVC lib naming (v140 -> vc140) 2016-05-03 10:01:31 -07:00