Jason Evans
0c5cec833f
Relax extent hook tests to work with unsplittable extents.
2016-06-05 22:30:31 -07:00
Jason Evans
487093d999
Fix regressions related to extent splitting failures.
Fix a fundamental extent_split_wrapper() bug in an error path.
Fix extent_recycle() to deregister unsplittable extents before leaking
them.
Relax xallocx() test assertions so that unsplittable extents don't cause
test failures.
2016-06-05 22:08:20 -07:00
Jason Evans
9a645c612f
Fix an extent [de]allocation/[de]registration race.
Deregister extents before deallocation, so that subsequent
reallocation/registration doesn't race with deregistration.
2016-06-05 21:00:02 -07:00
Jason Evans
4e910fc958
Fix extent_alloc_dss() regressions.
Page-align the gap, if any, and add/use extent_dalloc_gap(), which
registers the gap extent before deallocation.
2016-06-05 21:00:02 -07:00
Jason Evans
c4bb17f891
Fix gdump triggering regression.
Now that extents are not multiples of chunksize, it's necessary to track
pages rather than chunks.
2016-06-05 21:00:02 -07:00
Jason Evans
42faa9e3e0
Work around legitimate xallocx() failures during testing.
With the removal of subchunk size class infrastructure, there are no
large size classes that are guaranteed to be re-expandable in place
unless munmap() is disabled. Work around these legitimate failures with
rallocx() fallback calls. If there were no test configuration for which
the xallocx() calls succeeded, it would be important to override the
extent hooks for testing purposes, but by default these tests don't use
the rallocx() fallbacks on Linux, so test coverage is still sufficient.
2016-06-05 21:00:02 -07:00
Jason Evans
04942c3d90
Remove a stray memset(), and fix a junk filling test regression.
2016-06-05 21:00:02 -07:00
Jason Evans
f02fec8839
Silence a bogus compiler warning.
2016-06-05 21:00:02 -07:00
Jason Evans
8835cf3bed
Fix locking order reversal in arena_reset().
2016-06-05 21:00:02 -07:00
Jason Evans
f8f0542194
Modify extent hook functions to take an (extent_t *) argument.
This facilitates the application accessing its own extent allocator
metadata during hook invocations.
This resolves #259.
2016-06-05 21:00:02 -07:00
Jason Evans
6f29a83924
Add rtree lookup path caching.
rtree-based extent lookups remain more expensive than chunk-based run
lookups, but with this optimization the fast path slowdown is ~3 CPU
cycles per metadata lookup (on Intel Core i7-4980HQ), versus ~11 cycles
prior. The path caching speedup tends to degrade gracefully unless
allocated memory is spread far apart (as is the case when using a
mixture of sbrk() and mmap()).
2016-06-05 20:59:57 -07:00
Jason Evans
7be2ebc23f
Make tsd cleanup functions optional, remove noop cleanup functions.
2016-06-05 20:42:24 -07:00
Jason Evans
e28b43a739
Remove some unnecessary locking.
2016-06-05 20:42:24 -07:00
Jason Evans
37f0e34606
Reduce NSZS, since NSIZES (was nsizes) cannot be so large.
2016-06-05 20:42:24 -07:00
Jason Evans
819417580e
Fix rallocx() sampling code to not eagerly commit sampler update.
rallocx() for an alignment-constrained request may end up with a
smaller-than-worst-case size if in-place reallocation succeeds due to
serendipitous alignment. In such cases, sampling may not happen.
2016-06-05 20:42:24 -07:00
Jason Evans
b14fdaaca0
Add a missing prof_alloc_rollback() call.
In the case where prof_alloc_prep() is called with an over-estimate of
allocation size, and sampling doesn't end up being triggered, the tctx
must be discarded.
2016-06-05 20:42:24 -07:00
Jason Evans
c8c3cbdf47
Miscellaneous s/chunk/extent/ updates.
2016-06-05 20:42:24 -07:00
Jason Evans
a43db1c608
Relax NBINS constraint (max 255 --> max 256).
2016-06-05 20:42:24 -07:00
Jason Evans
a83a31c1c5
Relax opt_lg_chunk clamping constraints.
2016-06-05 20:42:24 -07:00
Jason Evans
751f2c332d
Remove obsolete stats.arenas.<i>.metadata.mapped mallctl.
Rename stats.arenas.<i>.metadata.allocated mallctl to
stats.arenas.<i>.metadata.
2016-06-05 20:42:24 -07:00
Jason Evans
03eea4fb8b
Better document --enable-ivsalloc.
2016-06-05 20:42:24 -07:00
Jason Evans
22588dda6e
Rename most remaining *chunk* APIs to *extent*.
2016-06-05 20:42:23 -07:00
Jason Evans
0c4932eb1e
s/chunk_lookup/extent_lookup/g, s/chunks_rtree/extents_rtree/g
2016-06-05 20:42:23 -07:00
Jason Evans
4a55daa363
s/CHUNK_HOOKS_INITIALIZER/EXTENT_HOOKS_INITIALIZER/g
2016-06-05 20:42:23 -07:00
Jason Evans
c9a76481d8
Rename chunks_{cached,retained,mtx} to extents_{cached,retained,mtx}.
2016-06-05 20:42:23 -07:00
Jason Evans
127026ad98
Rename chunk_*_t hooks to extent_*_t.
2016-06-05 20:42:23 -07:00
Jason Evans
9c305c9e5c
s/chunk_hook/extent_hook/g
2016-06-05 20:42:23 -07:00
Jason Evans
7d63fed0fd
Rename huge to large.
2016-06-05 20:42:23 -07:00
Jason Evans
714d1640f3
Update private symbols.
2016-06-05 20:42:23 -07:00
Jason Evans
498856f44a
Move slabs out of chunks.
2016-06-05 20:42:23 -07:00
Jason Evans
d28e5a6696
Improve interval-based profile dump triggering.
When an allocation is large enough to trigger multiple dumps, use
modular math rather than subtraction to reset the interval counter.
Prior to this change, it was possible for a single allocation to cause
many subsequent allocations to all trigger profile dumps.
When updating usable size for a sampled object, try to cancel out
the difference between LARGE_MINCLASS and usable size from the interval
counter.
2016-06-05 20:42:23 -07:00
Jason Evans
ed2c2427a7
Use huge size class infrastructure for large size classes.
2016-06-05 20:42:18 -07:00
Jason Evans
b46261d58b
Implement cache-oblivious support for huge size classes.
2016-06-03 12:27:41 -07:00
Jason Evans
4731cd47f7
Allow chunks to not be naturally aligned.
Precisely size extents for huge size classes that aren't multiples of
chunksize.
2016-06-03 12:27:41 -07:00
Jason Evans
741967e79d
Remove CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET().
2016-06-03 12:27:41 -07:00
Jason Evans
23c52c895f
Make extent_prof_tctx_[gs]et() atomic.
2016-06-03 12:27:41 -07:00
Jason Evans
760bf11b23
Add extent_dirty_[gs]et().
2016-06-03 12:27:41 -07:00
Jason Evans
47613afc34
Convert rtree from per chunk to per page.
Refactor [de]registration to maintain interior rtree entries for slabs.
2016-06-03 12:27:41 -07:00
Jason Evans
5c6be2bdd3
Refactor chunk_purge_wrapper() to take extent argument.
2016-06-03 12:27:41 -07:00
Jason Evans
0eb6f08959
Refactor chunk_[de]commit_wrapper() to take extent arguments.
2016-06-03 12:27:41 -07:00
Jason Evans
6c94470822
Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments.
Rename arena_extent_[d]alloc() to extent_[d]alloc().
Move all chunk [de]registration responsibility into chunk.c.
2016-06-03 12:27:41 -07:00
Jason Evans
de0305a7f3
Add/use chunk_split_wrapper().
Remove redundant ptr/oldsize args from huge_*().
Refactor huge/chunk/arena code boundaries.
2016-06-03 12:27:41 -07:00
Jason Evans
1ad060584f
Add/use chunk_merge_wrapper().
2016-06-03 12:27:41 -07:00
Jason Evans
384e88f451
Add/use chunk_commit_wrapper().
2016-06-03 12:27:41 -07:00
Jason Evans
56e0031d7d
Add/use chunk_decommit_wrapper().
2016-06-03 12:27:41 -07:00
Jason Evans
4d2d9cec5a
Merge chunk_alloc_base() into its only caller.
2016-06-03 12:27:41 -07:00
Jason Evans
fc0372a15e
Replace extent_tree_szad_* with extent_heap_*.
2016-06-03 12:27:41 -07:00
Jason Evans
ffa45a5331
Use rtree rather than [sz]ad trees for chunk split/coalesce operations.
2016-06-03 12:27:41 -07:00
Jason Evans
25845db7c9
Dodge ivsalloc() assertion in test code.
2016-06-03 12:27:41 -07:00
Jason Evans
93e79c5c3f
Remove redundant chunk argument from chunk_{,de,re}register().
2016-06-03 12:27:41 -07:00