Jason Evans
7be2ebc23f
Make tsd cleanup functions optional, remove noop cleanup functions.
2016-06-05 20:42:24 -07:00
Jason Evans
b14fdaaca0
Add a missing prof_alloc_rollback() call.
In the case where prof_alloc_prep() is called with an over-estimate of
allocation size, and sampling doesn't end up being triggered, the tctx
must be discarded.
2016-06-05 20:42:24 -07:00
Jason Evans
c8c3cbdf47
Miscellaneous s/chunk/extent/ updates.
2016-06-05 20:42:24 -07:00
Jason Evans
a43db1c608
Relax NBINS constraint (max 255 --> max 256).
2016-06-05 20:42:24 -07:00
Jason Evans
751f2c332d
Remove obsolete stats.arenas.<i>.metadata.mapped mallctl.
Rename the stats.arenas.<i>.metadata.allocated mallctl to
stats.arenas.<i>.metadata.
2016-06-05 20:42:24 -07:00
Jason Evans
03eea4fb8b
Better document --enable-ivsalloc.
2016-06-05 20:42:24 -07:00
Jason Evans
22588dda6e
Rename most remaining *chunk* APIs to *extent*.
2016-06-05 20:42:23 -07:00
Jason Evans
0c4932eb1e
s/chunk_lookup/extent_lookup/g, s/chunks_rtree/extents_rtree/g
2016-06-05 20:42:23 -07:00
Jason Evans
4a55daa363
s/CHUNK_HOOKS_INITIALIZER/EXTENT_HOOKS_INITIALIZER/g
2016-06-05 20:42:23 -07:00
Jason Evans
c9a76481d8
Rename chunks_{cached,retained,mtx} to extents_{cached,retained,mtx}.
2016-06-05 20:42:23 -07:00
Jason Evans
127026ad98
Rename chunk_*_t hooks to extent_*_t.
2016-06-05 20:42:23 -07:00
Jason Evans
9c305c9e5c
s/chunk_hook/extent_hook/g
2016-06-05 20:42:23 -07:00
Jason Evans
7d63fed0fd
Rename huge to large.
2016-06-05 20:42:23 -07:00
Jason Evans
714d1640f3
Update private symbols.
2016-06-05 20:42:23 -07:00
Jason Evans
498856f44a
Move slabs out of chunks.
2016-06-05 20:42:23 -07:00
Jason Evans
d28e5a6696
Improve interval-based profile dump triggering.
When an allocation is large enough to trigger multiple dumps, use
modular math rather than subtraction to reset the interval counter.
Prior to this change, it was possible for a single allocation to cause
many subsequent allocations to all trigger profile dumps.
When updating usable size for a sampled object, try to cancel out
the difference between LARGE_MINCLASS and usable size from the interval
counter.
2016-06-05 20:42:23 -07:00
Jason Evans
ed2c2427a7
Use huge size class infrastructure for large size classes.
2016-06-05 20:42:18 -07:00
Jason Evans
b46261d58b
Implement cache-oblivious support for huge size classes.
2016-06-03 12:27:41 -07:00
Jason Evans
4731cd47f7
Allow chunks to not be naturally aligned.
Precisely size extents for huge size classes that aren't multiples of
chunksize.
2016-06-03 12:27:41 -07:00
Jason Evans
741967e79d
Remove CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET().
2016-06-03 12:27:41 -07:00
Jason Evans
23c52c895f
Make extent_prof_tctx_[gs]et() atomic.
2016-06-03 12:27:41 -07:00
Jason Evans
760bf11b23
Add extent_dirty_[gs]et().
2016-06-03 12:27:41 -07:00
Jason Evans
47613afc34
Convert rtree from per chunk to per page.
Refactor [de]registration to maintain interior rtree entries for slabs.
2016-06-03 12:27:41 -07:00
Jason Evans
5c6be2bdd3
Refactor chunk_purge_wrapper() to take extent argument.
2016-06-03 12:27:41 -07:00
Jason Evans
0eb6f08959
Refactor chunk_[de]commit_wrapper() to take extent arguments.
2016-06-03 12:27:41 -07:00
Jason Evans
6c94470822
Refactor chunk_dalloc_{cache,wrapper}() to take extent arguments.
Rename arena_extent_[d]alloc() to extent_[d]alloc().
Move all chunk [de]registration responsibility into chunk.c.
2016-06-03 12:27:41 -07:00
Jason Evans
de0305a7f3
Add/use chunk_split_wrapper().
Remove redundant ptr/oldsize args from huge_*().
Refactor huge/chunk/arena code boundaries.
2016-06-03 12:27:41 -07:00
Jason Evans
1ad060584f
Add/use chunk_merge_wrapper().
2016-06-03 12:27:41 -07:00
Jason Evans
384e88f451
Add/use chunk_commit_wrapper().
2016-06-03 12:27:41 -07:00
Jason Evans
56e0031d7d
Add/use chunk_decommit_wrapper().
2016-06-03 12:27:41 -07:00
Jason Evans
4d2d9cec5a
Merge chunk_alloc_base() into its only caller.
2016-06-03 12:27:41 -07:00
Jason Evans
fc0372a15e
Replace extent_tree_szad_* with extent_heap_*.
2016-06-03 12:27:41 -07:00
Jason Evans
ffa45a5331
Use rtree rather than [sz]ad trees for chunk split/coalesce operations.
2016-06-03 12:27:41 -07:00
Jason Evans
93e79c5c3f
Remove redundant chunk argument from chunk_{,de,re}register().
2016-06-03 12:27:41 -07:00
Jason Evans
9aea58d9a2
Add extent_past_get().
2016-06-03 12:27:41 -07:00
Jason Evans
d78846c989
Replace extent_achunk_[gs]et() with extent_slab_[gs]et().
2016-06-03 12:27:41 -07:00
Jason Evans
fae8344098
Add extent_active_[gs]et().
Always initialize extents' runs_dirty and chunks_cache linkage.
2016-06-03 12:27:41 -07:00
Jason Evans
6f71844659
Move *PAGE* definitions to pages.h.
2016-06-03 12:27:41 -07:00
Jason Evans
e75e9be130
Add rtree element witnesses.
2016-06-03 12:27:41 -07:00
Jason Evans
8c9be3e837
Refactor rtree to always use base_alloc() for node allocation.
2016-06-03 12:27:41 -07:00
Jason Evans
db72272bef
Use rtree-based chunk lookups rather than pointer bit twiddling.
Look up chunk metadata via the radix tree, rather than using
CHUNK_ADDR2BASE().
Propagate pointer's containing extent.
Minimize extent lookups by doing a single lookup (e.g. in free()) and
propagating the pointer's extent into nearly all the functions that may
need it.
2016-06-03 12:27:41 -07:00
Jason Evans
2d2b4e98c9
Add element acquire/release capabilities to rtree.
This makes it possible to acquire short-term "ownership" of rtree
elements so that it is possible to read an extent pointer *and* read the
extent's contents with a guarantee that the element will not be modified
until the ownership is released. This is intended as a mechanism for
resolving rtree read/write races rather than as a way to lock extents.
2016-06-03 12:27:33 -07:00
Jason Evans
a7a6f5bc96
Rename extent_node_t to extent_t.
2016-05-16 12:21:28 -07:00
Jason Evans
3aea827f5e
Simplify run quantization.
2016-05-16 12:21:27 -07:00
Jason Evans
7bb00ae9d6
Refactor runs_avail.
Use pszind_t size classes rather than szind_t size classes, and always
reserve space for NPSIZES elements. This removes unused heaps that are
not multiples of the page size, and adds (currently) unused heaps for
all huge size classes, with the immediate benefit that the size of
arena_t allocations is constant (no longer dependent on chunk size).
2016-05-16 12:21:21 -07:00
Jason Evans
226c446979
Implement pz2ind(), pind2sz(), and psz2u().
These compute size classes and indices similarly to size2index(),
index2size() and s2u(), respectively, but using the subset of size
classes that are multiples of the page size. Note that pszind_t and
szind_t are not interchangeable.
2016-05-13 10:31:54 -07:00
Jason Evans
627372b459
Initialize arena_bin_info at compile time rather than at boot time.
This resolves #370.
2016-05-13 10:31:30 -07:00
Jason Evans
b683734b43
Implement BITMAP_INFO_INITIALIZER(nbits).
This allows static initialization of bitmap_info_t structures.
2016-05-13 10:27:48 -07:00
Jason Evans
17c021c177
Remove redzone support.
This resolves #369.
2016-05-13 10:27:33 -07:00
Jason Evans
ba5c709517
Remove quarantine support.
2016-05-13 10:25:05 -07:00