Separate arena_avail trees

Separate run trees by index, replacing the previous quantize logic.
Quantization by index is now performed only on insertion / removal from
the tree, and not on node comparison, saving some CPU.  This also means
we don't have to dereference the miscelm* pointers, saving half of the
memory loads from miscelms/mapbits that have fallen out of cache.  A
linear scan of the indices appears to be fast enough.

The only cost of this is an extra tree array in each arena.
Dave Watson
2016-02-23 12:06:21 -08:00
committed by Jason Evans
parent 2b1fc90b7b
commit 3417a304cc
2 changed files with 58 additions and 96 deletions

@@ -351,12 +351,6 @@ struct arena_s {
 	 */
 	size_t			ndirty;
-	/*
-	 * Size/address-ordered tree of this arena's available runs.  The tree
-	 * is used for first-best-fit run allocation.
-	 */
-	arena_avail_tree_t	runs_avail;
 	/*
 	 * Unused dirty memory this arena manages.  Dirty memory is conceptually
 	 * tracked as an arbitrarily interleaved LRU of dirty runs and cached
@@ -462,6 +456,12 @@ struct arena_s {
 	/* bins is used to store trees of free regions. */
 	arena_bin_t		bins[NBINS];
+	/*
+	 * Quantized address-ordered trees of this arena's available runs.  The
+	 * trees are used for first-best-fit run allocation.
+	 */
+	arena_avail_tree_t	runs_avail[1]; /* Dynamically sized. */
 };

 /* Used in conjunction with tsd for fast arena-related context lookup. */