Rather than relying on two's complement negation for alignment mask
generation, use bitwise not and addition. This dodges warnings from
MSVC, and should be strength-reduced by compiler optimization anyway.
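As a rough illustration, a minimal standalone sketch of the two mask forms (the macro names here are hypothetical, not jemalloc's actual ones); both round an address down to the requested alignment, but only the first spells out unary minus on an unsigned operand, which is what MSVC warns about (warning C4146):

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical macros for illustration; jemalloc's differ in naming. */
    #define ALIGNMENT_MASK_NEG(a) (-((uintptr_t)(a)))
    #define ALIGNMENT_MASK_NOT(a) (~((uintptr_t)(a)) + 1)

    int
    main(void)
    {
        uintptr_t p = 0x1234f;

        /* For unsigned x, -x and ~x + 1 are the same value (mod 2^N). */
        assert(ALIGNMENT_MASK_NEG(64) == ALIGNMENT_MASK_NOT(64));
        /* Either mask rounds p down to a 64-byte boundary. */
        assert((p & ALIGNMENT_MASK_NOT(64)) == 0x12340);
        return 0;
    }
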
Add extent serial numbers and use them where appropriate as a sort key
that is higher priority than address, so that the allocation policy
prefers older extents.
This resolves #147.
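A minimal sketch of the idea, with hypothetical types and field names rather than jemalloc's actual extent structures: the comparator orders by serial number first and address second, so heaps built on it prefer the oldest extent and fall back to lowest address.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical extent record; the real extent_node_t differs. */
    typedef struct {
        size_t    sn;    /* Serial number assigned when the extent is created. */
        uintptr_t addr;  /* Base address of the extent. */
    } extent_rec_t;

    /* Older (lower serial number) sorts first; address breaks ties. */
    int
    extent_rec_compare(const extent_rec_t *a, const extent_rec_t *b)
    {
        if (a->sn != b->sn)
            return (a->sn < b->sn) ? -1 : 1;
        if (a->addr != b->addr)
            return (a->addr < b->addr) ? -1 : 1;
        return 0;
    }
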
2cdf07aba9 (Fix extent_quantize() to
handle greater-than-huge-size extents.) solved a non-problem; the
expression passed to index2size() was never too large. However, the
expression could in principle underflow, so fix the actual (latent) bug
and remove unnecessary complexity.
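The underflow in question is ordinary unsigned wraparound; a standalone demonstration (not the actual jemalloc expression) of why a subtraction that can pass below zero must be guarded before its result feeds an index lookup:

    #include <stddef.h>
    #include <stdio.h>

    int
    main(void)
    {
        /*
         * size_t arithmetic wraps instead of going negative, so an index
         * expression such as "ind - 1" silently becomes SIZE_MAX when
         * ind == 0, and a table lookup with it reads far out of bounds.
         */
        size_t ind = 0;
        printf("%zu\n", ind - 1);  /* Prints 18446744073709551615 on LP64. */
        return 0;
    }
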
Remove outer CHUNK_CEILING(s2u(...)) from alloc_size computation, since
s2u() may overflow (and return 0), and CHUNK_CEILING() is only needed
around the alignment portion of the computation.
This fixes a regression caused by
5707d6f952 (Quantize szad trees by size
class.) and first released in 4.0.0.
This resolves #497.
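A simplified sketch of the shape of that computation, using stand-in names and values rather than the real jemalloc code: check the size-to-usable-size mapping for overflow (it reports overflow by returning 0) before using it, and apply the chunk rounding only to the alignment slack.

    #include <stddef.h>

    #define CHUNKSIZE        ((size_t)1 << 21)  /* Stand-in chunk size. */
    #define CHUNK_CEILING(s) (((s) + CHUNKSIZE - 1) & ~(CHUNKSIZE - 1))

    /* Stand-in for s2u(): maps a request to a usable size, 0 on overflow. */
    extern size_t s2u(size_t size);

    /* Returns 0 if the padded request cannot be represented. */
    size_t
    alloc_size_compute(size_t size, size_t alignment)
    {
        size_t usize = s2u(size);

        if (usize == 0)
            return 0;  /* s2u() overflowed; propagate the failure. */
        /*
         * Chunk-round only the alignment slack (with alignment >= 1,
         * CHUNK_CEILING(alignment) >= CHUNKSIZE, so the subtraction cannot
         * underflow).  Wrapping the whole sum in CHUNK_CEILING(s2u(...))
         * leaves an s2u() overflow undetected.
         */
        return usize + CHUNK_CEILING(alignment) - CHUNKSIZE;
    }
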
Allocation requests can't directly create extents that exceed
HUGE_MAXCLASS, but extent merging can create them.
This fixes a regression caused by
8a03cf039c (Implement cache index
randomization for large allocations.) and first released in 4.0.0.
This resolves #497.
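A tiny overflow-safe sketch of the resulting invariant (HUGE_MAXCLASS here is a stand-in constant and the helper is hypothetical): code that maps extent sizes to size classes has to tolerate merged sizes beyond the largest class.

    #include <stdbool.h>
    #include <stddef.h>

    #define HUGE_MAXCLASS ((size_t)1 << 46)  /* Stand-in for the real limit. */

    /*
     * True if coalescing two adjacent extents yields a size larger than any
     * allocatable size class.  Written to avoid overflowing size_a + size_b.
     */
    bool
    merge_would_exceed_maxclass(size_t size_a, size_t size_b)
    {
        return size_b > HUGE_MAXCLASS || size_a > HUGE_MAXCLASS - size_b;
    }
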
Fix arena_run_first_best_fit() to search all potentially non-empty
runs_avail heaps, rather than ignoring the heap that contains runs
larger than large_maxclass but smaller than chunksize.
This fixes a regression caused by
f193fd80cf (Refactor runs_avail.).
This resolves #493.
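A minimal sketch of the intended search, with hypothetical heap types and helpers in place of jemalloc's internals: the loop has to run through every heap from the quantized request's class upward, including the final heap holding runs larger than the largest run size class but smaller than a chunk.

    #include <stddef.h>

    #define NHEAPS 64  /* Stand-in for the number of runs_avail heaps. */

    typedef struct heap_s heap_t;          /* Hypothetical heap type. */
    extern heap_t *runs_avail[NHEAPS];     /* Heaps ordered by size class. */
    extern void *heap_first_fit(heap_t *heap, size_t size);

    /* First-best-fit: start at the requested class, fall through the rest. */
    void *
    run_first_best_fit(size_t start_class, size_t size)
    {
        size_t i;

        for (i = start_class; i < NHEAPS; i++) {  /* All heaps, last included. */
            void *run = heap_first_fit(runs_avail[i], size);
            if (run != NULL)
                return run;
        }
        return NULL;
    }
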
Fix paren placement so that QUANTUM_CEILING() applies to the correct
portion of the expression that computes how much memory to base_alloc().
In practice this bug had no impact. This was caused by
5d8db15db9 (Simplify run quantization.),
which in turn fixed an over-allocation regression caused by
3c4d92e82a (Add per size class huge
allocation statistics.).
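For illustration only (hypothetical operands, not the real expression): the two parenthesizations of a quantum-rounded sum compute different amounts, which is all that was at stake here.

    #include <assert.h>
    #include <stddef.h>

    #define QUANTUM            ((size_t)16)
    #define QUANTUM_CEILING(s) (((s) + QUANTUM - 1) & ~(QUANTUM - 1))

    int
    main(void)
    {
        size_t a = 24, b = 10;

        /* Rounding the whole sum versus rounding only the first term. */
        assert(QUANTUM_CEILING(a + b) == 48);
        assert(QUANTUM_CEILING(a) + b == 42);
        return 0;
    }
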
Fix arena_run_alloc_large_helper() to not convert size to usize when
searching for the first best fit via arena_run_first_best_fit(). This
allows the search to consider the optimal quantized size class, so that
e.g. allocating and deallocating 40 KiB in a tight loop can reuse the
same memory.
This regression was nominally caused by
5707d6f952 (Quantize szad trees by size
class.), but it did not commonly cause problems until
8a03cf039c (Implement cache index
randomization for large allocations.). These regressions were first
released in 4.0.0.
This resolves #487.
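A sketch of the call shape, with hypothetical function names standing in for the jemalloc internals: the best-fit search quantizes the request itself, so the caller should hand it the original size rather than a usize that may already have skipped past the best-fitting run class.

    #include <stddef.h>

    /* Hypothetical stand-ins for the real jemalloc functions. */
    extern void *run_first_best_fit(size_t size);  /* Quantizes internally. */
    extern size_t s2u(size_t size);                /* Request -> usable size. */

    void *
    run_alloc_large(size_t size)
    {
        /*
         * Pass the requested size straight through; searching with
         * s2u(size) instead can land in a larger class than the optimal
         * quantized one, so a freed 40 KiB run never satisfies the next
         * 40 KiB request.
         */
        return run_first_best_fit(size);
    }
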
Fix chunk_alloc_cache() to support decommitted allocation, and use this
ability in arena_chunk_alloc_internal() and arena_stash_dirty(), so that
chunks don't get permanently stuck in a hybrid state.
This resolves #487.
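A heavily simplified sketch of the in/out commit-flag idea, with hypothetical helpers rather than the real chunk_alloc_cache() interface: the caller states whether it needs committed memory, and the cached chunk is committed only if so, leaving it either wholly committed or wholly decommitted.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical helpers; the real jemalloc interfaces differ. */
    extern void *cache_extract_chunk(size_t size, bool *committed);
    extern bool  chunk_commit(void *chunk, size_t size);

    void *
    chunk_alloc_from_cache(size_t size, bool *commit)
    {
        bool committed;
        void *chunk = cache_extract_chunk(size, &committed);

        if (chunk == NULL)
            return NULL;
        if (*commit && !committed) {
            /* Caller needs committed memory; commit the whole chunk now. */
            if (!chunk_commit(chunk, size))
                return NULL;  /* Sketch only; real code would re-cache it. */
            committed = true;
        }
        /* Report the chunk's actual state back to the caller. */
        *commit = committed;
        return chunk;
    }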