Commit Graph

2186 Commits

Author SHA1 Message Date
Jason Evans
84ae603577 Explicitly cast negative constants meant for use as unsigned. 2016-11-15 14:05:16 -08:00
Jason Evans
72c587a411 Add cast to silence (harmless) conversion warning. 2016-11-15 14:05:00 -08:00
Jason Evans
87004d238c Avoid negation of unsigned numbers.
Rather than relying on two's complement negation for alignment mask
generation, use bitwise not and addition.  This dodges warnings from
MSVC, and should be strength-reduced by compiler optimization anyway.
2016-11-15 14:04:35 -08:00
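A minimal sketch of the technique described above (the macro and function names are illustrative, not jemalloc's actual identifiers): for unsigned x, ~x + 1 equals -x modulo 2^n, so both forms yield the same alignment mask, but the bitwise form sidesteps MSVC's C4146 warning about negating an unsigned operand.

```c
#include <stddef.h>

#define MASK_VIA_NEGATION(align) (-(size_t)(align))     /* triggers MSVC C4146 */
#define MASK_VIA_BITWISE(align)  (~(size_t)(align) + 1) /* warning-free */

/* Round addr down to a multiple of align (align must be a power of two). */
static inline size_t
align_floor(size_t addr, size_t align) {
    return addr & MASK_VIA_BITWISE(align);
}
```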
Jason Evans
2379479225 Consistently use size_t rather than uint64_t for extent serial numbers. 2016-11-15 13:47:22 -08:00
Jason Evans
6a71d37a75 Add packing test, which verifies stable layout policy. 2016-11-15 13:33:47 -08:00
Jason Evans
5c77af98b1 Add extent serial numbers.
Add extent serial numbers and use them where appropriate as a sort key
that is higher priority than address, so that the allocation policy
prefers older extents.

This resolves #147.
2016-11-15 13:33:40 -08:00
Jason Evans
2c95154501 Add packing test, which verifies stable layout policy. 2016-11-15 13:08:33 -08:00
Jason Evans
a38acf716e Add extent serial numbers.
Add extent serial numbers and use them where appropriate as a sort key
that is higher priority than address, so that the allocation policy
prefers older extents.

This resolves #147.
2016-11-15 13:08:33 -08:00
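A sketch of how the serial-number sort key described in the two commits above might look (the struct and comparator are illustrative stand-ins, not jemalloc's actual declarations): the serial number is the primary key and the address breaks ties, so older extents sort first and are preferred for reuse.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative extent record; the real extent_t has many more fields. */
typedef struct {
    size_t sn;   /* serial number, assigned when the extent is created */
    void *addr;  /* base address */
} extent_t;

/* Older extents (lower sn) sort first; address breaks ties. */
static int
extent_snad_comp(const extent_t *a, const extent_t *b) {
    if (a->sn != b->sn)
        return (a->sn < b->sn) ? -1 : 1;
    if (a->addr != b->addr)
        return ((uintptr_t)a->addr < (uintptr_t)b->addr) ? -1 : 1;
    return 0;
}
```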
Jason Evans
c0a667112c Fix arena_reset() crashing bug.
This regression was caused by 498856f44a
(Move slabs out of chunks.).
2016-11-15 10:34:02 -08:00
Jason Evans
a2e601a223 Add JE_RUNNABLE() and use it for os_unfair_lock_*() test.
This resolves #494.
2016-11-12 09:48:06 -08:00
Jason Evans
45f83a2ac6 Add JE_RUNNABLE() and use it for os_unfair_lock_*() test.
This resolves #494.
2016-11-12 09:47:07 -08:00
Jason Evans
c25e711cf9 Reduce memory usage for sdallocx() test_alignment_and_size. 2016-11-11 23:50:35 -08:00
Jason Evans
ded4f38ffd Reduce memory usage for sdallocx() test_alignment_and_size. 2016-11-11 23:49:40 -08:00
Jason Evans
1aeea0f391 Simplify extent_quantize().
2cdf07aba9 (Fix extent_quantize() to
handle greater-than-huge-size extents.) solved a non-problem; the
expression passed in to index2size() was never too large.  However the
expression could in principle underflow, so fix the actual (latent) bug
and remove unnecessary complexity.
2016-11-11 22:46:55 -08:00
Jason Evans
a2af09f025 Remove overly restrictive stats_cactive_{add,sub}() assertions.
This fixes a regression caused by
40ee9aa957 (Fix stats.cactive accounting
regression.) and first released in 4.1.0.
2016-11-11 22:19:10 -08:00
Jason Evans
b9408d77a6 Fix/simplify chunk_recycle() allocation size computations.
Remove outer CHUNK_CEILING(s2u(...)) from alloc_size computation, since
s2u() may overflow (and return 0), and CHUNK_CEILING() is only needed
around the alignment portion of the computation.

This fixes a regression caused by
5707d6f952 (Quantize szad trees by size
class.) and first released in 4.0.0.

This resolves #497.
2016-11-11 22:18:39 -08:00
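A hedged sketch of the shape of this fix; the operands are inferred from the message above, not copied from the diff, and the chunk size is illustrative. Since s2u() returns 0 on overflow, wrapping the whole sum in CHUNK_CEILING(s2u(...)) both masked that overflow and rounded more of the expression than necessary; only the alignment slop needs chunk rounding.

```c
#include <stddef.h>

#define chunksize        ((size_t)1 << 21)  /* illustrative 2 MiB chunks */
#define CHUNK_CEILING(s) (((s) + chunksize - 1) & ~(chunksize - 1))

/* Sketch of the repaired computation, not the verbatim jemalloc code. */
size_t
alloc_size_sketch(size_t size, size_t alignment) {
    /* Before: CHUNK_CEILING(s2u(size + alignment - chunksize)) */
    return size + CHUNK_CEILING(alignment) - chunksize;
}
```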
Jason Evans
2cdf07aba9 Fix extent_quantize() to handle greater-than-huge-size extents.
Allocation requests can't directly create extents that exceed
HUGE_MAXCLASS, but extent merging can create them.

This fixes a regression caused by
8a03cf039c (Implement cache index
randomization for large allocations.) and first released in 4.0.0.

This resolves #497.
2016-11-11 22:17:27 -08:00
Jason Evans
e916d55ba1 Add configure support for *-*-linux-android.
This is tailored to Android, i.e. more specific than the *-*-linux*
configuration.

This resolves #471.
2016-11-10 15:40:23 -08:00
Samuel Moritz
092d760817 Support Debian GNU/kFreeBSD.
Treat it exactly like Linux since they both use GNU libc.
2016-11-10 15:39:33 -08:00
Jason Evans
32d69e967e Add configure support for *-*-linux-android.
This is tailored to Android, i.e. more specific than the *-*-linux*
configuration.

This resolves #471.
2016-11-10 15:36:17 -08:00
Jason Evans
b4486dce24 Update config.{guess,sub} from upstream. 2016-11-10 15:08:32 -08:00
Jason Evans
c233dd5e40 Update config.{guess,sub} from upstream. 2016-11-10 15:02:05 -08:00
Jason Evans
0110fa8451 Merge branch 'rc-4.3.1' 2016-11-07 17:21:12 -08:00
Jason Evans
b0f56583b7 Update ChangeLog for 4.3.1. 2016-11-07 16:22:25 -08:00
Jason Evans
85dae2ff49 Update ChangeLog for 4.3.1. 2016-11-07 16:22:02 -08:00
Jason Evans
7b8e74f48f Revert "Define 64-bits atomics unconditionally"
This reverts commit af33e9a597.

This resolves #495.
2016-11-07 11:51:05 -08:00
Jason Evans
5d6cb6eb66 Refactor prng to not use 64-bit atomics on 32-bit platforms.
This resolves #495.
2016-11-07 11:50:59 -08:00
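A minimal sketch of the idea behind this refactor: keep the PRNG state pointer-sized so that 32-bit platforms can update it with native atomics instead of emulated 64-bit ones. The LCG constants below are the well-known Knuth (64-bit) and Numerical Recipes (32-bit) pairs, used here as assumptions; jemalloc's actual constants may differ.

```c
#include <stddef.h>
#include <stdint.h>

/* Advance a pointer-sized linear congruential PRNG state. */
static inline size_t
prng_state_next_zu(size_t state) {
#if SIZE_MAX > 0xffffffffUL
    return state * (size_t)6364136223846793005ULL +
        (size_t)1442695040888963407ULL;
#else
    return state * (size_t)1664525UL + (size_t)1013904223UL;
#endif
}
```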
Jason Evans
5e0373c815 Fix test_prng_lg_range_zu() to work on 32-bit systems. 2016-11-07 11:50:11 -08:00
Jason Evans
cda59f9970 Rename atomic_*_{uint32,uint64,u}() to atomic_*_{u32,u64,zu}().
This change conforms to naming conventions throughout the codebase.
2016-11-07 11:27:48 -08:00
Jason Evans
2e46b13ad5 Revert "Define 64-bits atomics unconditionally"
This reverts commit c2942e2c0e.

This resolves #495.
2016-11-07 10:53:35 -08:00
Jason Evans
04b463546e Refactor prng to not use 64-bit atomics on 32-bit platforms.
This resolves #495.
2016-11-07 10:52:44 -08:00
Jason Evans
a4e83e8593 Fix run leak.
Fix arena_run_first_best_fit() to search all potentially non-empty
runs_avail heaps, rather than ignoring the heap that contains runs
larger than large_maxclass but smaller than chunksize.

This fixes a regression caused by
f193fd80cf (Refactor runs_avail.).

This resolves #493.
2016-11-07 09:43:39 -08:00
Jason Evans
9bef119b42 Merge branch 'rel-4.3.0' 2016-11-04 17:46:17 -07:00
Jason Evans
8019f4c21c Merge branch 'rel-4.3.0' 2016-11-04 17:38:52 -07:00
Jason Evans
23f04ef9b7 Update ChangeLog for 4.3.0. 2016-11-04 15:15:40 -07:00
Jason Evans
e0a9e78374 Update ChangeLog for 4.3.0. 2016-11-04 15:15:24 -07:00
Jason Evans
28b7e42e44 Fix arena data structure size calculation.
Fix paren placement so that QUANTUM_CEILING() applies to the correct
portion of the expression that computes how much memory to base_alloc().
In practice this bug had no impact.  This was caused by
5d8db15db9 (Simplify run quantization.),
which in turn fixed an over-allocation regression caused by
3c4d92e82a (Add per size class huge
allocation statistics.).
2016-11-04 15:00:08 -07:00
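A hypothetical illustration of this class of bug; the identifiers are stand-ins, not the actual arena expression. With the paren misplaced, the ceiling applies to only one operand of the sum, so the computed total can come out unaligned.

```c
#include <stddef.h>

#define QUANTUM            ((size_t)16)
#define QUANTUM_CEILING(s) (((s) + QUANTUM - 1) & ~(QUANTUM - 1))

/* Stand-in for the arena size computation. */
size_t
arena_size_sketch(size_t header_size, size_t bins_size) {
    /* Buggy: QUANTUM_CEILING(header_size) + bins_size
     * rounds only the header, not the total. */
    return QUANTUM_CEILING(header_size + bins_size);
}
```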
Matthew Parkinson
77635bf532 Fixes to Visual Studio Project files 2016-11-04 10:01:31 -07:00
Matthew Parkinson
d30b3ea51a Fixes to Visual Studio Project files 2016-11-04 09:59:05 -07:00
Jason Evans
cb3ad659f0 Use -std=gnu11 if available.
This supersedes -std=gnu99, and enables C11 atomics.
2016-11-04 01:11:48 -07:00
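A minimal demonstration of the C11 atomics that -std=gnu11 makes available (illustrative only; not jemalloc code):

```c
#include <stdatomic.h>
#include <stdio.h>

int
main(void) {
    atomic_size_t n = ATOMIC_VAR_INIT(0);

    /* Relaxed ordering suffices for a plain counter. */
    atomic_fetch_add_explicit(&n, 1, memory_order_relaxed);
    printf("%zu\n", atomic_load_explicit(&n, memory_order_relaxed));
    return 0;
}
```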
Jason Evans
213667fe26 Update ChangeLog for 4.3.0. 2016-11-04 00:04:27 -07:00
Jason Evans
32896a902b Fix large allocation to search optimal size class heap.
Fix arena_run_alloc_large_helper() to not convert size to usize when
searching for the first best fit via arena_run_first_best_fit().  This
allows the search to consider the optimal quantized size class, so that
e.g. allocating and deallocating 40 KiB in a tight loop can reuse the
same memory.

This regression was nominally caused by
5707d6f952 (Quantize szad trees by size
class.), but it did not commonly cause problems until
8a03cf039c (Implement cache index
randomization for large allocations.).  These regressions were first
released in 4.0.0.

This resolves #487.
2016-11-03 22:36:30 -07:00
Jason Evans
e9012630ac Fix chunk_alloc_cache() to support decommitted allocation.
Fix chunk_alloc_cache() to support decommitted allocation, and use this
ability in arena_chunk_alloc_internal() and arena_stash_dirty(), so that
chunks don't get permanently stuck in a hybrid state.

This resolves #487.
2016-11-03 22:36:30 -07:00
Jason Evans
6d2a57cfbb Use -std=gnu11 if available.
This supersedes -std=gnu99, and enables C11 atomics.
2016-11-03 21:57:17 -07:00
Jason Evans
0760876927 Update ChangeLog for 4.3.0. 2016-11-04 00:02:43 -07:00
Jason Evans
a967fae362 Fix/simplify extent_recycle() allocation size computations.
Do not call s2u() during alloc_size computation, since any necessary
ceiling increase is taken care of later by extent_first_best_fit() -->
extent_size_quantize_ceil(), and the s2u() call may erroneously cause a
higher quantization result.

Remove an overly strict overflow check that was added in
4a7852137d (Fix extent_recycle()'s
cache-oblivious padding support.).
2016-11-03 23:49:21 -07:00
Jason Evans
4a7852137d Fix extent_recycle()'s cache-oblivious padding support.
Add padding *after* computing the size class, so that the optimal size
class isn't skipped during search for a usable extent.  This regression
was caused by b46261d58b (Implement
cache-oblivious support for huge size classes.).
2016-11-03 22:33:35 -07:00
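A sketch of the ordering fix described above, with hypothetical names and a coarse stand-in for size-class quantization: padding before quantization inflates the search key past the optimal size class, so the fix quantizes first and pads afterward.

```c
#include <stdbool.h>
#include <stddef.h>

#define PAGE ((size_t)4096)

/* Coarse stand-in for extent_size_quantize_ceil(): round up to a page
 * multiple (real size-class quantization is finer-grained). */
static size_t
quantize_ceil(size_t size) {
    return (size + PAGE - 1) & ~(PAGE - 1);
}

static size_t
recycle_search_size(size_t usize, bool cache_oblivious) {
    /* Buggy order: quantize_ceil(usize + PAGE) skews the class upward. */
    size_t size = quantize_ceil(usize);  /* size class first... */
    if (cache_oblivious)
        size += PAGE;                    /* ...then padding */
    return size;
}
```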
Jason Evans
ea9961acdb Fix psz/pind edge cases.
Add an "over-size" extent heap in which to store extents which exceed
the maximum size class (plus cache-oblivious padding, if enabled).
Remove psz2ind_clamp() and use psz2ind() instead so that trying to
allocate the maximum size class can in principle succeed.  In practice,
this allows assertions to hold so that OOM errors can be successfully
generated.
2016-11-03 22:33:34 -07:00
Jason Evans
8dd5ea87ca Fix extent_alloc_cache[_locked]() to support decommitted allocation.
Fix extent_alloc_cache[_locked]() to support decommitted allocation, and
use this ability in arena_stash_dirty(), so that decommitted extents are
not needlessly committed during purging.  In practice this does not
happen on any currently supported systems, because both extent merging
and decommit must be implemented; all supported systems implement one
xor the other.
2016-11-03 22:33:23 -07:00
Jason Evans
4f7d8c2dee Update symbol mangling. 2016-11-03 15:00:02 -07:00