Commit Graph

2869 Commits

Author SHA1 Message Date
Jason Evans
e916d55ba1 Add configure support for *-*-linux-android.
This is tailored to Android, i.e. more specific than the *-*-linux*
configuration.

This resolves #471.
2016-11-10 15:40:23 -08:00
Samuel Moritz
092d760817 Support Debian GNU/kFreeBSD.
Treat it exactly like Linux since they both use GNU libc.
2016-11-10 15:39:33 -08:00
Jason Evans
32d69e967e Add configure support for *-*-linux-android.
This is tailored to Android, i.e. more specific than the *-*-linux*
configuration.

This resolves #471.
2016-11-10 15:36:17 -08:00
Jason Evans
b4486dce24 Update config.{guess,sub} from upstream. 2016-11-10 15:08:32 -08:00
Jason Evans
c233dd5e40 Update config.{guess,sub} from upstream. 2016-11-10 15:02:05 -08:00
Jason Evans
0110fa8451 Merge branch 'rc-4.3.1' 2016-11-07 17:21:12 -08:00
Jason Evans
b0f56583b7 Update ChangeLog for 4.3.1. 2016-11-07 16:22:25 -08:00
Jason Evans
85dae2ff49 Update ChangeLog for 4.3.1. 2016-11-07 16:22:02 -08:00
Jason Evans
7b8e74f48f Revert "Define 64-bits atomics unconditionally"
This reverts commit af33e9a597.

This resolves #495.
2016-11-07 11:51:05 -08:00
Jason Evans
5d6cb6eb66 Refactor prng to not use 64-bit atomics on 32-bit platforms.
This resolves #495.
2016-11-07 11:50:59 -08:00
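
A sketch of the idea behind this prng refactor, assuming C11 <stdatomic.h> and using hypothetical names and constants (not jemalloc's actual prng code): keep the state in a size_t-wide atomic, which is 32 bits on 32-bit platforms, so those targets never need 64-bit atomic operations.

#include <stdatomic.h>
#include <stddef.h>

/* State is size_t-wide: 32 bits on 32-bit platforms, 64 bits on 64-bit
 * platforms, so 32-bit targets never need 64-bit atomics. */
static atomic_size_t prng_state = 42;

/* Advance the state with a CAS loop; the LCG constants are illustrative. */
static size_t
prng_next_zu(void)
{
    size_t old = atomic_load_explicit(&prng_state, memory_order_relaxed);
    size_t new;

    do {
        new = old * (size_t)2654435761u + 1;
    } while (!atomic_compare_exchange_weak_explicit(&prng_state, &old,
        new, memory_order_relaxed, memory_order_relaxed));
    return new;
}
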
Jason Evans
5e0373c815 Fix test_prng_lg_range_zu() to work on 32-bit systems. 2016-11-07 11:50:11 -08:00
Jason Evans
cda59f9970 Rename atomic_*_{uint32,uint64,u}() to atomic_*_{u32,u64,zu}().
This change conforms to naming conventions throughout the codebase.
2016-11-07 11:27:48 -08:00
Jason Evans
2e46b13ad5 Revert "Define 64-bits atomics unconditionally"
This reverts commit c2942e2c0e.

This resolves #495.
2016-11-07 10:53:35 -08:00
Jason Evans
04b463546e Refactor prng to not use 64-bit atomics on 32-bit platforms.
This resolves #495.
2016-11-07 10:52:44 -08:00
Jason Evans
a4e83e8593 Fix run leak.
Fix arena_run_first_best_fit() to search all potentially non-empty
runs_avail heaps, rather than ignoring the heap that contains runs
larger than large_maxclass but smaller than chunksize.

This fixes a regression caused by
f193fd80cf (Refactor runs_avail.).

This resolves #493.
2016-11-07 09:43:39 -08:00
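
The shape of that fix, as a hedged sketch with hypothetical types (not the actual arena code): a first-best-fit search has to walk every heap from the starting size-class index through the last one, including the heap holding runs larger than large_maxclass but smaller than chunksize.

#include <stddef.h>

#define NHEAPS 64    /* hypothetical number of runs_avail heaps */

typedef struct run_s run_t;
typedef struct heap_s { run_t *first; } heap_t;

static heap_t runs_avail[NHEAPS];

/* First best fit: start at the smallest size class that can satisfy
 * the request and visit *all* remaining heaps, rather than stopping
 * before the final (over-large_maxclass) heap. */
static run_t *
run_first_best_fit(size_t start_ind)
{
    for (size_t i = start_ind; i < NHEAPS; i++) {
        if (runs_avail[i].first != NULL)
            return runs_avail[i].first;
    }
    return NULL;
}
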
Jason Evans
9bef119b42 Merge branch 'rel-4.3.0' 2016-11-04 17:46:17 -07:00
Jason Evans
8019f4c21c Merge branch 'rel-4.3.0' 2016-11-04 17:38:52 -07:00
Jason Evans
23f04ef9b7 Update ChangeLog for 4.3.0. 2016-11-04 15:15:40 -07:00
Jason Evans
e0a9e78374 Update ChangeLog for 4.3.0. 2016-11-04 15:15:24 -07:00
Jason Evans
28b7e42e44 Fix arena data structure size calculation.
Fix paren placement so that QUANTUM_CEILING() applies to the correct
portion of the expression that computes how much memory to base_alloc().
In practice this bug had no impact.  This was caused by
5d8db15db9 (Simplify run quantization.),
which in turn fixed an over-allocation regression caused by
3c4d92e82a (Add per size class huge
allocation statistics.).
2016-11-04 15:00:08 -07:00
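
To see why paren placement matters, here is an illustrative (not actual) QUANTUM_CEILING() and two placements that round different quantities; as the commit notes, the real bug was harmless in practice, but the principle is the same.

#include <stdio.h>
#include <stddef.h>

#define QUANTUM            ((size_t)16)    /* illustrative */
#define QUANTUM_CEILING(s) (((s) + QUANTUM - 1) & ~(QUANTUM - 1))

int
main(void)
{
    size_t header = 1000;    /* stand-in for sizeof(arena_t) */
    size_t extra = 24;       /* stand-in for trailing array space */

    /* Rounding only the header, then adding the rest ... */
    size_t a = QUANTUM_CEILING(header) + extra;
    /* ... is not the same as rounding the whole sum: */
    size_t b = QUANTUM_CEILING(header + extra);

    printf("%zu vs %zu\n", a, b);    /* 1032 vs 1024 */
    return 0;
}
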
Matthew Parkinson
77635bf532 Fixes to Visual Studio Project files 2016-11-04 10:01:31 -07:00
Matthew Parkinson
d30b3ea51a Fixes to Visual Studio Project files 2016-11-04 09:59:05 -07:00
Jason Evans
cb3ad659f0 Use -std=gnu11 if available.
This supersedes -std=gnu99 and enables C11 atomics.
2016-11-04 01:11:48 -07:00
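
A minimal standalone example of the C11 atomics that -std=gnu11 makes available (assuming the toolchain does not define __STDC_NO_ATOMICS__):

#include <stdatomic.h>
#include <stdio.h>

static atomic_uint counter;

int
main(void)
{
    atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
    printf("%u\n", atomic_load_explicit(&counter, memory_order_relaxed));
    return 0;
}
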
Jason Evans
213667fe26 Update ChangeLog for 4.3.0. 2016-11-04 00:04:27 -07:00
Jason Evans
32896a902b Fix large allocation to search optimal size class heap.
Fix arena_run_alloc_large_helper() to not convert size to usize when
searching for the first best fit via arena_run_first_best_fit().  This
allows the search to consider the optimal quantized size class, so that
e.g. allocating and deallocating 40 KiB in a tight loop can reuse the
same memory.

This regression was nominally caused by
5707d6f952 (Quantize szad trees by size
class.), but it did not commonly cause problems until
8a03cf039c (Implement cache index
randomization for large allocations.).  These regressions were first
released in 4.0.0.

This resolves #487.
2016-11-03 22:36:30 -07:00
Jason Evans
e9012630ac Fix chunk_alloc_cache() to support decommitted allocation.
Fix chunk_alloc_cache() to support decommitted allocation, and use this
ability in arena_chunk_alloc_internal() and arena_stash_dirty(), so that
chunks don't get permanently stuck in a hybrid state.

This resolves #487.
2016-11-03 22:36:30 -07:00
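
One way to picture decommitted-allocation support, with a hypothetical descriptor and signature rather than the real chunk_alloc_cache(): the cache hands the chunk back as-is and reports its commit state through an in/out flag, instead of unconditionally committing it.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical cached-chunk descriptor. */
typedef struct chunk_s {
    void   *addr;
    size_t  size;
    bool    committed;
} chunk_t;

/* Sketch: *commit is in/out.  On entry it says whether the caller needs
 * committed memory; on return it says whether the returned chunk is
 * committed, so decommitted chunks can be handed out unchanged. */
static void *
chunk_alloc_cache_sketch(chunk_t *cached, size_t size, bool *commit)
{
    if (cached == NULL || cached->size < size)
        return NULL;
    if (*commit && !cached->committed) {
        /* A real allocator would commit the pages here. */
        cached->committed = true;
    }
    *commit = cached->committed;
    return cached->addr;
}
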
Jason Evans
6d2a57cfbb Use -std=gnu11 if available.
This supersedes -std=gnu99 and enables C11 atomics.
2016-11-03 21:57:17 -07:00
Jason Evans
0760876927 Update ChangeLog for 4.3.0. 2016-11-04 00:02:43 -07:00
Jason Evans
a967fae362 Fix/simplify extent_recycle() allocation size computations.
Do not call s2u() during alloc_size computation, since any necessary
ceiling increase is taken care of later by extent_first_best_fit() -->
extent_size_quantize_ceil(), and the s2u() call may erroneously cause a
higher quantization result.

Remove an overly strict overflow check that was added in
4a7852137d (Fix extent_recycle()'s
cache-oblivious padding support.).
2016-11-03 23:49:21 -07:00
Jason Evans
4a7852137d Fix extent_recycle()'s cache-oblivious padding support.
Add padding *after* computing the size class, so that the optimal size
class isn't skipped during search for a usable extent.  This regression
was caused by b46261d58b (Implement
cache-oblivious support for huge size classes.).
2016-11-03 22:33:35 -07:00
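
The ordering issue can be illustrated with a toy size-class function (hypothetical; real size classes are finer grained): adding the cache-oblivious padding after the size class is computed keeps the request keyed to its own class, while adding it before can bump the request into a larger class.

#include <stddef.h>

#define PAGE ((size_t)4096)

/* Toy size-class function: round up to the next power of two, with a
 * page as the minimum.  Only meant to show why ordering matters. */
static size_t
size_class(size_t size)
{
    size_t sc = PAGE;

    while (sc < size)
        sc <<= 1;
    return sc;
}

/* Padding added *after* choosing the size class. */
static size_t
alloc_size_fixed(size_t size)
{
    return size_class(size) + PAGE;
}

/* The regression: padding added *before*, which can inflate the class.
 * For size = 5000: fixed = 8192 + 4096 = 12288, regressed = 16384. */
static size_t
alloc_size_regressed(size_t size)
{
    return size_class(size + PAGE);
}
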
Jason Evans
ea9961acdb Fix psz/pind edge cases.
Add an "over-size" extent heap in which to store extents which exceed
the maximum size class (plus cache-oblivious padding, if enabled).
Remove psz2ind_clamp() and use psz2ind() instead so that trying to
allocate the maximum size class can in principle succeed.  In practice,
this allows assertions to hold so that OOM errors can be successfully
generated.
2016-11-03 22:33:34 -07:00
Jason Evans
8dd5ea87ca Fix extent_alloc_cache[_locked]() to support decommitted allocation.
Fix extent_alloc_cache[_locked]() to support decommitted allocation, and
use this ability in arena_stash_dirty(), so that decommitted extents are
not needlessly committed during purging.  In practice this does not
happen on any currently supported systems, because both extent merging
and decommit must be implemented; all supported systems implement one
xor the other.
2016-11-03 22:33:23 -07:00
Jason Evans
4f7d8c2dee Update symbol mangling. 2016-11-03 15:00:02 -07:00
Jason Evans
dd3ed23aea Update symbol mangling. 2016-11-03 14:55:58 -07:00
Jason Evans
1ceae2f8cb Update ChangeLog for 4.3.0. 2016-11-02 21:42:16 -07:00
Jason Evans
62de7680ca Update project URL. 2016-11-02 21:42:16 -07:00
Jason Evans
04e1328ef1 Update ChangeLog for 4.3.0. 2016-11-02 21:39:24 -07:00
Samuel Moritz
69f027b855 Support Debian GNU/kFreeBSD.
Treat it exactly like Linux since they both use GNU libc.
2016-11-02 20:36:37 -07:00
Dave Watson
25f7bbcf28 Fix long spinning in rtree_node_init
rtree_node_init spinlocks the node, allocates, and then sets the node.
This is under heavy contention at the top of the tree if many threads
start to allocate at the same time.

Instead, take a per-rtree sleeping mutex to reduce spinning.  Tested with
both pthreads mutexes and OS X OSSpinLock; both reduce spinning adequately.

Previous benchmark time:
./ttest1 500 100
~15s

New benchmark time:
./ttest1 500 100
0.57s
2016-11-02 20:30:53 -07:00
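
The pattern is double-checked lazy initialization under a sleeping lock; a sketch with pthreads and hypothetical types (not the actual rtree code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct node_s { struct node_s *child[16]; } node_t;

typedef struct rtree_s {
    pthread_mutex_t   init_lock;    /* per-rtree sleeping mutex */
    _Atomic(node_t *) root;
} rtree_t;

/* Lazy init: threads that lose the race block on the mutex instead of
 * spinning at the top of the tree. */
static node_t *
rtree_root_init(rtree_t *rtree)
{
    node_t *node = atomic_load_explicit(&rtree->root, memory_order_acquire);

    if (node != NULL)
        return node;

    pthread_mutex_lock(&rtree->init_lock);
    node = atomic_load_explicit(&rtree->root, memory_order_relaxed);
    if (node == NULL) {
        node = calloc(1, sizeof(*node));
        atomic_store_explicit(&rtree->root, node, memory_order_release);
    }
    pthread_mutex_unlock(&rtree->init_lock);
    return node;
}
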
Dave Watson
712fde79fd Check for existence of CPU_COUNT macro before using it.
This resolves #485.
2016-11-02 20:05:40 -07:00
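
Since CPU_COUNT is a macro, a plain preprocessor check is enough to guard its use; a sketch of that guard on Linux, with sysconf(3) as an illustrative fallback:

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

/* CPU_COUNT() is not provided by every libc (it appeared in glibc 2.6),
 * so only use it when the macro exists. */
static long
ncpus(void)
{
#ifdef CPU_COUNT
    cpu_set_t set;

    if (sched_getaffinity(0, sizeof(set), &set) == 0)
        return CPU_COUNT(&set);
#endif
    return sysconf(_SC_NPROCESSORS_ONLN);
}
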
Dave Watson
6c56e194b0 Check for existence of CPU_COUNT macro before using it.
This resolves #485.
2016-11-02 19:54:19 -07:00
Jason Evans
eca3bc0131 Fix syscall(2) configure test for Linux. 2016-11-02 19:51:23 -08:00
Jason Evans
83ebf2fda5 Fix syscall(2) configure test for Linux. 2016-11-02 19:50:44 -08:00
Jason Evans
da206df10b Do not use syscall(2) on OS X 10.12 (deprecated). 2016-11-02 19:35:12 -07:00
Jason Evans
3f2b8d9cfa Add os_unfair_lock support.
OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended
replacement.
2016-11-02 19:35:12 -07:00
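
A hedged sketch of the substitution, requiring the macOS 10.12 SDK: os_unfair_lock_lock()/os_unfair_lock_unlock() from <os/lock.h> are the real API, but the surrounding function is illustrative.

#include <os/lock.h>

static os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;

/* Replaces an OSSpinLockLock()/OSSpinLockUnlock() pair; waiters block
 * in the kernel instead of spinning. */
static void
critical_section(void)
{
    os_unfair_lock_lock(&lock);
    /* ... protected work ... */
    os_unfair_lock_unlock(&lock);
}
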
Jason Evans
a99e0fa2d2 Fix/refactor zone allocator integration code.
Fix zone_force_unlock() to reinitialize mutexes rather than unlock them,
since OS X 10.12 cannot tolerate a child unlocking mutexes that were
locked by its parent.

Refactor; this was a side effect of experimenting with zone
{de,re}registration during fork(2).
2016-11-02 19:35:09 -07:00
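
A generic pthreads sketch of "reinitialize instead of unlock" in a fork child handler (hypothetical names; not the actual zone integration code):

#include <pthread.h>

static pthread_mutex_t zone_mtx = PTHREAD_MUTEX_INITIALIZER;

/* pthread_atfork() child handler: the child is single-threaded, so the
 * lock state inherited from the parent can simply be discarded by
 * reinitializing the mutex, rather than calling pthread_mutex_unlock()
 * on a mutex the parent locked (which OS X 10.12 rejects). */
static void
zone_postfork_child(void)
{
    pthread_mutex_init(&zone_mtx, NULL);
}
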
Jason Evans
31db315f17 Call _exit(2) rather than exit(3) in forked child.
_exit(2) is async-signal-safe, whereas exit(3) is not.
2016-11-02 19:24:49 -07:00
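
A standalone illustration of the difference: the forked child terminates via _exit(2), which is async-signal-safe and skips atexit(3) handlers and stdio flushing, where exit(3) gives no such guarantee.

#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: _exit(2) is async-signal-safe and does not run
         * atexit(3) handlers or flush stdio buffers inherited from
         * the parent; exit(3) is not safe here. */
        _exit(1);
    }
    if (pid > 0) {
        int status;

        waitpid(pid, &status, 0);
    }
    return 0;
}
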
Jason Evans
d82f2b3473 Do not use syscall(2) on OS X 10.12 (deprecated). 2016-11-02 19:18:33 -07:00
Jason Evans
795f6689de Add os_unfair_lock support.
OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended
replacement.
2016-11-02 18:09:45 -07:00
Jason Evans
d9f7b2a430 Fix/refactor zone allocator integration code.
Fix zone_force_unlock() to reinitialize mutexes rather than unlock them,
since OS X 10.12 cannot tolerate a child unlocking mutexes that were
locked by its parent.

Refactor; this was a side effect of experimenting with zone
{de,re}registration during fork(2).
2016-11-02 18:06:40 -07:00