Commit Graph

1395 Commits

Jason Evans
b9408d77a6 Fix/simplify chunk_recycle() allocation size computations.
Remove outer CHUNK_CEILING(s2u(...)) from alloc_size computation, since
s2u() may overflow (and return 0), and CHUNK_CEILING() is only needed
around the alignment portion of the computation.

This fixes a regression caused by
5707d6f952 (Quantize szad trees by size
class.) and first released in 4.0.0.

This resolves #497.
2016-11-11 22:18:39 -08:00
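
A hedged sketch of the resulting computation, reconstructed from the message above (the placeholder macro values are assumptions; the real definitions live in jemalloc's internal headers):

    #include <stddef.h>

    /* Placeholder definitions so this sketch is self-contained. */
    #define chunksize        ((size_t)1 << 21)
    #define CHUNK_CEILING(s) (((s) + chunksize - 1) & ~(chunksize - 1))

    static size_t
    alloc_size_sketch(size_t size, size_t alignment) {
        /* Previously: CHUNK_CEILING(s2u(size + alignment - chunksize)),
         * where s2u() could overflow and return 0 for huge requests.
         * Now only the alignment term is chunk-rounded. */
        return size + CHUNK_CEILING(alignment) - chunksize;
    }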
Jason Evans
2cdf07aba9 Fix extent_quantize() to handle greater-than-huge-size extents.
Allocation requests can't directly create extents that exceed
HUGE_MAXCLASS, but extent merging can create them.

This fixes a regression caused by
8a03cf039c (Implement cache index
randomization for large allocations.) and first released in 4.0.0.

This resolves #497.
2016-11-11 22:17:27 -08:00
Jason Evans
e916d55ba1 Add configure support for *-*-linux-android.
This is tailored to Android, i.e. more specific than the *-*-linux*
configuration.

This resolves #471.
2016-11-10 15:40:23 -08:00
Samuel Moritz
092d760817 Support Debian GNU/kFreeBSD.
Treat it exactly like Linux since they both use GNU libc.
2016-11-10 15:39:33 -08:00
Jason Evans
b4486dce24 Update config.{guess,sub} from upstream. 2016-11-10 15:08:32 -08:00
Jason Evans
0110fa8451 Merge branch 'rc-4.3.1' 2016-11-07 17:21:12 -08:00
Jason Evans
b0f56583b7 Update ChangeLog for 4.3.1. 2016-11-07 16:22:25 -08:00
Jason Evans
7b8e74f48f Revert "Define 64-bits atomics unconditionally"
This reverts commit af33e9a597.

This resolves #495.
2016-11-07 11:51:05 -08:00
Jason Evans
5d6cb6eb66 Refactor prng to not use 64-bit atomics on 32-bit platforms.
This resolves #495.
2016-11-07 11:50:59 -08:00
Jason Evans
a4e83e8593 Fix run leak.
Fix arena_run_first_best_fit() to search all potentially non-empty
runs_avail heaps, rather than ignoring the heap that contains runs
larger than large_maxclass, but less than chunksize.

This fixes a regression caused by
f193fd80cf (Refactor runs_avail.).

This resolves #493.
2016-11-07 09:43:39 -08:00
Jason Evans
9bef119b42 Merge branch 'rel-4.3.0' 2016-11-04 17:46:17 -07:00
Jason Evans
8019f4c21c Merge branch 'rel-4.3.0' 2016-11-04 17:38:52 -07:00
Jason Evans
23f04ef9b7 Update ChangeLog for 4.3.0. 2016-11-04 15:15:40 -07:00
Jason Evans
28b7e42e44 Fix arena data structure size calculation.
Fix paren placement so that QUANTUM_CEILING() applies to the correct
portion of the expression that computes how much memory to base_alloc().
In practice this bug had no impact.  This was caused by
5d8db15db9 (Simplify run quantization.),
which in turn fixed an over-allocation regression caused by
3c4d92e82a (Add per size class huge
allocation statistics.).
2016-11-04 15:00:08 -07:00
Matthew Parkinson
77635bf532 Fixes to Visual Studio Project files 2016-11-04 10:01:31 -07:00
Jason Evans
cb3ad659f0 Use -std=gnu11 if available.
This supersedes -std=gnu99, and enables C11 atomics.
2016-11-04 01:11:48 -07:00
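
For context, a small standalone example of what -std=gnu11 unlocks, namely C11 <stdatomic.h> operations (illustrative usage, not jemalloc code):

    #include <stdatomic.h>
    #include <stdio.h>

    int main(void) {
        /* C11 atomics, available once the compiler runs in (gnu)11 mode. */
        atomic_size_t counter = ATOMIC_VAR_INIT(0);
        atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        printf("%zu\n", atomic_load_explicit(&counter, memory_order_relaxed));
        return 0;
    }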
Jason Evans
213667fe26 Update ChangeLog for 4.3.0. 2016-11-04 00:04:27 -07:00
Jason Evans
32896a902b Fix large allocation to search optimal size class heap.
Fix arena_run_alloc_large_helper() to not convert size to usize when
searching for the first best fit via arena_run_first_best_fit().  This
allows the search to consider the optimal quantized size class, so that
e.g. allocating and deallocating 40 KiB in a tight loop can reuse the
same memory.

This regression was nominally caused by
5707d6f952 (Quantize szad trees by size
class.), but it did not commonly cause problems until
8a03cf039c (Implement cache index
randomization for large allocations.).  These regressions were first
released in 4.0.0.

This resolves #487.
2016-11-03 22:36:30 -07:00
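
A hedged before/after excerpt of the shape of the fix, reconstructed from the message (not the commit's verbatim diff):

    /* Before: converting to usize skipped the heap that holds the
     * optimal quantized size class. */
    run = arena_run_first_best_fit(arena, s2u(size));
    /* After: search with the requested size, so e.g. a freed 40 KiB
     * run is found again by the next 40 KiB request. */
    run = arena_run_first_best_fit(arena, size);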
Jason Evans
e9012630ac Fix chunk_alloc_cache() to support decommitted allocation.
Fix chunk_alloc_cache() to support decommitted allocation, and use this
ability in arena_chunk_alloc_internal() and arena_stash_dirty(), so that
chunks don't get permanently stuck in a hybrid state.

This resolves #487.
2016-11-03 22:36:30 -07:00
Jason Evans
dd3ed23aea Update symbol mangling. 2016-11-03 14:55:58 -07:00
Jason Evans
1ceae2f8cb Update ChangeLog for 4.3.0. 2016-11-02 21:42:16 -07:00
Jason Evans
62de7680ca Update project URL. 2016-11-02 21:42:16 -07:00
Dave Watson
6c56e194b0 Check for existence of CPU_COUNT macro before using it.
This resolves #485.
2016-11-02 19:54:19 -07:00
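
CPU_COUNT is a glibc extension in <sched.h>, so it has to be probed for. A minimal sketch of the guard (the sysconf fallback path is an assumption):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>

    static unsigned
    ncpus_get(void) {
    #if defined(CPU_COUNT)
        cpu_set_t set;
        /* Count only the CPUs this process may actually run on. */
        if (sched_getaffinity(0, sizeof(set), &set) == 0)
            return (unsigned)CPU_COUNT(&set);
    #endif
        /* Fallback when CPU_COUNT is unavailable (assumption). */
        return (unsigned)sysconf(_SC_NPROCESSORS_ONLN);
    }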
Jason Evans
eca3bc0131 Fix syscall(2) configure test for Linux. 2016-11-02 19:51:23 -07:00
Jason Evans
da206df10b Do not use syscall(2) on OS X 10.12 (deprecated). 2016-11-02 19:35:12 -07:00
Jason Evans
3f2b8d9cfa Add os_unfair_lock support.
OS X 10.12 deprecated OSSpinLock; os_unfair_lock is the recommended
replacement.
2016-11-02 19:35:12 -07:00
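
The replacement API is small; a minimal usage sketch (macOS 10.12+):

    #include <os/lock.h>

    static os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;

    void critical_section(void) {
        os_unfair_lock_lock(&lock);
        /* ... protected work ... */
        os_unfair_lock_unlock(&lock);
    }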
Jason Evans
a99e0fa2d2 Fix/refactor zone allocator integration code.
Fix zone_force_unlock() to reinitialize mutexes rather than unlock them,
since OS X 10.12 cannot tolerate a child unlocking mutexes that were
locked by its parent.

Refactor; this was a side effect of experimenting with zone
{de,re}registration during fork(2).
2016-11-02 19:35:09 -07:00
Jason Evans
31db315f17 Call _exit(2) rather than exit(3) in forked child.
_exit(2) is async-signal-safe, whereas exit(3) is not.
2016-11-02 19:24:49 -07:00
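
The distinction matters because in the forked child of a multithreaded process only async-signal-safe functions may be called; exit(3) runs atexit(3) handlers and flushes stdio, neither of which is safe there. A minimal illustration:

    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child of a possibly multithreaded parent: exit(3) would
             * run atexit handlers and flush stdio; _exit(2) is
             * async-signal-safe and terminates immediately. */
            _exit(0);
        }
        return 0;
    }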
Jason Evans
07ee4c5ff4 Force no lazy-lock on Windows.
Monitoring thread creation is unimplemented for Windows, which means
lazy-lock cannot function correctly.

This resolves #310.
2016-11-02 09:13:08 -07:00
Jason Evans
f19bedb04c Use <quote>...</quote> rather than &ldquo;...&rdquo; or "..." in XML. 2016-11-01 15:32:40 -07:00
Jason Evans
b599b32280 Add "J" (JSON) support to malloc_stats_print().
This resolves #474.
2016-11-01 15:32:37 -07:00
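
Usage is via the opts string of malloc_stats_print(); "J" selects JSON output. A minimal sketch (the write callback here is illustrative):

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    static void
    write_cb(void *opaque, const char *s) {
        fputs(s, (FILE *)opaque);
    }

    int main(void) {
        /* "J" requests JSON-formatted statistics. */
        malloc_stats_print(write_cb, stderr, "J");
        return 0;
    }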
Jason Evans
4752a54eeb Refactor witness_unlock() to fix undefined test behavior.
This resolves #396.
2016-10-31 11:51:39 -07:00
Jason Evans
1d57c03e33 Use CLOCK_MONOTONIC_COARSE rather than CLOCK_MONOTONIC_RAW.
The raw clock variant is slow (even relative to plain CLOCK_MONOTONIC),
whereas the coarse clock variant is faster than CLOCK_MONOTONIC, but
still has resolution (~1ms) that is adequate for our purposes.

This resolves #479.
2016-10-29 22:59:42 -07:00
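
For reference, the clock selection this implies; CLOCK_MONOTONIC_COARSE is Linux-specific, so a guard is needed (the fallback branch is an assumption for other platforms):

    #include <time.h>

    void monotonic_now(struct timespec *ts) {
    #ifdef CLOCK_MONOTONIC_COARSE
        /* Cheaper than CLOCK_MONOTONIC; ~1ms resolution is adequate. */
        clock_gettime(CLOCK_MONOTONIC_COARSE, ts);
    #else
        clock_gettime(CLOCK_MONOTONIC, ts);
    #endif
    }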
Jason Evans
c443b67561 Use syscall(2) rather than {open,read,close}(2) during boot.
Some applications wrap various system calls, and if they call the
allocator in their wrappers, unexpected reentry can result.  This is not
a general solution (many other syscalls are spread throughout the code),
but this resolves a bootstrapping issue that is apparently common.

This resolves #443.
2016-10-29 22:46:52 -07:00
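
A hedged sketch of the pattern (the function name is illustrative, and SYS_open is not available on every architecture, e.g. aarch64 only has SYS_openat):

    #include <fcntl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Bypass libc's open/read/close wrappers, which an application may
     * have interposed with allocating code, by issuing raw syscalls
     * during bootstrapping. */
    static int
    boot_read_file(const char *path, char *buf, size_t len) {
    #ifdef SYS_open
        int fd = (int)syscall(SYS_open, path, O_RDONLY);
    #else
        int fd = (int)syscall(SYS_openat, AT_FDCWD, path, O_RDONLY);
    #endif
        ssize_t n;
        if (fd < 0)
            return -1;
        n = (ssize_t)syscall(SYS_read, fd, buf, len);
        (void)syscall(SYS_close, fd);
        return (n < 0) ? -1 : (int)n;
    }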
Jason Evans
35a108c809 Fix EXTRA_CFLAGS to not affect configuration. 2016-10-29 22:46:43 -07:00
Jason Evans
e46f8f97bc Do not mark malloc_conf as weak on Windows.
This works around malloc_conf not being properly initialized by at least
the cygwin toolchain.  Prior build system changes to use
-Wl,--[no-]whole-archive may be necessary for malloc_conf resolution to
work properly as a non-weak symbol (not tested).
2016-10-29 00:16:30 -07:00
Jason Evans
35799a5030 Do not mark malloc_conf as weak for unit tests.
This is generally correct (no need for weak symbols since no jemalloc
library is involved in the link phase), and avoids linking problems
(apparently uninitialized non-NULL malloc_conf) when using cygwin with
gcc.
2016-10-28 23:21:14 -07:00
Dave Watson
ed84764a2a Support static linking of jemalloc with glibc
glibc defines its malloc implementation with several weak and strong
symbols:

strong_alias (__libc_calloc, __calloc)
weak_alias (__libc_calloc, calloc)
strong_alias (__libc_free, __cfree)
weak_alias (__libc_free, cfree)
strong_alias (__libc_free, __free)
strong_alias (__libc_free, free)
strong_alias (__libc_malloc, __malloc)
strong_alias (__libc_malloc, malloc)

The issue is not with the weak symbols, but that other parts of glibc
depend on __libc_malloc explicitly.  Defining them in terms of jemalloc
API's allows the linker to drop glibc's malloc.o completely from the link,
and static linking no longer results in symbol collisions.

Another wrinkle: jemalloc during initialization calls sysconf to
get the number of CPUs.  GLIBC allocates for the first time before
setting up isspace (and other related) tables, which are used by
sysconf.  Instead, use the pthread API to get the number of
CPUs with GLIBC, which seems to work.

This resolves #442.
2016-10-28 15:10:19 -07:00
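
A hedged sketch of the forwarding approach described above, shown as plain wrappers for clarity (the actual commit may use alias attributes instead, and this assumes a jemalloc built with --with-jemalloc-prefix=je_ so its entry points are je_*):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    /* Provide the __libc_* entry points that other parts of glibc
     * reference, forwarding to jemalloc, so the static linker never
     * needs to pull in glibc's malloc.o. */
    void *__libc_malloc(size_t size) { return je_malloc(size); }
    void __libc_free(void *ptr) { je_free(ptr); }
    void *__libc_calloc(size_t n, size_t size) { return je_calloc(n, size); }
    void *__libc_realloc(void *ptr, size_t size) { return je_realloc(ptr, size); }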
Jason Evans
b99c72f3d2 Reduce memory requirements for regression tests.
This is intended to drop memory usage to a level that AppVeyor test
instances can handle.

This resolves #393.
2016-10-28 11:56:16 -07:00
Jason Evans
eaecaad8ea Periodically purge in memory-intensive integration tests.
This resolves #393.
2016-10-28 11:55:09 -07:00
Jason Evans
2c53faf352 Periodically purge in memory-intensive integration tests.
This resolves #393.
2016-10-28 11:54:56 -07:00
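
The periodic purging in these tests presumably goes through jemalloc's mallctl namespace; a sketch of purging all arenas, using the documented convention that "arena.<i>.purge" with i equal to arenas.narenas means all arenas:

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    static void
    purge_all_arenas(void) {
        unsigned narenas;
        size_t sz = sizeof(narenas);
        char ctl[64];

        if (mallctl("arenas.narenas", &narenas, &sz, NULL, 0) != 0)
            return;
        /* i == narenas selects all arenas. */
        snprintf(ctl, sizeof(ctl), "arena.%u.purge", narenas);
        mallctl(ctl, NULL, NULL, NULL, 0);
    }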
Jason Evans
e7d6779918 Only link with libm (-lm) if necessary.
This fixes warnings when building with MSVC.
2016-10-28 00:48:03 -07:00
Jason Evans
875ff15e6a Only use --whole-archive with gcc.
Conditionalize use of --whole-archive on the platform plus compiler,
rather than on the ABI.  This fixes a regression caused by
7b24c6e557 (Use --whole-archive when
linking integration tests on MinGW.).
2016-10-28 00:47:53 -07:00
Jason Evans
1eb801bcad Do not force lazy lock on Windows.
This reverts 13473c7c66, which was
intended to work around bootstrapping issues when linking statically.
However, this actually causes problems in various other configurations,
so this reversion may force a future fix for the underlying problem, if
it still exists.
2016-10-28 00:47:42 -07:00
Jason Evans
dc553d52d8 Fix over-sized allocation of rtree leaf nodes.
Use the correct level metadata when allocating child nodes so that leaf
nodes don't end up over-sized (2^16 elements vs 2^4 elements).
2016-10-28 00:41:15 -07:00
Jason Evans
5569b4a42c Use --whole-archive when linking integration tests on MinGW.
Prior to this change, the malloc_conf weak symbol provided by the
jemalloc dynamic library is always used, even if the application
provides a malloc_conf symbol.  Use the --whole-archive linker option
to allow the weak symbol to be overridden.
2016-10-25 21:52:36 -07:00
Jason Evans
962a2979e3 Do not (recursively) allocate within tsd_fetch().
Refactor tsd so that tsdn_fetch() does not trigger allocation, since
allocation could cause infinite recursion.

This resolves #458.
2016-10-21 00:27:37 -07:00
Jason Evans
e2bcf037d4 Make dss operations lockless.
Rather than protecting dss operations with a mutex, use atomic
operations.  This has negligible impact on synchronization overhead
during typical dss allocation, but is a substantial improvement for
chunk_in_dss() and the newly added chunk_dss_mergeable(), which can be
called multiple times during chunk deallocations.

This change also has the advantage of avoiding tsd in deallocation paths
associated with purging, which resolves potential deadlocks during
thread exit due to attempted tsd resurrection.

This resolves #425.
2016-10-13 15:33:56 -07:00
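
A heavily hedged sketch of the lockless shape: with the dss bounds held in atomics, a chunk_in_dss()-style predicate reduces to plain loads (the names and structure here are illustrative; jemalloc's actual state and update protocol are more involved):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    static _Atomic(void *) dss_base;
    static _Atomic(void *) dss_max;

    /* Membership needs no mutex: two atomic loads suffice, which is
     * what makes repeated calls during chunk deallocation cheap, and
     * it avoids touching tsd entirely. */
    static bool
    chunk_in_dss_sketch(void *chunk) {
        uintptr_t c = (uintptr_t)chunk;
        return c >= (uintptr_t)atomic_load(&dss_base) &&
            c < (uintptr_t)atomic_load(&dss_max);
    }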
Jason Evans
9737685943 Add/use adaptive spinning.
Add spin_t and spin_{init,adaptive}(), which provide a simple
abstraction for adaptive spinning.

Adaptively spin during busy waits in bootstrapping and rtree node
initialization.
2016-10-13 14:58:38 -07:00
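
A sketch in the spirit of spin_t/spin_adaptive(): exponentially more pause instructions per round, then fall back to yielding (the threshold and the CPU_SPINWAIT definition are assumptions; the pause mnemonic is x86-specific):

    #include <sched.h>

    #define CPU_SPINWAIT __asm__ volatile("pause")  /* x86 assumption */

    typedef struct {
        unsigned iteration;
    } spin_t;

    static void
    spin_init(spin_t *spin) {
        spin->iteration = 0;
    }

    static void
    spin_adaptive(spin_t *spin) {
        if (spin->iteration < 5) {
            unsigned i;
            /* Back off exponentially: 1, 2, 4, 8, 16 pauses. */
            for (i = 0; i < (1U << spin->iteration); i++)
                CPU_SPINWAIT;
            spin->iteration++;
        } else {
            sched_yield();
        }
    }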
Jason Evans
a2539fab95 Disallow 0x5a junk filling when running in Valgrind.
Explicitly disallow junk:true and junk:free runtime settings when
running in Valgrind, since deallocation-time junk filling and redzone
validation cause false positive Valgrind reports.

This resolves #470.
2016-10-12 22:58:40 -07:00