Commit Graph

1997 Commits

Author SHA1 Message Date
Dave Watson
ed84764a2a Support static linking of jemalloc with glibc
glibc defines its malloc implementation with several weak and strong
symbols:

strong_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
strong_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
strong_alias (__libc_free, __free) strong_alias (__libc_free, free)
strong_alias (__libc_malloc, __malloc) strong_alias (__libc_malloc, malloc)

The issue is not with the weak symbols, but that other parts of glibc
depend on __libc_malloc explicitly.  Defining them in terms of jemalloc
APIs allows the linker to drop glibc's malloc.o completely from the link,
and static linking no longer results in symbol collisions.

Another wrinkle: during initialization, jemalloc calls sysconf to get
the number of CPUs.  glibc allocates for the first time before setting
up its isspace (and other related) tables, which are used by sysconf.
Instead, use the pthread API to get the number of CPUs under glibc,
which seems to work.

This resolves #442.
2016-10-28 15:10:19 -07:00
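
For illustration, a minimal sketch (not the commit's actual code) of counting
CPUs through the pthread affinity API rather than sysconf();
pthread_getaffinity_np() and CPU_COUNT() are glibc extensions that require
_GNU_SOURCE, and the fallback value is an arbitrary choice.

/* Hedged sketch: count CPUs without sysconf(), which can allocate inside
 * glibc before its isspace tables are initialized. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static unsigned
ncpus_from_affinity(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    if (pthread_getaffinity_np(pthread_self(), sizeof(set), &set) != 0)
        return 1;    /* Conservative fallback. */
    return (unsigned)CPU_COUNT(&set);
}

int
main(void)
{
    printf("ncpus: %u\n", ncpus_from_affinity());
    return 0;
}
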
Dave Watson
8309388408 Support static linking of jemalloc with glibc
glibc defines its malloc implementation with several weak and strong
symbols:

strong_alias (__libc_calloc, __calloc) weak_alias (__libc_calloc, calloc)
strong_alias (__libc_free, __cfree) weak_alias (__libc_free, cfree)
strong_alias (__libc_free, __free) strong_alias (__libc_free, free)
strong_alias (__libc_malloc, __malloc) strong_alias (__libc_malloc, malloc)

The issue is not with the weak symbols, but that other parts of glibc
depend on __libc_malloc explicitly.  Defining them in terms of jemalloc
APIs allows the linker to drop glibc's malloc.o completely from the link,
and static linking no longer results in symbol collisions.

Another wrinkle: during initialization, jemalloc calls sysconf to get
the number of CPUs.  glibc allocates for the first time before setting
up its isspace (and other related) tables, which are used by sysconf.
Instead, use the pthread API to get the number of CPUs under glibc,
which seems to work.

This resolves #442.
2016-10-28 15:08:19 -07:00
Jason Evans
b99c72f3d2 Reduce memory requirements for regression tests.
This is intended to drop memory usage to a level that AppVeyor test
instances can handle.

This resolves #393.
2016-10-28 11:56:16 -07:00
Jason Evans
eaecaad8ea Periodically purge in memory-intensive integration tests.
This resolves #393.
2016-10-28 11:55:09 -07:00
Jason Evans
2c53faf352 Periodically purge in memory-intensive integration tests.
This resolves #393.
2016-10-28 11:54:56 -07:00
Jason Evans
bde815dc40 Reduce memory requirements for regression tests.
This is intended to drop memory usage to a level that AppVeyor test
instances can handle.

This resolves #393.
2016-10-28 11:23:24 -07:00
Jason Evans
970d293257 Periodically purge in memory-intensive integration tests.
This resolves #393.
2016-10-28 11:00:36 -07:00
Jason Evans
963289df13 Periodically purge in memory-intensive integration tests.
This resolves #393.
2016-10-28 10:44:39 -07:00
Jason Evans
e7d6779918 Only link with libm (-lm) if necessary.
This fixes warnings when building with MSVC.
2016-10-28 00:48:03 -07:00
Jason Evans
875ff15e6a Only use --whole-archive with gcc.
Conditionalize use of --whole-archive on the platform plus compiler,
rather than on the ABI.  This fixes a regression caused by
7b24c6e557 (Use --whole-archive when
linking integration tests on MinGW.).
2016-10-28 00:47:53 -07:00
Jason Evans
1eb801bcad Do not force lazy lock on Windows.
This reverts 13473c7c66, which was
intended to work around bootstrapping issues when linking statically.
However, this actually causes problems in various other configurations,
so this reversion may force a future fix for the underlying problem, if
it still exists.
2016-10-28 00:47:42 -07:00
Jason Evans
dc553d52d8 Fix over-sized allocation of rtree leaf nodes.
Use the correct level metadata when allocating child nodes so that leaf
nodes don't end up over-sized (2^16 elements vs 2^4 elements).
2016-10-28 00:41:15 -07:00
Jason Evans
68e14c9884 Fix over-sized allocation of rtree leaf nodes.
Use the correct level metadata when allocating child nodes so that leaf
nodes don't end up over-sized (2^16 elements vs 2^4 elements).
2016-10-28 00:16:55 -07:00
Jason Evans
977103c897 Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *).
This avoids warnings in some cases, and is otherwise generally good
hygiene.
2016-10-27 21:31:25 -07:00
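
For reference, a small self-contained example of that casting convention,
using the public mallctl() interface; the "arenas.narenas" query is only an
illustration.

/* Cast oldp/newp explicitly to (void *) in a mallctl() call. */
#include <stddef.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int
main(void)
{
    unsigned narenas;
    size_t sz = sizeof(narenas);

    if (mallctl("arenas.narenas", (void *)&narenas, &sz, NULL, 0) == 0)
        printf("narenas: %u\n", narenas);
    return 0;
}
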
Jason Evans
44df4a45cf Explicitly cast negative constants meant for use as unsigned. 2016-10-27 21:29:59 -07:00
Jason Evans
17aa187f6b Add cast to silence (harmless) conversion warning. 2016-10-27 21:29:00 -07:00
Jason Evans
48d4adfbeb Avoid negation of unsigned numbers.
Rather than relying on two's complement negation for alignment mask
generation, use bitwise not and addition.  This dodges warnings from
MSVC, and should be strength-reduced by compiler optimization anyway.
2016-10-27 21:26:33 -07:00
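
A sketch of the idiom, assuming size_t operands; the macro shape mirrors a
typical alignment-ceiling helper rather than quoting jemalloc's exact source.

/* Round s up to a multiple of alignment without negating an unsigned value:
 * ((~alignment) + 1) equals -alignment in two's complement, but avoids the
 * unary-minus-on-unsigned warning from MSVC. */
#include <assert.h>
#include <stddef.h>

#define ALIGNMENT_CEILING(s, alignment)    \
    (((s) + ((alignment) - 1)) & ((~(alignment)) + 1))

int
main(void)
{
    assert(ALIGNMENT_CEILING((size_t)13, (size_t)16) == 16);
    assert(ALIGNMENT_CEILING((size_t)4096, (size_t)4096) == 4096);
    return 0;
}
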
Jason Evans
d76cfec319 Only link with libm (-lm) if necessary.
This fixes warnings when building with MSVC.
2016-10-27 21:23:48 -07:00
Jason Evans
c44fa92db5 Only use --whole-archive with gcc.
Conditionalize use of --whole-archive on the platform plus compiler,
rather than on the ABI.  This fixes a regression caused by
7b24c6e557 (Use --whole-archive when
linking integration tests on MinGW.).
2016-10-27 17:10:56 -07:00
Jason Evans
583c32c305 Do not force lazy lock on Windows.
This reverts 13473c7c66, which was
intended to work around bootstrapping issues when linking statically.
However, this actually causes problems in various other configurations,
so this reversion may force a future fix for the underlying problem, if
it still exists.
2016-10-27 15:41:43 -07:00
Jason Evans
7b24c6e557 Use --whole-archive when linking integration tests on MinGW.
Prior to this change, the malloc_conf weak symbol provided by the
jemalloc dynamic library was always used, even if the application
provided its own malloc_conf symbol.  Use the --whole-archive linker
option to allow the weak symbol to be overridden.
2016-10-25 22:03:14 -07:00
Jason Evans
5569b4a42c Use --whole-archive when linking integration tests on MinGW.
Prior to this change, the malloc_conf weak symbol provided by the
jemalloc dynamic library was always used, even if the application
provided its own malloc_conf symbol.  Use the --whole-archive linker
option to allow the weak symbol to be overridden.
2016-10-25 21:52:36 -07:00
Jason Evans
962a2979e3 Do not (recursively) allocate within tsd_fetch().
Refactor tsd so that tsdn_fetch() does not trigger allocation, since
allocation could cause infinite recursion.

This resolves #458.
2016-10-21 00:27:37 -07:00
Jason Evans
b54d160dc4 Do not (recursively) allocate within tsd_fetch().
Refactor tsd so that tsdn_fetch() does not trigger allocation, since
allocation could cause infinite recursion.

This resolves #458.
2016-10-20 23:59:12 -07:00
Jason Evans
577d4572b0 Make dss operations lockless.
Rather than protecting dss operations with a mutex, use atomic
operations.  This has negligible impact on synchronization overhead
during typical dss allocation, but is a substantial improvement for
extent_in_dss() and the newly added extent_dss_mergeable(), which can be
called multiple times during extent deallocations.

This change also has the advantage of avoiding tsd in deallocation paths
associated with purging, which resolves potential deadlocks during
thread exit due to attempted tsd resurrection.

This resolves #425.
2016-10-13 15:37:00 -07:00
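
A hedged sketch of why atomics help here (names and details are illustrative,
not jemalloc's extent_dss internals): membership queries become plain atomic
loads, and extension attempts race via compare-and-swap instead of
serializing on a mutex.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uintptr_t dss_max = 0x1000;    /* Hypothetical initial break. */

static bool
in_dss(uintptr_t dss_base, uintptr_t addr)
{
    /* Lock-free query: a single acquire load of the current bound. */
    return addr >= dss_base &&
        addr < atomic_load_explicit(&dss_max, memory_order_acquire);
}

static bool
try_extend(uintptr_t size)
{
    /* Claim [old_max, old_max + size); a losing racer retries or falls
     * back instead of blocking on a mutex. */
    uintptr_t old_max = atomic_load_explicit(&dss_max, memory_order_relaxed);
    return atomic_compare_exchange_strong_explicit(&dss_max, &old_max,
        old_max + size, memory_order_acq_rel, memory_order_relaxed);
}

int
main(void)
{
    (void)try_extend(0x1000);
    printf("0x1800 in dss: %d\n", in_dss(0x1000, 0x1800));
    return 0;
}
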
Jason Evans
e2bcf037d4 Make dss operations lockless.
Rather than protecting dss operations with a mutex, use atomic
operations.  This has negligible impact on synchronization overhead
during typical dss allocation, but is a substantial improvement for
chunk_in_dss() and the newly added chunk_dss_mergeable(), which can be
called multiple times during chunk deallocations.

This change also has the advantage of avoiding tsd in deallocation paths
associated with purging, which resolves potential deadlocks during
thread exit due to attempted tsd resurrection.

This resolves #425.
2016-10-13 15:33:56 -07:00
Jason Evans
9737685943 Add/use adaptive spinning.
Add spin_t and spin_{init,adaptive}(), which provide a simple
abstraction for adaptive spinning.

Adaptively spin during busy waits in bootstrapping and rtree node
initialization.
2016-10-13 14:58:38 -07:00
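
A minimal sketch in the spirit of spin_t and spin_{init,adaptive}(); the
iteration threshold and the yield fallback are illustrative assumptions, not
the exact implementation.

#include <sched.h>

typedef struct {
    unsigned iteration;
} spin_t;

static void
spin_init(spin_t *spin)
{
    spin->iteration = 0;
}

static void
spin_adaptive(spin_t *spin)
{
    volatile unsigned i;

    if (spin->iteration < 5) {
        /* Busy-wait for an exponentially growing number of rounds. */
        for (i = 0; i < (1U << spin->iteration); i++)
            ;
        spin->iteration++;
    } else
        sched_yield();    /* Past the threshold, stop burning CPU. */
}

int
main(void)
{
    spin_t spin;
    volatile int ready = 0;

    spin_init(&spin);
    while (!ready) {
        spin_adaptive(&spin);
        ready = 1;    /* Stand-in for the real wake-up condition. */
    }
    return 0;
}

A caller would spin_init() once and invoke spin_adaptive() each time the
awaited condition is still unmet, e.g. while waiting for bootstrapping or
rtree node initialization to complete on another thread.
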
Jason Evans
e5effef428 Add/use adaptive spinning.
Add spin_t and spin_{init,adaptive}(), which provide a simple
abstraction for adaptive spinning.

Adaptively spin during busy waits in bootstrapping and rtree node
initialization.
2016-10-13 14:55:39 -07:00
Jason Evans
a2539fab95 Disallow 0x5a junk filling when running in Valgrind.
Explicitly disallow junk:true and junk:free runtime settings when
running in Valgrind, since deallocation-time junk filling and redzone
validation cause false positive Valgrind reports.

This resolves #470.
2016-10-12 22:58:40 -07:00
Jason Evans
9acd5cf178 Remove all vestiges of chunks.
Remove mallctls:
- opt.lg_chunk
- stats.cactive

This resolves #464.
2016-10-12 11:55:43 -07:00
Jason Evans
63b5657aa5 Remove ratio-based purging.
Make decay-based purging the default (and only) mode.

Remove associated mallctls:
- opt.purge
- opt.lg_dirty_mult
- arena.<i>.lg_dirty_mult
- arenas.lg_dirty_mult
- stats.arenas.<i>.lg_dirty_mult

This resolves #385.
2016-10-12 10:40:27 -07:00
Jason Evans
d419bb09ef Fix and simplify decay-based purging.
Simplify decay-based purging so that purge attempts are only triggered
when the epoch is advanced, rather than every time purgeable memory
increases.  In a correctly functioning system (not previously the case;
see below), this only causes a behavior difference if, during subsequent
purge attempts, the least recently used (LRU) purgeable memory extent is
initially too large to be purged, but that memory is reused between
attempts and one or more of the next LRU purgeable memory extents are
small enough to be purged.  In practice this is an arbitrary behavior
change that is within the set of acceptable behaviors.

As for the purging fix, ensure that arena->decay.ndirty is recorded
*after* the epoch advance and associated purging occur.  Prior to this
fix, it was possible for purging during epoch advance to make
(arena->ndirty - arena->decay.ndirty) substantially underrepresentative,
i.e. the number of dirty pages attributed to the current epoch was too
low, and a series of unintended purges could result.  This fix is also
relevant in the context of the simplification described above, but the
bug's impact would be limited to over-purging at epoch advances.
2016-10-11 15:50:05 -07:00
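
A hedged illustration of the ordering this fix establishes; every type and
helper below is a stand-in, not a jemalloc internal.  The point is only that
the per-epoch dirty-page snapshot is taken after the purging done during the
epoch advance, so pages purged by the advance are not still attributed to
the new epoch.

#include <stddef.h>
#include <stdio.h>

typedef struct { size_t ndirty; } stats_t;
typedef struct { size_t epoch; size_t ndirty_at_epoch; } decay_t;

static void
purge_to_limit(stats_t *stats, size_t limit)
{
    if (stats->ndirty > limit)
        stats->ndirty = limit;    /* Stand-in for actual purging. */
}

static void
decay_epoch_advance(decay_t *decay, stats_t *stats, size_t limit)
{
    decay->epoch++;
    purge_to_limit(stats, limit);
    /* The fix: record ndirty after the purge above, not before it. */
    decay->ndirty_at_epoch = stats->ndirty;
}

int
main(void)
{
    stats_t stats = { 1000 };
    decay_t decay = { 0, 0 };

    decay_epoch_advance(&decay, &stats, 600);
    printf("epoch %zu: ndirty_at_epoch = %zu\n", decay.epoch,
        decay.ndirty_at_epoch);
    return 0;
}
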
Jason Evans
a14712b4b8 Fix decay tests to all adapt to nstime_monotonic(). 2016-10-11 15:49:55 -07:00
Jason Evans
b4b4a77848 Fix and simplify decay-based purging.
Simplify decay-based purging so that purge attempts are only triggered
when the epoch is advanced, rather than every time purgeable memory
increases.  In a correctly functioning system (not previously the case;
see below), this only causes a behavior difference if, during subsequent
purge attempts, the least recently used (LRU) purgeable memory extent is
initially too large to be purged, but that memory is reused between
attempts and one or more of the next LRU purgeable memory extents are
small enough to be purged.  In practice this is an arbitrary behavior
change that is within the set of acceptable behaviors.

As for the purging fix, ensure that arena->decay.ndirty is recorded
*after* the epoch advance and associated purging occur.  Prior to this
fix, it was possible for purging during epoch advance to make
(arena->ndirty - arena->decay.ndirty) substantially underrepresentative,
i.e. the number of dirty pages attributed to the current epoch was too
low, and a series of unintended purges could result.  This fix is also
relevant in the context of the simplification described above, but the
bug's impact would be limited to over-purging at epoch advances.
2016-10-11 15:30:01 -07:00
Jason Evans
48993ed536 Fix decay tests to all adapt to nstime_monotonic(). 2016-10-11 15:28:43 -07:00
Jason Evans
45a5bf6772 Do not advance decay epoch when time goes backwards.
Instead, move the epoch backward in time.  Additionally, add
nstime_monotonic() and use it in debug builds to assert that time only
goes backward if nstime_update() is using a non-monotonic time source.
2016-10-10 22:31:37 -07:00
Jason Evans
94e7ffa979 Refactor arena->decay_* into arena->decay.* (arena_decay_t). 2016-10-10 22:22:59 -07:00
Jason Evans
5f11fb7d43 Do not advance decay epoch when time goes backwards.
Instead, move the epoch backward in time.  Additionally, add
nstime_monotonic() and use it in debug builds to assert that time only
goes backward if nstime_update() is using a non-monotonic time source.
2016-10-10 22:15:10 -07:00
Jason Evans
ee0c74b77a Refactor arena->decay_* into arena->decay.* (arena_decay_t). 2016-10-10 20:32:19 -07:00
Jason Evans
b732c395b7 Refine nstime_update().
Add missing #include <time.h>.  The critical time facilities appear to
have been transitively included via unistd.h and sys/time.h, but in
principle this omission could have caused clock_gettime(CLOCK_MONOTONIC,
...) to be overlooked in favor of gettimeofday(), which in turn could
cause spurious non-monotonic time updates.

Refactor nstime_get() out of nstime_update() and add configure tests for
all variants.

Add CLOCK_MONOTONIC_RAW support (Linux-specific) and
mach_absolute_time() support (OS X-specific).

Do not fall back to clock_gettime(CLOCK_REALTIME, ...).  This was a
fragile Linux-specific workaround, which we're unlikely to use at all
now that clock_gettime(CLOCK_MONOTONIC_RAW, ...) is supported, and if we
have no choice besides non-monotonic clocks, gettimeofday() is only
incrementally worse.
2016-10-10 11:40:46 -07:00
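
A hedged sketch of the clock-selection idea: compile-time feature macros
stand in for the configure tests described above, and the OS X
mach_absolute_time() path is omitted for brevity.

#include <stdint.h>
#include <sys/time.h>
#include <time.h>

static uint64_t
get_ns(void)
{
#if defined(CLOCK_MONOTONIC_RAW)
    /* Linux-specific raw monotonic clock, preferred when available. */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
#elif defined(CLOCK_MONOTONIC)
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
#else
    /* Non-monotonic last resort. */
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000000ULL + (uint64_t)tv.tv_usec * 1000ULL;
#endif
}

int
main(void)
{
    return get_ns() != 0 ? 0 : 1;
}
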
Jason Evans
e0164bc63c Refine nstime_update().
Add missing #include <time.h>.  The critical time facilities appear to
have been transitively included via unistd.h and sys/time.h, but in
principle this omission could have caused clock_gettime(CLOCK_MONOTONIC,
...) to be overlooked in favor of gettimeofday(), which in turn could
cause spurious non-monotonic time updates.

Refactor nstime_get() out of nstime_update() and add configure tests for
all variants.

Add CLOCK_MONOTONIC_RAW support (Linux-specific) and
mach_absolute_time() support (OS X-specific).

Do not fall back to clock_gettime(CLOCK_REALTIME, ...).  This was a
fragile Linux-specific workaround, which we're unlikely to use at all
now that clock_gettime(CLOCK_MONOTONIC_RAW, ...) is supported, and if we
have no choice besides non-monotonic clocks, gettimeofday() is only
incrementally worse.
2016-10-10 10:33:59 -07:00
Jason Evans
5d8db15db9 Simplify run quantization. 2016-10-06 15:58:38 -07:00
Jason Evans
f193fd80cf Refactor runs_avail.
Use pszind_t size classes rather than szind_t size classes, and always
reserve space for NPSIZES elements.  This removes unused heaps for size
classes that are not multiples of the page size, and adds (currently)
unused heaps for all huge size classes, with the immediate benefit that
the size of arena_t allocations is constant (no longer dependent on
chunk size).
2016-10-04 19:48:50 -07:00
Jason Evans
1abb49f09d Implement pz2ind(), pind2sz(), and psz2u().
These compute size classes and indices similarly to size2index(),
index2size() and s2u(), respectively, but using the subset of size
classes that are multiples of the page size.  Note that pszind_t and
szind_t are not interchangeable.
2016-10-04 16:29:19 -07:00
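
An intentionally oversimplified sketch of the psz2u() idea: map a request to
the smallest page-multiple size class that contains it.  Real jemalloc size
classes are grouped with power-of-two spacing; here we simply round up to the
next page multiple, which ignores that grouping.

#include <assert.h>
#include <stddef.h>

#define PAGE ((size_t)4096)    /* Assumed page size for illustration. */

static size_t
psz2u_sketch(size_t psz)
{
    /* Smallest multiple of PAGE that is >= psz. */
    return (psz + PAGE - 1) & ~(PAGE - 1);
}

int
main(void)
{
    assert(psz2u_sketch(1) == PAGE);
    assert(psz2u_sketch(PAGE + 1) == 2 * PAGE);
    return 0;
}
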
Jason Evans
bcd5424b1c Use TSDN_NULL rather than NULL as appropriate. 2016-10-04 15:56:56 -07:00
Jason Evans
c19b48fe73 Fix a typo. 2016-10-04 15:56:26 -07:00
Mike Hommey
af33e9a597 Define 64-bit atomics unconditionally
They are used on all platforms in prng.h.
2016-10-04 12:18:14 -07:00
Jason Evans
b6c0867142 Reduce "thread.arena" mallctl contention.
This resolves #460.
2016-10-04 09:54:18 -07:00
Jason Evans
a5a8d7ae8d Remove a size class assertion from extent_size_quantize_floor().
Extent coalescence can result in legitimate calls to
extent_size_quantize_floor() with size larger than LARGE_MAXCLASS.
2016-10-03 14:45:27 -07:00
Jason Evans
871a9498e1 Fix size class overflow bugs.
Avoid calling s2u() on raw extent sizes in extent_recycle().

Clamp psz2ind() (implemented as psz2ind_clamp()) when inserting/removing
into/from size-segregated extent heaps.
2016-10-03 14:18:55 -07:00