Commit Graph

3218 Commits

David T. Goldblatt
dd7e283b6f Tweak the ticker paths to help GCC generate better code.
GCC on its own isn't quite able to turn the ticker subtract into a memory
operation followed by a js.
2018-02-21 16:04:23 -08:00
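
For illustration, a minimal sketch of the decrement-and-branch-on-sign pattern this commit nudges GCC toward. The function name and reset value are hypothetical, not jemalloc's actual ticker API.

```c
#include <stdbool.h>
#include <stdint.h>

#define TICKER_NTICKS 100 /* illustrative reset value */

/*
 * Hypothetical ticker tick: decrement, then branch on the sign bit.  The
 * intent is that the compiler can lower this to a memory-destination
 * subtract followed by a "js" (jump-if-sign) on x86.
 */
static inline bool
ticker_tick_sketch(int32_t *tick) {
	if (--*tick < 0) {
		*tick = TICKER_NTICKS; /* slow path: reset and report a tick */
		return true;
	}
	return false;
}
```
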
David Goldblatt
ae0f5d5c3f CI: Remove "catgets" dependency on appveyor.
This seems to cause a configuration error with msys2.
2018-02-14 16:21:44 -08:00
Maks Naumov
a3abbb4bdf Fix MSVC build 2018-02-12 10:35:53 -08:00
rustyx
83aa9880b7 Make generated headers usable in both x86 and x64 mode in Visual Studio 2018-01-30 13:11:41 -08:00
rustyx
ed52d24f74 Define JEMALLOC_NO_PRIVATE_NAMESPACE also in Visual Studio x86 targets 2018-01-30 13:11:41 -08:00
Christopher Ferris
f78d4ca3fb Modify configure to determine return value of strerror_r.
On glibc and Android's bionic, strerror_r returns char* when
_GNU_SOURCE is defined.

Add a configure check for this rather than assume glibc is the
only libc that behaves this way.
2018-01-10 21:01:18 -08:00
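
For context, the two strerror_r conventions such a check has to distinguish, sketched as a wrapper. The macro name stands in for whatever the configure probe defines; it and the helper are assumptions, not quotes of the actual code.

```c
#include <stdio.h>
#include <string.h>

/*
 * GNU (glibc/bionic with _GNU_SOURCE): char *strerror_r(int, char *, size_t)
 * POSIX:                               int   strerror_r(int, char *, size_t)
 * STRERROR_R_RETURNS_CHAR_WITH_GNU_SOURCE is assumed to be set by the
 * configure probe described above.
 */
static void
format_error_sketch(int err, char *buf, size_t buflen) {
#ifdef STRERROR_R_RETURNS_CHAR_WITH_GNU_SOURCE
	/* GNU variant: the message may live in a static buffer. */
	char *msg = strerror_r(err, buf, buflen);
	if (msg != buf) {
		snprintf(buf, buflen, "%s", msg);
	}
#else
	/* POSIX variant: the message is written into buf on success. */
	if (strerror_r(err, buf, buflen) != 0) {
		snprintf(buf, buflen, "Unknown error %d", err);
	}
#endif
}
```
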
Qi Wang
ba5992fe9a Improve the fit for aligned allocation.
We compute the max size required to satisfy an alignment.  However this can be
quite pessimistic, especially with frequent reuse (and combined with state-based
fragmentation).  This commit adds one more fit step specific to aligned
allocations, searching in all potential fit size classes.
2018-01-05 14:27:58 -08:00
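
A rough illustration of why the conservative bound over-asks: an extent of size + alignment bytes always works, but a candidate whose base happens to be close to aligned can satisfy the request with far less slack. The helper below is hypothetical, not jemalloc's extent code; alignment is assumed to be a power of two.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Exact check for a specific candidate extent: round the base up to the
 * requested alignment and see whether `size` bytes still fit.  This can
 * succeed for extents much smaller than the pessimistic size + alignment
 * bound.
 */
static bool
extent_can_fit_aligned(uintptr_t base, size_t extent_size, size_t size,
    size_t alignment) {
	uintptr_t aligned_start =
	    (base + alignment - 1) & ~(uintptr_t)(alignment - 1);
	return aligned_start - base + size <= extent_size;
}
```
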
Qi Wang
41790f4fa4 Check tsdn_null before reading reentrancy level. 2018-01-05 13:05:17 -08:00
Qi Wang
91b247d311 In iallocztm, check lock rank only when not in reentrancy. 2018-01-05 13:05:17 -08:00
Nehal J Wani
78a87e4a80 Make sure JE_CXXFLAGS_ADD uses CPP compiler
All the invocations of AC_COMPILE_IFELSE inside JE_CXXFLAGS_ADD were
running 'the compiler and compilation flags of the current language',
which was always the C compiler, so the CXXFLAGS were never tested
against a C++ compiler. This patch fixes the issue by temporarily
switching the chosen compiler to C++: pushing it onto the language stack
and popping it immediately after the compilation check.
2018-01-04 11:14:46 -08:00
marxin
433c2edabc Disable JEMALLOC_HAVE_MADVISE_HUGE for arm* CPUs. 2018-01-04 11:13:32 -08:00
Rajeev Misra
72bdbc35e3 extent_t bitpacking logic refactoring 2018-01-04 11:11:04 -08:00
Rajeev Misra
f47e39d11a handle 32 bit mutex counters 2018-01-04 11:08:17 -08:00
David Goldblatt
d41b19f9c7 Implement arena regind computation using div_info_t.
This eliminates the need to generate an enormous switch statement in
arena_slab_regind.
2017-12-21 14:25:43 -08:00
David Goldblatt
21f7c13d0b Add the div module, which allows fast division by dynamic values. 2017-12-21 14:25:43 -08:00
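
A hedged sketch of the multiply-shift trick such a div module can use: precompute ceil(2^32 / d) once, then replace each division by d with a 64-bit multiply and a shift. This is exact whenever n is a multiple of d and fits in 32 bits (the slab-region case); the types and names below are illustrative, and jemalloc's actual div_info_t may differ.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
	uint32_t magic; /* ceil(2^32 / d) */
} div_sketch_t;

/* Precompute the magic multiplier for a runtime divisor d (d >= 2). */
static void
div_sketch_init(div_sketch_t *div, uint32_t d) {
	assert(d >= 2);
	uint64_t two_to_32 = (uint64_t)1 << 32;
	uint32_t magic = (uint32_t)(two_to_32 / d);
	if (two_to_32 % d != 0) {
		magic++; /* round up to ceil(2^32 / d) */
	}
	div->magic = magic;
}

/* Compute n / d; exact when n is a multiple of d and n < 2^32. */
static uint32_t
div_sketch_compute(const div_sketch_t *div, uint32_t n) {
	return (uint32_t)(((uint64_t)n * div->magic) >> 32);
}
```
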
David T. Goldblatt
7f1b02e3fa Split up and standardize naming of stats code.
The arena-associated stats are now all prefixed with arena_stats_, and live in
their own file.  Likewise, malloc_bin_stats_t -> bin_stats_t, also in its own
file.
2017-12-18 16:29:10 -08:00
David T. Goldblatt
901d94a2b0 Rename cache_alloc_easy to cache_bin_alloc_easy.
This lives in the cache_bin module; just a typo.
2017-12-18 16:29:10 -08:00
David T. Goldblatt
8aafa270fd Move bin stats code from arena to bin module. 2017-12-18 16:29:10 -08:00
David T. Goldblatt
48bb4a056b Move bin forking code from arena to bin module. 2017-12-18 16:29:10 -08:00
David T. Goldblatt
a8dd8876fb Move bin initialization from arena module to bin module. 2017-12-18 16:29:10 -08:00
David T. Goldblatt
4bf4a1c4ea Pull out arena_bin_info_t and arena_bin_t into their own file.
In the process, kill arena_bin_index, which is unused.  To follow are several
diffs continuing this separation.
2017-12-18 16:29:10 -08:00
Qi Wang
740bdd68b1 Over purge by 1 extent always.
When purging, large allocations are usually the ones that cross the npages_limit
threshold, simply because they are "large".  This means we often leave the large
extent around for a while, which has two downsides: 1) high RSS and 2) a greater
chance of fragmentation.  Given that such extents are not likely to be reused
very soon (LRU), let's over purge by 1 extent (which is often large and not
reused frequently).
2017-12-18 12:57:07 -08:00
Qi Wang
f70785de91 Skip test/unit/pack when profiling is enabled.
The test assumes no sampled allocations.
2017-12-18 12:47:46 -08:00
Qi Wang
5e0332890f Output opt.lg_extent_max_active_fit in stats. 2017-12-14 15:49:15 -08:00
nicolov
22460cbebd jemalloc_mangle.sh: set sh in strict mode 2017-12-11 23:35:20 -08:00
Ed Schouten
749caf14ae Also use __riscv to detect builds for RISC-V CPUs.
According to the RISC-V toolchain conventions, __riscv__ is the old
spelling of this definition. __riscv should be used going forward.

https://github.com/riscv/riscv-toolchain-conventions#cc-preprocessor-definitions
2017-12-09 10:10:42 -08:00
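
At the preprocessor level, the effect is roughly the sketch below; the defined macro name is illustrative only.

```c
/*
 * Accept both the current (__riscv) and the legacy (__riscv__) spelling
 * when deciding whether the build targets a RISC-V CPU.
 */
#if defined(__riscv) || defined(__riscv__)
#  define IS_RISCV_SKETCH 1
#else
#  define IS_RISCV_SKETCH 0
#endif
```
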
Qi Wang
955b1d9cc5 Fix extent deregister on the leak path.
On the leak path, we should not adjust gdump when deregistering.
2017-12-08 22:22:03 -08:00
Qi Wang
b5ab3f91ea Fix test/integration/extent.
The hook tests should only be run without background threads.  This was introduced
in 6e841f6.
2017-12-08 22:22:03 -08:00
Qi Wang
6e841f618a Add more tests for extent hooks failure paths. 2017-11-28 21:52:49 -08:00
Qi Wang
26a8f82c48 Add missing deregister before extents_leak.
This fixes a regression introduced by 211b1f3 (refactor extent split).
2017-11-19 21:12:40 -08:00
Qi Wang
e475d03752 Avoid setting zero and commit if split fails in extent_recycle. 2017-11-19 21:12:27 -08:00
Qi Wang
3e64dae802 Eagerly coalesce large extents.
Coalescing is a small price to pay for large allocations since they happen less
frequently.  This reduces fragmentation while also potentially improving
locality.
2017-11-16 15:32:02 -08:00
Qi Wang
eb1b08daae Fix an extent coalesce bug.
When coalescing, we should take both extents off the LRU list; otherwise decay
can grab the existing outer extent through extents_evict.
2017-11-16 15:32:02 -08:00
Qi Wang
fac706836f Add opt.lg_extent_max_active_fit
When allocating from dirty extents (which we always prefer if available), large
active extents can get split even if the new allocation is much smaller, in
which case the introduced fragmentation causes significant long-term damage.  This new
option controls the threshold to reuse and split an existing active extent.  We
avoid using a large extent for much smaller sizes, in order to reduce
fragmentation.  In some workloads, adding the threshold improves virtual memory
usage by >10x.
2017-11-16 15:32:02 -08:00
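
A hedged sketch of the kind of threshold check this option enables: only reuse (and split) an active extent whose size is within 2^lg_extent_max_active_fit times the requested size. The helper name is hypothetical, not the actual extent-selection code.

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Reject a candidate active extent if splitting it for this request would
 * exceed the configured size ratio, trading a little reuse for much less
 * fragmentation.
 */
static bool
extent_fit_ok(size_t extent_size, size_t request_size,
    unsigned lg_max_active_fit) {
	return (extent_size >> lg_max_active_fit) <= request_size;
}
```
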
Qi Wang
282a3faa17 Use extent_heap_first for best fit.
extent_heap_any makes the layout less predictable and as a result incurs more
fragmentation.
2017-11-16 15:32:02 -08:00
Dave Watson
d6feed6e66 Use tsd offset_state instead of atomic
While working on #852, I noticed the prng state is atomic.  This is the only
atomic use of prng in all of jemalloc.  Instead, use a thread-local prng
state if possible to avoid unnecessary cache line contention.
2017-11-14 08:58:18 -08:00
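
A hedged sketch contrasting a shared atomic PRNG state with a thread-local one; the LCG constants and names are illustrative, not jemalloc's prng module, and real code would seed each thread differently.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative 64-bit LCG constants (Knuth's MMIX values). */
#define LCG_A 6364136223846793005ULL
#define LCG_C 1442695040888963407ULL

static _Atomic uint64_t shared_prng_state = 1; /* one cache line, all threads */
static _Thread_local uint64_t tls_prng_state = 1; /* per-thread, no contention */

/* Shared state: every caller contends on the same cache line. */
static inline uint64_t
prng_next_shared(void) {
	uint64_t old = atomic_load_explicit(&shared_prng_state,
	    memory_order_relaxed);
	uint64_t next;
	do {
		next = old * LCG_A + LCG_C;
	} while (!atomic_compare_exchange_weak_explicit(&shared_prng_state,
	    &old, next, memory_order_relaxed, memory_order_relaxed));
	return next;
}

/* Thread-local state: no cache-line bouncing between threads. */
static inline uint64_t
prng_next_tls(void) {
	tls_prng_state = tls_prng_state * LCG_A + LCG_C;
	return tls_prng_state;
}
```
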
Qi Wang
cb3b72b975 Fix base allocator THP auto mode locking and stats.
Added proper synchronization for switching to using THP in auto mode.  Also
fixed stats for number of THPs used.
2017-11-09 16:14:12 -08:00
Qi Wang
b5d071c266 Fix unbounded increase in stash_decayed.
Added an upper bound on how many pages we can decay during the current run.
Without this, decay could see an unbounded increase in the number of stashed
pages, since other
threads could add new pages into the extents.
2017-11-08 16:33:30 -08:00
Qi Wang
6dd5681ab7 Use hugepage alignment for base allocator.
This gives us an easier way to tell if the allocation is for metadata in the
extent hooks.
2017-11-03 19:37:13 -07:00
Qi Wang
e422fa8e7e Add arena.i.retain_grow_limit
This option controls the max size used by grow_retained.  This is useful when we
have customized extent hooks reserving physical memory (e.g. 1G huge pages).
Without this feature, the default increasing sequence could result in fragmented
and wasted physical memory.
2017-11-03 13:53:33 -07:00
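
Usage would look roughly like the sketch below, assuming the standard mallctl write interface and arena 0 as the target; the 1 GiB limit is purely illustrative.

```c
#include <jemalloc/jemalloc.h>
#include <stdio.h>

/* Cap how far grow_retained may grow the reservation for arena 0. */
int
main(void) {
	size_t limit = (size_t)1 << 30; /* illustrative: 1 GiB */
	if (mallctl("arena.0.retain_grow_limit", NULL, NULL, &limit,
	    sizeof(limit)) != 0) {
		fprintf(stderr, "setting retain_grow_limit failed\n");
		return 1;
	}
	return 0;
}
```
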
Edward Tomasz Napierala
9f455e2786 Try to use sysctl(3) instead of sysctlbyname(3).
This attempts to use the VM_OVERCOMMIT OID - newly introduced in -CURRENT
a few days ago, specifically for this purpose - instead of querying the
sysctl by its string name.  Due to how sysctlbyname(3) works, this means
we do one syscall during binary startup instead of two.

Signed-off-by: Edward Tomasz Napierala <trasz@FreeBSD.org>
2017-11-03 08:25:39 -07:00
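
Roughly the difference in question, sketched under the assumption that the OID lives under CTL_VM as VM_OVERCOMMIT on FreeBSD; the helper name is hypothetical.

```c
#include <sys/types.h>
#include <sys/sysctl.h>

/*
 * Read vm.overcommit either by numeric OID (one syscall) or by string
 * name (sysctlbyname resolves the name first, costing an extra syscall).
 * Returns -1 on error.
 */
static int
read_vm_overcommit_sketch(void) {
	int value = 0;
	size_t len = sizeof(value);
#if defined(VM_OVERCOMMIT)
	int mib[2] = {CTL_VM, VM_OVERCOMMIT};
	if (sysctl(mib, 2, &value, &len, NULL, 0) != 0) {
		return -1;
	}
#else
	if (sysctlbyname("vm.overcommit", &value, &len, NULL, 0) != 0) {
		return -1;
	}
#endif
	return value;
}
```
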
Edward Tomasz Napierala
d591df05c8 Use getpagesize(3) under FreeBSD.
This avoids a sysctl(2) syscall during binary startup, using the value
passed in the ELF aux vector instead.

Signed-off-by: Edward Tomasz Napierala <trasz@FreeBSD.org>
2017-11-03 08:25:39 -07:00
Qi Wang
58eba024c0 metadata_thp: auto mode adjustment for a0.
We observed that arena 0 can have much more metadata allocated compared to
other arenas.  Tune the auto mode to only switch to huge pages on the 5th block
(instead of the 3rd previously) for a0.
2017-11-01 13:52:06 -07:00
Qi Wang
47203d5f42 Output all counters for bin mutex stats.
The saved space is not worth the trouble of missing counters.
2017-10-19 16:31:54 -07:00
David Goldblatt
d14bbf8d81 Add a "dumpable" bit to the extent state.
Currently, this is unused (i.e. all extents are always marked dumpable).  In the
future, we'll begin using this functionality.
2017-10-16 15:35:49 -07:00
David Goldblatt
bbaa72422b Add pages_dontdump and pages_dodump.
This will, eventually, enable us to avoid dumping eden regions.
2017-10-16 15:35:49 -07:00
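
A hedged sketch of what pages_dontdump/pages_dodump presumably wrap on Linux (madvise with MADV_DONTDUMP / MADV_DODUMP); on platforms without those flags they fall back to no-ops. The bodies below are assumptions, not the actual pages module.

```c
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

/* Exclude a region from core dumps; returns true on failure. */
static bool
pages_dontdump_sketch(void *addr, size_t size) {
#ifdef MADV_DONTDUMP
	return madvise(addr, size, MADV_DONTDUMP) != 0;
#else
	return false; /* not supported: treat as success */
#endif
}

/* Re-include a region in core dumps; returns true on failure. */
static bool
pages_dodump_sketch(void *addr, size_t size) {
#ifdef MADV_DODUMP
	return madvise(addr, size, MADV_DODUMP) != 0;
#else
	return false;
#endif
}
```
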
David Goldblatt
ccd09050aa Add configure-time detection for madvise(..., MADV_DO[NT]DUMP) 2017-10-16 15:35:49 -07:00
David Goldblatt
211b1f3c7d Factor out extent-splitting core from extent lifetime management.
Before this commit, extent_recycle_split intermingles the splitting of an extent
and the return of parts of that extent to a given extents_t.  After it, that
logic is separated.  This will enable splitting extents that don't live in any
extents_t (as the grow retained region soon will).
2017-10-16 15:35:49 -07:00
David Goldblatt
5bad01c38e Document some of the internal extent functions. 2017-10-16 15:35:49 -07:00
rustyx
33df2fa169 Fix MSVC 2015 project and add a VS 2017 solution 2017-10-16 10:26:54 -07:00