Commit Graph

3177 Commits

Qi Wang
18450d0abe Guard libgcc unwind init with opt_prof.
Only trigger libgcc unwind init when prof is enabled.  This helps work around
some bootstrapping issues.
2019-02-21 16:04:47 -08:00
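
A minimal sketch of the guard described above, using jemalloc's opt_prof option flag; the init function body is illustrative, not the actual implementation:

```c
#include <stdbool.h>

extern bool opt_prof;   /* jemalloc's runtime prof option */

static void
prof_unwind_init(void) {
    if (!opt_prof) {
        return;   /* skip libgcc unwind init entirely */
    }
    /* Trigger libgcc's lazy unwinder initialization here, e.g. by
     * performing one throwaway _Unwind_Backtrace() pass. */
}
```
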
Jason Evans
dca7060d5e Avoid redefining tsd_t.
This fixes a build failure when integrating with FreeBSD's libc.  This
regression was introduced by d1e11d48d4
(Move tsd link and in_hook after tcache.).
2019-02-20 20:27:55 -08:00
Qi Wang
9015deb126 Add build_doc by default.
However, skip building the docs (and output warnings) if XML support is missing.
This allows `make install` to succeed w/o `make dist`.
2019-02-08 14:13:20 -08:00
Qi Wang
23b15e764b Add --disable-libdl to travis. 2019-02-06 21:00:59 -08:00
Qi Wang
2db2d2ef5e Make background_thread not dependent on libdl.
When not using libdl, background_thread can still be enabled.
2019-02-06 21:00:59 -08:00
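
A hedged sketch of one way such a fallback can look, under the assumption that the libdl dependency comes from looking up pthread_create via dlsym; JEMALLOC_HAVE_DLSYM is jemalloc's configure define, while the helper name is hypothetical:

```c
#include <pthread.h>
#ifdef JEMALLOC_HAVE_DLSYM
#include <dlfcn.h>   /* RTLD_NEXT may require _GNU_SOURCE on glibc */
#endif

typedef int (*pthread_create_fn_t)(pthread_t *, const pthread_attr_t *,
    void *(*)(void *), void *);

static pthread_create_fn_t
load_pthread_create(void) {
#ifdef JEMALLOC_HAVE_DLSYM
    void *sym = dlsym(RTLD_NEXT, "pthread_create");
    if (sym != NULL) {
        return (pthread_create_fn_t)sym;
    }
#endif
    return pthread_create;   /* direct reference; no libdl required */
}
```
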
Qi Wang
1f55a15467 Add configure option --disable-libdl.
This makes it possible to build a fully static binary.
2019-02-06 21:00:59 -08:00
Qi Wang
8e9a613122 Disable muzzy decay by default. 2019-02-04 14:38:54 -08:00
Qi Wang
e13400c919 Sanity check szind on tcache flush.
This adds some overhead to the tcache flush path (which is one of the hot
paths).  Guard it behind a config option.
2019-02-01 12:31:34 -08:00
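
A sketch of the guarded check; the config flag name is illustrative, not jemalloc's actual option:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for a compile-time safety-check option. */
static const bool config_safety_checks = false;

static inline void
tcache_flush_szind_check(unsigned item_szind, unsigned bin_szind) {
    if (config_safety_checks) {
        /* A mismatch means the pointer was freed into the wrong
         * size-class bin, likely via an invalid free. */
        assert(item_szind == bin_szind);
    }
}
```
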
Qi Wang
b33eb26dee Tweak the spacing for the total_wait_time per second. 2019-01-28 15:37:19 -08:00
Qi Wang
374dc30d3d Update copyright dates. 2019-01-25 13:25:20 -08:00
Qi Wang
e3db480f6f Rename huge_threshold to oversize_threshold.
The keyword huge tends to remind people of huge pages, which are not relevant
to this feature.
2019-01-25 13:15:45 -08:00
Qi Wang
350809dc5d Set huge_threshold to 8M by default.
This feature uses a dedicated arena to handle huge requests, which
significantly reduces VM fragmentation.  In the production workloads we
tested, it often reduces VM size by >30%.
2019-01-24 13:29:23 -08:00
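
A minimal sketch of the size-based routing idea, assuming the 8M threshold above; all names are illustrative, not jemalloc internals:

```c
#include <stddef.h>

#define OVERSIZE_THRESHOLD ((size_t)8 << 20)   /* 8 MiB default */

typedef struct arena_s arena_t;

static arena_t *huge_arena;   /* dedicated arena with eager purging */

/* Route a request: at or above the threshold, bypass the thread's
 * usual arena so short-lived huge allocations cannot fragment the
 * regular arenas' virtual memory. */
static arena_t *
arena_choose_for_size(size_t usize, arena_t *thread_arena) {
    if (usize >= OVERSIZE_THRESHOLD) {
        return huge_arena;
    }
    return thread_arena;
}
```
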
Qi Wang
d3145014a0 Explicitly use arena 0 in alignment and OOM tests.
This helps us avoid issues with size-based routing (i.e., the huge_threshold
feature).
2019-01-24 13:29:23 -08:00
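
Tests can pin an allocation to arena 0 via jemalloc's public non-standard API; a small usage sketch, assuming the usual `<jemalloc/jemalloc.h>` install path:

```c
#include <jemalloc/jemalloc.h>
#include <stdio.h>

int
main(void) {
    /* MALLOCX_ARENA(0) pins the allocation to arena 0, so the
     * huge_threshold routing cannot move it to another arena. */
    void *p = mallocx((size_t)16 << 20, MALLOCX_ARENA(0));
    if (p == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    dallocx(p, 0);
    return 0;
}
```
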
Edward Tomasz Napierala
a7b0a124c3 Mention different mmap(2) behaviour with retain:true. 2019-01-23 18:34:59 -08:00
Qi Wang
522d1e7b4b Tweak the spacing for nrequests in stats output. 2019-01-23 17:42:12 -08:00
Qi Wang
8c9571376e Fix stats output (rate for total # of requests).
The rate calculation for the total row was missing.
2019-01-23 17:42:12 -08:00
Qi Wang
7a815c1b7c Un-experimental the huge_threshold feature. 2019-01-16 12:28:57 -08:00
Qi Wang
bbe8e6a909 Avoid creating bg thds for huge arena alone.
For low arena count settings, the huge threshold feature may trigger an unwanted
bg thd creation.  Given that the huge arena does eager purging by default,
bypass bg thd creation when initializing the huge arena.
2019-01-15 16:00:34 -08:00
Jason Evans
b6f1f2669a Revert "Customize cloning to include tags so that VERSION is valid."
This reverts commit 646af596d8.
2019-01-14 10:35:48 -08:00
Jason Evans
225d89998b Revert "Remove --branch=${CIRRUS_BASE_BRANCH} in git clone command."
This reverts commit fc13a7f1fa.
2019-01-14 10:35:48 -08:00
Qi Wang
f459454afe Avoid potential issues on extent zero-out.
When custom extent_hooks or transparent huge pages are in use, the purging
semantics may change, which means we may not get zeroed pages on repopulating.
Fix the issue by manually zeroing (memset) in such cases.
2019-01-11 19:16:12 -08:00
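
A sketch of the defensive zeroing, with hypothetical helper and flag names:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static void
extent_recycle_ensure_zeroed(void *addr, size_t size, bool zeroed,
    bool purging_semantics_unknown) {
    /* With custom extent hooks or THP, "purged" pages may come back
     * dirty, so do not trust the zeroed flag in those cases. */
    if (!zeroed || purging_semantics_unknown) {
        memset(addr, 0, size);
    }
}
```
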
Qi Wang
0ecd5addb1 Force purge on thread death only when running w/o bg thds. 2019-01-11 19:15:34 -08:00
Jason Evans
fc13a7f1fa Remove --branch=${CIRRUS_BASE_BRANCH} in git clone command.
The --branch parameter is unnecessary, and removing it may avoid problems when
testing directly on the dev branch.
2019-01-11 13:50:56 -08:00
Jason Evans
646af596d8 Customize cloning to include tags so that VERSION is valid. 2019-01-10 15:14:33 -08:00
Li-Wen Hsu
6910fcb208 Add Cirrus-CI config for FreeBSD builds 2019-01-10 15:14:33 -08:00
Faidon Liambotis
471191075d Replace -lpthread with -pthread
This automatically adds -latomic if and when needed, e.g. on riscv64
systems.

Fixes #1401.
2019-01-09 13:43:33 -08:00
Leonardo Santagada
daa0e436ba Implement malloc_getcpu for Windows 2019-01-08 14:34:45 -08:00
John Ericson
4e920d2c9d Add --{enable,disable}-{static,shared} to configure script
My distro offers a custom toolchain where it's not possible to make
static libs, so it's insufficient to just delete the libs I don't want.
I actually need to avoid building them in the first place.
2018-12-19 13:34:26 -08:00
Qi Wang
7241bf5b74 Only read arena index from extent on the tcache flush path.
Add extent_arena_ind_get() to avoid loading the actual arena ptr when we just
need to check arena matching.
2018-12-18 15:19:30 -08:00
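
A hedged sketch of the idea: compare arena indices instead of dereferencing the arena pointer; field names are illustrative:

```c
#include <stdbool.h>

typedef struct extent_s {
    unsigned arena_ind;   /* index of the owning arena */
    /* ... other metadata ... */
} extent_t;

static inline unsigned
extent_arena_ind_get(const extent_t *extent) {
    return extent->arena_ind;
}

/* Tcache flush only needs "same arena or not", so compare indices
 * instead of loading the arena pointer (and its cacheline). */
static inline bool
extent_in_arena(const extent_t *extent, unsigned arena_ind) {
    return extent_arena_ind_get(extent) == arena_ind;
}
```
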
Qi Wang
441335d924 Add unit test for producer-consumer pattern. 2018-12-18 15:09:53 -08:00
Alexander Zinoviev
36de5189c7 Add rate counters to stats 2018-12-18 09:59:41 -08:00
Qi Wang
99f4eefb61 Fix incorrect stats merging with sharded bins.
With sharded bins, we may not flush all items from the same arena in one run.
Adjust the stats merging logic accordingly.
2018-12-07 18:16:15 -08:00
Qi Wang
711a61f3b4 Add unit test for sharded bins. 2018-12-03 17:17:03 -08:00
Qi Wang
98b56ab23d Store the bin shard selection in TSD.
This avoids having to choose a bin shard on the fly, and will also allow
flexible bin binding for each thread.
2018-12-03 17:17:03 -08:00
Qi Wang
45bb4483ba Add stats for arenas.bin.i.nshards. 2018-12-03 17:17:03 -08:00
Qi Wang
3f9f2833f6 Add opt.bin_shards to specify number of bin shards.
The option uses the same format as "slab_sizes" to specify the number of
shards for each bin size.
2018-12-03 17:17:03 -08:00
Qi Wang
37b8913925 Add support for sharded bins within an arena.
This makes it possible to have multiple sets of bins in an arena, which
improves arena scalability because the bins (especially the small ones) are
always the limiting factor in production workloads.

A bin shard is picked on allocation; each extent tracks the bin shard id for
deallocation.  The number of shards is determined via runtime options.
2018-12-03 17:17:03 -08:00
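
An illustrative sketch of the sharding scheme (the names and size-class count are hypothetical): a shard is chosen at allocation time and recorded on the extent so deallocation finds the same shard without re-hashing:

```c
#include <stdint.h>

#define N_BIN_SHARDS 4u
#define N_SIZE_CLASSES 36u   /* assumed small size-class count */

typedef struct bin_s {
    int lock;   /* placeholder for a mutex plus slab freelists */
} bin_t;

typedef struct extent_s {
    uint8_t binshard;   /* shard this slab was allocated from */
} extent_t;

static bin_t bins[N_SIZE_CLASSES][N_BIN_SHARDS];

/* Allocation: pick a shard (here by a per-thread seed) and record it
 * on the slab's extent for later. */
static bin_t *
bin_choose(unsigned binind, unsigned thread_seed, extent_t *slab) {
    unsigned shard = thread_seed % N_BIN_SHARDS;
    slab->binshard = (uint8_t)shard;
    return &bins[binind][shard];
}

/* Deallocation: the extent remembers its shard. */
static bin_t *
bin_for_dalloc(unsigned binind, const extent_t *slab) {
    return &bins[binind][slab->binshard];
}
```
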
Dave Watson
b23336af96 mutex: fix trylock spin wait contention
If there are 3 or more threads spin-waiting on the same mutex, there will be
excessive exclusive cacheline contention, because pthread_mutex_trylock()
immediately tries to CAS in a new value instead of first checking whether the
lock is held.

This diff adds a 'locked' hint flag; while it is set, we spin-wait without
calling trylock().  I don't know of any other portable way to get the same
behavior as pthread_mutex_lock().

This is easy to test via ttest, e.g.

./ttest1 500 3 10000 1 100

Throughput is nearly 3x as fast.

This regression blames to the mutex profiling changes.  However, we almost
never have 3 or more threads contending in properly configured production
workloads, so the impact is limited; it is still worth fixing.
2018-11-28 15:17:02 -08:00
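
A sketch of the test-and-test-and-set pattern described above, written with C11 atomics rather than jemalloc's internal mutex type: spin on a cheap shared load of the locked hint, and only attempt the expensive CAS once the lock looks free:

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_bool locked;   /* contention hint, mirrors lock state */
} spin_hint_lock_t;

static void
hint_lock(spin_hint_lock_t *l) {
    for (;;) {
        /* Shared read; does not invalidate other cores' cachelines. */
        while (atomic_load_explicit(&l->locked, memory_order_relaxed)) {
            /* spin (optionally pause/yield here) */
        }
        bool expected = false;
        /* Only now try to take the line exclusively. */
        if (atomic_compare_exchange_weak_explicit(&l->locked, &expected,
            true, memory_order_acquire, memory_order_relaxed)) {
            return;
        }
    }
}

static void
hint_unlock(spin_hint_lock_t *l) {
    atomic_store_explicit(&l->locked, false, memory_order_release);
}
```
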
Qi Wang
c4063ce439 Set the default number of background threads to 4.
The setting has been tested in production for a while.  There was no negative
effect, while we were able to reduce the number of threads per process.
2018-11-16 09:35:12 -08:00
Qi Wang
43f3b1ad0c Deprecate OSSpinLock. 2018-11-14 08:44:05 -08:00
Dave Watson
13c237c7ef Add a fastpath for arena_slab_reg_alloc_batch
Also adds a configure.ac check for __builtin_popcount, which is used
in the new fastpath.
2018-11-14 07:09:11 -08:00
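
A hedged sketch of batch-filling from a slab's free bitmap; configure checks for __builtin_popcount, and the sketch below uses the sibling builtin __builtin_ctzl to locate free regions (all names illustrative):

```c
#include <stddef.h>

/* Fill up to `want` pointers from one bitmap word of a slab, where a
 * 1 bit marks a free region.  __builtin_ctzl finds the lowest free
 * region; `word &= word - 1` clears it and steps to the next. */
static size_t
slab_reg_alloc_batch(unsigned long *bitmap, void *slab_base,
    size_t reg_size, void **out, size_t want) {
    size_t n = 0;
    unsigned long word = *bitmap;
    while (n < want && word != 0) {
        unsigned bit = (unsigned)__builtin_ctzl(word);
        out[n++] = (char *)slab_base + (size_t)bit * reg_size;
        word &= word - 1;   /* clear the bit we just consumed */
    }
    *bitmap = word;
    return n;   /* number of regions actually produced */
}
```
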
Dave Watson
17aa470760 add extent_nfree_sub 2018-11-14 07:09:11 -08:00
Dave Watson
4b82872ebf arena: Refactor tcache_fill to batch fill from slab
Refactor tcache_fill, introducing a new function arena_slab_reg_alloc_batch,
which will fill multiple pointers from a slab.

There should be no functional changes here, but this allows future
optimization of reg_alloc_batch.
2018-11-14 07:09:11 -08:00
Qi Wang
57553c3b1a Avoid touching all pages in extent_recycle for debug build.
We may have a large number of pages with the *zero flag set (since they are
populated on demand).  Only check the first page to avoid paging in all of
them.
2018-11-13 08:54:48 -08:00
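
A sketch of the cheap debug check, assuming 4 KiB pages; the helper name is hypothetical:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE ((size_t)4096)   /* assumed page size */

/* Pages are zeroed lazily on demand, so touching the whole extent in
 * a debug assertion would fault everything in.  Checking only the
 * first page keeps the check cheap while still catching most bugs. */
static void
extent_assert_zeroed_cheap(const unsigned char *addr, size_t size) {
    size_t check = size < PAGE ? size : PAGE;
    for (size_t i = 0; i < check; i++) {
        assert(addr[i] == 0);
    }
}
```
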
Qi Wang
1f56115704 Fix tcache_flush (follow-up to cd2931a).
Also catch invalid tcache id.
2018-11-13 08:54:09 -08:00
Dave Watson
794e29c0ab Add a free() and sdallocx(where flags=0) fastpath
Add unsized and sized deallocation fastpaths.  Similar to the malloc()
fastpath, this removes all frame manipulation for the majority of
free() calls.  The performance advantage here is smaller than that
of the malloc() fastpath, but prod tests still show around half
a percent of improvement.

Stats and sampling are both supported (sdallocx needs a sampling check;
for rtree lookups, slab will only be set for unsampled objects).

We don't support flush; any flush requests go to the slowpath.
2018-11-12 13:20:37 -08:00
Dave Watson
e2ab215324 refactor tcache_dalloc_small
Add a cache_bin_dalloc_easy (to match the alloc_easy function),
and use it in tcache_dalloc_small.  It will also be used in the
new free fastpath.
2018-11-12 13:20:37 -08:00
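
A minimal sketch of such an "easy" deallocation helper, assuming an array-stack cache bin layout (fields illustrative, not jemalloc's exact struct):

```c
#include <stdbool.h>

typedef struct {
    void **avail;      /* stack of cached pointers */
    int ncached;       /* current count */
    int ncached_max;   /* capacity */
} cache_bin_t;

/* Succeed only in the common case; a full bin returns false and the
 * caller falls back to the existing slow path (flush, etc.). */
static inline bool
cache_bin_dalloc_easy(cache_bin_t *bin, void *ptr) {
    if (bin->ncached == bin->ncached_max) {
        return false;
    }
    bin->avail[bin->ncached++] = ptr;   /* O(1) push, no locking */
    return true;
}
```
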
Dave Watson
5e795297b3 rtree: add rtree_szind_slab_read_fast
For a free fastpath, we want something that will not make additional
calls.  Assume most free() calls will hit the L1 cache, and use
a custom rtree function for this.

Additionally, roll the ptr == NULL check into the rtree cache check.
2018-11-12 13:20:37 -08:00
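
An illustrative sketch of the fastpath idea: a small direct-mapped cache in front of the rtree, with the ptr == NULL case folded into the miss test; the structures here are hypothetical, not jemalloc's rtree types:

```c
#include <stdbool.h>
#include <stdint.h>

#define CACHE_SLOTS 16
#define PAGE_SHIFT 12   /* assumed 4 KiB pages */

typedef struct {
    uintptr_t key;   /* page number of the cached address, 0 = empty */
    unsigned szind;
    bool slab;
} rtree_cache_slot_t;

/* Return true on a cache hit.  A NULL ptr produces key 0, which the
 * hit test rejects, so no separate NULL branch is needed. */
static bool
rtree_szind_slab_read_fast(rtree_cache_slot_t *cache, uintptr_t ptr,
    unsigned *szind, bool *slab) {
    uintptr_t key = ptr >> PAGE_SHIFT;
    rtree_cache_slot_t *slot = &cache[key % CACHE_SLOTS];
    if (key == 0 || slot->key != key) {
        return false;   /* miss (or NULL): caller takes the slow path */
    }
    *szind = slot->szind;
    *slab = slot->slab;
    return true;
}
```
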
Edward Tomasz Napierala
a4c6b9ae01 Restore a FreeBSD-specific getpagesize(3) optimization.
It was removed in 0771ff2cea.
Add a comment explaining its purpose.
2018-11-09 14:14:49 -08:00
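
A sketch of the restored special case, per the commit's rationale that FreeBSD serves getpagesize(3) from the ELF auxiliary vector without a syscall (the surrounding helper is illustrative):

```c
#include <stddef.h>
#include <unistd.h>

static size_t
os_page_detect(void) {
#ifdef __FreeBSD__
    /* On FreeBSD, getpagesize(3) reads the page size from the ELF
     * auxiliary vector, avoiding a syscall during bootstrap. */
    return (size_t)getpagesize();
#else
    return (size_t)sysconf(_SC_PAGESIZE);
#endif
}
```
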
Qi Wang
cd2931ad9b Fix tcaches_flush.
The regression was introduced in 3a1363b.
2018-11-09 13:11:37 -08:00