David Goldblatt
63677dde63
Pages: Statically detect if pages_huge may succeed
2020-12-07 06:21:08 -08:00
David Goldblatt
c1b2a77933
psset: Move in stats.
...
A later change will benefit from having these functions pulled into a
psset-module set of functions.
2020-12-07 06:21:08 -08:00
David Goldblatt
d0a991d47b
psset: Add insert/remove functions.
...
These will allow us to (for instance) move pageslabs from a psset dedicated to
not-yet-hugeified pages to one dedicated to hugeified ones.
2020-12-07 06:21:08 -08:00
David Goldblatt
ecd39418ac
Add fxp: A fixed-point math library.
...
This will be used in the next commit to allow non-integer values for
narenas_ratio.
2020-12-04 23:48:19 -08:00
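As a rough illustration of the fixed-point representation a library like fxp provides (assuming a 32-bit value with 16 integer and 16 fractional bits; the names below are hypothetical, not the library's API):
```
#include <stdint.h>

/* Hypothetical 16.16 fixed-point value: high 16 bits integer part, low 16 bits fraction. */
typedef uint32_t fxp_sketch_t;

static inline fxp_sketch_t fxp_sketch_make(uint16_t ipart, uint16_t frac) {
	return ((uint32_t)ipart << 16) | frac;
}

/* Scale an integer by a fractional ratio, e.g. narenas = ncpus * ratio. */
static inline uint32_t fxp_sketch_mul(uint32_t x, fxp_sketch_t ratio) {
	return (uint32_t)(((uint64_t)x * ratio) >> 16);
}
```
For example, a ratio of 0.75 is fxp_sketch_make(0, 0xC000), and fxp_sketch_mul(16, fxp_sketch_make(0, 0xC000)) == 12.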
David Carlier
520b75fa2d
utrace support with label based signature.
2020-11-30 11:43:00 -08:00
Yinan Zhang
be5e49f4fa
Add a batch mode for cache_bin_alloc()
2020-11-16 20:58:01 -08:00
Yinan Zhang
566c4a8594
Slight changes to cache bin internal functions
2020-11-16 20:58:01 -08:00
David Goldblatt
cf2549a149
Add a per-arena oversize_threshold.
...
This can let manual arenas trade off memory and CPU the way auto arenas do.
2020-11-13 13:45:35 -08:00
David Goldblatt
4ca3d91e96
Rename geom_grow -> exp_grow.
...
This was promised in the review of the introduction of geom_grow, but would have
been painful to do there because of the series that introduced it. Now that
those are committed, renaming is easier.
2020-11-13 13:42:33 -08:00
David Goldblatt
b4c37a6e81
Rename edata_tree_t -> edata_avail_t.
...
This isn't a tree any more, and it mildly irritates me any time I see it.
2020-11-13 13:42:11 -08:00
David Carlier
95f0a77fde
Detect pthread_getname_np explicitly.
...
At least one libc (musl) defines pthread_setname_np without defining
pthread_getname_np. Detect the presence of each individually, rather than
inferring that both must be defined whenever the set function is.
2020-11-11 17:31:22 -08:00
David Goldblatt
589638182a
Use the edata_cache_small_t in the HPA.
2020-11-05 12:34:43 -08:00
David Goldblatt
03a6047111
Edata cache small: rewrite.
...
In previous designs, this was intended to be a sort of cache that couldn't fail.
In the current design, we want to use it just as a contention reduction
mechanism. Rewrite it with those goals in mind.
2020-11-05 12:34:43 -08:00
David Goldblatt
1b3ee75667
Add experimental.thread.activity_callback.
...
This (experimental, undocumented) functionality can be used by users to track
various statistics of interest at a finer level of granularity than the thread.
2020-11-05 12:33:25 -08:00
David Carlier
d2d941017b
MADV_DO[NOT]DUMP support equivalence on FreeBSD.
2020-11-02 09:15:15 -08:00
DC
ef6d51ed44
DragonFlyBSD build support.
2020-10-27 12:35:19 -07:00
Qi Wang
bf72188f80
Allow opt.tcache_max to accept small size classes.
...
Previously, all small size classes were cached. However, this has downsides --
particularly when the page size is greater than 4K (e.g. iOS), which results in
a much higher SMALL_MAXCLASS.
This change allows tcache_max to be set to lower values, to better control
resources taken by tcache.
2020-10-24 20:43:44 -07:00
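A minimal sketch of applying the cap above through the documented malloc_conf string (the 4096 value is purely illustrative):
```
/* Cap tcache'd objects at 4 KiB; after the change above, tcache_max may be
 * set below SMALL_MAXCLASS instead of every small size class being cached. */
const char *malloc_conf = "tcache_max:4096";
```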
David Goldblatt
ea32060f9c
SEC: Implement thread affinity.
...
For now, just have every thread pick a shard once and stick with it.
2020-10-23 11:14:34 -07:00
David Goldblatt
d16849c91d
psset: Do first-fit based on slab age.
...
This functions more like the serial number strategy of the ecache and
hpa_central_t. Longer-lived slabs are more likely to continue to live for
longer in the future.
2020-10-23 11:14:34 -07:00
David Goldblatt
634ec6f50a
Edata: add an "age" field.
2020-10-23 11:14:34 -07:00
David Goldblatt
6599651aee
PA: Use an SEC in front of the HPA shard.
2020-10-23 11:14:34 -07:00
David Goldblatt
ea51e97bb8
Add SEC module: a small extent cache.
...
This can be used to take pressure off a more centralized, worse-sharded
allocator without requiring a full break of the arena abstraction.
2020-10-23 11:14:34 -07:00
David Goldblatt
1964b08394
HPA: Add stats for the hpa_shard.
2020-10-23 11:14:34 -07:00
David Goldblatt
534504d4a7
HPA: add size-exclusion functionality.
...
I.e. only allowing allocations under or over certain sizes.
2020-10-23 11:14:34 -07:00
David Goldblatt
484f04733e
HPA: Add central mutex contention stats.
2020-10-23 11:14:34 -07:00
David Goldblatt
bf025d2ec8
HPA: Make slab sizes and maxes configurable.
...
This allows easy experimentation with them as tuning parameters.
2020-10-23 11:14:34 -07:00
David Goldblatt
1c7da33317
HPA: Tie components into a PAI implementation.
2020-10-23 11:14:34 -07:00
Qi Wang
c8209150f9
Switch from opt.lg_tcache_max to opt.tcache_max
...
Though for convenience, keep parsing lg_tcache_max.
2020-10-22 20:40:41 -07:00
Qi Wang
5e41ff9b74
Add a hard limit on tcache max size class.
...
For locality reasons, tcache bins are integrated into TSD. Allowing all size
classes to be cached has little benefit, but takes up a lot of thread-local storage.
In addition, it complicates the layout, which we try hard to optimize.
2020-10-16 13:49:51 -07:00
Qi Wang
3de19ba401
Eagerly detect double free and sized dealloc bugs for large sizes.
2020-10-15 10:03:16 -07:00
David Goldblatt
21b70cb540
Add hpa_central module
...
This will be the centralized component of the coming hugepage allocator; the
source of larger chunks of memory from which smaller ones can be obtained.
2020-10-05 19:55:57 -07:00
David Goldblatt
1ed7ec369f
Emap: Add emap_assert_not_mapped.
...
The counterpart to emap_assert_mapped, it lets callers check that some edata is
not already in the emap.
2020-10-05 19:55:57 -07:00
David Goldblatt
9e6aa77ab9
PRNG: Remove atomic functionality.
...
These had no uses and complicated the API. As a rule we now expect to only use
thread-local randomization for contention-reduction reasons, so we only pay the
API costs and never get the functionality benefits.
2020-10-05 19:55:57 -07:00
David Goldblatt
0513047170
PRNG: Allow a range argument of 1.
...
This is convenient when the range argument itself is generated from some
computation whose value we don't know in advance.
2020-10-05 19:55:57 -07:00
David Goldblatt
259c5e3e8f
psset: Add stats
2020-09-18 12:39:25 -07:00
David Goldblatt
018b162d67
Add psset: a set of pageslabs.
...
This introduces a new sort of edata_t: a pageslab, and a set to manage them.
This is part of a series of commits to implement a hugepage allocator; the
pageset will be per-arena, and will track small page allocation requests within a
larger extent allocated from a centralized hugepage allocator.
2020-09-18 12:39:25 -07:00
David Goldblatt
ed99d300b9
Flat bitmap: Add longest-range computation.
...
This will come in handy in the (upcoming) page-slab set assertions.
2020-09-18 12:39:25 -07:00
David Goldblatt
e034500698
Edata: rename "ranged" bit to "pai".
...
This better represents its intended purpose; the hugepage allocator design
evolved away from needing contiguity of hugepage virtual address space.
2020-09-18 12:39:25 -07:00
David Goldblatt
7ad2f78663
Avoid a -Wundef warning on LG_SLAB_MAXREGS.
2020-09-17 10:05:40 -07:00
Hao Liu
1541ffc765
configure: add --with-lg-slab-maxregs configure option.
...
Specify the log2 of the maximum number of regions in a slab, which defaults to
(<lg-page> - <lg-tiny-min>). This raises the limit on the slab sizes that can be
specified via "slab_sizes" in malloc_conf. It should never be set below the
default value, and its maximum is bounded by LG_BITMAP_MAXBITS (see bitmap.h).
For example, on a 4k page size system, if we:
1) configure jemalloc with --with-lg-slab-maxregs=12, and
2) export MALLOC_CONF="slab_sizes:9-16:4"
then the slab size for the 16-byte size class is set to 4 pages. Previously, the
default lg-slab-maxregs was 9 (i.e. 12 - 3), so the largest slab for the 16-byte
class was 2 pages (i.e. (1<<9) * 16 bytes). Raising the value from 9 to 12 lets
MALLOC_CONF set slab sizes of up to 16 pages (i.e. (1<<12) * 16 bytes).
2020-09-16 13:58:38 -07:00
Yinan Zhang
b549389e4a
Correct usize in prof last-N record
2020-09-09 13:31:35 -07:00
Yinan Zhang
866231fc61
Do not repeat reentrancy test in profiling
2020-08-25 16:49:32 -07:00
Yinan Zhang
20f2479ed7
Do not create size class tables for non-prof builds
2020-08-24 20:10:02 -07:00
Yinan Zhang
8efcdc3f98
Move unbias data to prof_data
2020-08-24 20:10:02 -07:00
David Goldblatt
5e90fd006e
Geom_grow: Don't keep the mutex internal.
...
We're about to use it in ways that will have external synchronization.
2020-08-19 16:53:21 -07:00
David Goldblatt
c57494879f
Geom_grow: Don't take tsdn at init.
...
It's never used.
2020-08-19 16:53:21 -07:00
David Goldblatt
ffe552223c
Geom_grow: Move in advancing logic.
2020-08-19 16:53:21 -07:00
David Goldblatt
131b1b5338
Rename ecache_grow -> geom_grow.
...
We're about to start using it outside of the ecaches, in the HPA central
allocator.
2020-08-19 16:53:21 -07:00
David Goldblatt
9e18ae639f
Config: safety checks don't imply size checks.
...
The commit introducing size checks accidentally enabled them whenever any safety
checks were on. This ends up causing the regression that splitting up the
features was intended to avoid. Fix the issue.
2020-08-12 13:00:19 -07:00
David Goldblatt
eaed1e39be
Add sized-delete size-checking functionality.
...
The existing checks are good at finding such issues (on tcache flush), but not
so good at pinpointing them. Debug mode can find them, but sometimes debug mode
slows down a program so much that hard-to-hit bugs can take a long time to
crash.
This commit adds functionality to keep programs mostly on their fast paths,
while also checking every sized delete argument they get.
2020-08-05 19:34:05 -07:00
David Goldblatt
60993697d8
Prof: Add prof_unbias.
...
This gives more accurate attribution of bytes and counts to stack traces,
without introducing backwards incompatibilities in heap-profile parsing tools.
We track the ideal reported (to the end user) number of bytes more carefully
inside core jemalloc. When dumping heap profiles, instead of outputting our
counts directly, we output counts that will cause parsing tools to give a result
close to the value we want.
We retain the old version as an opt setting, to let users who are tracking
values on a per-component basis keep their metrics stable until they decide
to switch.
2020-08-05 18:33:55 -07:00
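A sketch of the unbiasing arithmetic this refers to, assuming byte-granularity sampling with an average interval of R bytes (the sampling model and the function name here are assumptions for illustration, not jemalloc's actual code):
```
#include <math.h>

/* Assumed model: with an average sampling interval of R bytes, an allocation
 * of usize bytes is sampled with probability p = 1 - exp(-usize / R).
 * Weighting each sampled allocation by 1/p (and its bytes by usize/p) gives
 * reported totals whose expectation matches the true counts and bytes. */
static double unbiased_count_weight(double usize, double sample_interval) {
	double p = 1.0 - exp(-usize / sample_interval);
	return 1.0 / p;
}
```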
David Goldblatt
81c2f841e5
Add a simple utility to detect profiling bias.
2020-08-05 18:33:55 -07:00
Yinan Zhang
978f830ee3
Add batch allocation API
2020-07-31 09:16:50 -07:00
Yinan Zhang
c6f59e9bb4
Add surplus reading API for thread event lookahead
2020-07-31 09:16:50 -07:00
Yinan Zhang
f805468957
Add zero option to arena batch allocation
2020-07-31 09:16:50 -07:00
Yinan Zhang
49e5c2fe7d
Add batch allocation from fresh slabs
2020-07-31 09:16:50 -07:00
Yinan Zhang
2bb8060d57
Add empty test and concat for typed list
2020-07-31 09:16:50 -07:00
Yinan Zhang
f28cc2bc87
Extract bin shard selection out of bin locking
2020-07-31 09:16:50 -07:00
David Goldblatt
ddb8dc4ad0
FB: Add range iteration support.
2020-07-30 15:25:23 -07:00
David Goldblatt
ceee823519
Add flat_bitmap.
...
The flat_bitmap module offers an extended API, at the cost of decreased
performance in the case of very large bitmaps.
2020-07-30 15:25:23 -07:00
David Goldblatt
22da836094
bit_util: Add fls_ functions; "find last set".
...
These simplify a lot of the bit_util module, which had grown bits and pieces of
this functionality across a variety of places over the years.
While we're here, kill off BIT_UTIL_INLINE and don't do reentrancy testing for
bit_util.
2020-07-30 15:25:23 -07:00
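A minimal sketch of "find last set" semantics using a compiler builtin; the name and the 0-based return convention here are illustrative assumptions rather than the bit_util API:
```
#include <assert.h>

/* 0-based index of the highest set bit; the caller must not pass 0. */
static inline unsigned fls_u32_sketch(unsigned x) {
	assert(x != 0);
	return 31 - (unsigned)__builtin_clz(x);
}
/* fls_u32_sketch(1) == 0, fls_u32_sketch(0x80000000u) == 31. */
```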
David Goldblatt
1ed0288d9c
bit_util: Change ffs functions indexing.
...
Making these 0-based instead of 1-based makes calling code simpler and will be
more consistent with functions introduced in subsequent diffs.
2020-07-30 15:25:23 -07:00
David Goldblatt
6107857b7b
PA->PAC: Move in PAI implementation.
2020-07-09 13:41:04 -07:00
David Goldblatt
6041aaba97
PA -> PAC: Move in destruction functions.
2020-07-09 13:41:04 -07:00
David Goldblatt
471eb5913c
PAC: Move in decay rate setting.
2020-07-09 13:41:04 -07:00
David Goldblatt
6a2774719f
PA->PAC: Move in decay functions.
2020-07-09 13:41:04 -07:00
David Goldblatt
4ee75be3a3
PA -> PAC: Move in decay_purge enum.
2020-07-09 13:41:04 -07:00
David Goldblatt
72435b0aba
PA->PAC: Make extent.c forget about PA.
2020-07-09 13:41:04 -07:00
David Goldblatt
dee5d1c42d
PA->PAC: Move in extent_sn.
2020-07-09 13:41:04 -07:00
David Goldblatt
7391382349
PA->PAC: Move in stats.
2020-07-09 13:41:04 -07:00
David Goldblatt
db211eefbf
PAC: Move in decay.
2020-07-09 13:41:04 -07:00
David Goldblatt
c81e389996
PAC: Move in ecache_grow.
2020-07-09 13:41:04 -07:00
David Goldblatt
65803171a7
PAC: move in emap
2020-07-09 13:41:04 -07:00
David Goldblatt
7efcb946c4
PAC: Add an init function.
2020-07-09 13:41:04 -07:00
David Goldblatt
722652222a
PAC: Move in edata_cache accesses.
2020-07-09 13:41:04 -07:00
David Goldblatt
777b0ba965
Add PAC: Page allocator classic.
...
For now, this is just a stub containing the ecaches, with no surrounding code
changed. Eventually all the core allocator bits will be moved in, in the
subsequent stack of commits.
2020-07-09 13:41:04 -07:00
David Goldblatt
1b5f632e0f
Introduce PAI: Page allocator interface
2020-07-09 13:41:04 -07:00
David Goldblatt
3cf19c6e5e
atomic: add atomic_load_sub_store
2020-07-09 13:41:04 -07:00
David Goldblatt
ae541d3fab
Edata: Reserve some space for hugepages.
2020-07-08 13:20:59 -07:00
David Goldblatt
392f645f4d
Edata: split up different list linkage uses.
2020-07-08 13:20:59 -07:00
David Goldblatt
129b727058
Add typed-list module.
...
This gives some named convenience wrappers.
2020-07-08 13:20:59 -07:00
David Carlier
00f06c9beb
enabling mpss on solaris/illumos.
...
reusing the Linux configuration as much as possible, and aligning the
address range to HUGEPAGE.
2020-07-06 09:59:10 -07:00
Yinan Zhang
c2e7a06392
No need to intercept prof_dump_header() in tests
2020-06-29 14:27:50 -07:00
Yinan Zhang
f58ebdff7a
Generalize prof_cnt_all() for testing
2020-06-29 14:27:50 -07:00
Yinan Zhang
d4259ea53b
Simplify signatures for prof dump functions
2020-06-29 14:27:50 -07:00
Yinan Zhang
1f5fe3a3e3
Pass write callback explicitly in prof_data
2020-06-29 14:27:50 -07:00
Yinan Zhang
dad821bb22
Move unwind to prof_sys
2020-06-29 14:27:50 -07:00
Yinan Zhang
d128efcb6a
Relocate a few prof utilities to the right modules
2020-06-29 14:27:50 -07:00
Yinan Zhang
4736fb4fc9
Move file handling logic in prof_data to prof_sys
2020-06-29 14:27:50 -07:00
Yinan Zhang
767a2e1790
Move file handling logic in prof to prof_sys
2020-06-29 14:27:50 -07:00
Yinan Zhang
03ae509f32
Create prof_sys module for reading system thread name
2020-06-29 14:27:50 -07:00
Yinan Zhang
adfd9d7b1d
Change tsdn to tsd for thread name allocation
2020-06-29 14:27:50 -07:00
Yinan Zhang
841af2b426
Move thread name handling to prof_data module
2020-06-29 14:27:50 -07:00
Yinan Zhang
8118056c03
Expose prof_data testing internals only in prof tests
2020-06-29 14:27:50 -07:00
Yinan Zhang
f43ac8543e
Correct prof header macro namings
2020-06-29 14:27:50 -07:00
Yinan Zhang
5d292b5660
Push error handling logic out of core dumping logic
2020-06-29 14:27:50 -07:00
Yinan Zhang
f541871f5d
Reduce prof dump buffer size in debug build
2020-06-29 14:27:50 -07:00
Yinan Zhang
354183b10d
Define prof dump buffer size centrally
2020-06-29 14:27:50 -07:00
Yinan Zhang
7455813e57
Make dump file writing replaceable in test
2020-06-29 14:27:50 -07:00
Yinan Zhang
21e44c45d9
Make maps file opening replaceable in test
2020-06-29 14:27:50 -07:00
Yinan Zhang
f307b25804
Only replace the dump file opening function in test
2020-06-29 14:27:50 -07:00
Yinan Zhang
d460333efb
Improve naming for prof system thread name option
2020-06-24 14:32:01 -07:00
David T. Goldblatt
25e43c6022
Witness: Make ranks an enum.
...
This lets us avoid having to increment a bunch of values manually every time we
add a new sort of lock.
2020-06-19 18:05:08 -07:00
Yinan Zhang
b7858abfc0
Expose prof testing internal functions
2020-06-19 09:16:51 -07:00
Jon Haslam
4aea743279
High Resolution Timestamps for Profiling
2020-06-15 12:12:49 -07:00
David Goldblatt
d82a164d0d
Add thread.peak.[read|reset] mallctls.
...
These can be used to track net allocator activity on a per-thread basis.
2020-06-11 13:54:22 -07:00
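A usage sketch for these mallctls, assuming thread.peak.read reports a uint64_t (error handling omitted):
```
#include <stdint.h>
#include <jemalloc/jemalloc.h>

/* Read this thread's net-allocation peak, then reset the counter. */
static uint64_t thread_peak_read_and_reset(void) {
	uint64_t peak = 0;
	size_t sz = sizeof(peak);
	mallctl("thread.peak.read", &peak, &sz, NULL, 0);
	mallctl("thread.peak.reset", NULL, NULL, NULL, 0);
	return peak;
}
```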
David Goldblatt
fe7108305a
Add peak_t, for tracking allocator net max.
2020-06-11 13:54:22 -07:00
Yinan Zhang
3e19ebd2ea
Add lock to protect prof last-N dumping
2020-06-09 17:03:05 -07:00
Yinan Zhang
857ebd3daf
Make edata pointer on prof recent record an atomic fence
2020-06-09 17:03:05 -07:00
David Goldblatt
6cdac3c573
Tcache: Make flush fractions configurable.
2020-05-16 13:34:23 -07:00
David Goldblatt
ee72bf1cfd
Tcache: Add tcache gc delay option.
...
This can reduce flushing frequency for small size classes.
2020-05-16 13:34:23 -07:00
David Goldblatt
d338dd45d7
Tcache: Make incremental gc bytes configurable.
2020-05-16 13:34:23 -07:00
David Goldblatt
ec0b579563
Tcache: Privatize opt_lg_tcache_max default.
2020-05-16 13:34:23 -07:00
David Goldblatt
10b96f6351
Tcache: Remove some unused gc constants.
2020-05-16 13:34:23 -07:00
David Goldblatt
181093173d
Tcache: make slot sizing configurable.
2020-05-16 13:34:23 -07:00
David Goldblatt
b58dea8d1b
Cache bin: expose ncached_max publicly.
2020-05-16 13:34:23 -07:00
David Goldblatt
634afc4124
Tcache: Make size computation configurable.
2020-05-16 13:34:23 -07:00
Brooks Davis
27f29e424b
LQ_QUANTUM should be 4 on mips64 hardware.
...
This matches the ABI stack alignment requirements.
2020-05-14 10:30:37 -07:00
David Goldblatt
eda9c2858f
Edata: zero stack edatas before initializing.
...
This avoids some UB. No compilers take advantage of it for now, but no sense in
tempting fate.
2020-05-14 10:30:20 -07:00
Yinan Zhang
dcea2c0f8b
Get rid of TSD -> thread event dependency
2020-05-12 09:16:16 -07:00
Yinan Zhang
b06dfb9ccc
Push event handlers to constituent modules
2020-05-12 09:16:16 -07:00
Yinan Zhang
abd4674931
Extract out per event postponed wait time fetching
2020-05-12 09:16:16 -07:00
Yinan Zhang
f72014d097
Only compute thread event threshold once per trigger
2020-05-12 09:16:16 -07:00
Yinan Zhang
6de77799de
Move thread event wait time update to local
2020-05-12 09:16:16 -07:00
Yinan Zhang
733ae918f0
Extract out per event new wait time fetching
2020-05-12 09:16:16 -07:00
Yinan Zhang
1e2524e15a
Do not reset sample wait time when re-initing tdata
2020-05-12 09:16:16 -07:00
Yinan Zhang
855d20f6f3
Remove outdated comments in thread event
2020-05-12 09:16:16 -07:00
Yinan Zhang
fc052ff728
Migrate counter to use locked int
2020-05-12 08:23:15 -07:00
Yinan Zhang
b543c20a94
Minor update to locked int
2020-05-12 08:23:15 -07:00
Yinan Zhang
f533ab6da6
Add forking handling for stats
2020-05-11 15:35:06 -07:00
Yinan Zhang
4d970f8bfc
Add forking handling for counter module
2020-05-11 15:35:06 -07:00
Yinan Zhang
2097e1945b
Unify write callback signature
2020-05-11 14:51:24 -07:00
Yinan Zhang
fef9abdcc0
Cleanup tcache allocation logic
...
The logic in tcache allocation no longer involves profiling or
filling.
2020-05-11 12:24:56 -07:00
Yinan Zhang
e6cb6919c0
Consolidate prof inline function headers
...
The prof inline functions are no longer involved in a circular
dependency, so consolidate the two headers into one.
2020-05-11 12:24:56 -07:00
Yinan Zhang
d454af90f1
Remove unused prof_accum field from arena
2020-05-11 12:24:56 -07:00
Yinan Zhang
8be5584494
Initialize prof idump counter once rather than once per arena
2020-05-11 12:24:56 -07:00
Yinan Zhang
e10e5059e8
Make prof_idump_accum() non-inline
2020-05-11 12:24:56 -07:00
Yinan Zhang
039bfd4e30
Do not rollback prof idump counter in arena_prof_promote()
2020-05-11 12:24:56 -07:00
David Goldblatt
46471ea327
SC: Name the max lookup constant.
2020-05-04 12:27:07 -07:00
David Goldblatt
79dd0c04ed
SC: Simplify SC_NPSIZES computation.
...
Rather than taking all the sizes and subtracting out those that don't fit, we
instead just add up all the ones that do.
2020-05-04 12:27:07 -07:00
David Goldblatt
4f8efba824
TSD: Make rtree_ctx a slow-path field.
...
Performance-sensitive users will use sized deallocation facilities, so that
actually touching the rtree_ctx is unnecessary. We make it the last element of
the slow data, so that it is for practical purposes almost-fast.
2020-04-14 15:20:19 -07:00
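The "sized deallocation facilities" mentioned above are the sdallocx-style calls; a minimal usage sketch:
```
#include <jemalloc/jemalloc.h>

void sized_dealloc_example(void) {
	void *p = mallocx(128, 0);
	/* Supplying the size at free time is what lets the fast path avoid
	 * touching the rtree_ctx, per the commit above. */
	sdallocx(p, 128, 0);
}
```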
David Goldblatt
cd29ebefd0
Tcache: treat small and large cache bins uniformly
2020-04-14 15:20:19 -07:00
David Goldblatt
a13fbad374
Tcache: split up fast and slow path data.
2020-04-14 15:20:19 -07:00
David Goldblatt
7099c66205
Arena: fill in terms of cache_bins.
2020-04-14 15:20:19 -07:00
David Goldblatt
40e7aed59e
TSD: Move in some of the tcache fields.
...
We had put these in the tcache for cache optimization reasons. After the
previous diff, these no longer apply.
2020-04-14 15:20:19 -07:00
David Goldblatt
58a00df238
TSD: Put all fast-path data together.
2020-04-14 15:20:19 -07:00
David Goldblatt
877af247a8
QL, QR: Add documentation.
2020-04-11 10:32:11 -07:00
David Goldblatt
79ae7f9211
Rtree: Remove the per-field accessors.
...
We instead split things into "edata" and "metadata".
2020-04-10 13:12:47 -07:00
David Goldblatt
bb6a418523
Emap: Drop szind/slab splitting parameters.
...
After the previous diff, these are constants.
2020-04-10 13:12:47 -07:00
David Goldblatt
50289750b3
Extent: Remove szind/slab knowledge.
2020-04-10 13:12:47 -07:00
David Goldblatt
dc26b30094
Rtree: Clean up compact/non-compact split.
2020-04-10 13:12:47 -07:00
David Goldblatt
93b99dd140
Extent: Stop passing an edata_cache everywhere.
...
We already pass the pa_shard_t around everywhere; we can just use that.
2020-04-10 13:12:47 -07:00
David Goldblatt
11c47cb133
Extent: Take "bool zero" over "bool *zero".
2020-04-10 13:12:47 -07:00
David Goldblatt
1a1124462e
PA: Take zero as a bool rather than as a bool *.
...
Now that we've moved junking to a higher level of the allocation stack, we don't
care about this performance optimization (which only occurred in debug modes).
2020-04-10 13:12:47 -07:00
David Goldblatt
294b276fc7
PA: Parameterize emap. Move emap_global to arena.
...
This lets us test the PA module without interfering with the global emap used by
the real allocator (the one not under test).
2020-04-10 13:12:47 -07:00
David Goldblatt
f730577277
Eset: Parameterize last globals accesses.
...
I.e. opt_retain and maps_coalesce.
2020-04-10 13:12:47 -07:00
David Goldblatt
7bb6e2dc0d
Eset: take opt_lg_max_active_fit as a parameter.
...
This breaks its dependence on the global.
2020-04-10 13:12:47 -07:00
David Goldblatt
883ab327cc
Emap: Move out last edata state touching.
2020-04-10 13:12:47 -07:00
David Goldblatt
12eb888e54
Edata: Add a ranged bit.
...
We steal the dumpable bit, which we ended up not needing.
2020-04-10 13:12:47 -07:00
David Goldblatt
bd4fdf295e
Rtree: Pull leaf contents into their own struct.
2020-04-10 13:12:47 -07:00
David Goldblatt
faec7219b2
PA: Move in decay initialization.
2020-04-10 13:12:47 -07:00
David Goldblatt
45671e4a27
PA: Move in retain growth limit setting.
2020-04-10 13:12:47 -07:00
David Goldblatt
daefde88fe
PA: Move in mutex stats reading.
2020-04-10 13:12:47 -07:00
David Goldblatt
07675840a5
PA: Move in some more internals accesses.
2020-04-10 13:12:47 -07:00
David Goldblatt
238f3c7430
PA: Move in full stats merging.
2020-04-10 13:12:47 -07:00
David Goldblatt
81c6027592
Arena stats: Give it its own "mapped".
...
This distinguishes it from the PA mapped stat, which is now named "pa_mapped" to
avoid confusion. The (derived) arena stat includes base memory, and the PA stat
is no longer partially derived.
2020-04-10 13:12:47 -07:00
David Goldblatt
506d907e40
PA: Move in basic stats merging.
2020-04-10 13:12:47 -07:00
David Goldblatt
f29f6090f5
PA: Add pa_extra.c and put PA forking there.
2020-04-10 13:12:47 -07:00
David Goldblatt
565045ef71
Arena: Make more derived stats non-atomic/locked.
2020-04-10 13:12:47 -07:00
David Goldblatt
d0c43217b5
Arena stats: Move retained to PA, use plain ints.
...
Retained is a property of the allocated pages. The derived fields no longer
require any locking; they're computed on demand.
2020-04-10 13:12:47 -07:00
David Goldblatt
e2cf3fb1a3
PA: Move in all modifications of mapped.
2020-04-10 13:12:47 -07:00
David Goldblatt
436789ad96
PA: Make mapped stat atomic.
...
We always have atomic_zu_t, and mapped/unmapped transitions are always expensive
enough that trying to piggyback on a lock is a waste of time.
2020-04-10 13:12:47 -07:00
David Goldblatt
3c28aa6f17
PA: Move edata_avail stat in, make it non-atomic.
2020-04-10 13:12:47 -07:00
David Goldblatt
f6bfa3dcca
Move extent stats to the PA module.
...
While we're at it, make them non-atomic -- they are purely derived statistics
(and in fact aren't even in the arena_t or pa_shard_t).
2020-04-10 13:12:47 -07:00
David Goldblatt
527dd4cdb8
PA: Move in nactive counter.
2020-04-10 13:12:47 -07:00
David Goldblatt
c075fd0bcb
PA: Minor cleanups and comment fixes.
2020-04-10 13:12:47 -07:00
David Goldblatt
46a9d7fc0b
PA: Move in rest of purging.
2020-04-10 13:12:47 -07:00
David Goldblatt
2d6eec7b5c
PA: Move in decay-all pathway.
2020-04-10 13:12:47 -07:00
David Goldblatt
65698b7f2e
PA: Remove public visibility of some internals.
2020-04-10 13:12:47 -07:00
David Goldblatt
f012c43be0
PA: Move in decay_to_limit
2020-04-10 13:12:47 -07:00
David Goldblatt
3034f4a508
PA: Move in decay_stashed.
2020-04-10 13:12:47 -07:00
David Goldblatt
aef28b2f8f
PA: Move in stash_decayed.
2020-04-10 13:12:47 -07:00
David Goldblatt
71fc0dc968
PA: Move in remaining page allocation functions.
2020-04-10 13:12:47 -07:00
David Goldblatt
74958567a4
PA: have expand take sizes instead of new usize.
...
This avoids involving usize, which makes some of the stats modifications more
intuitively correct.
2020-04-10 13:12:47 -07:00
David Goldblatt
5bcc2c2ab9
PA: Have expand take szind and slab.
...
This isn't really necessary, but having a uniform API will help us later.
2020-04-10 13:12:47 -07:00
David Goldblatt
0880c2ab97
PA: Have large expands use it.
2020-04-10 13:12:47 -07:00
David Goldblatt
9f93625c14
PA: Move in arena large allocation functionality.
2020-04-10 13:12:47 -07:00
David Goldblatt
7624043a41
PA: Add ehook-getting support.
2020-04-10 13:12:47 -07:00
David Goldblatt
eba35e2e48
Remove extent knowledge of arena.
2020-04-10 13:12:47 -07:00
David Goldblatt
e77f47a85a
Move arena decay getters to PA.
2020-04-10 13:12:47 -07:00
David Goldblatt
f77cec311e
Decay: Take current time as an argument.
...
This better facilitates testing.
2020-04-10 13:12:47 -07:00
David Goldblatt
d1d7e1076b
Decay: move in some background_thread accesses.
2020-04-10 13:12:47 -07:00
David Goldblatt
cdb916ed3f
Decay: Add comments for the public API.
2020-04-10 13:12:47 -07:00
David Goldblatt
8f2193dc8d
Decay: Move in arena decay functions.
2020-04-10 13:12:47 -07:00
David Goldblatt
7b62885476
Introduce decay module and put decay objects in PA
2020-04-10 13:12:47 -07:00
David Goldblatt
497836dbc8
Arena stats: mark edata_avail as derived.
...
The true number is in the edata_cache itself.
2020-04-10 13:12:47 -07:00
David Goldblatt
3192d6b77d
Extents: Have extent_dalloc_gap take ehooks.
...
We're almost to the point where the extent code doesn't know about arenas at
all. In that world, we shouldn't pull them out of the arena.
2020-04-10 13:12:47 -07:00
David Goldblatt
22a0a7b93a
Move arena_decay_extent to extent module.
2020-04-10 13:12:47 -07:00
David Goldblatt
70d12ffa05
PA: Move mapped into pa stats.
2020-04-10 13:12:47 -07:00
David Goldblatt
6ca918d0cf
PA: Add a stats comment.
2020-04-10 13:12:47 -07:00
David Goldblatt
ce8c0d6c09
PA: Move in arena extent_sn counter.
...
Just another step towards making PA self-contained.
2020-04-10 13:12:47 -07:00
David Goldblatt
1ad368c8b7
PA: Move in decay stats.
2020-04-10 13:12:47 -07:00
David Goldblatt
356aaa7dc6
Introduce lockedint module.
...
This pulls out the various abstractions where some stats counter is sometimes an
atomic, sometimes a plain variable, sometimes always protected by a lock,
sometimes protected for reads but not writes, etc. With this change, these cases
are treated consistently, and access patterns tagged.
In the process, we fix a few missed-update bugs (where one caller assumes
"protected-by-a-lock" semantics and another does not).
2020-04-10 13:12:47 -07:00
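A rough sketch of the kind of wrapper such a module provides: a counter type whose accessors make the protection scheme visible at the call site (names and layout here are hypothetical, not the module's real API):
```
#include <stdint.h>

/* Hypothetical locked counter: writes happen under a mutex the caller holds;
 * reads that skip the lock are named so the access pattern is explicit. */
typedef struct {
	uint64_t val;
} locked_u64_sketch_t;

static inline void
locked_u64_sketch_add(locked_u64_sketch_t *c, uint64_t x) {
	c->val += x;	/* caller holds the protecting mutex */
}

static inline uint64_t
locked_u64_sketch_read_unlocked(const locked_u64_sketch_t *c) {
	return c->val;	/* best-effort read; may race with writers */
}
```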
David Goldblatt
acd0bf6a26
PA: move in ecache_grow.
2020-04-10 13:12:47 -07:00
David Goldblatt
32cb7c2f0b
PA: Add a stats type.
2020-04-10 13:12:47 -07:00
David Goldblatt
688fb3eb89
PA: Move in the arena edata_cache.
2020-04-10 13:12:47 -07:00
David Goldblatt
8433ad84ea
PA: move in shard initialization.
2020-04-10 13:12:47 -07:00
David Goldblatt
a24faed569
PA: Move in the ecache_t objects.
2020-04-10 13:12:47 -07:00
David Goldblatt
585f925055
Move cache index randomization out of extent.
...
This is logically at a higher level of the stack; extent should just allocate
things at the page level; it shouldn't care exactly why the caller wants a
given number of pages.
2020-04-10 13:12:47 -07:00
David Goldblatt
12be9f5727
Add a stub PA module -- a page allocator.
2020-04-10 13:12:47 -07:00
Yinan Zhang
c4e9ea8cc6
Get rid of locks in prof recent test
2020-04-07 17:22:24 -07:00
Yinan Zhang
2deabac079
Get rid of custom iterator for last-N records
2020-04-07 17:22:24 -07:00
Yinan Zhang
a5ddfa7d91
Use ql for prof last-N list
2020-04-07 17:22:24 -07:00
Yinan Zhang
ce17af4221
Better structure ql module
2020-04-06 09:50:27 -07:00
Yinan Zhang
4b66297ea0
Add move constructor to ql module
2020-04-06 09:50:27 -07:00
Yinan Zhang
a62b7ed928
Add emptiness checking to ql module
2020-04-06 09:50:27 -07:00
Yinan Zhang
1dd24ca6d2
Add rotate functionality to ql module
2020-04-06 09:50:27 -07:00
Yinan Zhang
0dc95a882f
Add concat and split functionality to ql module
2020-04-06 09:50:27 -07:00
Yinan Zhang
1ad06aa53b
deduplicate insert and delete logic in qr module
2020-04-06 09:50:27 -07:00
Yinan Zhang
c9d56cddf2
Optimize meld in qr module
...
The goal of `qr_meld()` is to change the following four fields
`(a->prev, a->prev->next, b->prev, b->prev->next)` from the values
`(a->prev, a, b->prev, b)` to `(b->prev, b, a->prev, a)`.
This commit changes
```
a->prev->next = b;
b->prev->next = a;
temp = a->prev;
a->prev = b->prev;
b->prev = temp;
```
to
```
temp = a->prev;
a->prev = b->prev;
b->prev = temp;
a->prev->next = a;
b->prev->next = b;
```
The benefit is that we can use `b->prev->next` for `temp`, and so
there's no need to pass in `a_type`.
The restriction is that `b` cannot be a `qr_next()` macro, so users
of `qr_meld()` must pay attention. (Before this change, neither `a`
nor `b` could be a `qr_next()` macro.)
2020-04-06 09:50:27 -07:00
Yinan Zhang
f9aad7a49b
Add piping API to buffered writer
2020-04-01 09:41:20 -07:00
Yinan Zhang
09cd79495f
Encapsulate buffer allocation failure in buffered writer
2020-04-01 09:41:20 -07:00
David Goldblatt
3b4a03b92b
Mac: don't declare system functions as nothrow.
...
This contradicts the system headers, which can lead to breakages.
2020-03-26 14:11:24 -07:00
Yinan Zhang
2256ef8961
Add option to fetch system thread name on each prof sample
2020-03-24 21:39:57 -07:00
Yinan Zhang
a5780598b3
Remove thread_event_rollback()
2020-03-12 13:55:00 -07:00
Yinan Zhang
ba783b3a0f
Remove prof -> thread_event dependency
2020-03-12 13:55:00 -07:00
Yinan Zhang
441d88d1c7
Rewrite profiling thread event
2020-03-12 13:55:00 -07:00
David Goldblatt
99b1291d17
Edata cache: add edata_cache_small_t.
...
This can be used to amortize the synchronization costs of edata_cache accesses.
2020-03-12 11:58:09 -07:00
David Goldblatt
92485032b2
Cache bin: improve comments.
2020-03-12 11:54:19 -07:00
David Goldblatt
d701a085c2
Fast path: allow low-water mark changes.
...
This lets us put more allocations on an "almost as fast" path after a flush.
This results in around a 4% reduction in malloc cycles in prod workloads
(corresponding to about a 0.1% reduction in overall cycles).
2020-03-12 11:54:19 -07:00
David Goldblatt
397da03865
Cache bin: rewrite to track more state.
...
With this, we track all of the empty, full, and low water states together. This
simplifies a lot of the tracking logic, since we now don't need the
cache_bin_info_t for state queries (except for some debugging).
2020-03-12 11:54:19 -07:00
David Goldblatt
0a2fcfac01
Tcache: Hold cache bin allocation explicitly.
2020-03-12 11:54:19 -07:00
David Goldblatt
d498a4bb08
Cache bin: Add an emptiness assertion.
2020-03-12 11:54:19 -07:00
David Goldblatt
6a7aa46ef7
Cache bin: Add a debug method for init checking.
2020-03-12 11:54:19 -07:00
David Goldblatt
370c1ea007
Cache bin: Write the unit test in terms of the API
...
I.e. stop allowing the unit test to have secret access to implementation
internals.
2020-03-12 11:54:19 -07:00
David Goldblatt
7f5ebd211c
Cache bin: set low-water internally.
2020-03-12 11:54:19 -07:00
David Goldblatt
60113dfe3b
Cache bin: Move in initialization code.
2020-03-12 11:54:19 -07:00
David Goldblatt
44529da852
Cache-bin: Make flush modifications internal
...
I.e. the tcache code just calls a cache-bin function to finish flush (and move
pointers around, etc.). It doesn't directly access the cache-bin's owned memory
any more.
2020-03-12 11:54:19 -07:00
David Goldblatt
ff6acc6ed5
Cache bin: simplify names and argument ordering.
...
We always start with the cache bin, then its info (if necessary).
2020-03-12 11:54:19 -07:00
David Goldblatt
e1dcc557d6
Cache bin: Only take the relevant cache_bin_info_t
...
Previously, we took an array of cache_bin_info_ts and an index, and did the
indexing ourselves. But infos for other cache_bins aren't relevant to any particular
cache bin, so that should be the caller's job.
2020-03-12 11:54:19 -07:00
David Goldblatt
1b00d808d7
cache_bin: Don't let arena see empty position.
2020-03-12 11:54:19 -07:00
David Goldblatt
d303f30796
cache_bin nflush -> n.
...
We're going to use it on the fill pathway as well.
2020-03-12 11:54:19 -07:00
David Goldblatt
74d36d78ef
Cache bin: Make ncached_max a query on the info_t.
2020-03-12 11:54:19 -07:00
David Goldblatt
b66c0973cc
cache_bin: Don't allow direct internals access.
2020-03-12 11:54:19 -07:00
David Goldblatt
da68f73296
Move percpu_arena_update.
...
It's not really part of the API of the arena; it changes which arena we're using
that API on.
2020-03-12 11:54:19 -07:00
David Goldblatt
909c501b07
Cache_bin: Shouldn't know about tcache.
...
Instead, have it take the cache_bin_info_ts to use by pointer. While we're
here, add a src file for the cache bin.
2020-03-12 11:54:19 -07:00
David Goldblatt
79f1ee2fc0
Move junking out of arena/tcache code.
...
This is debug only and we keep it off the fast path. Moving it here simplifies
the internal logic.
This never tries to junk on regions that were shrunk via xallocx. I think this
is fine for two reasons:
- The shrunk-with-xallocx case is rare.
- We don't always do that anyway before this diff (it depends on the opt
settings and extent hooks in effect).
2020-03-12 11:54:19 -07:00
David T. Goldblatt
6c3491ad31
Tcache: Unify bin flush logic.
...
The small and large pathways share most of their logic, even if some of the
individual operations are different. We pull out the common logic into a
force-inlined function, and then specialize twice, once for each value of
"small".
2020-02-25 10:21:03 -08:00
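A sketch of the specialization pattern the message describes: one always-inlined worker takes a bool, and two thin wrappers pass compile-time constants so the compiler drops the untaken branches (names are illustrative):
```
#include <stdbool.h>

static inline __attribute__((always_inline)) void
bin_flush_impl(void *bin, unsigned rem, bool small) {
	if (small) {
		/* small-bin-specific bookkeeping */
	} else {
		/* large-bin-specific bookkeeping */
	}
	/* ... shared flush logic ... */
	(void)bin; (void)rem;
}

void bin_flush_small(void *bin, unsigned rem) { bin_flush_impl(bin, rem, true); }
void bin_flush_large(void *bin, unsigned rem) { bin_flush_impl(bin, rem, false); }
```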
David T. Goldblatt
9f4fc27389
Ehooks: Fix a build warning.
...
We wrote `return some_void_func()` in a function returning void, which is
confusing and triggers warnings on MSVC.
2020-02-25 10:21:03 -08:00
David T. Goldblatt
162c2bcf31
Background thread: take base as a parameter.
2020-02-18 11:22:09 -08:00