Yinan Zhang
40fa6674a9
Fix prof timestamp conf reading
2020-06-17 16:02:51 -07:00
David Goldblatt
40672b0b78
Remove duplicate logging in malloc.
2020-06-16 10:33:55 -07:00
Jon Haslam
4aea743279
High Resolution Timestamps for Profiling
2020-06-15 12:12:49 -07:00
David Goldblatt
d82a164d0d
Add thread.peak.[read|reset] mallctls.
...
These can be used to track net allocator activity on a per-thread basis.
2020-06-11 13:54:22 -07:00
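A minimal usage sketch for the new mallctls (the uint64_t value type for thread.peak.read is an assumption, following jemalloc's usual stats conventions):
```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

/* Read this thread's net-allocation peak, then reset the counter. */
static void
report_thread_peak(void) {
	uint64_t peak;
	size_t sz = sizeof(peak);
	if (mallctl("thread.peak.read", &peak, &sz, NULL, 0) == 0) {
		printf("thread peak: %" PRIu64 " bytes\n", peak);
	}
	mallctl("thread.peak.reset", NULL, NULL, NULL, 0);
}
```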
Yinan Zhang
3e19ebd2ea
Add lock to protect prof last-N dumping
2020-06-09 17:03:05 -07:00
Yinan Zhang
a835d9cf85
Make prof last-N dumping non-blocking
2020-06-09 17:03:05 -07:00
Yinan Zhang
fc8bc4b5c0
Increase dump buffer for prof last-N list
2020-06-09 17:03:05 -07:00
Yinan Zhang
264d89d641
Extract restore and async cleanup functions for prof last-N list
2020-06-09 17:03:05 -07:00
Yinan Zhang
857ebd3daf
Make edata pointer on prof recent record an atomic fence
2020-06-09 17:03:05 -07:00
Yinan Zhang
730658f72f
Extract alloc/dalloc utility for last-N nodes
2020-06-09 17:03:05 -07:00
Yinan Zhang
035be44867
Separate out dumping for each prof recent record
2020-06-09 17:03:05 -07:00
David Goldblatt
8da0896b79
Tcache: Make an integer conversion explicit.
2020-05-28 15:52:40 -07:00
David Goldblatt
6cdac3c573
Tcache: Make flush fractions configurable.
2020-05-16 13:34:23 -07:00
David Goldblatt
7503b5b33a
Stats, CTL: Expose new tcache settings.
2020-05-16 13:34:23 -07:00
David Goldblatt
ee72bf1cfd
Tcache: Add tcache gc delay option.
...
This can reduce flushing frequency for small size classes.
2020-05-16 13:34:23 -07:00
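A hedged configuration sketch; the option name `tcache_gc_delay_bytes` is an assumption based on the commit title:
```c
/* Assumed option name: delay tcache GC for a bin until roughly
 * 4 MiB of activity has accumulated, reducing flush frequency. */
const char *malloc_conf = "tcache_gc_delay_bytes:4194304";
```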
David Goldblatt
d338dd45d7
Tcache: Make incremental gc bytes configurable.
2020-05-16 13:34:23 -07:00
David Goldblatt
ec0b579563
Tcache: Privatize opt_lg_tcache_max default.
2020-05-16 13:34:23 -07:00
David Goldblatt
181093173d
Tcache: make slot sizing configurable.
2020-05-16 13:34:23 -07:00
David Goldblatt
b58dea8d1b
Cache bin: expose ncached_max publicly.
2020-05-16 13:34:23 -07:00
David Goldblatt
634afc4124
Tcache: Make size computation configurable.
2020-05-16 13:34:23 -07:00
David Goldblatt
eda9c2858f
Edata: zero stack edatas before initializing.
...
This avoids some UB. No compilers take advantage of it for now, but no sense in
tempting fate.
2020-05-14 10:30:20 -07:00
David Goldblatt
5dead37a9d
Allow narenas:default.
...
This can be useful when you know you want to override some lower-priority
configuration setting with its default value, but don't know what that value
would be.
2020-05-14 10:30:08 -07:00
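For example, via the documented `malloc_conf` symbol:
```c
/* Restore the built-in arena count, overriding any lower-priority
 * narenas setting, without needing to know the default value. */
const char *malloc_conf = "narenas:default";
```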
Yinan Zhang
75dae934a1
Always initialize TE counters in TSD init
2020-05-12 09:16:16 -07:00
Yinan Zhang
b06dfb9ccc
Push event handlers to constituent modules
2020-05-12 09:16:16 -07:00
Yinan Zhang
381c97caa4
Treat postponed prof sample event as new event
2020-05-12 09:16:16 -07:00
Yinan Zhang
abd4674931
Extract out per event postponed wait time fetching
2020-05-12 09:16:16 -07:00
Yinan Zhang
f72014d097
Only compute thread event threshold once per trigger
2020-05-12 09:16:16 -07:00
Yinan Zhang
7324c4f85f
Break down event init and handler functions
2020-05-12 09:16:16 -07:00
Yinan Zhang
6de77799de
Move thread event wait time update to local
2020-05-12 09:16:16 -07:00
Yinan Zhang
733ae918f0
Extract out per event new wait time fetching
2020-05-12 09:16:16 -07:00
Yinan Zhang
1e2524e15a
Do not reset sample wait time when re-initing tdata
2020-05-12 09:16:16 -07:00
Yinan Zhang
fc052ff728
Migrate counter to use locked int
2020-05-12 08:23:15 -07:00
Yinan Zhang
f533ab6da6
Add forking handling for stats
2020-05-11 15:35:06 -07:00
Yinan Zhang
508303077b
Add forking handling for prof idump counter
2020-05-11 15:35:06 -07:00
Yinan Zhang
4d970f8bfc
Add forking handling for counter module
2020-05-11 15:35:06 -07:00
Yinan Zhang
2097e1945b
Unify write callback signature
2020-05-11 14:51:24 -07:00
Yinan Zhang
8be5584494
Initialize prof idump counter once rather than once per arena
2020-05-11 12:24:56 -07:00
Yinan Zhang
e10e5059e8
Make prof_idump_accum() non-inline
2020-05-11 12:24:56 -07:00
Yinan Zhang
039bfd4e30
Do not rollback prof idump counter in arena_prof_promote()
2020-05-11 12:24:56 -07:00
Yinan Zhang
0295aa38a2
Deduplicate entries in witness error message
2020-05-11 12:04:02 -07:00
David Goldblatt
f1f8a75496
Let opt.zero propagate to core allocation.
...
I.e. set dopts->zero early on if opt.zero is true, rather than leaving it set by
the entry-point function (malloc, calloc, etc.) and then memsetting. This
avoids situations where we zero once in the large-alloc pathway and then again
via memset.
2020-05-04 12:36:45 -07:00
David Goldblatt
46471ea327
SC: Name the max lookup constant.
2020-05-04 12:27:07 -07:00
David Goldblatt
cd29ebefd0
Tcache: treat small and large cache bins uniformly
2020-04-14 15:20:19 -07:00
David Goldblatt
a13fbad374
Tcache: split up fast and slow path data.
2020-04-14 15:20:19 -07:00
David Goldblatt
7099c66205
Arena: fill in terms of cache_bins.
2020-04-14 15:20:19 -07:00
David Goldblatt
40e7aed59e
TSD: Move in some of the tcache fields.
...
We had put these in the tcache for cache optimization reasons. After the
previous diff, these no longer apply.
2020-04-14 15:20:19 -07:00
David Goldblatt
3589571bfd
SC: use SC_LG_NGROUP instead of its value.
...
This magic constant introduces inconsistencies. We should be able to change its
value solely by adjusting the definition in the header.
2020-04-13 10:01:30 -07:00
David Goldblatt
79ae7f9211
Rtree: Remove the per-field accessors.
...
We instead split things into "edata" and "metadata".
2020-04-10 13:12:47 -07:00
David Goldblatt
bb6a418523
Emap: Drop szind/slab splitting parameters.
...
After the previous diff, these are constants.
2020-04-10 13:12:47 -07:00
David Goldblatt
50289750b3
Extent: Remove szind/slab knowledge.
2020-04-10 13:12:47 -07:00
David Goldblatt
dc26b30094
Rtree: Clean up compact/non-compact split.
2020-04-10 13:12:47 -07:00
David Goldblatt
93b99dd140
Extent: Stop passing an edata_cache everywhere.
...
We already pass the pa_shard_t around everywhere; we can just use that.
2020-04-10 13:12:47 -07:00
David Goldblatt
a4759a1911
Ehooks: avoid touching arena_emap_global in tests.
...
That breaks our ability to test custom emaps in isolation.
2020-04-10 13:12:47 -07:00
David Goldblatt
11c47cb133
Extent: Take "bool zero" over "bool *zero".
2020-04-10 13:12:47 -07:00
David Goldblatt
1a1124462e
PA: Take zero as a bool rather than as a bool *.
...
Now that we've moved junking to a higher level of the allocation stack, we don't
care about this performance optimization (which only occurred in debug modes).
2020-04-10 13:12:47 -07:00
David Goldblatt
294b276fc7
PA: Parameterize emap. Move emap_global to arena.
...
This lets us test the PA module without interfering with the global emap used by
the real allocator (the one not under test).
2020-04-10 13:12:47 -07:00
David Goldblatt
f730577277
Eset: Parameterize last globals accesses.
...
I.e. opt_retain and maps_coalesce.
2020-04-10 13:12:47 -07:00
David Goldblatt
7bb6e2dc0d
Eset: take opt_lg_max_active_fit as a parameter.
...
This breaks its dependence on the global.
2020-04-10 13:12:47 -07:00
David Goldblatt
883ab327cc
Emap: Move out last edata state touching.
2020-04-10 13:12:47 -07:00
David Goldblatt
0c96a2f03b
Emap: Move out remaining edata modifications.
2020-04-10 13:12:47 -07:00
David Goldblatt
dfef0df71a
Emap: Move edata modification out of emap_remap.
2020-04-10 13:12:47 -07:00
David Goldblatt
12eb888e54
Edata: Add a ranged bit.
...
We steal the dumpable bit, which we ended up not needing.
2020-04-10 13:12:47 -07:00
David Goldblatt
bd4fdf295e
Rtree: Pull leaf contents into their own struct.
2020-04-10 13:12:47 -07:00
David Goldblatt
faec7219b2
PA: Move in decay initialization.
2020-04-10 13:12:47 -07:00
David Goldblatt
45671e4a27
PA: Move in retain growth limit setting.
2020-04-10 13:12:47 -07:00
David Goldblatt
daefde88fe
PA: Move in mutex stats reading.
2020-04-10 13:12:47 -07:00
David Goldblatt
07675840a5
PA: Move in some more internals accesses.
2020-04-10 13:12:47 -07:00
David Goldblatt
238f3c7430
PA: Move in full stats merging.
2020-04-10 13:12:47 -07:00
David Goldblatt
81c6027592
Arena stats: Give it its own "mapped".
...
This distinguishes it from the PA mapped stat, which is now named "pa_mapped" to
avoid confusion. The (derived) arena stat includes base memory, and the PA stat
is no longer partially derived.
2020-04-10 13:12:47 -07:00
David Goldblatt
506d907e40
PA: Move in basic stats merging.
2020-04-10 13:12:47 -07:00
David Goldblatt
f29f6090f5
PA: Add pa_extra.c and put PA forking there.
2020-04-10 13:12:47 -07:00
David Goldblatt
8164fad404
Stats: Fix edata_cache size merging.
...
Previously, we assigned to the output rather than incrementing it.
2020-04-10 13:12:47 -07:00
David Goldblatt
565045ef71
Arena: Make more derived stats non-atomic/locked.
2020-04-10 13:12:47 -07:00
David Goldblatt
d0c43217b5
Arena stats: Move retained to PA, use plain ints.
...
Retained is a property of the allocated pages. The derived fields no longer
require any locking; they're computed on demand.
2020-04-10 13:12:47 -07:00
David Goldblatt
e2cf3fb1a3
PA: Move in all modifications of mapped.
2020-04-10 13:12:47 -07:00
David Goldblatt
436789ad96
PA: Make mapped stat atomic.
...
We always have atomic_zu_t, and mapped/unmapped transitions are always expensive
enough that trying to piggyback on a lock is a waste of time.
2020-04-10 13:12:47 -07:00
David Goldblatt
3c28aa6f17
PA: Move edata_avail stat in, make it non-atomic.
2020-04-10 13:12:47 -07:00
David Goldblatt
f6bfa3dcca
Move extent stats to the PA module.
...
While we're at it, make them non-atomic -- they are purely derived statistics
(and in fact aren't even in the arena_t or pa_shard_t).
2020-04-10 13:12:47 -07:00
David Goldblatt
527dd4cdb8
PA: Move in nactive counter.
2020-04-10 13:12:47 -07:00
David Goldblatt
c075fd0bcb
PA: Minor cleanups and comment fixes.
2020-04-10 13:12:47 -07:00
David Goldblatt
46a9d7fc0b
PA: Move in rest of purging.
2020-04-10 13:12:47 -07:00
David Goldblatt
2d6eec7b5c
PA: Move in decay-all pathway.
2020-04-10 13:12:47 -07:00
David Goldblatt
65698b7f2e
PA: Remove public visibility of some internals.
2020-04-10 13:12:47 -07:00
David Goldblatt
f012c43be0
PA: Move in decay_to_limit
2020-04-10 13:12:47 -07:00
David Goldblatt
103f5feda5
Move bg thread activity check out of purging core.
2020-04-10 13:12:47 -07:00
David Goldblatt
3034f4a508
PA: Move in decay_stashed.
2020-04-10 13:12:47 -07:00
David Goldblatt
aef28b2f8f
PA: Move in stash_decayed.
2020-04-10 13:12:47 -07:00
David Goldblatt
655a096343
Move bg inactivity check out of purge inner loop.
...
I.e. do it once per call to arena_decay_stashed instead of once per muzzy purge.
2020-04-10 13:12:47 -07:00
David Goldblatt
71fc0dc968
PA: Move in remaining page allocation functions.
2020-04-10 13:12:47 -07:00
David Goldblatt
74958567a4
PA: have expand take sizes instead of new usize.
...
This avoids involving usize, which makes some of the stats modifications more
intuitively correct.
2020-04-10 13:12:47 -07:00
David Goldblatt
5bcc2c2ab9
PA: Have expand take szind and slab.
...
This isn't really necessary, but having a uniform API will help us later.
2020-04-10 13:12:47 -07:00
David Goldblatt
0880c2ab97
PA: Have large expands use it.
2020-04-10 13:12:47 -07:00
David Goldblatt
7be3dea82c
PA: Have slab allocations use it.
2020-04-10 13:12:47 -07:00
David Goldblatt
9f93625c14
PA: Move in arena large allocation functionality.
2020-04-10 13:12:47 -07:00
David Goldblatt
7624043a41
PA: Add ehook-getting support.
2020-04-10 13:12:47 -07:00
David Goldblatt
eba35e2e48
Remove extent knowledge of arena.
2020-04-10 13:12:47 -07:00
David Goldblatt
e77f47a85a
Move arena decay getters to PA.
2020-04-10 13:12:47 -07:00
David Goldblatt
f77cec311e
Decay: Take current time as an argument.
...
This better facilitates testing.
2020-04-10 13:12:47 -07:00
David Goldblatt
d1d7e1076b
Decay: move in some background_thread accesses.
2020-04-10 13:12:47 -07:00
David Goldblatt
8f2193dc8d
Decay: Move in arena decay functions.
2020-04-10 13:12:47 -07:00
David Goldblatt
4d090d23f1
Decay: Introduce a stub .c file.
2020-04-10 13:12:47 -07:00
David Goldblatt
7b62885476
Introduce decay module and put decay objects in PA
2020-04-10 13:12:47 -07:00
David Goldblatt
3192d6b77d
Extents: Have extent_dalloc_gap take ehooks.
...
We're almost to the point where the extent code doesn't know about arenas at
all. In that world, we shouldn't pull them out of the arena.
2020-04-10 13:12:47 -07:00
David Goldblatt
22a0a7b93a
Move arena_decay_extent to extent module.
2020-04-10 13:12:47 -07:00
David Goldblatt
70d12ffa05
PA: Move mapped into pa stats.
2020-04-10 13:12:47 -07:00
David Goldblatt
ce8c0d6c09
PA: Move in arena extent_sn counter.
...
Just another step towards making PA self-contained.
2020-04-10 13:12:47 -07:00
David Goldblatt
1ada4aef84
PA: Get rid of arena_ind_get calls.
...
This is another step on the path towards breaking the extent reliance on the
arena module.
2020-04-10 13:12:47 -07:00
David Goldblatt
1ad368c8b7
PA: Move in decay stats.
2020-04-10 13:12:47 -07:00
David Goldblatt
356aaa7dc6
Introduce lockedint module.
...
This pulls out the various abstractions where some stats counter is sometimes an
atomic, sometimes a plain variable, sometimes always protected by a lock,
sometimes protected by reads but not writes, etc. With this change, these cases
are treated consistently, and access patterns tagged.
In the process, we fix a few missed-update bugs (where one caller assumes
"protected-by-a-lock" semantics and another does not).
2020-04-10 13:12:47 -07:00
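A rough sketch of the idea (illustrative only; jemalloc's actual lockedint types, names, and witness integration differ):
```c
#include <pthread.h>
#include <stdint.h>

/* The protection is explicit and tagged: mtx == NULL marks an
 * unsynchronized counter, non-NULL marks "protected by a lock". */
typedef struct {
	pthread_mutex_t *mtx;
	uint64_t val;
} locked_u64_t;

static inline void
locked_inc_u64(locked_u64_t *c, uint64_t x) {
	if (c->mtx != NULL) {
		pthread_mutex_lock(c->mtx);
	}
	c->val += x;
	if (c->mtx != NULL) {
		pthread_mutex_unlock(c->mtx);
	}
}
```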
David Goldblatt
acd0bf6a26
PA: move in ecache_grow.
2020-04-10 13:12:47 -07:00
David Goldblatt
32cb7c2f0b
PA: Add a stats type.
2020-04-10 13:12:47 -07:00
David Goldblatt
688fb3eb89
PA: Move in the arena edata_cache.
2020-04-10 13:12:47 -07:00
David Goldblatt
8433ad84ea
PA: move in shard initialization.
2020-04-10 13:12:47 -07:00
David Goldblatt
a24faed569
PA: Move in the ecache_t objects.
2020-04-10 13:12:47 -07:00
David Goldblatt
585f925055
Move cache index randomization out of extent.
...
This is logically at a higher level of the stack; extent should just allocate
things at the page level; it shouldn't care exactly why the caller wants a
given number of pages.
2020-04-10 13:12:47 -07:00
David Goldblatt
12be9f5727
Add a stub PA module -- a page allocator.
2020-04-10 13:12:47 -07:00
Yinan Zhang
c4e9ea8cc6
Get rid of locks in prof recent test
2020-04-07 17:22:24 -07:00
Yinan Zhang
2deabac079
Get rid of custom iterator for last-N records
2020-04-07 17:22:24 -07:00
Yinan Zhang
a5ddfa7d91
Use ql for prof last-N list
2020-04-07 17:22:24 -07:00
Yinan Zhang
f9aad7a49b
Add piping API to buffered writer
2020-04-01 09:41:20 -07:00
Yinan Zhang
09cd79495f
Encapsulate buffer allocation failure in buffered writer
2020-04-01 09:41:20 -07:00
Yinan Zhang
a166c20818
Make prof_tctx_t pointer a true prof atomic fence
2020-03-31 17:43:42 -07:00
David T. Goldblatt
d936b46d3a
Add malloc_conf_2_conf_harder
...
This comes in handy when you're just a user of a canary system who wants to
change settings set by the configuration system itself.
2020-03-31 06:25:08 -07:00
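A hedged sketch, assuming the new symbol is consumed like `malloc_conf` but at a higher priority:
```c
/* Overrides settings applied by the configuration system itself;
 * the particular option shown here is just an example. */
const char *malloc_conf_2_conf_harder = "dirty_decay_ms:10000";
```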
Yinan Zhang
2256ef8961
Add option to fetch system thread name on each prof sample
2020-03-24 21:39:57 -07:00
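A configuration sketch; the option name `prof_sys_thread_name` is an assumption:
```c
/* Assumed option name: fetch the OS-level thread name on each
 * profiling sample rather than relying on thread.prof.name. */
const char *malloc_conf = "prof:true,prof_sys_thread_name:true";
```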
Yinan Zhang
b30a5c2f90
Reorganize cpp APIs and suppress unused function warnings
2020-03-13 12:16:09 -07:00
David Goldblatt
2e5899c129
Stats: Fix tcache_bytes reporting.
...
Previously, large allocations in tcaches would have their sizes reduced during
stats estimation. Added a test, which fails before this change but passes now.
This fixes a bug introduced in 5934846612, which was itself fixing a bug introduced in 9c0549007d.
2020-03-13 07:53:34 -07:00
Yinan Zhang
a5780598b3
Remove thread_event_rollback()
2020-03-12 13:55:00 -07:00
Yinan Zhang
ba783b3a0f
Remove prof -> thread_event dependency
2020-03-12 13:55:00 -07:00
Yinan Zhang
441d88d1c7
Rewrite profiling thread event
2020-03-12 13:55:00 -07:00
David Goldblatt
0dcd576600
Edata cache: atomic fetch-add -> load-store.
...
The modifications to count are protected by a mutex; there's no need to use the
more costly version.
2020-03-12 11:58:09 -07:00
David Goldblatt
99b1291d17
Edata cache: add edata_cache_small_t.
...
This can be used to amortize the synchronization costs of edata_cache accesses.
2020-03-12 11:58:09 -07:00
David Goldblatt
d701a085c2
Fast path: allow low-water mark changes.
...
This lets us put more allocations on an "almost as fast" path after a flush.
This results in around a 4% reduction in malloc cycles in prod workloads
(corresponding to about a 0.1% reduction in overall cycles).
2020-03-12 11:54:19 -07:00
David Goldblatt
397da03865
Cache bin: rewrite to track more state.
...
With this, we track all of the empty, full, and low water states together. This
simplifies a lot of the tracking logic, since we now don't need the
cache_bin_info_t for state queries (except for some debugging).
2020-03-12 11:54:19 -07:00
David Goldblatt
fef0b1ffe4
Cache bin: Remove last internals accesses.
2020-03-12 11:54:19 -07:00
David Goldblatt
0a2fcfac01
Tcache: Hold cache bin allocation explicitly.
2020-03-12 11:54:19 -07:00
David Goldblatt
d498a4bb08
Cache bin: Add an emptiness assertion.
2020-03-12 11:54:19 -07:00
David Goldblatt
6a7aa46ef7
Cache bin: Add a debug method for init checking.
2020-03-12 11:54:19 -07:00
David Goldblatt
7f5ebd211c
Cache bin: set low-water internally.
2020-03-12 11:54:19 -07:00
David Goldblatt
60113dfe3b
Cache bin: Move in initialization code.
2020-03-12 11:54:19 -07:00
David Goldblatt
44529da852
Cache-bin: Make flush modifications internal
...
I.e. the tcache code just calls a cache-bin function to finish flush (and move
pointers around, etc.). It doesn't directly access the cache-bin's owned memory
any more.
2020-03-12 11:54:19 -07:00
David Goldblatt
ff6acc6ed5
Cache bin: simplify names and argument ordering.
...
We always start with the cache bin, then its info (if necessary).
2020-03-12 11:54:19 -07:00
David Goldblatt
e1dcc557d6
Cache bin: Only take the relevant cache_bin_info_t
...
Previously, we took an array of cache_bin_info_ts and an index, and dereferenced
ourselves. But infos for other cache_bins aren't relevant to any particular
cache bin, so that should be the caller's job.
2020-03-12 11:54:19 -07:00
David Goldblatt
1b00d808d7
cache_bin: Don't let arena see empty position.
2020-03-12 11:54:19 -07:00
David Goldblatt
d303f30796
cache_bin nflush -> n.
...
We're going to use it on the fill pathway as well.
2020-03-12 11:54:19 -07:00
David Goldblatt
74d36d78ef
Cache bin: Make ncached_max a query on the info_t.
2020-03-12 11:54:19 -07:00
David Goldblatt
b66c0973cc
cache_bin: Don't allow direct internals access.
2020-03-12 11:54:19 -07:00
David Goldblatt
909c501b07
Cache_bin: Shouldn't know about tcache.
...
Instead, have it take the cache_bin_info_ts to use by pointer. While we're
here, add a src file for the cache bin.
2020-03-12 11:54:19 -07:00
David Goldblatt
79f1ee2fc0
Move junking out of arena/tcache code.
...
This is debug only and we keep it off the fast path. Moving it here simplifies
the internal logic.
This never tries to junk on regions that were shrunk via xallocx. I think this
is fine for two reasons:
- The shrunk-with-xallocx case is rare.
- We don't always do that anyway before this diff (it depends on the opt
settings and extent hooks in effect).
2020-03-12 11:54:19 -07:00
David Goldblatt
22657a5e65
Extents: Silence the "potentially unused" warning.
2020-03-12 11:54:19 -07:00
Yinan Zhang
305b1f6d96
Correction on geometric sampling
2020-03-04 13:55:21 -08:00
David T. Goldblatt
6c3491ad31
Tcache: Unify bin flush logic.
...
The small and large pathways share most of their logic, even if some of the
individual operations are different. We pull out the common logic into a
force-inlined function, and then specialize twice, once for each value of
"small".
2020-02-25 10:21:03 -08:00
David T. Goldblatt
162c2bcf31
Background thread: take base as a parameter.
2020-02-18 11:22:09 -08:00
David T. Goldblatt
29436fa056
Break prof and tcache knowledge of b0.
2020-02-18 11:22:09 -08:00
David T. Goldblatt
a0c1f4ac57
Rtree: take the base allocator as a parameter.
...
This facilitates better testing by avoiding mixing of the "real" base with the
base used by the rtree under test.
2020-02-18 11:22:09 -08:00
David T. Goldblatt
7013716aaa
Emap: Take (and propagate) a zeroed parameter.
...
Rtree needs this, and we should really treat them similarly.
2020-02-18 11:22:09 -08:00
David T. Goldblatt
182192f83c
Base: Pull into a single header.
2020-02-18 11:22:09 -08:00
David Goldblatt
7e6c8a7286
Emap: Standardize naming.
...
Namespace everything under emap_, always specify what it is we're looking up
(emap_lookup -> emap_edata_lookup), and use "ctx" over "info".
2020-02-17 10:50:51 -08:00
David Goldblatt
ac50c1e44b
Emap: Remove direct access to emap internals.
...
In the process, we do a few local cleanups and optimizations. In particular,
the size safety check on tcache flush no longer does a redundant load.
2020-02-17 10:50:51 -08:00
David Goldblatt
06e42090f7
Make jemalloc.c use the emap interface.
...
While we're here, we'll also clean up some style nits.
2020-02-17 10:50:51 -08:00
David Goldblatt
f7d9c6c42d
Emap: Move in alloc_ctx lookup functionality.
2020-02-17 10:50:51 -08:00
David Goldblatt
65a54d7714
Emap: Move in szind and slab modifications.
2020-02-17 10:50:51 -08:00
David Goldblatt
9b5d105fc3
Emap: Move in iealloc.
...
This is logically scoped to the emap.
2020-02-17 10:50:51 -08:00
David Goldblatt
1d449bd9a6
Emap: Internal rtree context setting.
...
The only time that sharing an rtree context across extent operations saves
anything is when tsd is unavailable. But this happens only in situations like
thread death or initialization, and we don't care about shaving off every
possible cycle in such scenarios.
2020-02-17 10:50:51 -08:00
David Goldblatt
08eb1e6c31
Emap: Comments and cleanup
...
Document some of the public interface, and hide the functions that are no longer
used outside of the emap module.
2020-02-17 10:50:51 -08:00
David Goldblatt
231d1477e5
Rename emap_split_prepare_t -> emap_prepare_t.
...
Both the split and merge functions use it.
2020-02-17 10:50:51 -08:00
David Goldblatt
0586a56f39
Emap: Move in merge functionality.
2020-02-17 10:50:51 -08:00
David Goldblatt
040eac77cc
Tell edatas their creation arena immediately.
...
This avoids having to pass it in anywhere else.
2020-02-17 10:50:51 -08:00
David Goldblatt
7c7b702064
Emap: Move over metadata splitting logic.
2020-02-17 10:50:51 -08:00
David Goldblatt
44f5f53605
Emap: Move over deregistration functions.
2020-02-17 10:50:51 -08:00
David Goldblatt
6513d9d923
Emap: Move over deregistration boundary functions.
2020-02-17 10:50:51 -08:00
David Goldblatt
9b5ca0b09d
Emap: Move in slab interior registration.
2020-02-17 10:50:51 -08:00
David Goldblatt
d05b61db4a
Emap: Move extent boundary registration in.
2020-02-17 10:50:51 -08:00
David Goldblatt
ca21ce4071
Emap: Move in write_acquired from extent.
2020-02-17 10:50:51 -08:00
David Goldblatt
01f255161c
Add emap, for tracking extent locking.
2020-02-17 10:50:51 -08:00
Qi Wang
0f686e82a3
Avoid variable length array with length 0.
2020-02-16 14:14:07 -08:00
Yinan Zhang
68e8ddcaff
Add mallctl for dumping last-N profiling records
2020-02-14 12:46:38 -08:00
Qi Wang
ba0e35411c
Rework the bin locking around tcache refill / flush.
...
Previously, tcache fill/flush (as well as small alloc/dalloc on the arena) may
potentially drop the bin lock for slab_alloc and slab_dalloc. This commit
refactors the logic so that the slab calls happen in the same function / level
as the bin lock / unlock. The main purpose is to be able to use flat combining
without having to keep track of stack state.
In the meantime, this change reduces the locking, especially for slab_dalloc
calls, where nothing happens after the call.
2020-02-13 23:31:54 -08:00
Qi Wang
ca1f082251
Disallow merge across mmap regions to preserve SN / first-fit.
...
Check the is_head state before merging two extents. Disallow the merge if it
crosses two separate mmap regions. This enforces first-fit (by not losing the
SN) at a very small cost.
2020-02-13 12:18:44 -08:00
Yinan Zhang
7014f81e17
Add ASSURED_WRITE in mallctl
2020-02-05 15:29:14 -08:00
Yinan Zhang
9cac3fa8f5
Encapsulate buffer allocation in buffered writer
2020-02-04 13:21:58 -08:00
Yinan Zhang
bdc08b5158
Better naming buffered writer
2020-02-04 13:21:58 -08:00
Qi Wang
e896522616
Abbreviate thread-event to te.
2020-02-04 13:07:05 -08:00
Qi Wang
5e500523a0
Remove thread_event_boot().
2020-02-04 00:18:15 -08:00
Qi Wang
97dd79db6c
Implement deallocation events.
...
Make the event module accept two event types, and pass around the event
context. Use bytes-based events to trigger tcache GC on deallocation, and get
rid of the tcache ticker.
2020-02-04 00:18:15 -08:00
zoulasc
536ea6858e
NetBSD-specific changes:
...
- NetBSD overcommits
- When mapping pages, use the maximum of the alignment requested and the
compiled-in PAGE constant which might be greater than the current kernel
pagesize, since we compile binaries with the maximum page size supported
by the architecture (so that they work with all kernels).
2020-02-03 15:49:36 -08:00
Qi Wang
974222c626
Add safety check on sdallocx slow / sampled path.
2020-01-31 00:04:22 -08:00
Qi Wang
88d9eca848
Enforce page alignment for sampled allocations.
...
This allows sampled allocations to be checked through alignment, therefore
enabling sized deallocation regardless of cache_oblivious.
2020-01-31 00:04:22 -08:00
Qi Wang
0f552ed673
Don't purge huge extents when decay is off.
2020-01-30 14:40:38 -08:00
Qi Wang
38a48e5741
Set reentrancy to 1 for tsd_state_purgatory.
...
Reentrancy is already set for other non-nominal tsd states (reincarnated and
minimal_initialized). Add purgatory to be safe and consistent.
2020-01-30 13:55:20 -08:00
Qi Wang
88b0e03a4e
Implement opt.stats_interval and the _opts options.
...
Add options stats_interval and stats_interval_opts to allow interval based stats
printing. This provides an easy way to collect stats without code changes,
because opt.stats_print may not work (some binaries never exit).
2020-01-29 09:57:55 -08:00
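A configuration sketch (assuming stats_interval counts allocated bytes and stats_interval_opts mirrors the stats_print_opts flags, e.g. `J` for JSON):
```c
/* Print stats roughly once per 1 GiB of allocation activity, as JSON. */
const char *malloc_conf = "stats_interval:1073741824,stats_interval_opts:J";
```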
Qi Wang
d71a145ec1
Change prof_accum_t to counter_accum_t for general-purpose use.
2020-01-29 09:57:55 -08:00
David Goldblatt
d92f0175c7
Introduce NEITHER_READ_NOR_WRITE in ctl.
...
This is slightly clearer in meaning. A function that is both READONLY() and
WRITEONLY() is in fact neither one.
2020-01-22 18:29:13 -08:00
David Goldblatt
6a622867ca
Add "thread.idle" mallctl.
...
This can encapsulate various internal cleaning logic, and can be used to free up
resources before a long sleep.
2020-01-22 18:29:13 -08:00
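A minimal usage sketch:
```c
#include <jemalloc/jemalloc.h>

/* Hint that this thread is going idle, letting internal cleanup
 * (e.g. a tcache flush) run and resources be released. */
static void
before_long_sleep(void) {
	mallctl("thread.idle", NULL, NULL, NULL, 0);
}
```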
Yinan Zhang
f81341a48b
Fallback to unbuffered printing if OOM
2020-01-21 17:09:44 -08:00
Yinan Zhang
84b28c6a13
Properly handle tdata deletion race
2020-01-21 16:51:26 -08:00
Yinan Zhang
d331208560
Get rid of redundant logic in prof
2020-01-21 16:51:26 -08:00
Yinan Zhang
7b67ed0b5a
Get rid of lock overlap in prof_recent_alloc_reset
2020-01-21 16:51:26 -08:00
David Goldblatt
bd3be8e0b1
Remove commit parameter to ecache functions.
...
No caller ever wants uncommitted memory.
2020-01-17 10:54:56 -08:00
Yinan Zhang
b8df719d5c
No tdata creation for backtracing on dying thread
2020-01-16 21:54:14 -08:00
Qi Wang
dab81bd315
Rework and fix the assertions on malloc fastpath.
...
The first half of the malloc fastpath may execute before malloc_init. Make the
assertions work in that case.
2020-01-14 15:00:41 -08:00
Yinan Zhang
ad3f3fc561
Fetch time after tctx and only for samples
2020-01-14 14:36:20 -08:00
Qi Wang
a5d3dd4059
Fix an assertion on extent head state with dss.
2020-01-10 13:29:14 -08:00
Yinan Zhang
2b604a3016
Record request size in prof recent entries
2020-01-10 12:01:01 -08:00
Yinan Zhang
40a391408c
Define constructor for buffered writer argument
2020-01-10 11:59:02 -08:00
Yinan Zhang
6d8e616902
Make buffered writer an independent module
2020-01-10 11:59:02 -08:00
Yinan Zhang
6b6b4709b3
Unify buffered writer naming
2020-01-09 14:31:31 -08:00
Yinan Zhang
9a60cf54ec
Last-N profiling mode
2019-12-30 15:58:57 -08:00
Yinan Zhang
3fa142cf39
Remove _externs from prof internal header names
2019-12-23 11:14:15 -08:00
Yinan Zhang
112dc36dd5
Handle log_mtx during forking
2019-12-20 17:17:48 -08:00
Yinan Zhang
ea42174d07
Refactor profiling headers
2019-12-20 17:17:48 -08:00
David Goldblatt
6342da0970
Ehooks: Further optimize default merge case.
...
This avoids the cost of an iealloc in cases where the user uses the default
merge hook without using the default extent hooks.
2019-12-20 10:18:40 -08:00
David Goldblatt
f2f2084e79
Ehooks: Assert alloc isn't NULL
2019-12-20 10:18:40 -08:00
David Goldblatt
e210ccc57e
Move extent2 -> extent.
...
Eventually, we may fully break off the extent module; but not for some time. If
it's going to live on in a non-transitory state, it might as well have the nicer
name.
2019-12-20 10:18:40 -08:00
David Goldblatt
2f4fa80414
Rename extents -> ecache.
2019-12-20 10:18:40 -08:00
David Goldblatt
56cc56b692
Break extent split dependence on arena.
2019-12-20 10:18:40 -08:00
David Goldblatt
0aa9769fb0
Break commit functions' arena dependence
2019-12-20 10:18:40 -08:00
David Goldblatt
48ec5d4355
Break extent_coalesce arena dependence
2019-12-20 10:18:40 -08:00
David Goldblatt
282a382326
Extent: Break [de]activation's arena dependence.
2019-12-20 10:18:40 -08:00
David Goldblatt
576d7047ab
Ecache: Should know its arena_ind.
...
What we call an arena_ind is really the index associated with some particular
set of ehooks; the arena is just the user-visible portion of that. Making this
explicit, and reframing checks in terms of that, makes the code simpler and
cleaner, and helps us avoid passing the arena itself all throughout extent code.
This lets us put back an arena-specific assert.
2019-12-20 10:18:40 -08:00
David Goldblatt
372042a082
Remove merge dependence on the arena.
2019-12-20 10:18:40 -08:00
David Goldblatt
439219be7e
Remove extent_can_coalesce arena dependency.
2019-12-20 10:18:40 -08:00
David Goldblatt
9cad5639ff
Ehooks: remove arena_ind parameter.
...
This lives within the ehooks_t now, so that callers don't need to know it.
2019-12-20 10:18:40 -08:00
David Goldblatt
57fe99d4be
Move relevant index into the ehooks_t itself.
...
It's always passed into the ehooks; keeping it colocated lets us avoid passing
the arena everywhere.
2019-12-20 10:18:40 -08:00
David Goldblatt
c792f3e4ab
edata_cache: Remember the associated base_t.
...
This will save us some trouble down the line when we stop passing arena pointers
everywhere; we won't have to pass around a base_t pointer either.
2019-12-20 10:18:40 -08:00
David Goldblatt
ae23e5f426
Unify extent_alloc_wrapper with the other wrappers.
...
Previously, it was really more like extents_alloc (it looks in an ecache for an
extent to reuse as its primary allocation pathway). Make that pathway more
explicitly like extents_alloc, and rename extent_alloc_wrapper_hard accordingly.
2019-12-20 10:18:40 -08:00
David Goldblatt
d8b0b66c6c
Put extent_state_t into ecache as well as eset.
2019-12-20 10:18:40 -08:00
David Goldblatt
98eb40e563
Move delay_coalesce from the eset to the ecache.
2019-12-20 10:18:40 -08:00
David Goldblatt
bb70df8e5b
Extent refactor: Introduce ecache module.
...
This will eventually completely wrap the eset, and handle concurrency,
allocation, and deallocation. For now, we only pull out the mutex from the
eset.
2019-12-20 10:18:40 -08:00
David Goldblatt
0704516245
Ehooks: Add head tracking.
2019-12-20 10:18:40 -08:00
David Goldblatt
09475bf8ac
extent_may_dalloc -> ehooks_dalloc_will_fail
2019-12-20 10:18:40 -08:00
David Goldblatt
7859184179
Pull out edata_t caching into its own module.
2019-12-20 10:18:40 -08:00
David Goldblatt
a7862df616
Rename extent_t to edata_t.
...
This frees us up from the unfortunate extent/extent2 naming collision.
2019-12-20 10:18:40 -08:00
David Goldblatt
865debda22
Rename extent.h -> edata.h.
...
This name is slightly pithier; a full-on rename will come shortly.
2019-12-20 10:18:40 -08:00
David Goldblatt
a738a66b5c
Ehooks: Add some debug zero and addr checks.
...
These help make sure that the ehooks return properly zeroed memory when required
to.
2019-12-20 10:18:40 -08:00
David Goldblatt
4b2e5ee8b9
Ehooks: Add a "zero" ehook.
...
This is the first API expansion. It lets the hooks pick where and how to purge
within themselves.
2019-12-20 10:18:40 -08:00
David Goldblatt
d0f187ad3b
Arena: Loosen arena_may_have_muzzy restrictions.
...
If there are custom extent hooks, pages_can_purge_lazy is not necessarily the
right guard. We could check ehooks_are_default too, but the case where
purge_lazy is unsupported is rare and getting rarer. Just checking the decay
interval captures most of the benefit.
2019-12-20 10:18:40 -08:00
David Goldblatt
ebbb973271
Base: Remove some unnecessary reentrancy guards.
...
The ehooks module will now call these if necessary.
2019-12-20 10:18:40 -08:00
David Goldblatt
403f2d1664
Extents: Split out introspection functionality.
...
This isn't really part of the core extent allocation facilities. Especially as
this module grows, having it in its own place may come in handy.
2019-12-20 10:18:40 -08:00
David Goldblatt
92a511d385
Make extent module hermetic.
...
In the form of extent2.h. The naming leaves something to be desired, but I'll
leave that for a later diff.
2019-12-20 10:18:40 -08:00
David Goldblatt
e08c581cf1
Extent: Get rid of extent-specific pre/post reentrancy calls.
...
These are taken care of by the ehook module; the extra increments and
decrements are safe but unnecessary.
2019-12-20 10:18:40 -08:00
David Goldblatt
39fdc690a0
Ehooks comments and cleanup.
2019-12-20 10:18:40 -08:00
David Goldblatt
c8dae890c8
Extent -> Ehooks: Move over default hooks.
2019-12-20 10:18:40 -08:00
David Goldblatt
2fe5108263
Extent -> Ehooks: Move merge hook.
2019-12-20 10:18:40 -08:00
David Goldblatt
1fff4d2ee3
Extent -> Ehooks: Move split hook.
2019-12-20 10:18:40 -08:00
David Goldblatt
a5b42a1a10
Extent -> Ehooks: Move purge_forced hook.
2019-12-20 10:18:40 -08:00
David Goldblatt
368baa42ef
Extent -> Ehooks: Move purge_lazy hook.
2019-12-20 10:18:40 -08:00
David Goldblatt
f83fdf5336
Extent: Clean up a comma
2019-12-20 10:18:40 -08:00
David Goldblatt
d78fe241ac
Extent -> Ehooks: Move commit and decommit hooks.
2019-12-20 10:18:40 -08:00
David Goldblatt
5459ec9dae
Extent -> Ehooks: Move destroy hook.
2019-12-20 10:18:40 -08:00
David Goldblatt
bac8e2e5a6
Extent -> Ehooks: Move dalloc hook.
2019-12-20 10:18:40 -08:00
David Goldblatt
dc8b4e6e13
Extent -> Ehooks: Move alloc hook.
2019-12-20 10:18:40 -08:00
David Goldblatt
ae0d8e8591
Move extent ehook calls into ehooks
2019-12-20 10:18:40 -08:00
David Goldblatt
ba8b9ecbcb
Add ehooks module
2019-12-20 10:18:40 -08:00
David Goldblatt
9f6eb09585
Extents: Eagerly initialize extent hooks.
...
When deferred initialization was added, initializing required copying
sizeof(extent_hooks_t) bytes after a pointer chase. Today, it's just a single
pointer loaded from the base_t. In subsequent diffs, we'll get rid of even that.
2019-12-20 10:18:40 -08:00
David Goldblatt
4278f84603
Move extent hook getters/setters to arena.c
...
This is where they're logically scoped; they access arena data.
2019-12-20 10:18:40 -08:00
Wenbo Zhang
9226e1f0d8
Fix opt.thp:never still using THP with base_new
2019-12-19 13:27:00 -08:00
Qi Wang
d5031ea824
Allow dallocx and sdallocx after tsd destruction.
...
After a thread enters the purgatory / reincarnated state, still allow dallocx
and sdallocx to function normally.
2019-12-19 11:17:03 -08:00
Yinan Zhang
4afd709d1f
Restructure setters for profiling info
...
Explicitly define three setters:
- `prof_tctx_reset()`: set `prof_tctx` to `1U`, if we don't know in
advance whether the allocation is large or not;
- `prof_tctx_reset_sampled()`: set `prof_tctx` to `1U`, if we already
know in advance that the allocation is large;
- `prof_info_set()`: set a real `prof_tctx`, and also set other
profiling info e.g. the allocation time.
Code structure wise, the prof level is kept as a thin wrapper, the
large level only provides low level setter APIs, and the arena level
carries out the main logic.
2019-12-17 10:01:28 -08:00
Yinan Zhang
1d01e4c770
Initialization utilities for nstime
2019-12-16 16:08:56 -08:00
Qi Wang
dd649c9485
Optimize away the tsd_fast() check on fastpath.
...
Fold the tsd_state check onto the event threshold check. The fast threshold is
set to 0 when tsd switches to non-nominal.
The fast_threshold can be reset by remote threads, to reflect the non-nominal tsd
state change.
2019-12-11 23:44:20 -08:00
Qi Wang
1decf958d1
Fix incorrect usage of cassert.
2019-12-11 14:02:59 -08:00
Yinan Zhang
45836d7fd3
Pass nstime_t pointer for profiling
2019-12-11 11:38:16 -08:00
Yinan Zhang
7d2bac5a38
Refactor destroy code path for prof_tctx
2019-12-10 16:31:05 -08:00
Yinan Zhang
055478cca8
Threshold is no longer updated before prof_realloc()
2019-12-10 16:31:05 -08:00
Yinan Zhang
7e3671911f
Get rid of old indentation style for prof
2019-12-06 09:47:51 -08:00
Yinan Zhang
dfdd46f6c1
Refactor prof_tctx_t creation
2019-12-06 09:47:51 -08:00
Yinan Zhang
aa1d71fb7a
Rename prof_tctx to alloc_tctx in prof_info_t
2019-12-06 09:47:51 -08:00
Yinan Zhang
5e0b090992
No need to pass usize to prof_tctx_set()
2019-12-06 09:47:51 -08:00
David Goldblatt
1b1e76acfe
Disable some spuriously-triggering warnings
2019-12-04 13:45:17 -08:00
Yinan Zhang
5c47a30227
Guard C++ aligned APIs
2019-11-25 18:02:16 -08:00
Yinan Zhang
6945371778
Change tsdn to tsd for profiling code path
2019-11-22 16:31:56 -08:00
Yinan Zhang
b55419f9b9
Restructure profiling
...
Develop new data structure and code logic for holding profiling
related information stored in the extent that may be needed after the
extent is released, which in particular is the case for the
reallocation code path (e.g. in `rallocx()` and `xallocx()`). The
data structure is a generalization of `prof_tctx_t`: we previously
only copy out the `prof_tctx` before the extent is released, but we
may be in need of additional fields. Currently the only additional
field is the allocation time field, but there may be more fields in
the future.
The restructuring also resolved a bug: `prof_realloc()` mistakenly
passed the new `ptr` to `prof_free_sampled_object()`, but passing in
the `old_ptr` would crash because it's already been released. Now
the essential profiling information is collectively copied out early
and safely passed to `prof_free_sampled_object()` after the extent is
released.
2019-11-22 16:31:56 -08:00
Mark Santaniello
8b2c2a596d
Support C++17 over-aligned allocation
...
Summary:
Add support for C++17 over-aligned allocation:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0035r4.html
Supporting all 10 operators means we avoid thunking thru libstdc++-v3/libsupc++ and just call jemalloc directly.
It's also worth noting that there is now an aligned *and sized* operator delete:
```
void operator delete(void* ptr, std::size_t size, std::align_val_t al) noexcept;
```
If jemalloc did not provide this, the default implementation would ignore the size parameter entirely:
https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/libsupc%2B%2B/del_opsa.cc#L30-L33
(I must also update ax_cxx_compile_stdcxx.m4 to a newer version with C++17 support.)
Test Plan:
Wrote a simple test that allocates and then deletes an over-aligned type:
```
struct alignas(32) Foo {};
Foo *f;
int main()
{
    f = new Foo;
    delete f;
}
```
Before this change, both new and delete go thru PLT, and we end up calling regular old free:
```
(gdb) disassemble
Dump of assembler code for function main():
...
0x00000000004029b7 <+55>: call 0x4022d0 <_ZnwmSt11align_val_t@plt>
...
0x00000000004029d5 <+85>: call 0x4022e0 <_ZdlPvmSt11align_val_t@plt>
...
(gdb) s
free (ptr=0x7ffff6408020) at /home/engshare/third-party2/jemalloc/master/src/jemalloc.git-trunk/src/jemalloc.c:2842
2842 if (!free_fastpath(ptr, 0, false)) {
```
After this change, we directly call new/delete and ultimately call sdallocx:
```
(gdb) disassemble
Dump of assembler code for function main():
...
0x0000000000402b77 <+55>: call 0x496ca0 <operator new(unsigned long, std::align_val_t)>
...
0x0000000000402b95 <+85>: call 0x496e60 <operator delete(void*, unsigned long, std::align_val_t)>
...
(gdb) s
116 je_sdallocx_noflags(ptr, size);
```
2019-11-22 10:14:16 -08:00
Qi Wang
9a3c738009
Refactor arena_bin_malloc_hard().
2019-11-21 11:41:26 -08:00
Qi Wang
9a7ae3c97f
Reduce footprint of bin_t.
...
Avoid storing mutex_prof_data_t in bin_t. Added bin_stats_data_t which is used
for reporting bin stats.
2019-11-21 11:08:36 -08:00
Qi Wang
cb1a1f4ada
Remove the unnecessary alloc_ctx on free_fastpath.
2019-11-16 13:41:13 -08:00
Qi Wang
7160617107
Add branch hints to free_fastpath.
...
Explicitly mark the non-slab case unlikely. Previously there were jumps in the
common case.
2019-11-16 13:41:13 -08:00
Qi Wang
a787d2f5b3
Prefer getaffinity() to detect number of CPUs.
2019-11-15 16:24:38 -08:00
Qi Wang
04cb7d4d6b
Bail out early for muzzy decay.
...
This avoids taking the muzzy decay mutex with the default setting.
2019-11-15 16:24:15 -08:00
Qi Wang
836d7a7e69
Check for large size first in the uncommon case of malloc.
...
Larger sizes are not that uncommon compared to !tsd_fast.
2019-11-11 13:30:20 -08:00
Qi Wang
da50d8ce87
Refactor and optimize prof sampling initialization.
...
Makes the prof sample prng use the tsd prng_state. This allows us to properly
initialize the sample interval event, without having to create tdata. As a
result, tdata will be created on demand (when a thread reaches the sample
interval bytes allocated), instead of on the first allocation.
2019-11-11 10:35:37 -08:00
Qi Wang
bc774a3519
Rename tsd->offset_state to tsd->prng_state.
2019-11-11 10:35:37 -08:00
Qi Wang
19a51abf33
Avoid arena->offset_state when tsd not available for prng.
...
Use stack locals and remove the offset_state in arena.
2019-11-11 10:35:37 -08:00
Nick Desaulniers
d01b425e5d
Add -Wimplicit-fallthrough checks if supported
...
Clang since r369414 (clang-10) can now check -Wimplicit-fallthrough for
C code, and use the GNU C style attribute to denote fallthrough.
Move the test from header only to autoconf. The previous test used
brittle version detection which did not work for newer clang that
supported this feature.
The attribute has to be its own statement, hence the added `;`. It also
can only precede case statements, so the final cases should be
explicitly terminated with break statements.
Fixes commit 3d29d11ac2 ("Clean compilation -Wextra")
Link: 1e0affb6e5
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
2019-11-08 13:03:03 -08:00
Yinan Zhang
43f0ce92d8
Define general purpose tsd_thread_event_init()
2019-11-04 16:07:56 -08:00
Yinan Zhang
97f93fa0f2
Pull tcache GC events into thread event handler
2019-11-04 16:07:56 -08:00
Yinan Zhang
198f02e797
Pull prof_accumbytes into thread event handler
2019-11-04 15:21:16 -08:00
Yinan Zhang
152c0ef954
Build a general purpose thread event handler
2019-11-04 11:15:50 -08:00
RingsC
6924f83cb2
Use SYS_openat when available
...
Some architectures like AArch64 may not have the open syscall, but do have the
openat syscall. So check and use SYS_openat, if available, when SYS_open is not
supported at init_thp_state.
2019-11-01 13:06:40 -07:00
David T. Goldblatt
de81a4eada
Add stats counters for number of zero reallocs
2019-10-29 17:48:44 -07:00
David T. Goldblatt
9cfa805947
Realloc: Make behavior of realloc(ptr, 0) configurable.
2019-10-29 17:48:44 -07:00
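A hedged configuration sketch; the option name `zero_realloc` and its value set are assumptions:
```c
/* Assumed option: make realloc(ptr, 0) behave like free(ptr). */
const char *malloc_conf = "zero_realloc:free";
```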
David T. Goldblatt
ee961c2310
Merge realloc and rallocx pathways.
2019-10-29 17:48:44 -07:00
Yinan Zhang
bd6e28d6a3
Guard slabcur fetching in extent_util
2019-10-28 17:27:51 -07:00
Yinan Zhang
4786099a3a
Increase column width for global malloc/free rate
2019-10-24 14:54:51 -07:00
Yinan Zhang
05681e387a
Optimize cache_bin_alloc_easy for malloc fast path
...
`tcache_bin_info` is not accessed on malloc fast path but the
compiler reserves a register for it, as well as an additional
register for `tcache_bin_info[ind].stack_size`. The optimization
gets rid of the need for the two registers.
2019-10-21 16:43:45 -07:00
Yinan Zhang
4fe50bc7d0
Fix amd64 MSVC warning
2019-10-18 10:16:29 -07:00
Yinan Zhang
4fbbc817c1
Simplify time setting and getting for prof log
2019-10-16 09:24:52 -07:00
Yinan Zhang
66e07f986d
Suppress tdata creation in reentrancy
...
This change suppresses tdata initialization and prof sample threshold
update in interrupting malloc calls. Interrupting calls have no need
for tdata. Delaying tdata creation aligns better with our lazy tdata
creation principle, and it also helps us gain control back from
interrupting calls more quickly and reduces any risk of delegating
tdata creation to an interrupting call.
2019-10-04 08:52:50 -07:00
Yinan Zhang
beb7c16e94
Guard prof_active reset by opt_prof
...
Set `prof_active` to read-only when `opt_prof` is turned off.
2019-10-02 11:42:53 -07:00
David T. Goldblatt
3d84bd57f4
Arena: Add helper function arena_get_from_extent.
2019-09-23 23:06:27 -07:00