Commit Graph

1199 Commits

Author SHA1 Message Date
Yinan Zhang
2deabac079 Get rid of custom iterator for last-N records 2020-04-07 17:22:24 -07:00
Yinan Zhang
a5ddfa7d91 Use ql for prof last-N list 2020-04-07 17:22:24 -07:00
Yinan Zhang
ce17af4221 Better structure ql module 2020-04-06 09:50:27 -07:00
Yinan Zhang
4b66297ea0 Add move constructor to ql module 2020-04-06 09:50:27 -07:00
Yinan Zhang
a62b7ed928 Add emptiness checking to ql module 2020-04-06 09:50:27 -07:00
Yinan Zhang
1dd24ca6d2 Add rotate functionality to ql module 2020-04-06 09:50:27 -07:00
Yinan Zhang
0dc95a882f Add concat and split functionality to ql module 2020-04-06 09:50:27 -07:00
Yinan Zhang
1ad06aa53b deduplicate insert and delete logic in qr module 2020-04-06 09:50:27 -07:00
Yinan Zhang
c9d56cddf2 Optimize meld in qr module
The goal of `qr_meld()` is to change the following four fields
`(a->prev, a->prev->next, b->prev, b->prev->next)` from the values
`(a->prev, a, b->prev, b)` to `(b->prev, b, a->prev, a)`.

This commit changes

```
a->prev->next = b;
b->prev->next = a;
temp = a->prev;
a->prev = b->prev;
b->prev = temp;
```

to

```
temp = a->prev;
a->prev = b->prev;
b->prev = temp;
a->prev->next = a;
b->prev->next = b;
```

The benefit is that we can use `b->prev->next` for `temp`, and so
there's no need to pass in `a_type`.

The restriction is that `b` cannot be a `qr_next()` macro, so users
of `qr_meld()` must pay attention.  (Before this change, neither `a`
nor `b` could be a `qr_next()` macro.)
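
A minimal sketch of the final form, assuming plain `prev`/`next` struct fields in place of jemalloc's ring-link macros (illustrative only): the temp lives in `b->prev->next`, which is why no typed temporary, and hence no `a_type` argument, is needed. (In the real macro form `b` expands multiple times, which is where the `qr_next()` restriction comes from.)

```
typedef struct node_s node_t;
struct node_s {
	node_t *prev;
	node_t *next;
};

/* Illustrative meld: splice together the rings containing a and b. */
static void
meld(node_t *a, node_t *b) {
	b->prev->next = a->prev;  /* stash old a->prev in b->prev->next */
	a->prev = b->prev;
	b->prev = b->prev->next;  /* read the stash: b->prev = old a->prev */
	a->prev->next = a;
	b->prev->next = b;
}
```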
2020-04-06 09:50:27 -07:00
Yinan Zhang
f9aad7a49b Add piping API to buffered writer 2020-04-01 09:41:20 -07:00
Yinan Zhang
09cd79495f Encapsulate buffer allocation failure in buffered writer 2020-04-01 09:41:20 -07:00
David Goldblatt
3b4a03b92b Mac: don't declare system functions as nothrow.
This contradicts the system headers, which can lead to breakages.
2020-03-26 14:11:24 -07:00
Yinan Zhang
2256ef8961 Add option to fetch system thread name on each prof sample 2020-03-24 21:39:57 -07:00
Yinan Zhang
a5780598b3 Remove thread_event_rollback() 2020-03-12 13:55:00 -07:00
Yinan Zhang
ba783b3a0f Remove prof -> thread_event dependency 2020-03-12 13:55:00 -07:00
Yinan Zhang
441d88d1c7 Rewrite profiling thread event 2020-03-12 13:55:00 -07:00
David Goldblatt
99b1291d17 Edata cache: add edata_cache_small_t.
This can be used to amortize the synchronization costs of edata_cache accesses.
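A hedged sketch of the shape such a layer can take (type and field names are assumptions, not the exact jemalloc definitions): an unsynchronized local staging area in front of the shared, mutex-protected cache, refilled and drained in batches.

```
#include <stddef.h>

#define EDATA_CACHE_SMALL_MAX 16  /* assumed batch size */

typedef struct edata_s edata_t;             /* opaque extent metadata */
typedef struct edata_cache_s edata_cache_t; /* shared, mutex-protected */

/* Gets and puts hit the local array with no locking; the shared
 * cache's mutex is taken only to refill or drain a whole batch. */
typedef struct edata_cache_small_s {
	edata_t *avail[EDATA_CACHE_SMALL_MAX];
	size_t count;
	edata_cache_t *fallback;
} edata_cache_small_t;
```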
2020-03-12 11:58:09 -07:00
David Goldblatt
92485032b2 Cache bin: improve comments. 2020-03-12 11:54:19 -07:00
David Goldblatt
d701a085c2 Fast path: allow low-water mark changes.
This lets us put more allocations on an "almost as fast" path after a flush.
This results in around a 4% reduction in malloc cycles in prod workloads
(corresponding to about a 0.1% reduction in overall cycles).
2020-03-12 11:54:19 -07:00
David Goldblatt
397da03865 Cache bin: rewrite to track more state.
With this, we track all of the empty, full, and low water states together.  This
simplifies a lot of the tracking logic, since we now don't need the
cache_bin_info_t for state queries (except for some debugging).
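
A hedged sketch of what tracking the states together can look like (field names are assumptions): the bin records the `stack_head` values at which it is empty and full, so those queries, plus low water, need only the bin itself.

```
#include <stdint.h>

typedef struct cache_bin_s {
	void **stack_head;            /* next object to hand out */
	/* Low bits of the stack_head values at the boundary states. */
	uint16_t low_bits_low_water;  /* closest-to-empty point seen recently */
	uint16_t low_bits_full;
	uint16_t low_bits_empty;
} cache_bin_t;
```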
2020-03-12 11:54:19 -07:00
David Goldblatt
0a2fcfac01 Tcache: Hold cache bin allocation explicitly. 2020-03-12 11:54:19 -07:00
David Goldblatt
d498a4bb08 Cache bin: Add an emptiness assertion. 2020-03-12 11:54:19 -07:00
David Goldblatt
6a7aa46ef7 Cache bin: Add a debug method for init checking. 2020-03-12 11:54:19 -07:00
David Goldblatt
370c1ea007 Cache bin: Write the unit test in terms of the API
I.e. stop allowing the unit test to have secret access to implementation
internals.
2020-03-12 11:54:19 -07:00
David Goldblatt
7f5ebd211c Cache bin: set low-water internally. 2020-03-12 11:54:19 -07:00
David Goldblatt
60113dfe3b Cache bin: Move in initialization code. 2020-03-12 11:54:19 -07:00
David Goldblatt
44529da852 Cache-bin: Make flush modifications internal
I.e. the tcache code just calls a cache-bin function to finish flush (and move
pointers around, etc.).  It doesn't directly access the cache-bin's owned memory
any more.
2020-03-12 11:54:19 -07:00
David Goldblatt
ff6acc6ed5 Cache bin: simplify names and argument ordering.
We always start with the cache bin, then its info (if necessary).
2020-03-12 11:54:19 -07:00
David Goldblatt
e1dcc557d6 Cache bin: Only take the relevant cache_bin_info_t
Previously, we took an array of cache_bin_info_ts and an index, and did the
indexing ourselves.  But infos for other cache_bins aren't relevant to any
particular cache bin, so that should be the caller's job.
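
Illustratively (signatures hypothetical), the indexing moves to the caller:

```
typedef struct cache_bin_s cache_bin_t;
typedef struct cache_bin_info_s cache_bin_info_t;

/* Before: pass the whole array plus an index; the bin dereferences it. */
void cache_bin_query_v1(cache_bin_t *bin, cache_bin_info_t *infos,
    unsigned ind);
/* After: the caller passes &infos[ind]; the bin sees only its own info. */
void cache_bin_query_v2(cache_bin_t *bin, cache_bin_info_t *info);
```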
2020-03-12 11:54:19 -07:00
David Goldblatt
1b00d808d7 cache_bin: Don't let arena see empty position. 2020-03-12 11:54:19 -07:00
David Goldblatt
d303f30796 cache_bin nflush -> n.
We're going to use it on the fill pathway as well.
2020-03-12 11:54:19 -07:00
David Goldblatt
74d36d78ef Cache bin: Make ncached_max a query on the info_t. 2020-03-12 11:54:19 -07:00
David Goldblatt
b66c0973cc cache_bin: Don't allow direct internals access. 2020-03-12 11:54:19 -07:00
David Goldblatt
da68f73296 Move percpu_arena_update.
It's not really part of the API of the arena; it changes which arena we're using
that API on.
2020-03-12 11:54:19 -07:00
David Goldblatt
909c501b07 Cache_bin: Shouldn't know about tcache.
Instead, have it take the cache_bin_info_ts to use by pointer.  While we're
here, add a src file for the cache bin.
2020-03-12 11:54:19 -07:00
David Goldblatt
79f1ee2fc0 Move junking out of arena/tcache code.
This is debug only and we keep it off the fast path.  Moving it here simplifies
the internal logic.

This never tries to junk regions that were shrunk via xallocx.  I think this
is fine for two reasons:
- The shrunk-with-xallocx case is rare.
- We didn't always junk in that case before this diff anyway (it depends on
  the opt settings and extent hooks in effect).
2020-03-12 11:54:19 -07:00
David T. Goldblatt
6c3491ad31 Tcache: Unify bin flush logic.
The small and large pathways share most of their logic, even if some of the
individual operations are different.  We pull out the common logic into a
force-inlined function, and then specialize twice, once for each value of
"small".
2020-02-25 10:21:03 -08:00
David T. Goldblatt
9f4fc27389 Ehooks: Fix a build warning.
We wrote `return some_void_func()` in a function returning void, which is
confusing and triggers warnings on MSVC.
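
A minimal illustration of the pattern and the fix (function names hypothetical):

```
#include <stddef.h>

void do_zero(void *addr, size_t size);

void
hook_zero_before(void *addr, size_t size) {
	return do_zero(addr, size);  /* legal C, but confusing; warns on MSVC */
}

void
hook_zero_after(void *addr, size_t size) {
	do_zero(addr, size);         /* same behavior, no warning */
}
```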
2020-02-25 10:21:03 -08:00
David T. Goldblatt
162c2bcf31 Background thread: take base as a parameter. 2020-02-18 11:22:09 -08:00
David T. Goldblatt
29436fa056 Break prof and tcache knowledge of b0. 2020-02-18 11:22:09 -08:00
David T. Goldblatt
a0c1f4ac57 Rtree: take the base allocator as a parameter.
This facilitates better testing by avoiding mixing of the "real" base with the
base used by the rtree under test.
2020-02-18 11:22:09 -08:00
David T. Goldblatt
7013716aaa Emap: Take (and propagate) a zeroed parameter.
Rtree needs this, and we should really treat them similarly.
2020-02-18 11:22:09 -08:00
David T. Goldblatt
182192f83c Base: Pull into a single header. 2020-02-18 11:22:09 -08:00
David T. Goldblatt
34b7165fde Put szind_t, pszind_t in sz.h. 2020-02-18 11:22:09 -08:00
David Goldblatt
7e6c8a7286 Emap: Standardize naming.
Namespace everything under emap_, always specify what it is we're looking up
(emap_lookup -> emap_edata_lookup), and use "ctx" over "info".
2020-02-17 10:50:51 -08:00
David Goldblatt
ac50c1e44b Emap: Remove direct access to emap internals.
In the process, we do a few local cleanups and optimizations.  In particular,
the size safety check on tcache flush no longer does a redundant load.
2020-02-17 10:50:51 -08:00
David Goldblatt
06e42090f7 Make jemalloc.c use the emap interface.
While we're here, we'll also clean up some style nits.
2020-02-17 10:50:51 -08:00
David Goldblatt
f7d9c6c42d Emap: Move in alloc_ctx lookup functionality. 2020-02-17 10:50:51 -08:00
David Goldblatt
65a54d7714 Emap: Move in szind and slab modifications. 2020-02-17 10:50:51 -08:00
David Goldblatt
9b5d105fc3 Emap: Move in iealloc.
This is logically scoped to the emap.
2020-02-17 10:50:51 -08:00
David Goldblatt
1d449bd9a6 Emap: Internal rtree context setting.
The only case in which sharing an rtree context across extent operations
isn't a no-op is when tsd is unavailable.  But this happens only in
situations like thread death or initialization, and we don't care about
shaving off every possible cycle in such scenarios.
2020-02-17 10:50:51 -08:00
David Goldblatt
08eb1e6c31 Emap: Comments and cleanup
Document some of the public interface, and hide the functions that are no longer
used outside of the emap module.
2020-02-17 10:50:51 -08:00
David Goldblatt
231d1477e5 Rename emap_split_prepare_t -> emap_prepare_t.
Both the split and merge functions use it.
2020-02-17 10:50:51 -08:00
David Goldblatt
0586a56f39 Emap: Move in merge functionality. 2020-02-17 10:50:51 -08:00
David Goldblatt
040eac77cc Tell edatas their creation arena immediately.
This avoids having to pass it in anywhere else.
2020-02-17 10:50:51 -08:00
David Goldblatt
7c7b702064 Emap: Move over metadata splitting logic. 2020-02-17 10:50:51 -08:00
David Goldblatt
44f5f53605 Emap: Move over deregistration functions. 2020-02-17 10:50:51 -08:00
David Goldblatt
6513d9d923 Emap: Move over deregistration boundary functions. 2020-02-17 10:50:51 -08:00
David Goldblatt
9b5ca0b09d Emap: Move in slab interior registration. 2020-02-17 10:50:51 -08:00
David Goldblatt
d05b61db4a Emap: Move extent boundary registration in. 2020-02-17 10:50:51 -08:00
David Goldblatt
ca21ce4071 Emap: Move in write_acquired from extent. 2020-02-17 10:50:51 -08:00
David Goldblatt
01f255161c Add emap, for tracking extent locking. 2020-02-17 10:50:51 -08:00
Qi Wang
ba0e35411c Rework the bin locking around tcache refill / flush.
Previously, tcache fill/flush (as well as small alloc/dalloc on the arena)
could drop the bin lock for slab_alloc and slab_dalloc.  This commit
refactors the logic so that the slab calls happen at the same function /
level as the bin lock / unlock.  The main purpose is to be able to use flat
combining without having to keep track of stack state.

Along the way, this change reduces the locking, especially for slab_dalloc
calls, where nothing happens after the call.
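
A hedged sketch of the post-rework shape (names hypothetical): the slab call sits at the same level as the lock/unlock pair, so the critical section is a single straight-line region, which is what flat combining needs.

```
#include <stddef.h>

typedef struct bin_s bin_t;
typedef struct slab_s slab_t;

void bin_lock(bin_t *bin);
void bin_unlock(bin_t *bin);
slab_t *bin_record_dalloc(bin_t *bin, void *ptr);  /* returns a now-empty slab, if any */
void slab_dalloc(slab_t *slab);

void
bin_dalloc(bin_t *bin, void *ptr) {
	bin_lock(bin);
	slab_t *emptied = bin_record_dalloc(bin, ptr);
	bin_unlock(bin);
	/* The slab call happens after the unlock -- never mid-critical-section. */
	if (emptied != NULL) {
		slab_dalloc(emptied);
	}
}
```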
2020-02-13 23:31:54 -08:00
Kamil Rytarowski
7fd22f7b2e Fix Undefined Behavior in hash.h
hash.h:200:27, left shift of 250 by 24 places cannot be represented in type 'int'
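
A minimal illustration of this class of fix (not the exact jemalloc diff): a `uint8_t` operand is promoted to signed `int`, so shifting a value >= 0x80 left by 24 overflows; widening to an unsigned 32-bit type first makes the shift well defined.

```
#include <stdint.h>

static uint32_t
mix_tail_byte(const uint8_t *tail) {
	/* UB when tail[0] >= 0x80:  return tail[0] << 24; */
	return (uint32_t)tail[0] << 24;
}
```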
2020-02-13 12:25:26 -08:00
Yinan Zhang
9cac3fa8f5 Encapsulate buffer allocation in buffered writer 2020-02-04 13:21:58 -08:00
Yinan Zhang
bdc08b5158 Better naming buffered writer 2020-02-04 13:21:58 -08:00
Qi Wang
c6bfe55857 Update the tsd description. 2020-02-04 13:07:05 -08:00
Qi Wang
e896522616 Abbreviate thread-event to te. 2020-02-04 13:07:05 -08:00
Qi Wang
5e500523a0 Remove thread_event_boot(). 2020-02-04 00:18:15 -08:00
Qi Wang
97dd79db6c Implement deallocation events.
Make the event module accept two event types, and pass the event context
around.  Use bytes-based events to trigger tcache GC on deallocation, and get
rid of the tcache ticker.
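
A hedged sketch of a bytes-based trigger (names and the GC interval are assumptions): each stream counts bytes down to its next event, and the deallocation stream's handler runs tcache GC.

```
#include <stdint.h>

#define TE_GC_INTERVAL_BYTES ((uint64_t)1 << 16)  /* assumed interval */

typedef struct te_ctx_s {
	uint64_t bytes_until_event;  /* countdown for this stream */
} te_ctx_t;

static void
te_event_advance(te_ctx_t *ctx, uint64_t usize, void (*handler)(te_ctx_t *)) {
	if (usize >= ctx->bytes_until_event) {
		handler(ctx);  /* e.g. tcache GC on the deallocation stream */
		ctx->bytes_until_event = TE_GC_INTERVAL_BYTES;
	} else {
		ctx->bytes_until_event -= usize;
	}
}
```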
2020-02-04 00:18:15 -08:00
Qi Wang
974222c626 Add safety check on sdallocx slow / sampled path. 2020-01-31 00:04:22 -08:00
Qi Wang
88d9eca848 Enforce page alignment for sampled allocations.
This allows sampled allocations to be identified by their alignment, thereby
enabling sized deallocation regardless of cache_oblivious.
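
A hedged sketch of the check this enables (names and page size are assumptions): if every sampled allocation is page-aligned and ordinary small regions generally are not, a cheap alignment test rules out sampling without a metadata lookup.

```
#include <stdbool.h>
#include <stdint.h>

#define PAGE_MASK ((uintptr_t)4095)  /* assumes 4 KiB pages */

static bool
ptr_possibly_sampled(const void *ptr) {
	return ((uintptr_t)ptr & PAGE_MASK) == 0;
}
```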
2020-01-31 00:04:22 -08:00
Qi Wang
0f552ed673 Don't purge huge extents when decay is off. 2020-01-30 14:40:38 -08:00
Qi Wang
38a48e5741 Set reentrancy to 1 for tsd_state_purgatory.
Reentrancy is already set for other non-nominal tsd states (reincarnated and
minimal_initialized).  Add purgatory to be safe and consistent.
2020-01-30 13:55:20 -08:00
Qi Wang
88b0e03a4e Implement opt.stats_interval and the _opts options.
Add options stats_interval and stats_interval_opts to allow interval-based
stats printing.  This provides an easy way to collect stats without code
changes, because opt.stats_print may not work (some binaries never exit).
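
Usage sketch via the compiled-in `malloc_conf` channel (the interval value is illustrative and is measured in bytes of allocation activity):

```
/* Print stats roughly once per GiB of allocation activity. */
const char *malloc_conf = "stats_interval:1073741824";
```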
2020-01-29 09:57:55 -08:00
Qi Wang
d71a145ec1 Change prof_accum_t to counter_accum_t for general purpose. 2020-01-29 09:57:55 -08:00
Yinan Zhang
f81341a48b Fallback to unbuffered printing if OOM 2020-01-21 17:09:44 -08:00
David Goldblatt
bd3be8e0b1 Remove commit parameter to ecache functions.
No caller ever wants uncommitted memory.
2020-01-17 10:54:56 -08:00
Qi Wang
dab81bd315 Rework and fix the assertions on malloc fastpath.
The first half of the malloc fastpath may execute before malloc_init.  Make the
assertions work in that case.
2020-01-14 15:00:41 -08:00
Yinan Zhang
2b604a3016 Record request size in prof recent entries 2020-01-10 12:01:01 -08:00
Yinan Zhang
40a391408c Define constructor for buffered writer argument 2020-01-10 11:59:02 -08:00
Yinan Zhang
6d8e616902 Make buffered writer an independent module 2020-01-10 11:59:02 -08:00
Yinan Zhang
6b6b4709b3 Unify buffered writer naming 2020-01-09 14:31:31 -08:00
Yinan Zhang
9a60cf54ec Last-N profiling mode 2019-12-30 15:58:57 -08:00
Yinan Zhang
7a27a05940 Delete tdata states used for cleanup 2019-12-30 15:58:57 -08:00
Yinan Zhang
e98ddf7987 Fix unlikely condition in arena_prof_info_get() 2019-12-30 15:58:57 -08:00
Yinan Zhang
3fa142cf39 Remove _externs from prof internal header names 2019-12-23 11:14:15 -08:00
Yinan Zhang
112dc36dd5 Handle log_mtx during forking 2019-12-20 17:17:48 -08:00
Yinan Zhang
ea42174d07 Refactor profiling headers 2019-12-20 17:17:48 -08:00
David Goldblatt
6342da0970 Ehooks: Further optimize default merge case.
This avoids the cost of an iealloc in cases where the user uses the default
merge hook without using the default extent hooks.
2019-12-20 10:18:40 -08:00
David Goldblatt
e210ccc57e Move extent2 -> extent.
Eventually, we may fully break off the extent module; but not for some time.  If
it's going to live on in a non-transitory state, it might as well have the nicer
name.
2019-12-20 10:18:40 -08:00
David Goldblatt
2f4fa80414 Rename extents -> ecache. 2019-12-20 10:18:40 -08:00
David Goldblatt
56cc56b692 Break extent split dependence on arena. 2019-12-20 10:18:40 -08:00
David Goldblatt
0aa9769fb0 Break commit functions' arena dependence 2019-12-20 10:18:40 -08:00
David Goldblatt
576d7047ab Ecache: Should know its arena_ind.
What we call an arena_ind is really the index associated with some particular
set of ehooks; the arena is just the user-visible portion of that.  Making this
explicit, and reframing checks in terms of that, makes the code simpler and
cleaner, and helps us avoid passing the arena itself all throughout extent code.

This lets us put back an arena-specific assert.
2019-12-20 10:18:40 -08:00
David Goldblatt
372042a082 Remove merge dependence on the arena. 2019-12-20 10:18:40 -08:00
David Goldblatt
9cad5639ff Ehooks: remove arena_ind parameter.
This lives within the ehooks_t now, so that callers don't need to know it.
2019-12-20 10:18:40 -08:00
David Goldblatt
57fe99d4be Move relevant index into the ehooks_t itself.
It's always passed into the ehooks; keeping it colocated lets us avoid passing
the arena everywhere.
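
A hedged sketch of the resulting shape (field and function names are assumptions):

```
typedef struct extent_hooks_s extent_hooks_t;  /* user-installed hook table */

/* The index lives next to the hooks it belongs to, so callers pass one
 * pointer instead of an (extent_hooks, arena_ind) pair. */
typedef struct ehooks_s {
	extent_hooks_t *ptr;
	unsigned ind;
} ehooks_t;

static inline unsigned
ehooks_ind_get(const ehooks_t *ehooks) {
	return ehooks->ind;
}
```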
2019-12-20 10:18:40 -08:00
David Goldblatt
c792f3e4ab edata_cache: Remember the associated base_t.
This will save us some trouble down the line when we stop passing arena pointers
everywhere; we won't have to pass around a base_t pointer either.
2019-12-20 10:18:40 -08:00
David Goldblatt
ae23e5f426 Unify extent_alloc_wrapper with the other wrappers.
Previously, it was really more like extents_alloc (it looked in an ecache
for an extent to reuse as its primary allocation pathway).  Make that pathway
more explicitly like extents_alloc, and rename extent_alloc_wrapper_hard
accordingly.
2019-12-20 10:18:40 -08:00