Existing backtrace implementations skip native stack frames from runtimes like
Python. The hook allows augmenting the backtraces to attribute allocations to
native functions in heap profiles.
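As a rough illustration of how such a hook might look and be installed (the
hook type and the experimental mallctl name here are assumptions about
jemalloc's interface, not a guaranteed API):

    #include <jemalloc/jemalloc.h>

    /* Assumed hook shape: receives the captured backtrace buffer and may
     * rewrite it, e.g. splicing in runtime-level frames. */
    typedef void (*prof_backtrace_hook_t)(void **vec, unsigned *len,
        unsigned max_len);

    static void
    my_backtrace_hook(void **vec, unsigned *len, unsigned max_len) {
        /* Augment vec[0 .. *len) in place, up to max_len entries. */
    }

    void
    install_backtrace_hook(void) {
        prof_backtrace_hook_t hook = my_backtrace_hook;
        /* Mallctl name assumed from the experimental hooks namespace. */
        mallctl("experimental.hooks.prof_backtrace", NULL, NULL, &hook,
            sizeof(hook));
    }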
Prof initialization is done only when opt_prof is true. This change makes sure
the prof_* mallctls have only limited read access (i.e. no access to prof
internals) when opt_prof is false.
In addition, initialize the global prof mutexes even if opt_prof is false. This
makes sure the mutex stats are set properly.
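For example, a caller can probe opt.prof through the standard mallctl
interface before touching anything prof-internal; a minimal sketch:

    #include <jemalloc/jemalloc.h>
    #include <stdbool.h>

    /* When opt_prof is false, only limited reads like this one are
     * expected to succeed; prof internals stay inaccessible. */
    bool
    prof_enabled(void) {
        bool enabled = false;
        size_t sz = sizeof(enabled);
        if (mallctl("opt.prof", &enabled, &sz, NULL, 0) != 0) {
            return false;
        }
        return enabled;
    }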
This change allows every allocator conforming to the PAI to communicate that it
has deferred some work for the future. Without it, if a background thread goes
into an indefinite sleep, there is no way to notify it of upcoming deferred
work.
Previously, the calculation of sleep time between wakeups was implemented
within background_thread. This mixed decay- and HPA-specific logic into the
background thread implementation. With this change, the background thread
delegates the calculation to the arena, which in turn delegates it to the PAI.
The next step is to implement the actual calculation of time until deferred
work in the HPA.
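A minimal sketch of the delegation chain described above, with invented names
that only approximate the real interfaces:

    #include <stdint.h>

    /* Sketch: the PAI exposes a "time until deferred work" query, so the
     * background thread no longer needs decay- or HPA-specific logic. */
    typedef struct pai_s pai_t;
    struct pai_s {
        /* ... alloc/dalloc operations elided ... */
        uint64_t (*time_until_deferred_work)(pai_t *self);
    };

    typedef struct arena_s {
        pai_t *pai;
    } arena_t;

    static uint64_t
    arena_time_until_deferred_work(arena_t *arena) {
        return arena->pai->time_until_deferred_work(arena->pai);
    }

    /* Background thread: sleep no longer than the soonest deferred work. */
    static uint64_t
    background_thread_sleep_ns(arena_t *arena) {
        return arena_time_until_deferred_work(arena);
    }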
Specifically, this change allows the default alloc hook to be used during
arenas.create. One use case is to invoke the default alloc hook from a
customized hook: the default hooks can be read out of a default arena, and
customized ones can then be created based on them. Note that mixing the default
with customized hooks is not recommended, and should only be considered when
the customization is simple and straightforward.
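A sketch of that pattern, using the documented extent_hooks_t interface and the
arenas.create mallctl (error handling elided; details illustrative):

    #include <jemalloc/jemalloc.h>
    #include <stdbool.h>
    #include <stddef.h>

    static extent_hooks_t default_hooks;
    static extent_hooks_t custom_hooks;

    /* Customized alloc hook that chains to the default one. */
    static void *
    my_alloc(extent_hooks_t *hooks, void *new_addr, size_t size,
        size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {
        /* ... custom bookkeeping ... */
        return default_hooks.alloc(&default_hooks, new_addr, size,
            alignment, zero, commit, arena_ind);
    }

    void
    create_custom_arena(void) {
        /* Read the default hooks out of a default arena (arena 0). */
        extent_hooks_t *hooks;
        size_t sz = sizeof(hooks);
        mallctl("arena.0.extent_hooks", &hooks, &sz, NULL, 0);

        default_hooks = *hooks;
        custom_hooks = default_hooks;
        custom_hooks.alloc = my_alloc;

        /* Create a new arena with the customized hook table. */
        unsigned arena_ind;
        sz = sizeof(arena_ind);
        extent_hooks_t *new_hooks = &custom_hooks;
        mallctl("arenas.create", &arena_ind, &sz, &new_hooks,
            sizeof(new_hooks));
    }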
By force-inlining everything that would otherwise be a macro, we get the same
effect (it's not clear that this is actually a good idea in the first place,
but it avoids making any changes to the existing performance profile).
This makes the code more maintainable (in anticipation of subsequent changes),
as well as making performance profiles and debug info more readable: we get
"real" line numbers, instead of everything pointing at the macro definitions of
the associated functions.
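To make this concrete, a hypothetical before/after (names are invented;
jemalloc's JEMALLOC_ALWAYS_INLINE plays the role of the attribute wrapper
defined here for self-containment):

    /* Before: logic lives in a macro, so every profile sample and debug
     * frame points at the macro definition. */
    #define NODE_NEXT(node) ((node)->link.next)

    /* After: a force-inlined function with (ideally) identical codegen,
     * but real line numbers in profiles and debug info. */
    #define ALWAYS_INLINE static inline __attribute__((__always_inline__))

    typedef struct node_s node_t;
    struct node_s {
        struct { node_t *next; } link;
    };

    ALWAYS_INLINE node_t *
    node_next(node_t *node) {
        return node->link.next;
    }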
The edata_cache_small had a fill/flush heuristic. In retrospect, this was a
premature optimization; more testing indicates that an unbounded cache is
effectively fine here, and moreover we spend a nontrivial amount of time doing
unnecessary filling/flushing.
As the HPA takes on a larger and larger fraction of all allocations, any
theoretical differences in allocation patterns should shrink. The HPA is more
efficient with its metadata in general, so it still comes out ahead on metadata
usage anyway.
We wait a while after deciding a huge extent should get hugified to see if it
gets purged before long. This avoids hugifying extents that might shortly get
dehugified for purging.
Rename the hpa_dehugification_threshold option's support code and reuse it for
this, since that option is now ignored.
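In sketch form, the policy is just a time-based gate (names and the delay
constant are invented):

    #include <stdbool.h>
    #include <stdint.h>

    /* Only hugify once an extent has been a hugification candidate for a
     * while without getting purged in the meantime. */
    static bool
    hugify_allowed(uint64_t now_ns, uint64_t candidate_since_ns) {
        const uint64_t hugify_delay_ns = 10ULL * 1000 * 1000 * 1000;
        return now_ns - candidate_since_ns >= hugify_delay_ns;
    }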
This is a simple multi-producer, single-consumer queue. The intended use case
is in the HPA, as we begin supporting hpdatas that move between hpa_shards. We
take just a single CAS as the cost to send a message (or a batch of messages) in
the low-contention case, and lock-freedom lets us avoid some lock-ordering
issues.
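A minimal C11 sketch of the shape of such a queue (the real implementation is
intrusive and macro-generated, but the idea is the same): producers CAS new
nodes onto a shared list head, and the single consumer takes the whole list
with one exchange and reverses it into FIFO order.

    #include <stdatomic.h>
    #include <stddef.h>

    typedef struct node_s node_t;
    struct node_s {
        node_t *next;
        /* ... payload ... */
    };

    typedef struct mpsc_queue_s {
        _Atomic(node_t *) head; /* LIFO list of pending messages. */
    } mpsc_queue_t;

    /* Producer side: a single CAS in the uncontended case. Sending a
     * pre-linked batch costs the same one CAS. */
    static void
    mpsc_queue_push(mpsc_queue_t *q, node_t *n) {
        node_t *head = atomic_load_explicit(&q->head, memory_order_relaxed);
        do {
            n->next = head;
        } while (!atomic_compare_exchange_weak_explicit(&q->head, &head, n,
            memory_order_release, memory_order_relaxed));
    }

    /* Consumer side: claim everything at once, then reverse into FIFO
     * order for processing. */
    static node_t *
    mpsc_queue_pop_batch(mpsc_queue_t *q) {
        node_t *batch = atomic_exchange_explicit(&q->head, NULL,
            memory_order_acquire);
        node_t *out = NULL;
        while (batch != NULL) {
            node_t *next = batch->next;
            batch->next = out;
            out = batch;
            batch = next;
        }
        return out;
    }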
This change pulls the SEC options into a struct, which simplifies their handling
across various modules (e.g. PA needs to forward on SEC options from the
malloc_conf string, but it doesn't really need to know their names). While
we're here, make some of the fixed constants configurable, and unify naming from
the configuration options to the internals.
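The rough shape of such an options struct (field names are illustrative, not a
guaranteed match for the real SEC options):

    #include <stddef.h>

    /* All SEC tunables travel together, so modules like PA can forward
     * them opaquely without knowing individual option names. */
    typedef struct sec_opts_s {
        size_t nshards;    /* Number of cache shards. */
        size_t max_alloc;  /* Largest size the SEC will cache. */
        size_t max_bytes;  /* Per-shard byte limit before flushing. */
    } sec_opts_t;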
Currently that just means max_alloc, but we're about to add more. While we're
touching these lines anyway, tweak things to be more in line with the tests.
This finishes the refactoring of the HPA/psset interactions the past few commits
have been building towards.
Rather than the HPA removing and then reinserting hpdatas, it simply begins
updates and ends them. These updates can set flags on the hpdata that prevent
it from being returned for certain types of requests. For example, it can call
hpdata_alloc_allowed_set(hpdata, false) during an update, at which point the
given hpdata will no longer be returned for psset_pick_alloc requests.
This has various benefits:
- It maintains stats correctness during purges and hugifies.
- It allows simpler and more explicit concurrency control for the various
special cases (e.g. allocations are disallowed during purge, but not during
hugify).
- It lets allocations and deallocations avoid disturbing the purging and
hugification orderings. If an hpdata "loses its place" in one of the queues
just due to an alloc / dalloc, it can result in pathological edge cases where
very hot, very full hugepages never get hugified (and cold extents on the
same hugepage as hot ones never get purged).
The key benefit though is that tracking hpdatas to be purged / hugified in a
principled way will let us do delayed purging and hugification. Eventually this
will let us move these operations to background threads, but in the short term
the benefit is that it will let us have global purging policies (e.g. purge when
the entire arena has too many dirty pages, rather than any particular hugepage).
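Schematically, a purge under this protocol looks something like the following
(signatures assumed; hpdata_alloc_allowed_set is the call named above):

    /* The hpdata stays in the psset throughout, but is flagged so it
     * won't be handed out for allocations mid-purge. */
    static void
    purge_hpdata(psset_t *psset, hpdata_t *hpdata) {
        psset_update_begin(psset, hpdata);
        hpdata_alloc_allowed_set(hpdata, false); /* No allocs during purge. */
        /* ... purge the hpdata's dirty pages ... */
        hpdata_alloc_allowed_set(hpdata, true);
        psset_update_end(psset, hpdata);
    }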
We're moving towards a world in which purging decisions are less rigidly
enforced at a single-hugepage level. In that world, it makes sense to keep
around some hpdatas which are not completely purged, in which case we'll need to
track them.
Really, this isn't a functional change, just a naming change. We start thinking
of pageslabs as being always in the psset. What we used to think of as removal
is now thought of as being in the psset, but in the process of being updated
(and therefore, unavailable for serving new allocations).
This is in preparation for subsequent changes to support deferred purging;
allocations will still be in the psset for the purposes of choosing when to
purge, but not for purposes of allocation/deallocation.
This is really only useful for human consumption. Correspondingly, emit it only
in the human-readable stats, and let everybody else compute it from the
hugepage size and nactive.
Previously, we would purge a hugepage only when it was completely empty. With
this change, we can purge even when it is only partially empty. Although the
heuristic here is still fairly primitive, this infrastructure can scale to
become more advanced.
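As a toy version of the kind of heuristic this infrastructure enables (the 25%
threshold is invented):

    #include <stdbool.h>
    #include <stddef.h>

    /* Old policy: purge only when completely empty, i.e.
     *     nactive == 0 && ndirty > 0.
     * New policy (sketch): purge once dirty pages pass a threshold, even
     * if some pages on the hugepage remain active. */
    static bool
    should_purge(size_t ndirty, size_t hugepage_npages) {
        return ndirty >= hugepage_npages / 4;
    }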