Now that all merging goes through try_acquire_edata_neighbor, the mergeability
checks (including the head state check) are done before reaching the merge hook.
In other words, the merge hook will never be called if the head states do not agree.
Instead of passing down the new_addr, pass down the active edata, which allows us
to always use neighbor-acquiring semantics. In other words, this gives us both
the original edata and the neighbor address. With this change, only neighbors of a
"known" edata can be acquired, i.e. acquiring an edata based on an arbitrary
address is no longer possible.
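As a rough illustration of the neighbor-acquiring semantics (the type and helper
below are stand-ins for this sketch, not the actual edata API):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in for an extent's metadata; illustrative only. */
    typedef struct edata_s {
        uintptr_t base;
        size_t    size;
    } edata_t;

    /*
     * Given the active edata, the forward neighbor starts right past its
     * end, and the backward neighbor is whatever extent contains the byte
     * just before its base.  Every lookup therefore starts from an edata
     * we already hold; there is no way to name an arbitrary address.
     */
    static uintptr_t
    neighbor_lookup_addr(const edata_t *edata, bool forward) {
        return forward ? edata->base + edata->size : edata->base - 1;
    }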
This avoids the addr-based mutexes (i.e. the mutex_pool) and instead relies on
the metadata tracked in the rtree leaf: the head state and extent_state. Before
accessing the neighbor edata (e.g. for coalescing), those states are verified
first -- only neighbor edatas from the same arena and in the same state will be
accessed.
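A minimal sketch of the kind of check this implies, with placeholder types
standing in for what the rtree leaf records (the head-state rule is simplified
here):

    #include <stdbool.h>

    /* Placeholder for the per-extent metadata readable from the rtree leaf. */
    typedef enum { STATE_ACTIVE, STATE_DIRTY, STATE_MUZZY, STATE_RETAINED } state_t;
    typedef struct {
        unsigned arena_ind;
        state_t  state;
        bool     is_head;
    } leaf_meta_t;

    /*
     * A neighbor may only be acquired if it belongs to the same arena and
     * is in the same state; in this sketch, a forward neighbor whose head
     * bit is set is also off limits.  If any check fails, the coalesce
     * attempt backs off without ever touching the neighbor's edata.
     */
    static bool
    neighbor_acquirable(const leaf_meta_t *self, const leaf_meta_t *neighbor,
        bool forward) {
        if (neighbor->arena_ind != self->arena_ind ||
            neighbor->state != self->state) {
            return false;
        }
        if (forward && neighbor->is_head) {
            return false;
        }
        return true;
    }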
When retain is enabled and extent_grow_retained fails (e.g. due to split hook
failures), we try extent_alloc_wrapper as the last resort. Set the is_head
bit in that case to be consistent. The extent allocated this way will be
retained properly, but not merged with other extents.
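Sketched with hypothetical helper names (and trivial stubs so the sketch
compiles), the fallback looks roughly like:

    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-ins for this sketch only. */
    typedef struct edata_s { bool is_head; } edata_t;
    static edata_t *grow_retained_sketch(size_t size) { (void)size; return NULL; }
    static edata_t *alloc_wrapper_sketch(size_t size) {
        static edata_t e; (void)size; return &e;
    }

    static edata_t *
    alloc_with_fallback(size_t size) {
        edata_t *edata = grow_retained_sketch(size);
        if (edata != NULL) {
            return edata;
        }
        /* Last resort: a fresh mapping.  Setting the head bit keeps the
         * head state consistent, so the extent can be retained on
         * deallocation but is never merged with other extents. */
        edata = alloc_wrapper_sketch(size);
        if (edata != NULL) {
            edata->is_head = true;
        }
        return edata;
    }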
This was promised in the review of the introduction of geom_grow, but would have
been painful to do there because of the series that introduced it. Now that
those are committed, renaming is easier.
This will be the centralized component of the coming hugepage allocator; the
source of larger chunks of memory from which smaller ones can be obtained.
For now, this is just a stub containing the ecaches, with no surrounding code
changed. Eventually, all the core allocator bits will be moved in over the
subsequent stack of commits.
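With placeholder type and field names (the actual definitions are whatever the
source declares), the stub amounts to something like:

    /* Placeholder types/names; illustrative only. */
    typedef struct ecache_s { int placeholder; } ecache_t;

    /*
     * The stub just bundles the ecaches -- the holding areas for extents in
     * their various states -- out of which smaller allocations will
     * eventually be carved.
     */
    typedef struct central_shard_s {
        ecache_t ecache_dirty;
        ecache_t ecache_muzzy;
        ecache_t ecache_retained;
    } central_shard_t;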
This distinguishes it from the PA mapped stat, which is now named "pa_mapped" to
avoid confusion. The (derived) arena stat includes base memory, and the PA stat
is no longer partially derived.
This pulls out the various abstractions where some stats counter is sometimes an
atomic, sometimes a plain variable, sometimes always protected by a lock,
sometimes lock-protected for reads but not writes, etc. With this change, these
cases are treated consistently, and the access patterns are tagged.
In the process, we fix a few missed-update bugs (where one caller assumes
"protected-by-a-lock" semantics and another does not).
This is logically at a higher level of the stack; extent should just allocate
things at the page level; it shouldn't care exactly why the caller wants a
given number of pages.