Disentangle arena and extent locking.

Refactor arena and extent locking protocols such that arena and
extent locks are never held when calling into the extent_*_wrapper()
API.  This requires extra care during purging since the arena lock no
longer protects the inner purging logic.  It also requires extra care to
protect extents from being merged with adjacent extents.

Convert extent_t's 'active' flag to an enumerated 'state', so that
retained extents are explicitly marked as such, rather than depending on
ring linkage state.

Refactor the extent collections (and their synchronization) for cached
and retained extents into extents_t.  Incorporate LRU functionality to
support purging.  Incorporate page count accounting, which replaces
arena->ndirty and arena->stats.retained.

Assert that no core locks are held when entering any internal
[de]allocation functions.  This is in addition to existing assertions
that no locks are held when entering external [de]allocation functions.

Audit and document synchronization protocols for all arena_t fields.

This fixes a potential deadlock due to recursive allocation during
gdump, in a similar fashion to b49c649bc1
(Fix lock order reversal during gdump.), but with a necessarily much
broader code impact.
Author: Jason Evans
Date:   2017-01-29 21:57:14 -08:00
Commit: d27f29b468 (parent 1b6e43507e)
19 changed files with 772 additions and 650 deletions


@@ -1,6 +1,12 @@
 #ifndef JEMALLOC_INTERNAL_EXTENT_STRUCTS_H
 #define JEMALLOC_INTERNAL_EXTENT_STRUCTS_H

+typedef enum {
+	extent_state_active	= 0,
+	extent_state_dirty	= 1,
+	extent_state_retained	= 2
+} extent_state_t;
+
 /* Extent (span of pages).  Use accessor functions for e_* fields. */
 struct extent_s {
 	/* Arena from which this extent came, if any. */
@@ -32,8 +38,8 @@ struct extent_s {
 	 */
 	size_t			e_sn;

-	/* True if extent is active (in use). */
-	bool			e_active;
+	/* Extent state. */
+	extent_state_t		e_state;

 	/*
 	 * The zeroed flag is used by extent recycling code to track whether
@@ -67,18 +73,48 @@ struct extent_s {
 	};

 	/*
-	 * Linkage for arena's extents_dirty and arena_bin_t's slabs_full rings.
+	 * List linkage, used by a variety of lists:
+	 * - arena_bin_t's slabs_full
+	 * - extents_t's LRU
+	 * - stashed dirty extents
+	 * - arena's large allocations
+	 * - arena's extent structure freelist
 	 */
-	qr(extent_t)		qr_link;
+	ql_elm(extent_t)	ql_link;

-	union {
-		/* Linkage for per size class sn/address-ordered heaps. */
-		phn(extent_t)	ph_link;
-
-		/* Linkage for arena's large and extent_cache lists. */
-		ql_elm(extent_t) ql_link;
-	};
+	/* Linkage for per size class sn/address-ordered heaps. */
+	phn(extent_t)		ph_link;
 };

+typedef ql_head(extent_t) extent_list_t;
 typedef ph(extent_t) extent_heap_t;

+/* Quantized collection of extents, with built-in LRU queue. */
+struct extents_s {
+	malloc_mutex_t		mtx;
+
+	/*
+	 * Quantized per size class heaps of extents.
+	 *
+	 * Synchronization: mtx.
+	 */
+	extent_heap_t		heaps[NPSIZES+1];
+
+	/*
+	 * LRU of all extents in heaps.
+	 *
+	 * Synchronization: mtx.
+	 */
+	extent_list_t		lru;
+
+	/*
+	 * Page sum for all extents in heaps.
+	 *
+	 * Synchronization: atomic.
+	 */
+	size_t			npages;
+
+	/* All stored extents must be in the same state. */
+	extent_state_t		state;
+};
+
 #endif /* JEMALLOC_INTERNAL_EXTENT_STRUCTS_H */