server-skynet-source-3rd-je.../include/jemalloc/internal/extent.h
Jason Evans cbf3a6d703 Move centralized chunk management into arenas.
Migrate all centralized data structures related to huge allocations and
recyclable chunks into arena_t, so that each arena can manage huge
allocations and recyclable virtual memory completely independently of
other arenas.

Add chunk node caching to arenas, in order to avoid contention on the
base allocator.
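
As a rough standalone sketch of that pattern (hypothetical names such as my_arena_t and arena_node_alloc; this is not jemalloc's actual code), a per-arena cache of tracking nodes lets the common path reuse nodes without touching a shared base allocator:

#include <stdlib.h>

typedef struct node_s node_t;
struct node_s {
        node_t  *next;          /* Free-list linkage while cached. */
        /* ... tracking payload would live here ... */
};

/* Hypothetical stand-in for a per-arena structure; not jemalloc's arena_t. */
typedef struct {
        node_t  *node_cache;    /* Arena-local cache of free nodes. */
} my_arena_t;

static node_t *
arena_node_alloc(my_arena_t *arena)
{
        node_t *node = arena->node_cache;

        if (node != NULL) {
                /* Fast path: reuse a cached node; no shared lock needed. */
                arena->node_cache = node->next;
                return (node);
        }
        /* Slow path: stand-in for allocating from the shared base allocator. */
        return (malloc(sizeof(node_t)));
}

static void
arena_node_dalloc(my_arena_t *arena, node_t *node)
{
        /* Keep the node on this arena's cache for later reuse. */
        node->next = arena->node_cache;
        arena->node_cache = node;
}

int
main(void)
{
        my_arena_t arena = {NULL};
        node_t *n = arena_node_alloc(&arena);

        arena_node_dalloc(&arena, n);   /* Now cached for the next request. */
        return (0);
}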

Use chunks_rtree to look up huge allocations rather than a red-black
tree.  Maintain a per-arena unsorted list of huge allocations (which
will be needed to enumerate huge allocations during arena reset).
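
A minimal standalone sketch of the unsorted-list side of this (hypothetical names; not jemalloc's ql.h or rtree code): the tracking node only needs intrusive linkage, insertion is O(1) at the head, and enumeration is a plain walk, since address lookups are served by the rtree rather than by any ordering of this list:

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct node_s node_t;
struct node_s {
        void    *addr;          /* Base address of the huge allocation. */
        size_t  size;           /* Allocation size. */
        node_t  *ql_next;       /* Intrusive, unsorted list linkage. */
};

typedef struct {
        node_t  *head;          /* Unsorted list of one arena's huge allocations. */
} huge_list_t;

static void
huge_list_insert(huge_list_t *list, node_t *node)
{
        /* Order is irrelevant, so push at the head in O(1). */
        node->ql_next = list->head;
        list->head = node;
}

static void
huge_list_enumerate(const huge_list_t *list)
{
        /* Walk every huge allocation, as an arena reset would need to. */
        for (const node_t *n = list->head; n != NULL; n = n->ql_next)
                printf("huge: %p (%zu bytes)\n", n->addr, n->size);
}

int
main(void)
{
        huge_list_t list = {NULL};
        node_t a = {malloc(4096), 4096, NULL};
        node_t b = {malloc(8192), 8192, NULL};

        huge_list_insert(&list, &a);
        huge_list_insert(&list, &b);
        huge_list_enumerate(&list);
        free(a.addr);
        free(b.addr);
        return (0);
}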

Remove the --enable-ivsalloc option, make ivsalloc() always available,
and use it for size queries if --enable-debug is enabled.  The only
practical implications of this removal are that 1) ivsalloc() is now
always available during live debugging (and the underlying radix tree is
available during core-based debugging), and 2) size query validation can
no longer be enabled independent of --enable-debug.

Remove the stats.chunks.{current,total,high} mallctls, and replace their
underlying statistics with simpler atomically updated counters used
exclusively for gdump triggering.  These statistics are no longer very
useful because each arena manages chunks independently, and per-arena
statistics provide similar information.

Simplify chunk synchronization code, now that base chunk allocation
cannot cause recursive lock acquisition.
2015-02-12 00:15:56 -08:00

/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

typedef struct extent_node_s extent_node_t;

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS

/* Tree of extents. */
struct extent_node_s {
        /* Arena from which this extent came, if any. */
        arena_t                 *arena;

        /* Pointer to the extent that this tree node is responsible for. */
        void                    *addr;

        /*
         * Total region size, or 0 if this node corresponds to an arena chunk.
         */
        size_t                  size;

        /*
         * 'prof_tctx' and 'zeroed' are never needed at the same time, so
         * overlay them in order to fit extent_node_t in one cache line.
         */
        union {
                /* Profile counters, used for huge objects. */
                prof_tctx_t     *prof_tctx;

                /* True if zero-filled; used by chunk recycling code. */
                bool            zeroed;
        };

        union {
                /* Linkage for the size/address-ordered tree. */
                rb_node(extent_node_t)  link_szad;

                /* Linkage for huge allocations and cached chunks nodes. */
                ql_elm(extent_node_t)   link_ql;
        };

        /* Linkage for the address-ordered tree. */
        rb_node(extent_node_t)  link_ad;
};
typedef rb_tree(extent_node_t) extent_tree_t;

#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

rb_proto(, extent_tree_szad_, extent_tree_t, extent_node_t)
rb_proto(, extent_tree_ad_, extent_tree_t, extent_node_t)

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
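
The first union in extent_node_s above overlays 'prof_tctx' and 'zeroed' so the node fits in one cache line. A standalone sketch of that technique (hypothetical names; not jemalloc code; assuming 8-byte pointers and a 64-byte cache line), with a compile-time guard:

#include <stdbool.h>
#include <stddef.h>

#define CACHELINE 64    /* Assumed cache-line size. */

typedef struct node_s node_t;
struct node_s {
        void    *owner;                 /* Stand-in for arena_t *arena. */
        void    *addr;
        size_t  size;
        /*
         * The two members below are never live at the same time, so a union
         * lets them share a single pointer-sized slot instead of two.
         */
        union {
                void    *profile_data;  /* Stand-in for prof_tctx_t *. */
                bool    zeroed;
        };
        void    *link_a[2];             /* Stand-in for the rb_node/ql_elm union. */
        void    *link_b[2];             /* Stand-in for the second rb_node. */
};

/* Compilation fails if the node ever outgrows a single cache line. */
_Static_assert(sizeof(node_t) <= CACHELINE, "node_t exceeds one cache line");

int
main(void)
{
        return (0);
}

With those assumptions the members total exactly 64 bytes; giving 'zeroed' its own slot would spill the node into a second cache line.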