Implement per-arena base allocators.

Add/rename related mallctls:
- Add stats.arenas.<i>.base.
- Rename stats.arenas.<i>.metadata to stats.arenas.<i>.internal.
- Add stats.arenas.<i>.resident.

Modify the arenas.extend mallctl to take an optional (extent_hooks_t *)
argument so that all base allocations can be serviced by the specified
extent hooks.

This resolves #463.
Jason Evans, 2016-12-22 16:39:10 -06:00
parent a6e86810d8, commit a0dd3a4483
18 changed files with 957 additions and 341 deletions
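
As an illustration of the revised mallctl, here is a minimal sketch of creating an arena whose extent and base allocations go through application-supplied hooks; my_hooks and create_arena_with_hooks are hypothetical names, not part of this change:

#include <jemalloc/jemalloc.h>

/* Hypothetical application-supplied extent hook table. */
extern extent_hooks_t my_hooks;

unsigned
create_arena_with_hooks(void)
{
	extent_hooks_t *hooks = &my_hooks;
	unsigned arena_ind;
	size_t sz = sizeof(arena_ind);

	/*
	 * Passing the hooks pointer as newp makes all of the new arena's
	 * extent and base (metadata) allocations use the hooks from the
	 * start; passing newp=NULL retains the default hooks.
	 */
	if (mallctl("arenas.extend", (void *)&arena_ind, &sz,
	    (void *)&hooks, sizeof(hooks)) != 0)
		return ((unsigned)-1);
	return (arena_ind);
}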

--- a/Makefile.in
+++ b/Makefile.in
@@ -156,6 +156,7 @@ TESTS_UNIT := \
 	$(srcroot)test/unit/a0.c \
 	$(srcroot)test/unit/arena_reset.c \
 	$(srcroot)test/unit/atomic.c \
+	$(srcroot)test/unit/base.c \
 	$(srcroot)test/unit/bitmap.c \
 	$(srcroot)test/unit/ckh.c \
 	$(srcroot)test/unit/decay.c \

--- a/doc/jemalloc.xml.in
+++ b/doc/jemalloc.xml.in
@@ -1500,9 +1500,9 @@ malloc_conf = "xmalloc:true";]]></programlisting>
         to control allocation for arenas created via <link
         linkend="arenas.extend"><mallctl>arenas.extend</mallctl></link> such
         that all extents originate from an application-supplied extent allocator
-        (by setting custom extent hook functions just after arena creation), but
-        the automatically created arenas may have already created extents prior
-        to the application having an opportunity to take over extent
+        (by specifying the custom extent hook functions during arena creation),
+        but the automatically created arenas will have already created extents
+        prior to the application having an opportunity to take over extent
         allocation.</para>

         <programlisting language="C"><![CDATA[
@@ -1832,11 +1832,12 @@ struct extent_hooks_s {
       <varlistentry id="arenas.extend">
         <term>
           <mallctl>arenas.extend</mallctl>
-          (<type>unsigned</type>)
-          <literal>r-</literal>
+          (<type>unsigned</type>, <type>extent_hooks_t *</type>)
+          <literal>rw</literal>
         </term>
-        <listitem><para>Extend the array of arenas by appending a new arena,
-        and returning the new arena index.</para></listitem>
+        <listitem><para>Extend the array of arenas by appending a new arena with
+        optionally specified extent hooks, and returning the new arena
+        index.</para></listitem>
       </varlistentry>

       <varlistentry id="prof.thread_active_init">
@@ -1976,9 +1977,11 @@ struct extent_hooks_s {
         [<option>--enable-stats</option>]
         </term>
         <listitem><para>Total number of bytes dedicated to metadata, which
-        comprise base allocations used for bootstrap-sensitive internal
-        allocator data structures and internal allocations (see <link
-        linkend="stats.arenas.i.metadata"><mallctl>stats.arenas.&lt;i&gt;.metadata</mallctl></link>).</para></listitem>
+        comprise base allocations used for bootstrap-sensitive allocator
+        metadata structures (see <link
+        linkend="stats.arenas.i.base"><mallctl>stats.arenas.&lt;i&gt;.base</mallctl></link>)
+        and internal allocations (see <link
+        linkend="stats.arenas.i.internal"><mallctl>stats.arenas.&lt;i&gt;.internal</mallctl></link>).</para></listitem>
       </varlistentry>

       <varlistentry id="stats.resident">
@@ -2114,9 +2117,21 @@ struct extent_hooks_s {
         details.</para></listitem>
       </varlistentry>

-      <varlistentry id="stats.arenas.i.metadata">
+      <varlistentry id="stats.arenas.i.base">
         <term>
-          <mallctl>stats.arenas.&lt;i&gt;.metadata</mallctl>
+          <mallctl>stats.arenas.&lt;i&gt;.base</mallctl>
+          (<type>size_t</type>)
+          <literal>r-</literal>
+          [<option>--enable-stats</option>]
+        </term>
+        <listitem><para>
+        Number of bytes dedicated to bootstrap-sensitive allocator metadata
+        structures.</para></listitem>
+      </varlistentry>
+
+      <varlistentry id="stats.arenas.i.internal">
+        <term>
+          <mallctl>stats.arenas.&lt;i&gt;.internal</mallctl>
           (<type>size_t</type>)
           <literal>r-</literal>
           [<option>--enable-stats</option>]
@@ -2124,13 +2139,23 @@ struct extent_hooks_s {
         <listitem><para>Number of bytes dedicated to internal allocations.
         Internal allocations differ from application-originated allocations in
         that they are for internal use, and that they are omitted from heap
-        profiles.  This statistic is reported separately from <link
-        linkend="stats.metadata"><mallctl>stats.metadata</mallctl></link>
-        because it overlaps with e.g. the <link
-        linkend="stats.allocated"><mallctl>stats.allocated</mallctl></link> and
-        <link linkend="stats.active"><mallctl>stats.active</mallctl></link>
-        statistics, whereas the other metadata statistics do
-        not.</para></listitem>
+        profiles.</para></listitem>
+      </varlistentry>
+
+      <varlistentry id="stats.arenas.i.resident">
+        <term>
+          <mallctl>stats.arenas.&lt;i&gt;.resident</mallctl>
+          (<type>size_t</type>)
+          <literal>r-</literal>
+          [<option>--enable-stats</option>]
+        </term>
+        <listitem><para>Maximum number of bytes in physically resident data
+        pages mapped by the arena, comprising all pages dedicated to allocator
+        metadata, pages backing active allocations, and unused dirty pages.
+        This is a maximum rather than precise because pages may not actually be
+        physically resident if they correspond to demand-zeroed virtual memory
+        that has not yet been touched.  This is a multiple of the page
+        size.</para></listitem>
       </varlistentry>

       <varlistentry id="stats.arenas.i.npurge">

--- a/include/jemalloc/internal/arena.h
+++ b/include/jemalloc/internal/arena.h
@@ -143,9 +143,6 @@ struct arena_bin_s {
 };

 struct arena_s {
-	/* This arena's index within the arenas array. */
-	unsigned		ind;
-
 	/*
 	 * Number of threads currently assigned to this arena, synchronized via
 	 * atomic operations.  Each thread has two distinct assignments, one for
@@ -226,12 +223,6 @@ struct arena_s {
 	/* Protects extents_{cached,retained,dirty}. */
 	malloc_mutex_t		extents_mtx;

-	/* User-configurable extent hook functions. */
-	union {
-		extent_hooks_t		*extent_hooks;
-		void			*extent_hooks_pun;
-	};
-
 	/*
	 * Next extent size class in a growing series to use when satisfying a
	 * request via the extent hooks (only if !config_munmap).  This limits
@@ -247,6 +238,9 @@ struct arena_s {

 	/* bins is used to store heaps of free regions. */
 	arena_bin_t		bins[NBINS];
+
+	/* Base allocator, from which arena metadata are allocated. */
+	base_t			*base;
 };

 /* Used in conjunction with tsd for fast arena-related context lookup. */
@@ -337,7 +331,7 @@ unsigned arena_nthreads_get(arena_t *arena, bool internal);
 void	arena_nthreads_inc(arena_t *arena, bool internal);
 void	arena_nthreads_dec(arena_t *arena, bool internal);
 size_t	arena_extent_sn_next(arena_t *arena);
-arena_t	*arena_new(tsdn_t *tsdn, unsigned ind);
+arena_t	*arena_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks);
 void	arena_boot(void);
 void	arena_prefork0(tsdn_t *tsdn, arena_t *arena);
 void	arena_prefork1(tsdn_t *tsdn, arena_t *arena);
@@ -351,9 +345,10 @@ void	arena_postfork_child(tsdn_t *tsdn, arena_t *arena);
 #ifdef JEMALLOC_H_INLINES

 #ifndef JEMALLOC_ENABLE_INLINE
-void	arena_metadata_add(arena_t *arena, size_t size);
-void	arena_metadata_sub(arena_t *arena, size_t size);
-size_t	arena_metadata_get(arena_t *arena);
+unsigned	arena_ind_get(const arena_t *arena);
+void	arena_internal_add(arena_t *arena, size_t size);
+void	arena_internal_sub(arena_t *arena, size_t size);
+size_t	arena_internal_get(arena_t *arena);
 bool	arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes);
 bool	arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes);
 bool	arena_prof_accum(tsdn_t *tsdn, arena_t *arena, uint64_t accumbytes);
@@ -378,25 +373,32 @@ void	arena_sdalloc(tsdn_t *tsdn, extent_t *extent, void *ptr, size_t size,

 #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ARENA_C_))
 # ifdef JEMALLOC_ARENA_INLINE_A
+JEMALLOC_INLINE unsigned
+arena_ind_get(const arena_t *arena)
+{
+
+	return (base_ind_get(arena->base));
+}
+
 JEMALLOC_INLINE void
-arena_metadata_add(arena_t *arena, size_t size)
+arena_internal_add(arena_t *arena, size_t size)
 {

-	atomic_add_zu(&arena->stats.metadata, size);
+	atomic_add_zu(&arena->stats.internal, size);
 }

 JEMALLOC_INLINE void
-arena_metadata_sub(arena_t *arena, size_t size)
+arena_internal_sub(arena_t *arena, size_t size)
 {

-	atomic_sub_zu(&arena->stats.metadata, size);
+	atomic_sub_zu(&arena->stats.internal, size);
 }

 JEMALLOC_INLINE size_t
-arena_metadata_get(arena_t *arena)
+arena_internal_get(arena_t *arena)
 {

-	return (atomic_read_zu(&arena->stats.metadata));
+	return (atomic_read_zu(&arena->stats.internal));
 }

 JEMALLOC_INLINE bool
@@ -499,7 +501,7 @@ arena_decay_ticks(tsdn_t *tsdn, arena_t *arena, unsigned nticks)
 	if (unlikely(tsdn_null(tsdn)))
 		return;
 	tsd = tsdn_tsd(tsdn);
-	decay_ticker = decay_ticker_get(tsd, arena->ind);
+	decay_ticker = decay_ticker_get(tsd, arena_ind_get(arena));
 	if (unlikely(decay_ticker == NULL))
 		return;
 	if (unlikely(ticker_ticks(decay_ticker, nticks)))

--- a/include/jemalloc/internal/base.h
+++ b/include/jemalloc/internal/base.h
@@ -1,25 +1,87 @@
 /******************************************************************************/
 #ifdef JEMALLOC_H_TYPES

+typedef struct base_block_s base_block_t;
+typedef struct base_s base_t;
+
 #endif /* JEMALLOC_H_TYPES */
 /******************************************************************************/
 #ifdef JEMALLOC_H_STRUCTS

+/* Embedded at the beginning of every block of base-managed virtual memory. */
+struct base_block_s {
+	/* Total size of block's virtual memory mapping. */
+	size_t		size;
+
+	/* Next block in list of base's blocks. */
+	base_block_t	*next;
+
+	/* Tracks unused trailing space. */
+	extent_t	extent;
+};
+
+struct base_s {
+	/* Associated arena's index within the arenas array. */
+	unsigned	ind;
+
+	/* User-configurable extent hook functions. */
+	union {
+		extent_hooks_t	*extent_hooks;
+		void		*extent_hooks_pun;
+	};
+
+	/* Protects base_alloc() and base_stats_get() operations. */
+	malloc_mutex_t	mtx;
+
+	/* Serial number generation state. */
+	size_t		extent_sn_next;
+
+	/* Chain of all blocks associated with base. */
+	base_block_t	*blocks;
+
+	/* Heap of extents that track unused trailing space within blocks. */
+	extent_heap_t	avail[NSIZES];
+
+	/* Stats, only maintained if config_stats. */
+	size_t		allocated;
+	size_t		resident;
+	size_t		mapped;
+};
+
 #endif /* JEMALLOC_H_STRUCTS */
 /******************************************************************************/
 #ifdef JEMALLOC_H_EXTERNS

-void	*base_alloc(tsdn_t *tsdn, size_t size);
-void	base_stats_get(tsdn_t *tsdn, size_t *allocated, size_t *resident,
-    size_t *mapped);
-bool	base_boot(void);
-void	base_prefork(tsdn_t *tsdn);
-void	base_postfork_parent(tsdn_t *tsdn);
-void	base_postfork_child(tsdn_t *tsdn);
+base_t	*b0get(void);
+base_t	*base_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks);
+void	base_delete(base_t *base);
+extent_hooks_t	*base_extent_hooks_get(base_t *base);
+extent_hooks_t	*base_extent_hooks_set(base_t *base,
+    extent_hooks_t *extent_hooks);
+void	*base_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment);
+void	base_stats_get(tsdn_t *tsdn, base_t *base, size_t *allocated,
+    size_t *resident, size_t *mapped);
+void	base_prefork(tsdn_t *tsdn, base_t *base);
+void	base_postfork_parent(tsdn_t *tsdn, base_t *base);
+void	base_postfork_child(tsdn_t *tsdn, base_t *base);
+bool	base_boot(tsdn_t *tsdn);

 #endif /* JEMALLOC_H_EXTERNS */
 /******************************************************************************/
 #ifdef JEMALLOC_H_INLINES

+#ifndef JEMALLOC_ENABLE_INLINE
+unsigned	base_ind_get(const base_t *base);
+#endif
+
+#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_BASE_C_))
+JEMALLOC_INLINE unsigned
+base_ind_get(const base_t *base)
+{
+
+	return (base->ind);
+}
+#endif
+
 #endif /* JEMALLOC_H_INLINES */
 /******************************************************************************/

--- a/include/jemalloc/internal/jemalloc_internal.h.in
+++ b/include/jemalloc/internal/jemalloc_internal.h.in
@@ -370,9 +370,9 @@ typedef unsigned szind_t;
 #include "jemalloc/internal/tsd.h"
 #include "jemalloc/internal/mb.h"
 #include "jemalloc/internal/extent.h"
+#include "jemalloc/internal/base.h"
 #include "jemalloc/internal/arena.h"
 #include "jemalloc/internal/bitmap.h"
-#include "jemalloc/internal/base.h"
 #include "jemalloc/internal/rtree.h"
 #include "jemalloc/internal/pages.h"
 #include "jemalloc/internal/large.h"
@@ -403,10 +403,10 @@ typedef unsigned szind_t;
 #include "jemalloc/internal/arena.h"
 #undef JEMALLOC_ARENA_STRUCTS_A
 #include "jemalloc/internal/extent.h"
+#include "jemalloc/internal/base.h"
 #define JEMALLOC_ARENA_STRUCTS_B
 #include "jemalloc/internal/arena.h"
 #undef JEMALLOC_ARENA_STRUCTS_B
-#include "jemalloc/internal/base.h"
 #include "jemalloc/internal/rtree.h"
 #include "jemalloc/internal/pages.h"
 #include "jemalloc/internal/large.h"
@@ -464,7 +464,7 @@ void	*bootstrap_malloc(size_t size);
 void	*bootstrap_calloc(size_t num, size_t size);
 void	bootstrap_free(void *ptr);
 unsigned	narenas_total_get(void);
-arena_t	*arena_init(tsdn_t *tsdn, unsigned ind);
+arena_t	*arena_init(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks);
 arena_tdata_t	*arena_tdata_get_hard(tsd_t *tsd, unsigned ind);
 arena_t	*arena_choose_hard(tsd_t *tsd, bool internal);
 void	arena_migrate(tsd_t *tsd, unsigned oldind, unsigned newind);
@@ -491,8 +491,8 @@ void	jemalloc_postfork_child(void);
 #include "jemalloc/internal/mb.h"
 #include "jemalloc/internal/bitmap.h"
 #include "jemalloc/internal/extent.h"
-#include "jemalloc/internal/arena.h"
 #include "jemalloc/internal/base.h"
+#include "jemalloc/internal/arena.h"
 #include "jemalloc/internal/rtree.h"
 #include "jemalloc/internal/pages.h"
 #include "jemalloc/internal/large.h"
@@ -900,8 +900,10 @@ arena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing)
 	ret = arenas[ind];
 	if (unlikely(ret == NULL)) {
 		ret = (arena_t *)atomic_read_p((void **)&arenas[ind]);
-		if (init_if_missing && unlikely(ret == NULL))
-			ret = arena_init(tsdn, ind);
+		if (init_if_missing && unlikely(ret == NULL)) {
+			ret = arena_init(tsdn, ind,
+			    (extent_hooks_t *)&extent_hooks_default);
+		}
 	}
 	return (ret);
 }
@@ -950,17 +952,17 @@ iealloc(tsdn_t *tsdn, const void *ptr)
 arena_t	*iaalloc(tsdn_t *tsdn, const void *ptr);
 size_t	isalloc(tsdn_t *tsdn, const extent_t *extent, const void *ptr);
 void	*iallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero,
-    tcache_t *tcache, bool is_metadata, arena_t *arena, bool slow_path);
+    tcache_t *tcache, bool is_internal, arena_t *arena, bool slow_path);
 void	*ialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero,
     bool slow_path);
 void	*ipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,
-    tcache_t *tcache, bool is_metadata, arena_t *arena);
+    tcache_t *tcache, bool is_internal, arena_t *arena);
 void	*ipalloct(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,
     tcache_t *tcache, arena_t *arena);
 void	*ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero);
 size_t	ivsalloc(tsdn_t *tsdn, const void *ptr);
 void	idalloctm(tsdn_t *tsdn, extent_t *extent, void *ptr, tcache_t *tcache,
-    bool is_metadata, bool slow_path);
+    bool is_internal, bool slow_path);
 void	idalloc(tsd_t *tsd, extent_t *extent, void *ptr);
 void	isdalloct(tsdn_t *tsdn, extent_t *extent, void *ptr, size_t size,
     tcache_t *tcache, bool slow_path);
@@ -1003,17 +1005,18 @@ isalloc(tsdn_t *tsdn, const extent_t *extent, const void *ptr)

 JEMALLOC_ALWAYS_INLINE void *
 iallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero, tcache_t *tcache,
-    bool is_metadata, arena_t *arena, bool slow_path)
+    bool is_internal, arena_t *arena, bool slow_path)
 {
 	void *ret;

 	assert(size != 0);
-	assert(!is_metadata || tcache == NULL);
-	assert(!is_metadata || arena == NULL || arena->ind < narenas_auto);
+	assert(!is_internal || tcache == NULL);
+	assert(!is_internal || arena == NULL || arena_ind_get(arena) <
+	    narenas_auto);

 	ret = arena_malloc(tsdn, arena, size, ind, zero, tcache, slow_path);
-	if (config_stats && is_metadata && likely(ret != NULL)) {
-		arena_metadata_add(iaalloc(tsdn, ret), isalloc(tsdn,
+	if (config_stats && is_internal && likely(ret != NULL)) {
+		arena_internal_add(iaalloc(tsdn, ret), isalloc(tsdn,
 		    iealloc(tsdn, ret), ret));
 	}
 	return (ret);
@@ -1029,19 +1032,20 @@ ialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero, bool slow_path)

 JEMALLOC_ALWAYS_INLINE void *
 ipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,
-    tcache_t *tcache, bool is_metadata, arena_t *arena)
+    tcache_t *tcache, bool is_internal, arena_t *arena)
 {
 	void *ret;

 	assert(usize != 0);
 	assert(usize == sa2u(usize, alignment));
-	assert(!is_metadata || tcache == NULL);
-	assert(!is_metadata || arena == NULL || arena->ind < narenas_auto);
+	assert(!is_internal || tcache == NULL);
+	assert(!is_internal || arena == NULL || arena_ind_get(arena) <
+	    narenas_auto);

 	ret = arena_palloc(tsdn, arena, usize, alignment, zero, tcache);
 	assert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret);
-	if (config_stats && is_metadata && likely(ret != NULL)) {
-		arena_metadata_add(iaalloc(tsdn, ret), isalloc(tsdn,
+	if (config_stats && is_internal && likely(ret != NULL)) {
+		arena_internal_add(iaalloc(tsdn, ret), isalloc(tsdn,
 		    iealloc(tsdn, ret), ret));
 	}
 	return (ret);
@@ -1088,14 +1092,15 @@ ivsalloc(tsdn_t *tsdn, const void *ptr)

 JEMALLOC_ALWAYS_INLINE void
 idalloctm(tsdn_t *tsdn, extent_t *extent, void *ptr, tcache_t *tcache,
-    bool is_metadata, bool slow_path)
+    bool is_internal, bool slow_path)
 {

 	assert(ptr != NULL);
-	assert(!is_metadata || tcache == NULL);
-	assert(!is_metadata || iaalloc(tsdn, ptr)->ind < narenas_auto);
-	if (config_stats && is_metadata) {
-		arena_metadata_sub(iaalloc(tsdn, ptr), isalloc(tsdn, extent,
+	assert(!is_internal || tcache == NULL);
+	assert(!is_internal || arena_ind_get(iaalloc(tsdn, ptr)) <
+	    narenas_auto);
+	if (config_stats && is_internal) {
+		arena_internal_sub(iaalloc(tsdn, ptr), isalloc(tsdn, extent,
 		    ptr));
 	}

--- a/include/jemalloc/internal/private_symbols.txt
+++ b/include/jemalloc/internal/private_symbols.txt
@@ -34,13 +34,14 @@ arena_extent_ralloc_large_shrink
 arena_extent_sn_next
 arena_get
 arena_ichoose
+arena_ind_get
 arena_init
+arena_internal_add
+arena_internal_get
+arena_internal_sub
 arena_malloc
 arena_malloc_hard
 arena_maybe_purge
-arena_metadata_add
-arena_metadata_get
-arena_metadata_sub
 arena_migrate
 arena_new
 arena_nthreads_dec
@@ -93,8 +94,14 @@ atomic_write_u
 atomic_write_u32
 atomic_write_u64
 atomic_write_zu
+b0get
 base_alloc
 base_boot
+base_delete
+base_extent_hooks_get
+base_extent_hooks_set
+base_ind_get
+base_new
 base_postfork_child
 base_postfork_parent
 base_prefork

--- a/include/jemalloc/internal/stats.h
+++ b/include/jemalloc/internal/stats.h
@@ -100,8 +100,9 @@ struct arena_stats_s {
 	uint64_t	nmadvise;
 	uint64_t	purged;

-	/* Number of bytes currently allocated for internal metadata. */
-	size_t		metadata; /* Protected via atomic_*_zu(). */
+	size_t		base;
+	size_t		internal; /* Protected via atomic_*_zu(). */
+	size_t		resident;

 	size_t		allocated_large;
 	uint64_t	nmalloc_large;

--- a/src/arena.c
+++ b/src/arena.c
@@ -1550,6 +1550,7 @@ arena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads,
     arena_stats_t *astats, malloc_bin_stats_t *bstats,
     malloc_large_stats_t *lstats)
 {
+	size_t base_allocated, base_resident, base_mapped;
 	unsigned i;

 	cassert(config_stats);
@@ -1558,12 +1559,18 @@ arena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads,
 	arena_basic_stats_merge_locked(arena, nthreads, dss, decay_time,
 	    nactive, ndirty);

-	astats->mapped += arena->stats.mapped;
+	base_stats_get(tsdn, arena->base, &base_allocated, &base_resident,
+	    &base_mapped);
+
+	astats->mapped += base_mapped + arena->stats.mapped;
 	astats->retained += arena->stats.retained;
 	astats->npurge += arena->stats.npurge;
 	astats->nmadvise += arena->stats.nmadvise;
 	astats->purged += arena->stats.purged;
-	astats->metadata += arena_metadata_get(arena);
+	astats->base += base_allocated;
+	astats->internal += arena_internal_get(arena);
+	astats->resident += base_resident + (((arena->nactive + arena->ndirty)
+	    << LG_PAGE));
 	astats->allocated_large += arena->stats.allocated_large;
 	astats->nmalloc_large += arena->stats.nmalloc_large;
 	astats->ndalloc_large += arena->stats.ndalloc_large;
@@ -1625,19 +1632,27 @@ arena_extent_sn_next(arena_t *arena)
 }

 arena_t *
-arena_new(tsdn_t *tsdn, unsigned ind)
+arena_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks)
 {
 	arena_t *arena;
+	base_t *base;
 	unsigned i;

-	arena = (arena_t *)base_alloc(tsdn, sizeof(arena_t));
-	if (arena == NULL)
-		return (NULL);
+	if (ind == 0)
+		base = b0get();
+	else {
+		base = base_new(tsdn, ind, extent_hooks);
+		if (base == NULL)
+			return (NULL);
+	}
+
+	arena = (arena_t *)base_alloc(tsdn, base, sizeof(arena_t), CACHELINE);
+	if (arena == NULL)
+		goto label_error;

-	arena->ind = ind;
 	arena->nthreads[0] = arena->nthreads[1] = 0;
 	if (malloc_mutex_init(&arena->lock, "arena", WITNESS_RANK_ARENA))
-		return (NULL);
+		goto label_error;

 	if (config_stats && config_tcache)
 		ql_new(&arena->tcache_ql);
@@ -1670,7 +1685,7 @@ arena_new(tsdn_t *tsdn, unsigned ind)
 	ql_new(&arena->large);
 	if (malloc_mutex_init(&arena->large_mtx, "arena_large",
 	    WITNESS_RANK_ARENA_LARGE))
-		return (NULL);
+		goto label_error;

 	for (i = 0; i < NPSIZES+1; i++) {
 		extent_heap_new(&arena->extents_cached[i]);
@@ -1682,9 +1697,7 @@ arena_new(tsdn_t *tsdn, unsigned ind)
 	if (malloc_mutex_init(&arena->extents_mtx, "arena_extents",
 	    WITNESS_RANK_ARENA_EXTENTS))
-		return (NULL);
-
-	arena->extent_hooks = (extent_hooks_t *)&extent_hooks_default;
+		goto label_error;

 	if (!config_munmap)
 		arena->extent_grow_next = psz2ind(HUGEPAGE);
@@ -1692,14 +1705,14 @@ arena_new(tsdn_t *tsdn, unsigned ind)
 	ql_new(&arena->extent_cache);
 	if (malloc_mutex_init(&arena->extent_cache_mtx, "arena_extent_cache",
 	    WITNESS_RANK_ARENA_EXTENT_CACHE))
-		return (NULL);
+		goto label_error;

 	/* Initialize bins. */
 	for (i = 0; i < NBINS; i++) {
 		arena_bin_t *bin = &arena->bins[i];

 		if (malloc_mutex_init(&bin->lock, "arena_bin",
 		    WITNESS_RANK_ARENA_BIN))
-			return (NULL);
+			goto label_error;
 		bin->slabcur = NULL;
 		extent_heap_new(&bin->slabs_nonfull);
 		extent_init(&bin->slabs_full, arena, NULL, 0, 0, 0, false,
@@ -1708,7 +1721,13 @@ arena_new(tsdn_t *tsdn, unsigned ind)
 		memset(&bin->stats, 0, sizeof(malloc_bin_stats_t));
 	}

+	arena->base = base;
+
 	return (arena);
+label_error:
+	if (ind != 0)
+		base_delete(base);
+	return (NULL);
 }

 void
@@ -1744,6 +1763,7 @@ arena_prefork3(tsdn_t *tsdn, arena_t *arena)
 {
 	unsigned i;

+	base_prefork(tsdn, arena->base);
 	for (i = 0; i < NBINS; i++)
 		malloc_mutex_prefork(tsdn, &arena->bins[i].lock);
 	malloc_mutex_prefork(tsdn, &arena->large_mtx);
@@ -1757,6 +1777,7 @@ arena_postfork_parent(tsdn_t *tsdn, arena_t *arena)
 	malloc_mutex_postfork_parent(tsdn, &arena->large_mtx);
 	for (i = 0; i < NBINS; i++)
 		malloc_mutex_postfork_parent(tsdn, &arena->bins[i].lock);
+	base_postfork_parent(tsdn, arena->base);
 	malloc_mutex_postfork_parent(tsdn, &arena->extent_cache_mtx);
 	malloc_mutex_postfork_parent(tsdn, &arena->extents_mtx);
 	malloc_mutex_postfork_parent(tsdn, &arena->lock);
@@ -1770,6 +1791,7 @@ arena_postfork_child(tsdn_t *tsdn, arena_t *arena)
 	malloc_mutex_postfork_child(tsdn, &arena->large_mtx);
 	for (i = 0; i < NBINS; i++)
 		malloc_mutex_postfork_child(tsdn, &arena->bins[i].lock);
+	base_postfork_child(tsdn, arena->base);
 	malloc_mutex_postfork_child(tsdn, &arena->extent_cache_mtx);
 	malloc_mutex_postfork_child(tsdn, &arena->extents_mtx);
 	malloc_mutex_postfork_child(tsdn, &arena->lock);

--- a/src/base.c
+++ b/src/base.c
@@ -4,112 +4,308 @@
 /******************************************************************************/
 /* Data. */

-static malloc_mutex_t	base_mtx;
-static size_t		base_extent_sn_next;
-static extent_heap_t	base_avail[NSIZES];
-static extent_t		*base_extents;
-static size_t		base_allocated;
-static size_t		base_resident;
-static size_t		base_mapped;
+static base_t	*b0;

 /******************************************************************************/

-static extent_t *
-base_extent_try_alloc(tsdn_t *tsdn)
+static void *
+base_map(extent_hooks_t *extent_hooks, unsigned ind, size_t size)
 {
-	extent_t *extent;
+	void *addr;
+	bool zero = true;
+	bool commit = true;

-	malloc_mutex_assert_owner(tsdn, &base_mtx);
+	assert(size == HUGEPAGE_CEILING(size));

-	if (base_extents == NULL)
-		return (NULL);
-	extent = base_extents;
-	base_extents = *(extent_t **)extent;
-	return (extent);
+	if (extent_hooks == &extent_hooks_default)
+		addr = extent_alloc_mmap(NULL, size, PAGE, &zero, &commit);
+	else {
+		addr = extent_hooks->alloc(extent_hooks, NULL, size, PAGE,
+		    &zero, &commit, ind);
+	}
+
+	return (addr);
 }

 static void
-base_extent_dalloc(tsdn_t *tsdn, extent_t *extent)
+base_unmap(extent_hooks_t *extent_hooks, unsigned ind, void *addr, size_t size)
 {

-	malloc_mutex_assert_owner(tsdn, &base_mtx);
-
-	*(extent_t **)extent = base_extents;
-	base_extents = extent;
+	/*
+	 * Cascade through dalloc, decommit, purge_lazy, and purge_forced,
+	 * stopping at first success.  This cascade is performed for consistency
+	 * with the cascade in extent_dalloc_wrapper() because an application's
+	 * custom hooks may not support e.g. dalloc.  This function is only ever
+	 * called as a side effect of arena destruction, so although it might
+	 * seem pointless to do anything besides dalloc here, the application
+	 * may in fact want the end state of all associated virtual memory to be
+	 * in some consistent-but-unallocated state.
+	 */
+	if (extent_hooks == &extent_hooks_default) {
+		if (!extent_dalloc_mmap(addr, size))
+			return;
+		if (!pages_decommit(addr, size))
+			return;
+		if (!pages_purge_lazy(addr, size))
+			return;
+		if (!pages_purge_forced(addr, size))
+			return;
+		/* Nothing worked.  This should never happen. */
+		not_reached();
+	} else {
+		if (extent_hooks->dalloc != NULL &&
+		    !extent_hooks->dalloc(extent_hooks, addr, size, true, ind))
+			return;
+		if (extent_hooks->decommit != NULL &&
+		    !extent_hooks->decommit(extent_hooks, addr, size, 0, size,
+		    ind))
+			return;
+		if (extent_hooks->purge_lazy != NULL &&
+		    !extent_hooks->purge_lazy(extent_hooks, addr, size, 0, size,
+		    ind))
+			return;
+		if (extent_hooks->purge_forced != NULL &&
+		    !extent_hooks->purge_forced(extent_hooks, addr, size, 0,
+		    size, ind))
+			return;
+		/* Nothing worked.  That's the application's problem. */
+	}
 }

 static void
-base_extent_init(extent_t *extent, void *addr, size_t size)
+base_extent_init(size_t *extent_sn_next, extent_t *extent, void *addr,
+    size_t size)
 {
-	size_t sn = atomic_add_zu(&base_extent_sn_next, 1) - 1;
+	size_t sn;
+
+	sn = *extent_sn_next;
+	(*extent_sn_next)++;

 	extent_init(extent, NULL, addr, size, 0, sn, true, true, true, false);
 }

-static extent_t *
-base_extent_alloc(tsdn_t *tsdn, size_t minsize)
-{
-	extent_t *extent;
-	size_t esize, nsize;
-	void *addr;
-
-	malloc_mutex_assert_owner(tsdn, &base_mtx);
-	assert(minsize != 0);
-	extent = base_extent_try_alloc(tsdn);
-	/* Allocate enough space to also carve an extent out if necessary. */
-	nsize = (extent == NULL) ? CACHELINE_CEILING(sizeof(extent_t)) : 0;
-	esize = PAGE_CEILING(minsize + nsize);
-	/*
-	 * Directly call extent_alloc_mmap() because it's critical to allocate
-	 * untouched demand-zeroed virtual memory.
-	 */
-	{
-		bool zero = true;
-		bool commit = true;
-		addr = extent_alloc_mmap(NULL, esize, PAGE, &zero, &commit);
-	}
-	if (addr == NULL) {
-		if (extent != NULL)
-			base_extent_dalloc(tsdn, extent);
-		return (NULL);
-	}
-	base_mapped += esize;
-	if (extent == NULL) {
-		extent = (extent_t *)addr;
-		addr = (void *)((uintptr_t)addr + nsize);
-		esize -= nsize;
-		if (config_stats) {
-			base_allocated += nsize;
-			base_resident += PAGE_CEILING(nsize);
-		}
-	}
-	base_extent_init(extent, addr, esize);
-	return (extent);
-}
-
-/*
- * base_alloc() guarantees demand-zeroed memory, in order to make multi-page
- * sparse data structures such as radix tree nodes efficient with respect to
- * physical memory usage.
- */
-void *
-base_alloc(tsdn_t *tsdn, size_t size)
+static void *
+base_extent_bump_alloc_helper(extent_t *extent, size_t *gap_size, size_t size,
+    size_t alignment)
 {
 	void *ret;
-	size_t csize;
+
+	assert(alignment == ALIGNMENT_CEILING(alignment, QUANTUM));
+	assert(size == ALIGNMENT_CEILING(size, alignment));
+
+	*gap_size = ALIGNMENT_CEILING((uintptr_t)extent_addr_get(extent),
+	    alignment) - (uintptr_t)extent_addr_get(extent);
+	ret = (void *)((uintptr_t)extent_addr_get(extent) + *gap_size);
+	assert(extent_size_get(extent) >= *gap_size + size);
+	extent_init(extent, NULL, (void *)((uintptr_t)extent_addr_get(extent) +
+	    *gap_size + size), extent_size_get(extent) - *gap_size - size, 0,
+	    extent_sn_get(extent), true, true, true, false);
+	return (ret);
+}
+
+static void
+base_extent_bump_alloc_post(tsdn_t *tsdn, base_t *base, extent_t *extent,
+    size_t gap_size, void *addr, size_t size)
+{
+
+	if (extent_size_get(extent) > 0) {
+		/*
+		 * Compute the index for the largest size class that does not
+		 * exceed extent's size.
+		 */
+		szind_t index_floor = size2index(extent_size_get(extent) + 1) -
+		    1;
+		extent_heap_insert(&base->avail[index_floor], extent);
+	}
+
+	if (config_stats) {
+		base->allocated += size;
+		/*
+		 * Add one PAGE to base_resident for every page boundary that is
+		 * crossed by the new allocation.
+		 */
+		base->resident += PAGE_CEILING((uintptr_t)addr + size) -
+		    PAGE_CEILING((uintptr_t)addr - gap_size);
+		assert(base->allocated <= base->resident);
+		assert(base->resident <= base->mapped);
+	}
+}
+
+static void *
+base_extent_bump_alloc(tsdn_t *tsdn, base_t *base, extent_t *extent,
+    size_t size, size_t alignment)
+{
+	void *ret;
+	size_t gap_size;
+
+	ret = base_extent_bump_alloc_helper(extent, &gap_size, size, alignment);
+	base_extent_bump_alloc_post(tsdn, base, extent, gap_size, ret, size);
+	return (ret);
+}
+
+/*
+ * Allocate a block of virtual memory that is large enough to start with a
+ * base_block_t header, followed by an object of specified size and alignment.
+ * On success a pointer to the initialized base_block_t header is returned.
+ */
+static base_block_t *
+base_block_alloc(extent_hooks_t *extent_hooks, unsigned ind,
+    size_t *extent_sn_next, size_t size, size_t alignment)
+{
+	base_block_t *block;
+	size_t usize, header_size, gap_size, block_size;
+
+	alignment = ALIGNMENT_CEILING(alignment, QUANTUM);
+	usize = ALIGNMENT_CEILING(size, alignment);
+	header_size = sizeof(base_block_t);
+	gap_size = ALIGNMENT_CEILING(header_size, alignment) - header_size;
+	block_size = HUGEPAGE_CEILING(header_size + gap_size + usize);
+	block = (base_block_t *)base_map(extent_hooks, ind, block_size);
+	if (block == NULL)
+		return (NULL);
+	block->size = block_size;
+	block->next = NULL;
+	assert(block_size >= header_size);
+	base_extent_init(extent_sn_next, &block->extent,
+	    (void *)((uintptr_t)block + header_size), block_size - header_size);
+	return (block);
+}
+
+/*
+ * Allocate an extent that is at least as large as specified size, with
+ * specified alignment.
+ */
+static extent_t *
+base_extent_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment)
+{
+	extent_hooks_t *extent_hooks = base_extent_hooks_get(base);
+	base_block_t *block;
+
+	malloc_mutex_assert_owner(tsdn, &base->mtx);
+
+	block = base_block_alloc(extent_hooks, base_ind_get(base),
+	    &base->extent_sn_next, size, alignment);
+	if (block == NULL)
+		return (NULL);
+	block->next = base->blocks;
+	base->blocks = block;
+	if (config_stats) {
+		base->allocated += sizeof(base_block_t);
+		base->resident += PAGE_CEILING(sizeof(base_block_t));
+		base->mapped += block->size;
+		assert(base->allocated <= base->resident);
+		assert(base->resident <= base->mapped);
+	}
+	return (&block->extent);
+}
+
+base_t *
+b0get(void)
+{
+
+	return (b0);
+}
+
+base_t *
+base_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks)
+{
+	base_t *base;
+	size_t extent_sn_next, base_alignment, base_size, gap_size;
+	base_block_t *block;
+	szind_t i;
+
+	extent_sn_next = 0;
+	block = base_block_alloc(extent_hooks, ind, &extent_sn_next,
+	    sizeof(base_t), QUANTUM);
+	if (block == NULL)
+		return (NULL);
+
+	base_alignment = CACHELINE;
+	base_size = ALIGNMENT_CEILING(sizeof(base_t), base_alignment);
+	base = (base_t *)base_extent_bump_alloc_helper(&block->extent,
+	    &gap_size, base_size, base_alignment);
+	base->ind = ind;
+	base->extent_hooks = extent_hooks;
+	if (malloc_mutex_init(&base->mtx, "base", WITNESS_RANK_BASE)) {
+		base_unmap(extent_hooks, ind, block, block->size);
+		return (NULL);
+	}
+	base->extent_sn_next = extent_sn_next;
+	base->blocks = block;
+	for (i = 0; i < NSIZES; i++)
+		extent_heap_new(&base->avail[i]);
+	if (config_stats) {
+		base->allocated = sizeof(base_block_t);
+		base->resident = PAGE_CEILING(sizeof(base_block_t));
+		base->mapped = block->size;
+		assert(base->allocated <= base->resident);
+		assert(base->resident <= base->mapped);
+	}
+	base_extent_bump_alloc_post(tsdn, base, &block->extent, gap_size, base,
+	    base_size);
+
+	return (base);
+}
+
+void
+base_delete(base_t *base)
+{
+	extent_hooks_t *extent_hooks = base_extent_hooks_get(base);
+	base_block_t *next = base->blocks;
+	do {
+		base_block_t *block = next;
+		next = block->next;
+		base_unmap(extent_hooks, base_ind_get(base), block,
+		    block->size);
+	} while (next != NULL);
+}
+
+extent_hooks_t *
+base_extent_hooks_get(base_t *base)
+{
+
+	return ((extent_hooks_t *)atomic_read_p(&base->extent_hooks_pun));
+}
+
+extent_hooks_t *
+base_extent_hooks_set(base_t *base, extent_hooks_t *extent_hooks)
+{
+	extent_hooks_t *old_extent_hooks = base_extent_hooks_get(base);
+	union {
+		extent_hooks_t	**h;
+		void		**v;
+	} u;
+
+	u.h = &base->extent_hooks;
+	atomic_write_p(u.v, extent_hooks);
+
+	return (old_extent_hooks);
+}
+
+/*
+ * base_alloc() returns zeroed memory, which is always demand-zeroed for the
+ * auto arenas, in order to make multi-page sparse data structures such as radix
+ * tree nodes efficient with respect to physical memory usage.  Upon success a
+ * pointer to at least size bytes with specified alignment is returned.  Note
+ * that size is rounded up to the nearest multiple of alignment to avoid false
+ * sharing.
+ */
+void *
+base_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment)
+{
+	void *ret;
+	size_t usize, asize;
 	szind_t i;
 	extent_t *extent;

-	/*
-	 * Round size up to nearest multiple of the cacheline size, so that
-	 * there is no chance of false cache line sharing.
-	 */
-	csize = CACHELINE_CEILING(size);
+	alignment = QUANTUM_CEILING(alignment);
+	usize = ALIGNMENT_CEILING(size, alignment);
+	asize = usize + alignment - QUANTUM;

 	extent = NULL;
-	malloc_mutex_lock(tsdn, &base_mtx);
-	for (i = size2index(csize); i < NSIZES; i++) {
-		extent = extent_heap_remove_first(&base_avail[i]);
+	malloc_mutex_lock(tsdn, &base->mtx);
+	for (i = size2index(asize); i < NSIZES; i++) {
+		extent = extent_heap_remove_first(&base->avail[i]);
 		if (extent != NULL) {
 			/* Use existing space. */
 			break;
@@ -117,87 +313,60 @@ base_alloc(tsdn_t *tsdn, size_t size)
 		}
 	}
 	if (extent == NULL) {
 		/* Try to allocate more space. */
-		extent = base_extent_alloc(tsdn, csize);
+		extent = base_extent_alloc(tsdn, base, usize, alignment);
 	}
 	if (extent == NULL) {
 		ret = NULL;
 		goto label_return;
 	}

-	ret = extent_addr_get(extent);
-	if (extent_size_get(extent) > csize) {
-		szind_t index_floor;
-
-		extent_addr_set(extent, (void *)((uintptr_t)ret + csize));
-		extent_size_set(extent, extent_size_get(extent) - csize);
-		/*
-		 * Compute the index for the largest size class that does not
-		 * exceed extent's size.
-		 */
-		index_floor = size2index(extent_size_get(extent) + 1) - 1;
-		extent_heap_insert(&base_avail[index_floor], extent);
-	} else
-		base_extent_dalloc(tsdn, extent);
-	if (config_stats) {
-		base_allocated += csize;
-		/*
-		 * Add one PAGE to base_resident for every page boundary that is
-		 * crossed by the new allocation.
-		 */
-		base_resident += PAGE_CEILING((uintptr_t)ret + csize) -
-		    PAGE_CEILING((uintptr_t)ret);
-	}
+	ret = base_extent_bump_alloc(tsdn, base, extent, usize, alignment);
 label_return:
-	malloc_mutex_unlock(tsdn, &base_mtx);
+	malloc_mutex_unlock(tsdn, &base->mtx);
 	return (ret);
 }

 void
-base_stats_get(tsdn_t *tsdn, size_t *allocated, size_t *resident,
-    size_t *mapped)
+base_stats_get(tsdn_t *tsdn, base_t *base, size_t *allocated, size_t *resident,
+    size_t *mapped)
 {

-	malloc_mutex_lock(tsdn, &base_mtx);
-	assert(base_allocated <= base_resident);
-	assert(base_resident <= base_mapped);
-	*allocated = base_allocated;
-	*resident = base_resident;
-	*mapped = base_mapped;
-	malloc_mutex_unlock(tsdn, &base_mtx);
+	cassert(config_stats);
+
+	malloc_mutex_lock(tsdn, &base->mtx);
+	assert(base->allocated <= base->resident);
+	assert(base->resident <= base->mapped);
+	*allocated = base->allocated;
+	*resident = base->resident;
+	*mapped = base->mapped;
+	malloc_mutex_unlock(tsdn, &base->mtx);
+}
+
+void
+base_prefork(tsdn_t *tsdn, base_t *base)
+{
+
+	malloc_mutex_prefork(tsdn, &base->mtx);
+}
+
+void
+base_postfork_parent(tsdn_t *tsdn, base_t *base)
+{
+
+	malloc_mutex_postfork_parent(tsdn, &base->mtx);
+}
+
+void
+base_postfork_child(tsdn_t *tsdn, base_t *base)
+{
+
+	malloc_mutex_postfork_child(tsdn, &base->mtx);
 }

 bool
-base_boot(void)
-{
-	szind_t i;
-
-	if (malloc_mutex_init(&base_mtx, "base", WITNESS_RANK_BASE))
-		return (true);
-	base_extent_sn_next = 0;
-	for (i = 0; i < NSIZES; i++)
-		extent_heap_new(&base_avail[i]);
-	base_extents = NULL;
-
-	return (false);
-}
-
-void
-base_prefork(tsdn_t *tsdn)
-{
-
-	malloc_mutex_prefork(tsdn, &base_mtx);
-}
-
-void
-base_postfork_parent(tsdn_t *tsdn)
-{
-
-	malloc_mutex_postfork_parent(tsdn, &base_mtx);
-}
-
-void
-base_postfork_child(tsdn_t *tsdn)
+base_boot(tsdn_t *tsdn)
 {

-	malloc_mutex_postfork_child(tsdn, &base_mtx);
+	b0 = base_new(tsdn, 0, (extent_hooks_t *)&extent_hooks_default);
+	return (b0 == NULL);
 }
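
To make base_block_alloc()'s sizing arithmetic concrete, here is a standalone sketch with assumed constants (QUANTUM and HUGEPAGE are platform-derived in real builds, and the 40-byte header stands in for sizeof(base_block_t)):

#include <stdio.h>
#include <stddef.h>

#define QUANTUM			((size_t)16)
#define HUGEPAGE		((size_t)(2U << 20))	/* assumed 2 MiB */
#define ALIGNMENT_CEILING(s, a)	(((s) + ((a) - 1)) & ~((a) - 1))
#define HUGEPAGE_CEILING(s)	ALIGNMENT_CEILING((s), HUGEPAGE)

int
main(void)
{
	/*
	 * A 4096-byte object with 64-byte alignment, sized the way
	 * base_block_alloc() sizes a block: header, then a gap that
	 * restores the requested alignment, then the object, all rounded
	 * up to a multiple of the huge page size.
	 */
	size_t header_size = 40;	/* stand-in for sizeof(base_block_t) */
	size_t alignment = ALIGNMENT_CEILING((size_t)64, QUANTUM);
	size_t usize = ALIGNMENT_CEILING((size_t)4096, alignment);
	size_t gap_size = ALIGNMENT_CEILING(header_size, alignment) -
	    header_size;
	size_t block_size = HUGEPAGE_CEILING(header_size + gap_size + usize);

	/* Prints usize=4096 gap=24 block=2097152. */
	printf("usize=%zu gap=%zu block=%zu\n", usize, gap_size, block_size);
	return (0);
}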

--- a/src/ctl.c
+++ b/src/ctl.c
@@ -55,7 +55,7 @@ static void	ctl_arena_stats_amerge(tsdn_t *tsdn, ctl_arena_stats_t *cstats,
 static void	ctl_arena_stats_smerge(ctl_arena_stats_t *sstats,
     ctl_arena_stats_t *astats);
 static void	ctl_arena_refresh(tsdn_t *tsdn, arena_t *arena, unsigned i);
-static bool	ctl_grow(tsdn_t *tsdn);
+static bool	ctl_grow(tsdn_t *tsdn, extent_hooks_t *extent_hooks);
 static void	ctl_refresh(tsdn_t *tsdn);
 static bool	ctl_init(tsdn_t *tsdn);
 static int	ctl_lookup(tsdn_t *tsdn, const char *name,
@@ -174,7 +174,9 @@ CTL_PROTO(stats_arenas_i_retained)
 CTL_PROTO(stats_arenas_i_npurge)
 CTL_PROTO(stats_arenas_i_nmadvise)
 CTL_PROTO(stats_arenas_i_purged)
-CTL_PROTO(stats_arenas_i_metadata)
+CTL_PROTO(stats_arenas_i_base)
+CTL_PROTO(stats_arenas_i_internal)
+CTL_PROTO(stats_arenas_i_resident)
 INDEX_PROTO(stats_arenas_i)
 CTL_PROTO(stats_allocated)
 CTL_PROTO(stats_active)
@@ -392,7 +394,9 @@ static const ctl_named_node_t stats_arenas_i_node[] = {
 	{NAME("npurge"),	CTL(stats_arenas_i_npurge)},
 	{NAME("nmadvise"),	CTL(stats_arenas_i_nmadvise)},
 	{NAME("purged"),	CTL(stats_arenas_i_purged)},
-	{NAME("metadata"),	CTL(stats_arenas_i_metadata)},
+	{NAME("base"),		CTL(stats_arenas_i_base)},
+	{NAME("internal"),	CTL(stats_arenas_i_internal)},
+	{NAME("resident"),	CTL(stats_arenas_i_resident)},
 	{NAME("small"),		CHILD(named, stats_arenas_i_small)},
 	{NAME("large"),		CHILD(named, stats_arenas_i_large)},
 	{NAME("bins"),		CHILD(indexed, stats_arenas_i_bins)},
@@ -500,7 +504,9 @@ ctl_arena_stats_smerge(ctl_arena_stats_t *sstats, ctl_arena_stats_t *astats)
 	sstats->astats.nmadvise += astats->astats.nmadvise;
 	sstats->astats.purged += astats->astats.purged;

-	sstats->astats.metadata += astats->astats.metadata;
+	sstats->astats.base += astats->astats.base;
+	sstats->astats.internal += astats->astats.internal;
+	sstats->astats.resident += astats->astats.resident;

 	sstats->allocated_small += astats->allocated_small;
 	sstats->nmalloc_small += astats->nmalloc_small;
@@ -556,12 +562,12 @@ ctl_arena_refresh(tsdn_t *tsdn, arena_t *arena, unsigned i)
 }

 static bool
-ctl_grow(tsdn_t *tsdn)
+ctl_grow(tsdn_t *tsdn, extent_hooks_t *extent_hooks)
 {
 	ctl_arena_stats_t *astats;

 	/* Initialize new arena. */
-	if (arena_init(tsdn, ctl_stats.narenas) == NULL)
+	if (arena_init(tsdn, ctl_stats.narenas, extent_hooks) == NULL)
 		return (true);

 	/* Allocate extended arena stats. */
@@ -615,20 +621,17 @@ ctl_refresh(tsdn_t *tsdn)
 	}

 	if (config_stats) {
-		size_t base_allocated, base_resident, base_mapped;
-
-		base_stats_get(tsdn, &base_allocated, &base_resident,
-		    &base_mapped);
-
 		ctl_stats.allocated =
 		    ctl_stats.arenas[ctl_stats.narenas].allocated_small +
 		    ctl_stats.arenas[ctl_stats.narenas].astats.allocated_large;
 		ctl_stats.active =
 		    (ctl_stats.arenas[ctl_stats.narenas].pactive << LG_PAGE);
-		ctl_stats.metadata = base_allocated +
-		    ctl_stats.arenas[ctl_stats.narenas].astats.metadata;
-		ctl_stats.resident = base_resident +
-		    ((ctl_stats.arenas[ctl_stats.narenas].pactive +
-		    ctl_stats.arenas[ctl_stats.narenas].pdirty) << LG_PAGE);
-		ctl_stats.mapped = base_mapped +
+		ctl_stats.metadata =
+		    ctl_stats.arenas[ctl_stats.narenas].astats.base +
+		    ctl_stats.arenas[ctl_stats.narenas].astats.internal;
+		ctl_stats.resident =
+		    ctl_stats.arenas[ctl_stats.narenas].astats.resident;
+		ctl_stats.mapped =
 		    ctl_stats.arenas[ctl_stats.narenas].astats.mapped;
 		ctl_stats.retained =
 		    ctl_stats.arenas[ctl_stats.narenas].astats.retained;
@@ -1167,7 +1170,7 @@ thread_arena_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
 	if (oldarena == NULL)
 		return (EAGAIN);

-	newind = oldind = oldarena->ind;
+	newind = oldind = arena_ind_get(oldarena);
 	WRITE(newind, unsigned);
 	READ(oldind, unsigned);
 	if (newind != oldind) {
@@ -1738,11 +1741,14 @@ arenas_extend_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
     size_t *oldlenp, void *newp, size_t newlen)
 {
 	int ret;
+	extent_hooks_t *extent_hooks;
 	unsigned narenas;

 	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);
-	READONLY();
-	if (ctl_grow(tsd_tsdn(tsd))) {
+	extent_hooks = (extent_hooks_t *)&extent_hooks_default;
+	WRITE(extent_hooks, extent_hooks_t *);
+	if (ctl_grow(tsd_tsdn(tsd), extent_hooks)) {
 		ret = EAGAIN;
 		goto label_return;
 	}
@@ -1906,8 +1912,12 @@ CTL_RO_CGEN(config_stats, stats_arenas_i_nmadvise,
     ctl_stats.arenas[mib[2]].astats.nmadvise, uint64_t)
 CTL_RO_CGEN(config_stats, stats_arenas_i_purged,
     ctl_stats.arenas[mib[2]].astats.purged, uint64_t)
-CTL_RO_CGEN(config_stats, stats_arenas_i_metadata,
-    ctl_stats.arenas[mib[2]].astats.metadata, size_t)
+CTL_RO_CGEN(config_stats, stats_arenas_i_base,
+    ctl_stats.arenas[mib[2]].astats.base, size_t)
+CTL_RO_CGEN(config_stats, stats_arenas_i_internal,
+    ctl_stats.arenas[mib[2]].astats.internal, size_t)
+CTL_RO_CGEN(config_stats, stats_arenas_i_resident,
+    ctl_stats.arenas[mib[2]].astats.resident, size_t)
 CTL_RO_CGEN(config_stats, stats_arenas_i_small_allocated,
     ctl_stats.arenas[mib[2]].allocated_small, size_t)

View File

@ -83,7 +83,8 @@ extent_alloc(tsdn_t *tsdn, arena_t *arena)
extent = ql_last(&arena->extent_cache, ql_link); extent = ql_last(&arena->extent_cache, ql_link);
if (extent == NULL) { if (extent == NULL) {
malloc_mutex_unlock(tsdn, &arena->extent_cache_mtx); malloc_mutex_unlock(tsdn, &arena->extent_cache_mtx);
return (base_alloc(tsdn, sizeof(extent_t))); return (base_alloc(tsdn, arena->base, sizeof(extent_t),
QUANTUM));
} }
ql_tail_remove(&arena->extent_cache, extent_t, ql_link); ql_tail_remove(&arena->extent_cache, extent_t, ql_link);
malloc_mutex_unlock(tsdn, &arena->extent_cache_mtx); malloc_mutex_unlock(tsdn, &arena->extent_cache_mtx);
@ -104,22 +105,14 @@ extent_hooks_t *
extent_hooks_get(arena_t *arena) extent_hooks_get(arena_t *arena)
{ {
return ((extent_hooks_t *)atomic_read_p(&arena->extent_hooks_pun)); return (base_extent_hooks_get(arena->base));
} }
extent_hooks_t * extent_hooks_t *
extent_hooks_set(arena_t *arena, extent_hooks_t *extent_hooks) extent_hooks_set(arena_t *arena, extent_hooks_t *extent_hooks)
{ {
extent_hooks_t *old_extent_hooks = extent_hooks_get(arena);
union {
extent_hooks_t **h;
void **v;
} u;
u.h = &arena->extent_hooks; return (base_extent_hooks_set(arena->base, extent_hooks));
atomic_write_p(u.v, extent_hooks);
return (old_extent_hooks);
} }
static void static void
@ -873,7 +866,7 @@ extent_alloc_wrapper_hard(tsdn_t *tsdn, arena_t *arena,
alignment, zero, commit); alignment, zero, commit);
} else { } else {
addr = (*r_extent_hooks)->alloc(*r_extent_hooks, new_addr, size, addr = (*r_extent_hooks)->alloc(*r_extent_hooks, new_addr, size,
alignment, zero, commit, arena->ind); alignment, zero, commit, arena_ind_get(arena));
} }
if (addr == NULL) { if (addr == NULL) {
 	extent_dalloc(tsdn, arena, extent);
@@ -1071,7 +1064,7 @@ extent_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena,
 		err = ((*r_extent_hooks)->dalloc == NULL ||
 		    (*r_extent_hooks)->dalloc(*r_extent_hooks,
 		    extent_base_get(extent), extent_size_get(extent),
-		    extent_committed_get(extent), arena->ind));
+		    extent_committed_get(extent), arena_ind_get(arena)));
 	}

 	if (!err) {
@@ -1088,12 +1081,12 @@ extent_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena,
 	else if ((*r_extent_hooks)->purge_lazy != NULL &&
 	    !(*r_extent_hooks)->purge_lazy(*r_extent_hooks,
 	    extent_base_get(extent), extent_size_get(extent), 0,
-	    extent_size_get(extent), arena->ind))
+	    extent_size_get(extent), arena_ind_get(arena)))
 		zeroed = false;
 	else if ((*r_extent_hooks)->purge_forced != NULL &&
 	    !(*r_extent_hooks)->purge_forced(*r_extent_hooks,
 	    extent_base_get(extent), extent_size_get(extent), 0,
-	    extent_size_get(extent), arena->ind))
+	    extent_size_get(extent), arena_ind_get(arena)))
 		zeroed = true;
 	else
 		zeroed = false;
@@ -1129,7 +1122,7 @@ extent_commit_wrapper(tsdn_t *tsdn, arena_t *arena,
 	extent_hooks_assure_initialized(arena, r_extent_hooks);
 	err = ((*r_extent_hooks)->commit == NULL ||
 	    (*r_extent_hooks)->commit(*r_extent_hooks, extent_base_get(extent),
-	    extent_size_get(extent), offset, length, arena->ind));
+	    extent_size_get(extent), offset, length, arena_ind_get(arena)));
 	extent_committed_set(extent, extent_committed_get(extent) || !err);
 	return (err);
 }
@@ -1157,7 +1150,7 @@ extent_decommit_wrapper(tsdn_t *tsdn, arena_t *arena,
 	err = ((*r_extent_hooks)->decommit == NULL ||
 	    (*r_extent_hooks)->decommit(*r_extent_hooks,
 	    extent_base_get(extent), extent_size_get(extent), offset, length,
-	    arena->ind));
+	    arena_ind_get(arena)));
 	extent_committed_set(extent, extent_committed_get(extent) && err);
 	return (err);
 }
@@ -1189,7 +1182,7 @@ extent_purge_lazy_wrapper(tsdn_t *tsdn, arena_t *arena,
 	return ((*r_extent_hooks)->purge_lazy == NULL ||
 	    (*r_extent_hooks)->purge_lazy(*r_extent_hooks,
 	    extent_base_get(extent), extent_size_get(extent), offset, length,
-	    arena->ind));
+	    arena_ind_get(arena)));
 }

 #ifdef PAGES_CAN_PURGE_FORCED
@@ -1219,7 +1212,7 @@ extent_purge_forced_wrapper(tsdn_t *tsdn, arena_t *arena,
 	return ((*r_extent_hooks)->purge_forced == NULL ||
 	    (*r_extent_hooks)->purge_forced(*r_extent_hooks,
 	    extent_base_get(extent), extent_size_get(extent), offset, length,
-	    arena->ind));
+	    arena_ind_get(arena)));
 }

 #ifdef JEMALLOC_MAPS_COALESCE
@@ -1280,7 +1273,7 @@ extent_split_wrapper(tsdn_t *tsdn, arena_t *arena,
 	if ((*r_extent_hooks)->split(*r_extent_hooks, extent_base_get(extent),
 	    size_a + size_b, size_a, size_b, extent_committed_get(extent),
-	    arena->ind))
+	    arena_ind_get(arena)))
 		goto label_error_d;

 	extent_size_set(extent, size_a);
@@ -1348,7 +1341,8 @@ extent_merge_wrapper(tsdn_t *tsdn, arena_t *arena,
 	} else {
 		err = (*r_extent_hooks)->merge(*r_extent_hooks,
 		    extent_base_get(a), extent_size_get(a), extent_base_get(b),
-		    extent_size_get(b), extent_committed_get(a), arena->ind);
+		    extent_size_get(b), extent_committed_get(a),
+		    arena_ind_get(arena));
 	}

 	if (err)
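The recurring change in the src/extent.c hunks above replaces direct arena->ind field reads with an arena_ind_get() accessor. The accessor's definition is outside this excerpt; a minimal sketch of the likely shape, assuming the arena's index now travels with its per-arena base allocator, is:

    /* Hypothetical sketch; the real definition lives in the arena headers. */
    static inline unsigned
    arena_ind_get(const arena_t *arena)
    {
    	return (base_ind_get(arena->base));
    }

Funneling every lookup through one accessor means the index can migrate into base_t without revisiting each extent-hook call site again.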

src/jemalloc.c

@@ -304,21 +304,21 @@ malloc_init(void)
  */
 static void *
-a0ialloc(size_t size, bool zero, bool is_metadata)
+a0ialloc(size_t size, bool zero, bool is_internal)
 {

 	if (unlikely(malloc_init_a0()))
 		return (NULL);

 	return (iallocztm(TSDN_NULL, size, size2index(size), zero, NULL,
-	    is_metadata, arena_get(TSDN_NULL, 0, true), true));
+	    is_internal, arena_get(TSDN_NULL, 0, true), true));
 }

 static void
-a0idalloc(extent_t *extent, void *ptr, bool is_metadata)
+a0idalloc(extent_t *extent, void *ptr, bool is_internal)
 {

-	idalloctm(TSDN_NULL, extent, ptr, false, is_metadata, true);
+	idalloctm(TSDN_NULL, extent, ptr, false, is_internal, true);
 }

 void *
@@ -405,7 +405,7 @@ narenas_total_get(void)

 /* Create a new arena and insert it into the arenas array at index ind. */
 static arena_t *
-arena_init_locked(tsdn_t *tsdn, unsigned ind)
+arena_init_locked(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks)
 {
 	arena_t *arena;

@@ -426,18 +426,18 @@ arena_init_locked(tsdn_t *tsdn, unsigned ind)
 	}

 	/* Actually initialize the arena. */
-	arena = arena_new(tsdn, ind);
+	arena = arena_new(tsdn, ind, extent_hooks);
 	arena_set(ind, arena);
 	return (arena);
 }

 arena_t *
-arena_init(tsdn_t *tsdn, unsigned ind)
+arena_init(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks)
 {
 	arena_t *arena;

 	malloc_mutex_lock(tsdn, &arenas_lock);
-	arena = arena_init_locked(tsdn, ind);
+	arena = arena_init_locked(tsdn, ind, extent_hooks);
 	malloc_mutex_unlock(tsdn, &arenas_lock);
 	return (arena);
 }
@@ -629,7 +629,8 @@ arena_choose_hard(tsd_t *tsd, bool internal)
 				/* Initialize a new arena. */
 				choose[j] = first_null;
 				arena = arena_init_locked(tsd_tsdn(tsd),
-				    choose[j]);
+				    choose[j],
+				    (extent_hooks_t *)&extent_hooks_default);
 				if (arena == NULL) {
 					malloc_mutex_unlock(tsd_tsdn(tsd),
 					    &arenas_lock);
@@ -657,7 +658,7 @@ iarena_cleanup(tsd_t *tsd)

 	iarena = tsd_iarena_get(tsd);
 	if (iarena != NULL)
-		arena_unbind(tsd, iarena->ind, true);
+		arena_unbind(tsd, arena_ind_get(iarena), true);
 }

 void
@@ -667,7 +668,7 @@ arena_cleanup(tsd_t *tsd)

 	arena = tsd_arena_get(tsd);
 	if (arena != NULL)
-		arena_unbind(tsd, arena->ind, false);
+		arena_unbind(tsd, arena_ind_get(arena), false);
 }

 void
@@ -1211,7 +1212,7 @@ malloc_init_hard_a0_locked()
 		}
 	}
 	pages_boot();
-	if (base_boot())
+	if (base_boot(TSDN_NULL))
 		return (true);
 	if (extent_boot())
 		return (true);
@@ -1236,7 +1237,8 @@ malloc_init_hard_a0_locked()
 	 * Initialize one arena here. The rest are lazily created in
 	 * arena_choose_hard().
 	 */
-	if (arena_init(TSDN_NULL, 0) == NULL)
+	if (arena_init(TSDN_NULL, 0, (extent_hooks_t *)&extent_hooks_default) ==
+	    NULL)
 		return (true);

 	malloc_init_state = malloc_init_a0_initialized;
@@ -1309,8 +1311,8 @@ malloc_init_hard_finish(tsdn_t *tsdn)
 	narenas_total_set(narenas_auto);

 	/* Allocate and initialize arenas. */
-	arenas = (arena_t **)base_alloc(tsdn, sizeof(arena_t *) *
-	    (MALLOCX_ARENA_MAX+1));
+	arenas = (arena_t **)base_alloc(tsdn, a0->base, sizeof(arena_t *) *
+	    (MALLOCX_ARENA_MAX+1), CACHELINE);
 	if (arenas == NULL)
 		return (true);
 	/* Copy the pointer to the one arena that was already initialized. */
@@ -2690,7 +2692,6 @@ _malloc_prefork(void)
 			}
 		}
 	}
-	base_prefork(tsd_tsdn(tsd));
 	for (i = 0; i < narenas; i++) {
 		if ((arena = arena_get(tsd_tsdn(tsd), i, false)) != NULL)
 			arena_prefork3(tsd_tsdn(tsd), arena);
@@ -2719,7 +2720,6 @@ _malloc_postfork(void)
 	witness_postfork_parent(tsd);
 	/* Release all mutexes, now that fork() has completed. */
-	base_postfork_parent(tsd_tsdn(tsd));
 	for (i = 0, narenas = narenas_total_get(); i < narenas; i++) {
 		arena_t *arena;
@@ -2743,7 +2743,6 @@ jemalloc_postfork_child(void)
 	witness_postfork_child(tsd);
 	/* Release all mutexes, now that fork() has completed. */
-	base_postfork_child(tsd_tsdn(tsd));
 	for (i = 0, narenas = narenas_total_get(); i < narenas; i++) {
 		arena_t *arena;
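Since arena_init() now threads an extent_hooks_t * down to arena_new(), the hooks chosen at creation time can service every extent the arena ever requests, including its base allocations. From application code the entry point is the extended arenas.extend mallctl; a usage sketch (my_hooks is an application-defined hook table; this mirrors how test_extent_auto_hook uses the interface later in this diff):

    extent_hooks_t *new_hooks = &my_hooks;
    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);

    /* Create an arena whose extents come from my_hooks from the start. */
    if (mallctl("arenas.extend", (void *)&arena_ind, &sz,
        (void *)&new_hooks, sizeof(extent_hooks_t *)) != 0)
    	abort();

Passing NULL/0 for the new-value arguments preserves the old read-only usage and yields an arena with the default hooks.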

src/prof.c

@@ -2254,7 +2254,8 @@ prof_boot2(tsd_t *tsd)
 		}

 		gctx_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd),
-		    PROF_NCTX_LOCKS * sizeof(malloc_mutex_t));
+		    b0get(), PROF_NCTX_LOCKS * sizeof(malloc_mutex_t),
+		    CACHELINE);
 		if (gctx_locks == NULL)
 			return (true);
 		for (i = 0; i < PROF_NCTX_LOCKS; i++) {
@@ -2264,7 +2265,8 @@ prof_boot2(tsd_t *tsd)
 		}

 		tdata_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd),
-		    PROF_NTDATA_LOCKS * sizeof(malloc_mutex_t));
+		    b0get(), PROF_NTDATA_LOCKS * sizeof(malloc_mutex_t),
+		    CACHELINE);
 		if (tdata_locks == NULL)
 			return (true);
 		for (i = 0; i < PROF_NTDATA_LOCKS; i++) {

src/rtree.c

@@ -72,7 +72,8 @@ static rtree_elm_t *
 rtree_node_alloc(tsdn_t *tsdn, rtree_t *rtree, size_t nelms)
 {

-	return ((rtree_elm_t *)base_alloc(tsdn, nelms * sizeof(rtree_elm_t)));
+	return ((rtree_elm_t *)base_alloc(tsdn, b0get(), nelms *
+	    sizeof(rtree_elm_t), CACHELINE));
 }
 #ifdef JEMALLOC_JET
 #undef rtree_node_alloc

src/stats.c

@@ -254,7 +254,8 @@ stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque,
 	unsigned nthreads;
 	const char *dss;
 	ssize_t decay_time;
-	size_t page, pactive, pdirty, mapped, retained, metadata;
+	size_t page, pactive, pdirty, mapped, retained;
+	size_t base, internal, resident;
 	uint64_t npurge, nmadvise, purged;
 	size_t small_allocated;
 	uint64_t small_nmalloc, small_ndalloc, small_nrequests;
@@ -404,14 +405,32 @@ stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque,
 		    "retained: %12zu\n", retained);
 	}

-	CTL_M2_GET("stats.arenas.0.metadata", i, &metadata, size_t);
+	CTL_M2_GET("stats.arenas.0.base", i, &base, size_t);
 	if (json) {
 		malloc_cprintf(write_cb, cbopaque,
-		    "\t\t\t\t\"metadata\": %zu%s\n", metadata, (bins || large) ?
-		    "," : "");
+		    "\t\t\t\t\"base\": %zu,\n", base);
+	} else {
+		malloc_cprintf(write_cb, cbopaque,
+		    "base: %12zu\n", base);
+	}
+
+	CTL_M2_GET("stats.arenas.0.internal", i, &internal, size_t);
+	if (json) {
+		malloc_cprintf(write_cb, cbopaque,
+		    "\t\t\t\t\"internal\": %zu,\n", internal);
+	} else {
+		malloc_cprintf(write_cb, cbopaque,
+		    "internal: %12zu\n", internal);
+	}
+
+	CTL_M2_GET("stats.arenas.0.resident", i, &resident, size_t);
+	if (json) {
+		malloc_cprintf(write_cb, cbopaque,
+		    "\t\t\t\t\"resident\": %zu%s\n", resident, (bins || large) ?
+		    "," : "");
 	} else {
 		malloc_cprintf(write_cb, cbopaque,
-		    "metadata: %12zu\n", metadata);
+		    "resident: %12zu\n", resident);
 	}

 	if (bins)
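The old metadata statistic is now reported as separate base, internal, and resident figures, which are also readable directly through the mallctl namespace; for example, for arena 0:

    uint64_t epoch = 1;
    size_t base, internal, resident, sz = sizeof(size_t);

    /* Refresh cached statistics, then sample the new per-arena counters. */
    mallctl("epoch", NULL, NULL, (void *)&epoch, sizeof(epoch));
    mallctl("stats.arenas.0.base", (void *)&base, &sz, NULL, 0);
    mallctl("stats.arenas.0.internal", (void *)&internal, &sz, NULL, 0);
    mallctl("stats.arenas.0.resident", (void *)&resident, &sz, NULL, 0);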

src/tcache.c

@@ -440,8 +440,8 @@ tcaches_create(tsd_t *tsd, unsigned *r_ind)
 	tcaches_t *elm;

 	if (tcaches == NULL) {
-		tcaches = base_alloc(tsd_tsdn(tsd), sizeof(tcache_t *) *
-		    (MALLOCX_TCACHE_MAX+1));
+		tcaches = base_alloc(tsd_tsdn(tsd), b0get(), sizeof(tcache_t *)
+		    * (MALLOCX_TCACHE_MAX+1), CACHELINE);
 		if (tcaches == NULL)
 			return (true);
 	}
@@ -510,8 +510,8 @@ tcache_boot(tsdn_t *tsdn)
 	nhbins = size2index(tcache_maxclass) + 1;

 	/* Initialize tcache_bin_info. */
-	tcache_bin_info = (tcache_bin_info_t *)base_alloc(tsdn, nhbins *
-	    sizeof(tcache_bin_info_t));
+	tcache_bin_info = (tcache_bin_info_t *)base_alloc(tsdn, b0get(), nhbins
+	    * sizeof(tcache_bin_info_t), CACHELINE);
 	if (tcache_bin_info == NULL)
 		return (true);
 	stack_nelms = 0;
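The churn in src/prof.c, src/rtree.c, and src/tcache.c all follows from one signature change: base_alloc() now takes an explicit base_t * and an alignment, roughly

    void *base_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment);

where b0get() returns the global base used for process-wide metadata. A representative call (foo_t and n are placeholders):

    foo_t *tab = (foo_t *)base_alloc(tsdn, b0get(), n * sizeof(foo_t), CACHELINE);

Callers tied to a specific arena pass that arena's base instead (as malloc_init_hard_finish() does with a0->base above), which is what makes the new stats.arenas.&lt;i&gt;.base accounting meaningful per arena.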

test/integration/extent.c

@@ -71,7 +71,7 @@ extent_alloc(extent_hooks_t *extent_hooks, void *new_addr, size_t size,
 	assert_ptr_eq(extent_hooks->alloc, extent_alloc, "Wrong hook function");
 	did_alloc = true;
 	return (old_hooks->alloc(old_hooks, new_addr, size, alignment, zero,
-	    commit, arena_ind));
+	    commit, 0));
 }

 static bool
@@ -89,7 +89,7 @@ extent_dalloc(extent_hooks_t *extent_hooks, void *addr, size_t size,
 	did_dalloc = true;
 	if (!do_dalloc)
 		return (true);
-	return (old_hooks->dalloc(old_hooks, addr, size, committed, arena_ind));
+	return (old_hooks->dalloc(old_hooks, addr, size, committed, 0));
 }

 static bool
@@ -105,8 +105,7 @@ extent_commit(extent_hooks_t *extent_hooks, void *addr, size_t size,
 	    "extent_hooks should be same as pointer used to set hooks");
 	assert_ptr_eq(extent_hooks->commit, extent_commit,
 	    "Wrong hook function");
-	err = old_hooks->commit(old_hooks, addr, size, offset, length,
-	    arena_ind);
+	err = old_hooks->commit(old_hooks, addr, size, offset, length, 0);
 	did_commit = !err;
 	return (err);
 }
@@ -126,8 +125,7 @@ extent_decommit(extent_hooks_t *extent_hooks, void *addr, size_t size,
 	    "Wrong hook function");
 	if (!do_decommit)
 		return (true);
-	err = old_hooks->decommit(old_hooks, addr, size, offset, length,
-	    arena_ind);
+	err = old_hooks->decommit(old_hooks, addr, size, offset, length, 0);
 	did_decommit = !err;
 	return (err);
 }
@@ -146,8 +144,7 @@ extent_purge_lazy(extent_hooks_t *extent_hooks, void *addr, size_t size,
 	    "Wrong hook function");
 	did_purge_lazy = true;
 	return (old_hooks->purge_lazy == NULL ||
-	    old_hooks->purge_lazy(old_hooks, addr, size, offset, length,
-	    arena_ind));
+	    old_hooks->purge_lazy(old_hooks, addr, size, offset, length, 0));
 }

 static bool
@@ -164,8 +161,7 @@ extent_purge_forced(extent_hooks_t *extent_hooks, void *addr, size_t size,
 	    "Wrong hook function");
 	did_purge_forced = true;
 	return (old_hooks->purge_forced == NULL ||
-	    old_hooks->purge_forced(old_hooks, addr, size, offset, length,
-	    arena_ind));
+	    old_hooks->purge_forced(old_hooks, addr, size, offset, length, 0));
 }

 static bool
@@ -183,7 +179,7 @@ extent_split(extent_hooks_t *extent_hooks, void *addr, size_t size,
 	assert_ptr_eq(extent_hooks->split, extent_split, "Wrong hook function");
 	tried_split = true;
 	err = (old_hooks->split == NULL || old_hooks->split(old_hooks, addr,
-	    size, size_a, size_b, committed, arena_ind));
+	    size, size_a, size_b, committed, 0));
 	did_split = !err;
 	return (err);
 }
@@ -202,51 +198,23 @@ extent_merge(extent_hooks_t *extent_hooks, void *addr_a, size_t size_a,
 	    "extent_hooks should be same as pointer used to set hooks");
 	assert_ptr_eq(extent_hooks->merge, extent_merge, "Wrong hook function");
 	err = (old_hooks->merge == NULL || old_hooks->merge(old_hooks, addr_a,
-	    size_a, addr_b, size_b, committed, arena_ind));
+	    size_a, addr_b, size_b, committed, 0));
 	did_merge = !err;
 	return (err);
 }

-TEST_BEGIN(test_extent)
+static void
+test_extent_body(unsigned arena_ind)
 {
 	void *p;
-	size_t old_size, new_size, large0, large1, large2, sz;
-	unsigned arena_ind;
+	size_t large0, large1, large2, sz;
+	size_t purge_mib[3];
+	size_t purge_miblen;
 	int flags;
-	size_t hooks_mib[3], purge_mib[3];
-	size_t hooks_miblen, purge_miblen;
 	bool xallocx_success_a, xallocx_success_b, xallocx_success_c;

-	sz = sizeof(unsigned);
-	assert_d_eq(mallctl("arenas.extend", (void *)&arena_ind, &sz, NULL, 0),
-	    0, "Unexpected mallctl() failure");
 	flags = MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE;

-	/* Install custom extent hooks. */
-	hooks_miblen = sizeof(hooks_mib)/sizeof(size_t);
-	assert_d_eq(mallctlnametomib("arena.0.extent_hooks", hooks_mib,
-	    &hooks_miblen), 0, "Unexpected mallctlnametomib() failure");
-	hooks_mib[1] = (size_t)arena_ind;
-	old_size = sizeof(extent_hooks_t *);
-	new_size = sizeof(extent_hooks_t *);
-	assert_d_eq(mallctlbymib(hooks_mib, hooks_miblen, (void *)&old_hooks,
-	    &old_size, (void *)&new_hooks, new_size), 0,
-	    "Unexpected extent_hooks error");
-	orig_hooks = old_hooks;
-	assert_ptr_ne(old_hooks->alloc, extent_alloc, "Unexpected alloc error");
-	assert_ptr_ne(old_hooks->dalloc, extent_dalloc,
-	    "Unexpected dalloc error");
-	assert_ptr_ne(old_hooks->commit, extent_commit,
-	    "Unexpected commit error");
-	assert_ptr_ne(old_hooks->decommit, extent_decommit,
-	    "Unexpected decommit error");
-	assert_ptr_ne(old_hooks->purge_lazy, extent_purge_lazy,
-	    "Unexpected purge_lazy error");
-	assert_ptr_ne(old_hooks->purge_forced, extent_purge_forced,
-	    "Unexpected purge_forced error");
-	assert_ptr_ne(old_hooks->split, extent_split, "Unexpected split error");
-	assert_ptr_ne(old_hooks->merge, extent_merge, "Unexpected merge error");

 	/* Get large size classes. */
 	sz = sizeof(size_t);
 	assert_d_eq(mallctl("arenas.lextent.0.size", (void *)&large0, &sz, NULL,
@@ -314,6 +282,45 @@ TEST_BEGIN(test_extent)
 	p = mallocx(42, flags);
 	assert_ptr_not_null(p, "Unexpected mallocx() error");
 	dallocx(p, flags);
+}
+
+TEST_BEGIN(test_extent_manual_hook)
+{
+	unsigned arena_ind;
+	size_t old_size, new_size, sz;
+	size_t hooks_mib[3];
+	size_t hooks_miblen;
+
+	sz = sizeof(unsigned);
+	assert_d_eq(mallctl("arenas.extend", (void *)&arena_ind, &sz, NULL, 0),
+	    0, "Unexpected mallctl() failure");
+
+	/* Install custom extent hooks. */
+	hooks_miblen = sizeof(hooks_mib)/sizeof(size_t);
+	assert_d_eq(mallctlnametomib("arena.0.extent_hooks", hooks_mib,
+	    &hooks_miblen), 0, "Unexpected mallctlnametomib() failure");
+	hooks_mib[1] = (size_t)arena_ind;
+	old_size = sizeof(extent_hooks_t *);
+	new_size = sizeof(extent_hooks_t *);
+	assert_d_eq(mallctlbymib(hooks_mib, hooks_miblen, (void *)&old_hooks,
+	    &old_size, (void *)&new_hooks, new_size), 0,
+	    "Unexpected extent_hooks error");
+	orig_hooks = old_hooks;
+	assert_ptr_ne(old_hooks->alloc, extent_alloc, "Unexpected alloc error");
+	assert_ptr_ne(old_hooks->dalloc, extent_dalloc,
+	    "Unexpected dalloc error");
+	assert_ptr_ne(old_hooks->commit, extent_commit,
+	    "Unexpected commit error");
+	assert_ptr_ne(old_hooks->decommit, extent_decommit,
+	    "Unexpected decommit error");
+	assert_ptr_ne(old_hooks->purge_lazy, extent_purge_lazy,
+	    "Unexpected purge_lazy error");
+	assert_ptr_ne(old_hooks->purge_forced, extent_purge_forced,
+	    "Unexpected purge_forced error");
+	assert_ptr_ne(old_hooks->split, extent_split, "Unexpected split error");
+	assert_ptr_ne(old_hooks->merge, extent_merge, "Unexpected merge error");
+
+	test_extent_body(arena_ind);

 	/* Restore extent hooks. */
 	assert_d_eq(mallctlbymib(hooks_mib, hooks_miblen, NULL, NULL,
@@ -340,9 +347,25 @@ TEST_BEGIN(test_extent)
 }
 TEST_END

+TEST_BEGIN(test_extent_auto_hook)
+{
+	unsigned arena_ind;
+	size_t new_size, sz;
+
+	sz = sizeof(unsigned);
+	new_size = sizeof(extent_hooks_t *);
+	assert_d_eq(mallctl("arenas.extend", (void *)&arena_ind, &sz,
+	    (void *)&new_hooks, new_size), 0, "Unexpected mallctl() failure");
+
+	test_extent_body(arena_ind);
+}
+TEST_END
+
 int
 main(void)
 {

-	return (test(test_extent));
+	return (test(
+	    test_extent_manual_hook,
+	    test_extent_auto_hook));
 }

test/unit/base.c (new file)

@@ -0,0 +1,274 @@
#include "test/jemalloc_test.h"
static void *extent_alloc_hook(extent_hooks_t *extent_hooks, void *new_addr,
size_t size, size_t alignment, bool *zero, bool *commit,
unsigned arena_ind);
static bool extent_dalloc_hook(extent_hooks_t *extent_hooks, void *addr,
size_t size, bool committed, unsigned arena_ind);
static bool extent_decommit_hook(extent_hooks_t *extent_hooks, void *addr,
size_t size, size_t offset, size_t length, unsigned arena_ind);
static bool extent_purge_lazy_hook(extent_hooks_t *extent_hooks, void *addr,
size_t size, size_t offset, size_t length, unsigned arena_ind);
static bool extent_purge_forced_hook(extent_hooks_t *extent_hooks,
void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind);
static extent_hooks_t hooks_not_null = {
extent_alloc_hook,
extent_dalloc_hook,
NULL, /* commit */
extent_decommit_hook,
extent_purge_lazy_hook,
extent_purge_forced_hook,
NULL, /* split */
NULL /* merge */
};
static extent_hooks_t hooks_null = {
extent_alloc_hook,
NULL, /* dalloc */
NULL, /* commit */
NULL, /* decommit */
NULL, /* purge_lazy */
NULL, /* purge_forced */
NULL, /* split */
NULL /* merge */
};
static bool did_alloc;
static bool did_dalloc;
static bool did_decommit;
static bool did_purge_lazy;
static bool did_purge_forced;
#if 0
# define TRACE_HOOK(fmt, ...) malloc_printf(fmt, __VA_ARGS__)
#else
# define TRACE_HOOK(fmt, ...)
#endif
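/* Change the "#if 0" above to "#if 1" to log each hook invocation. */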
static void *
extent_alloc_hook(extent_hooks_t *extent_hooks, void *new_addr, size_t size,
size_t alignment, bool *zero, bool *commit, unsigned arena_ind)
{
TRACE_HOOK("%s(extent_hooks=%p, new_addr=%p, size=%zu, alignment=%zu, "
"*zero=%s, *commit=%s, arena_ind=%u)\n", __func__, extent_hooks,
new_addr, size, alignment, *zero ? "true" : "false", *commit ?
"true" : "false", arena_ind);
did_alloc = true;
return (extent_hooks_default.alloc(
(extent_hooks_t *)&extent_hooks_default, new_addr, size, alignment,
zero, commit, 0));
}
static bool
extent_dalloc_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,
bool committed, unsigned arena_ind)
{
TRACE_HOOK("%s(extent_hooks=%p, addr=%p, size=%zu, committed=%s, "
"arena_ind=%u)\n", __func__, extent_hooks, addr, size, committed ?
"true" : "false", arena_ind);
did_dalloc = true;
return (true); /* Cause cascade. */
}
static bool
extent_decommit_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,
size_t offset, size_t length, unsigned arena_ind)
{
TRACE_HOOK("%s(extent_hooks=%p, addr=%p, size=%zu, offset=%zu, "
"length=%zu, arena_ind=%u)\n", __func__, extent_hooks, addr, size,
offset, length, arena_ind);
did_decommit = true;
return (true); /* Cause cascade. */
}
static bool
extent_purge_lazy_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,
size_t offset, size_t length, unsigned arena_ind)
{
TRACE_HOOK("%s(extent_hooks=%p, addr=%p, size=%zu, offset=%zu, "
"length=%zu arena_ind=%u)\n", __func__, extent_hooks, addr, size,
offset, length, arena_ind);
did_purge_lazy = true;
return (true); /* Cause cascade. */
}
static bool
extent_purge_forced_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,
size_t offset, size_t length, unsigned arena_ind)
{
TRACE_HOOK("%s(extent_hooks=%p, addr=%p, size=%zu, offset=%zu, "
"length=%zu arena_ind=%u)\n", __func__, extent_hooks, addr, size,
offset, length, arena_ind);
did_purge_forced = true;
return (true); /* Cause cascade. */
}
TEST_BEGIN(test_base_hooks_default)
{
tsdn_t *tsdn;
base_t *base;
size_t allocated0, allocated1, resident, mapped;
tsdn = tsdn_fetch();
base = base_new(tsdn, 0, (extent_hooks_t *)&extent_hooks_default);
base_stats_get(tsdn, base, &allocated0, &resident, &mapped);
assert_zu_ge(allocated0, sizeof(base_t),
"Base header should count as allocated");
assert_ptr_not_null(base_alloc(tsdn, base, 42, 1),
"Unexpected base_alloc() failure");
base_stats_get(tsdn, base, &allocated1, &resident, &mapped);
assert_zu_ge(allocated1 - allocated0, 42,
"At least 42 bytes were allocated by base_alloc()");
base_delete(base);
}
TEST_END
TEST_BEGIN(test_base_hooks_null)
{
tsdn_t *tsdn;
base_t *base;
size_t allocated0, allocated1, resident, mapped;
tsdn = tsdn_fetch();
base = base_new(tsdn, 0, (extent_hooks_t *)&hooks_null);
assert_ptr_not_null(base, "Unexpected base_new() failure");
base_stats_get(tsdn, base, &allocated0, &resident, &mapped);
assert_zu_ge(allocated0, sizeof(base_t),
"Base header should count as allocated");
assert_ptr_not_null(base_alloc(tsdn, base, 42, 1),
"Unexpected base_alloc() failure");
base_stats_get(tsdn, base, &allocated1, &resident, &mapped);
assert_zu_ge(allocated1 - allocated0, 42,
"At least 42 bytes were allocated by base_alloc()");
base_delete(base);
}
TEST_END
TEST_BEGIN(test_base_hooks_not_null)
{
tsdn_t *tsdn;
base_t *base;
void *p, *q, *r, *r_exp;
tsdn = tsdn_fetch();
did_alloc = false;
base = base_new(tsdn, 0, (extent_hooks_t *)&hooks_not_null);
assert_ptr_not_null(base, "Unexpected base_new() failure");
assert_true(did_alloc, "Expected alloc hook call");
/*
* Check for tight packing at specified alignment under simple
* conditions.
*/
{
const size_t alignments[] = {
1,
QUANTUM,
QUANTUM << 1,
CACHELINE,
CACHELINE << 1,
};
unsigned i;
for (i = 0; i < sizeof(alignments) / sizeof(size_t); i++) {
size_t alignment = alignments[i];
size_t align_ceil = ALIGNMENT_CEILING(alignment,
QUANTUM);
p = base_alloc(tsdn, base, 1, alignment);
assert_ptr_not_null(p,
"Unexpected base_alloc() failure");
assert_ptr_eq(p,
(void *)(ALIGNMENT_CEILING((uintptr_t)p,
alignment)), "Expected quantum alignment");
q = base_alloc(tsdn, base, alignment, alignment);
assert_ptr_not_null(q,
"Unexpected base_alloc() failure");
assert_ptr_eq((void *)((uintptr_t)p + align_ceil), q,
"Minimal allocation should take up %zu bytes",
align_ceil);
r = base_alloc(tsdn, base, 1, alignment);
assert_ptr_not_null(r,
"Unexpected base_alloc() failure");
assert_ptr_eq((void *)((uintptr_t)q + align_ceil), r,
"Minimal allocation should take up %zu bytes",
align_ceil);
}
}
/*
* Allocate an object that cannot fit in the first block, then verify
* that the first block's remaining space is considered for subsequent
* allocation.
*/
assert_zu_ge(extent_size_get(&base->blocks->extent), QUANTUM,
"Remainder insufficient for test");
/* Use up all but one quantum of block. */
while (extent_size_get(&base->blocks->extent) > QUANTUM) {
p = base_alloc(tsdn, base, QUANTUM, QUANTUM);
assert_ptr_not_null(p, "Unexpected base_alloc() failure");
}
r_exp = extent_addr_get(&base->blocks->extent);
assert_zu_eq(base->extent_sn_next, 1, "One extant block expected");
q = base_alloc(tsdn, base, QUANTUM + 1, QUANTUM);
assert_ptr_not_null(q, "Unexpected base_alloc() failure");
assert_ptr_ne(q, r_exp, "Expected allocation from new block");
assert_zu_eq(base->extent_sn_next, 2, "Two extant blocks expected");
r = base_alloc(tsdn, base, QUANTUM, QUANTUM);
assert_ptr_not_null(r, "Unexpected base_alloc() failure");
assert_ptr_eq(r, r_exp, "Expected allocation from first block");
assert_zu_eq(base->extent_sn_next, 2, "Two extant blocks expected");
/*
* Check for proper alignment support when normal blocks are too small.
*/
{
const size_t alignments[] = {
HUGEPAGE,
HUGEPAGE << 1
};
unsigned i;
for (i = 0; i < sizeof(alignments) / sizeof(size_t); i++) {
size_t alignment = alignments[i];
p = base_alloc(tsdn, base, QUANTUM, alignment);
assert_ptr_not_null(p,
"Unexpected base_alloc() failure");
assert_ptr_eq(p,
(void *)(ALIGNMENT_CEILING((uintptr_t)p,
alignment)), "Expected %zu-byte alignment",
alignment);
}
}
did_dalloc = did_decommit = did_purge_lazy = did_purge_forced = false;
base_delete(base);
assert_true(did_dalloc, "Expected dalloc hook call");
assert_true(did_decommit, "Expected decommit hook call");
assert_true(did_purge_lazy, "Expected purge_lazy hook call");
assert_true(did_purge_forced, "Expected purge_forced hook call");
}
TEST_END
int
main(void)
{
return (test(
test_base_hooks_default,
test_base_hooks_null,
test_base_hooks_not_null));
}
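Taken together, these tests exercise the per-arena base API that this commit introduces. As inferred from the call sites above (not quoted from the headers), the surface is approximately:

    base_t	*base_new(tsdn_t *tsdn, unsigned ind, extent_hooks_t *extent_hooks);
    void	*base_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment);
    void	base_stats_get(tsdn_t *tsdn, base_t *base, size_t *allocated,
    	    size_t *resident, size_t *mapped);
    void	base_delete(base_t *base);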