Implement dynamic per arena control over dirty page purging.

Add mallctls:
- arenas.lg_dirty_mult is initialized via opt.lg_dirty_mult, and can be
  modified to change the initial lg_dirty_mult setting for newly created
  arenas.
- arena.<i>.lg_dirty_mult controls an individual arena's dirty page
  purging threshold, and synchronously triggers any purging that may be
  necessary to maintain the constraint.
- arena.<i>.chunk.purge allows the per arena dirty page purging function
  to be replaced.

This resolves #93.
Jason Evans, 2015-03-18 18:55:33 -07:00
commit 8d6a3e8321 (parent c9db461ffb)
13 changed files with 460 additions and 99 deletions
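As a usage sketch (not part of this change set): the startup default can still be set via MALLOC_CONF="lg_dirty_mult:..." (opt.lg_dirty_mult), while the new mallctls adjust the ratio at run time. The function below is hypothetical and ignores mallctl() error returns for brevity; it only assumes the stock mallctl() API and the controls introduced here.

    #include <stddef.h>
    #include <sys/types.h>
    #include <jemalloc/jemalloc.h>

    void
    adjust_dirty_purging(void)
    {
        ssize_t lg;
        size_t sz = sizeof(lg);

        /* Raise the default for arenas created later to a 32:1 (2^5:1) ratio. */
        lg = 5;
        mallctl("arenas.lg_dirty_mult", NULL, NULL, &lg, sizeof(lg));

        /*
         * Apply the same ratio to arena 0; if this increases the ratio, excess
         * dirty pages are purged synchronously as part of the mallctl() call.
         */
        mallctl("arena.0.lg_dirty_mult", NULL, NULL, &lg, sizeof(lg));

        /* Read the current per-arena setting back. */
        mallctl("arena.0.lg_dirty_mult", &lg, &sz, NULL, 0);

        /* A value of -1 disables dirty page purging for the arena. */
        lg = -1;
        mallctl("arena.0.lg_dirty_mult", NULL, NULL, &lg, sizeof(lg));
    }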


@@ -937,7 +937,11 @@ for (i = 0; i < nbins; i++) {
        provides the kernel with sufficient information to recycle dirty pages
        if physical memory becomes scarce and the pages remain unused.  The
        default minimum ratio is 8:1 (2^3:1); an option value of -1 will
-       disable dirty page purging.</para></listitem>
+       disable dirty page purging.  See <link
+       linkend="arenas.lg_dirty_mult"><mallctl>arenas.lg_dirty_mult</mallctl></link>
+       and <link
+       linkend="arena.i.lg_dirty_mult"><mallctl>arena.&lt;i&gt;.lg_dirty_mult</mallctl></link>
+       for related dynamic control options.</para></listitem>
      </varlistentry>

      <varlistentry id="opt.stats_print">
@@ -1151,7 +1155,7 @@ malloc_conf = "xmalloc:true";]]></programlisting>
        <term>
          <mallctl>opt.prof_active</mallctl>
          (<type>bool</type>)
-         <literal>rw</literal>
+         <literal>r-</literal>
          [<option>--enable-prof</option>]
        </term>
        <listitem><para>Profiling activated/deactivated.  This is a secondary
@@ -1489,6 +1493,20 @@ malloc_conf = "xmalloc:true";]]></programlisting>
        settings.</para></listitem>
      </varlistentry>

+     <varlistentry id="arena.i.lg_dirty_mult">
+       <term>
+         <mallctl>arena.&lt;i&gt;.lg_dirty_mult</mallctl>
+         (<type>ssize_t</type>)
+         <literal>rw</literal>
+       </term>
+       <listitem><para>Current per-arena minimum ratio (log base 2) of active
+       to dirty pages for arena &lt;i&gt;.  Each time this interface is set and
+       the ratio is increased, pages are synchronously purged as necessary to
+       impose the new ratio.  See <link
+       linkend="opt.lg_dirty_mult"><mallctl>opt.lg_dirty_mult</mallctl></link>
+       for additional information.</para></listitem>
+     </varlistentry>
+
      <varlistentry id="arena.i.chunk.alloc">
        <term>
          <mallctl>arena.&lt;i&gt;.chunk.alloc</mallctl>
@@ -1544,12 +1562,12 @@ malloc_conf = "xmalloc:true";]]></programlisting>
        allocation for arenas created via <link
        linkend="arenas.extend"><mallctl>arenas.extend</mallctl></link> such
        that all chunks originate from an application-supplied chunk allocator
-       (by setting custom chunk allocation/deallocation functions just after
-       arena creation), but the automatically created arenas may have already
-       created chunks prior to the application having an opportunity to take
-       over chunk allocation.
+       (by setting custom chunk allocation/deallocation/purge functions just
+       after arena creation), but the automatically created arenas may have
+       already created chunks prior to the application having an opportunity to
+       take over chunk allocation.
        <funcsynopsis><funcprototype>
-         <funcdef>typedef void <function>(chunk_dalloc_t)</function></funcdef>
+         <funcdef>typedef bool <function>(chunk_dalloc_t)</function></funcdef>
          <paramdef>void *<parameter>chunk</parameter></paramdef>
          <paramdef>size_t <parameter>size</parameter></paramdef>
          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
@@ -1557,7 +1575,47 @@ malloc_conf = "xmalloc:true";]]></programlisting>
        A chunk deallocation function conforms to the
        <type>chunk_dalloc_t</type> type and deallocates a
        <parameter>chunk</parameter> of given <parameter>size</parameter> on
-       behalf of arena <parameter>arena_ind</parameter>.</para></listitem>
+       behalf of arena <parameter>arena_ind</parameter>, returning false upon
+       success.</para></listitem>
+     </varlistentry>
+
+     <varlistentry id="arena.i.chunk.purge">
+       <term>
+         <mallctl>arena.&lt;i&gt;.chunk.purge</mallctl>
+         (<type>chunk_purge_t *</type>)
+         <literal>rw</literal>
+       </term>
+       <listitem><para>Get or set the chunk purge function for arena &lt;i&gt;.
+       A chunk purge function optionally discards physical pages associated
+       with pages in the chunk's virtual memory range but leaves the virtual
+       memory mapping intact, and indicates via its return value whether pages
+       in the virtual memory range will be zero-filled the next time they are
+       accessed.  If setting, the chunk purge function must be capable of
+       purging all extant chunks associated with arena &lt;i&gt;, usually by
+       passing unknown chunks to the purge function that was replaced.  In
+       practice, it is feasible to control allocation for arenas created via
+       <link linkend="arenas.extend"><mallctl>arenas.extend</mallctl></link>
+       such that all chunks originate from an application-supplied chunk
+       allocator (by setting custom chunk allocation/deallocation/purge
+       functions just after arena creation), but the automatically created
+       arenas may have already created chunks prior to the application having
+       an opportunity to take over chunk allocation.
+       <funcsynopsis><funcprototype>
+         <funcdef>typedef bool <function>(chunk_purge_t)</function></funcdef>
+         <paramdef>void *<parameter>chunk</parameter></paramdef>
+         <paramdef>size_t <parameter>offset</parameter></paramdef>
+         <paramdef>size_t <parameter>length</parameter></paramdef>
+         <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>
+       </funcprototype></funcsynopsis>
+       A chunk purge function conforms to the <type>chunk_purge_t</type> type
+       and purges pages within <parameter>chunk</parameter> at
+       <parameter>offset</parameter> bytes, extending for
+       <parameter>length</parameter> on behalf of arena
+       <parameter>arena_ind</parameter>, returning false if pages within the
+       purged virtual memory range will be zero-filled the next time they are
+       accessed.  Note that the memory range being purged may span multiple
+       contiguous chunks, e.g. when purging memory that backed a huge
+       allocation.</para></listitem>
      </varlistentry>

      <varlistentry id="arenas.narenas">
@@ -1581,6 +1639,20 @@ malloc_conf = "xmalloc:true";]]></programlisting>
        initialized.</para></listitem>
      </varlistentry>

+     <varlistentry id="arenas.lg_dirty_mult">
+       <term>
+         <mallctl>arenas.lg_dirty_mult</mallctl>
+         (<type>ssize_t</type>)
+         <literal>rw</literal>
+       </term>
+       <listitem><para>Current default per-arena minimum ratio (log base 2) of
+       active to dirty pages, used to initialize <link
+       linkend="arena.i.lg_dirty_mult"><mallctl>arena.&lt;i&gt;.lg_dirty_mult</mallctl></link>
+       during arena creation.  See <link
+       linkend="opt.lg_dirty_mult"><mallctl>opt.lg_dirty_mult</mallctl></link>
+       for additional information.</para></listitem>
+     </varlistentry>
+
      <varlistentry id="arenas.quantum">
        <term>
          <mallctl>arenas.quantum</mallctl>


@@ -16,10 +16,10 @@
/*
 * The minimum ratio of active:dirty pages per arena is computed as:
 *
- *   (nactive >> opt_lg_dirty_mult) >= ndirty
+ *   (nactive >> lg_dirty_mult) >= ndirty
 *
- * So, supposing that opt_lg_dirty_mult is 3, there can be no less than 8 times
- * as many active pages as dirty pages.
+ * So, supposing that lg_dirty_mult is 3, there can be no less than 8 times as
+ * many active pages as dirty pages.
 */
#define LG_DIRTY_MULT_DEFAULT   3
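For intuition, the invariant above translates into the purge threshold that arena_maybe_purge() computes later in this change. The helper below is illustrative only (not part of the patch); it assumes chunk_npages is the number of pages per chunk, which acts as a floor on the threshold.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h>

    /*
     * Purging is triggered once ndirty exceeds this threshold, e.g. with
     * lg_dirty_mult == 3 (the default) and nactive == 20000 pages, the
     * threshold is max(20000 >> 3, chunk_npages) == max(2500, chunk_npages).
     */
    static size_t
    dirty_page_threshold(size_t nactive, ssize_t lg_dirty_mult,
        size_t chunk_npages)
    {
        size_t threshold;

        if (lg_dirty_mult < 0)
            return (SIZE_MAX); /* -1 disables purging entirely. */
        threshold = nactive >> lg_dirty_mult;
        return (threshold < chunk_npages ? chunk_npages : threshold);
    }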
@@ -304,6 +304,9 @@ struct arena_s {
     */
    arena_chunk_t   *spare;

+   /* Minimum ratio (log base 2) of nactive:ndirty. */
+   ssize_t         lg_dirty_mult;
+
    /* Number of pages in active runs and huge regions. */
    size_t          nactive;
@@ -376,10 +379,11 @@ struct arena_s {
    malloc_mutex_t  node_cache_mtx;

    /*
-    * User-configurable chunk allocation and deallocation functions.
+    * User-configurable chunk allocation/deallocation/purge functions.
     */
    chunk_alloc_t   *chunk_alloc;
    chunk_dalloc_t  *chunk_dalloc;
+   chunk_purge_t   *chunk_purge;

    /* bins is used to store trees of free regions. */
    arena_bin_t     bins[NBINS];
@@ -416,6 +420,8 @@ void arena_chunk_ralloc_huge_shrink(arena_t *arena, void *chunk,
    size_t oldsize, size_t usize);
bool arena_chunk_ralloc_huge_expand(arena_t *arena, void *chunk,
    size_t oldsize, size_t usize, bool *zero);
+ssize_t arena_lg_dirty_mult_get(arena_t *arena);
+bool arena_lg_dirty_mult_set(arena_t *arena, ssize_t lg_dirty_mult);
void arena_maybe_purge(arena_t *arena);
void arena_purge_all(arena_t *arena);
void arena_tcache_fill_small(arena_t *arena, tcache_bin_t *tbin,
@@ -462,6 +468,8 @@ void *arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
    size_t size, size_t extra, size_t alignment, bool zero, tcache_t *tcache);
dss_prec_t arena_dss_prec_get(arena_t *arena);
bool arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec);
+ssize_t arena_lg_dirty_mult_default_get(void);
+bool arena_lg_dirty_mult_default_set(ssize_t lg_dirty_mult);
void arena_stats_merge(arena_t *arena, const char **dss, size_t *nactive,
    size_t *ndirty, arena_stats_t *astats, malloc_bin_stats_t *bstats,
    malloc_large_stats_t *lstats, malloc_huge_stats_t *hstats);


@@ -54,6 +54,12 @@ void chunk_dalloc_arena(arena_t *arena, void *chunk, size_t size,
bool chunk_dalloc_default(void *chunk, size_t size, unsigned arena_ind);
void chunk_dalloc_wrapper(arena_t *arena, chunk_dalloc_t *chunk_dalloc,
    void *chunk, size_t size);
+bool chunk_purge_arena(arena_t *arena, void *chunk, size_t offset,
+    size_t length);
+bool chunk_purge_default(void *chunk, size_t offset, size_t length,
+    unsigned arena_ind);
+bool chunk_purge_wrapper(arena_t *arena, chunk_purge_t *chunk_purge,
+    void *chunk, size_t offset, size_t length);
bool chunk_boot(void);
void chunk_prefork(void);
void chunk_postfork_parent(void);


@@ -30,6 +30,10 @@ arena_dalloc_small
arena_dss_prec_get
arena_dss_prec_set
arena_init
+arena_lg_dirty_mult_default_get
+arena_lg_dirty_mult_default_set
+arena_lg_dirty_mult_get
+arena_lg_dirty_mult_set
arena_malloc
arena_malloc_large
arena_malloc_small
@@ -151,6 +155,9 @@ chunk_npages
chunk_postfork_child
chunk_postfork_parent
chunk_prefork
+chunk_purge_arena
+chunk_purge_default
+chunk_purge_wrapper
chunk_record
chunk_register
chunks_rtree


@@ -1,2 +1,3 @@
typedef void *(chunk_alloc_t)(void *, size_t, size_t, bool *, unsigned);
typedef bool (chunk_dalloc_t)(void *, size_t, unsigned);
+typedef bool (chunk_purge_t)(void *, size_t, size_t, unsigned);


@@ -5,6 +5,7 @@
/* Data. */

ssize_t opt_lg_dirty_mult = LG_DIRTY_MULT_DEFAULT;
+static ssize_t lg_dirty_mult_default;

arena_bin_info_t arena_bin_info[NBINS];
size_t map_bias;
@@ -1032,15 +1033,49 @@ arena_run_alloc_small(arena_t *arena, size_t size, index_t binind)
    return (arena_run_alloc_small_helper(arena, size, binind));
}

+static bool
+arena_lg_dirty_mult_valid(ssize_t lg_dirty_mult)
+{
+
+   return (lg_dirty_mult >= -1 && lg_dirty_mult < (sizeof(size_t) << 3));
+}
+
+ssize_t
+arena_lg_dirty_mult_get(arena_t *arena)
+{
+   ssize_t lg_dirty_mult;
+
+   malloc_mutex_lock(&arena->lock);
+   lg_dirty_mult = arena->lg_dirty_mult;
+   malloc_mutex_unlock(&arena->lock);
+
+   return (lg_dirty_mult);
+}
+
+bool
+arena_lg_dirty_mult_set(arena_t *arena, ssize_t lg_dirty_mult)
+{
+
+   if (!arena_lg_dirty_mult_valid(lg_dirty_mult))
+       return (true);
+   malloc_mutex_lock(&arena->lock);
+   arena->lg_dirty_mult = lg_dirty_mult;
+   arena_maybe_purge(arena);
+   malloc_mutex_unlock(&arena->lock);
+
+   return (false);
+}
+
void
arena_maybe_purge(arena_t *arena)
{
    size_t threshold;

    /* Don't purge if the option is disabled. */
-   if (opt_lg_dirty_mult < 0)
+   if (arena->lg_dirty_mult < 0)
        return;
-   threshold = (arena->nactive >> opt_lg_dirty_mult);
+   threshold = (arena->nactive >> arena->lg_dirty_mult);
    threshold = threshold < chunk_npages ? chunk_npages : threshold;
    /*
     * Don't purge unless the number of purgeable pages exceeds the
@@ -1096,7 +1131,7 @@ arena_compute_npurge(arena_t *arena, bool all)
     * purge.
     */
    if (!all) {
-       size_t threshold = (arena->nactive >> opt_lg_dirty_mult);
+       size_t threshold = (arena->nactive >> arena->lg_dirty_mult);
        threshold = threshold < chunk_npages ? chunk_npages : threshold;

        npurge = arena->ndirty - threshold;
@@ -1192,6 +1227,7 @@ arena_purge_stashed(arena_t *arena,
    extent_node_t *purge_chunks_sentinel)
{
    size_t npurged, nmadvise;
+   chunk_purge_t *chunk_purge;
    arena_runs_dirty_link_t *rdelm;
    extent_node_t *chunkselm;
@@ -1199,6 +1235,7 @@ arena_purge_stashed(arena_t *arena,
    nmadvise = 0;
    npurged = 0;
+   chunk_purge = arena->chunk_purge;
    malloc_mutex_unlock(&arena->lock);

    for (rdelm = qr_next(purge_runs_sentinel, rd_link),
        chunkselm = qr_next(purge_chunks_sentinel, cc_link);
@@ -1207,11 +1244,16 @@ arena_purge_stashed(arena_t *arena,

        if (rdelm == &chunkselm->rd) {
            size_t size = extent_node_size_get(chunkselm);
+           void *addr, *chunk;
+           size_t offset;
            bool unzeroed;

            npages = size >> LG_PAGE;
-           unzeroed = pages_purge(extent_node_addr_get(chunkselm),
-               size);
+           addr = extent_node_addr_get(chunkselm);
+           chunk = CHUNK_ADDR2BASE(addr);
+           offset = CHUNK_ADDR2OFFSET(addr);
+           unzeroed = chunk_purge_wrapper(arena, chunk_purge,
+               chunk, offset, size);
            extent_node_zeroed_set(chunkselm, !unzeroed);
            chunkselm = qr_next(chunkselm, cc_link);
        } else {
@@ -1226,15 +1268,15 @@ arena_purge_stashed(arena_t *arena,
            npages = run_size >> LG_PAGE;
            assert(pageind + npages <= chunk_npages);

-           unzeroed = pages_purge((void *)((uintptr_t)chunk +
-               (pageind << LG_PAGE)), run_size);
+           unzeroed = chunk_purge_wrapper(arena, chunk_purge,
+               chunk, pageind << LG_PAGE, run_size);
            flag_unzeroed = unzeroed ? CHUNK_MAP_UNZEROED : 0;

            /*
             * Set the unzeroed flag for all pages, now that
-            * pages_purge() has returned whether the pages were
-            * zeroed as a side effect of purging.  This chunk map
-            * modification is safe even though the arena mutex
+            * chunk_purge_wrapper() has returned whether the pages
+            * were zeroed as a side effect of purging.  This chunk
+            * map modification is safe even though the arena mutex
             * isn't currently owned by this thread, because the run
             * is marked as allocated, thus protecting it from being
             * modified by any other thread.  As long as these
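The conversion above from a purged address to the (chunk, offset) pair that chunk_purge_wrapper() expects relies on chunks being naturally aligned to their power-of-two size; CHUNK_ADDR2BASE() and CHUNK_ADDR2OFFSET() are effectively masks against chunksize - 1. A standalone sketch of the same decomposition (illustrative only, with chunksize passed explicitly since it is an internal jemalloc variable):

    #include <stddef.h>
    #include <stdint.h>

    /* Split an interior address into its enclosing chunk base and byte offset. */
    static void
    addr_to_chunk(void *addr, size_t chunksize, void **chunk, size_t *offset)
    {
        uintptr_t a = (uintptr_t)addr;
        uintptr_t mask = (uintptr_t)chunksize - 1; /* chunksize is 2^k. */

        *chunk = (void *)(a & ~mask);
        *offset = (size_t)(a & mask);
    }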
@@ -1294,7 +1336,7 @@ arena_unstash_purged(arena_t *arena,
    }
}

-void
+static void
arena_purge(arena_t *arena, bool all)
{
    size_t npurge, npurgeable, npurged;
@@ -1309,7 +1351,7 @@ arena_purge(arena_t *arena, bool all)
        size_t ndirty = arena_dirty_count(arena);
        assert(ndirty == arena->ndirty);
    }
-   assert((arena->nactive >> opt_lg_dirty_mult) < arena->ndirty || all);
+   assert((arena->nactive >> arena->lg_dirty_mult) < arena->ndirty || all);

    if (config_stats)
        arena->stats.npurge++;
@@ -2596,6 +2638,23 @@ arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec)
    return (false);
}

+ssize_t
+arena_lg_dirty_mult_default_get(void)
+{
+
+   return ((ssize_t)atomic_read_z((size_t *)&lg_dirty_mult_default));
+}
+
+bool
+arena_lg_dirty_mult_default_set(ssize_t lg_dirty_mult)
+{
+
+   if (!arena_lg_dirty_mult_valid(lg_dirty_mult))
+       return (true);
+   atomic_write_z((size_t *)&lg_dirty_mult_default, (size_t)lg_dirty_mult);
+   return (false);
+}
+
void
arena_stats_merge(arena_t *arena, const char **dss, size_t *nactive,
    size_t *ndirty, arena_stats_t *astats, malloc_bin_stats_t *bstats,
@@ -2702,6 +2761,7 @@ arena_new(unsigned ind)
    arena->spare = NULL;

+   arena->lg_dirty_mult = arena_lg_dirty_mult_default_get();
    arena->nactive = 0;
    arena->ndirty = 0;
@@ -2727,6 +2787,7 @@ arena_new(unsigned ind)
    arena->chunk_alloc = chunk_alloc_default;
    arena->chunk_dalloc = chunk_dalloc_default;
+   arena->chunk_purge = chunk_purge_default;

    /* Initialize bins. */
    for (i = 0; i < NBINS; i++) {
@@ -2860,6 +2921,8 @@ arena_boot(void)
    size_t header_size;
    unsigned i;

+   arena_lg_dirty_mult_default_set(opt_lg_dirty_mult);
+
    /*
     * Compute the header size such that it is large enough to contain the
     * page map.  The page map is biased to omit entries for the header


@@ -391,8 +391,10 @@ chunk_record(arena_t *arena, extent_tree_t *chunks_szad,
         * pages have already been purged, so that this is only
         * a virtual memory leak.
         */
-       if (cache)
-           pages_purge(chunk, size);
+       if (cache) {
+           chunk_purge_wrapper(arena, arena->chunk_purge,
+               chunk, 0, size);
+       }
        goto label_return;
    }
    extent_node_init(node, arena, chunk, size, !unzeroed);
@@ -485,6 +487,37 @@ chunk_dalloc_wrapper(arena_t *arena, chunk_dalloc_t *chunk_dalloc, void *chunk,
    JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size);
}

+bool
+chunk_purge_arena(arena_t *arena, void *chunk, size_t offset, size_t length)
+{
+
+   assert(chunk != NULL);
+   assert(CHUNK_ADDR2BASE(chunk) == chunk);
+   assert((offset & PAGE_MASK) == 0);
+   assert(length != 0);
+   assert((length & PAGE_MASK) == 0);
+
+   return (pages_purge((void *)((uintptr_t)chunk + (uintptr_t)offset),
+       length));
+}
+
+bool
+chunk_purge_default(void *chunk, size_t offset, size_t length,
+    unsigned arena_ind)
+{
+
+   return (chunk_purge_arena(chunk_arena_get(arena_ind), chunk, offset,
+       length));
+}
+
+bool
+chunk_purge_wrapper(arena_t *arena, chunk_purge_t *chunk_purge, void *chunk,
+    size_t offset, size_t length)
+{
+
+   return (chunk_purge(chunk, offset, length, arena->ind));
+}
+
static rtree_node_elm_t *
chunks_rtree_node_alloc(size_t nelms)
{

src/ctl.c

@@ -116,8 +116,10 @@ CTL_PROTO(tcache_destroy)
CTL_PROTO(arena_i_purge)
static void arena_purge(unsigned arena_ind);
CTL_PROTO(arena_i_dss)
+CTL_PROTO(arena_i_lg_dirty_mult)
CTL_PROTO(arena_i_chunk_alloc)
CTL_PROTO(arena_i_chunk_dalloc)
+CTL_PROTO(arena_i_chunk_purge)
INDEX_PROTO(arena_i)
CTL_PROTO(arenas_bin_i_size)
CTL_PROTO(arenas_bin_i_nregs)
@@ -129,6 +131,7 @@ CTL_PROTO(arenas_hchunk_i_size)
INDEX_PROTO(arenas_hchunk_i)
CTL_PROTO(arenas_narenas)
CTL_PROTO(arenas_initialized)
+CTL_PROTO(arenas_lg_dirty_mult)
CTL_PROTO(arenas_quantum)
CTL_PROTO(arenas_page)
CTL_PROTO(arenas_tcache_max)
@@ -283,12 +286,14 @@ static const ctl_named_node_t tcache_node[] = {
static const ctl_named_node_t chunk_node[] = {
    {NAME("alloc"),         CTL(arena_i_chunk_alloc)},
-   {NAME("dalloc"),        CTL(arena_i_chunk_dalloc)}
+   {NAME("dalloc"),        CTL(arena_i_chunk_dalloc)},
+   {NAME("purge"),         CTL(arena_i_chunk_purge)}
};

static const ctl_named_node_t arena_i_node[] = {
    {NAME("purge"),         CTL(arena_i_purge)},
    {NAME("dss"),           CTL(arena_i_dss)},
+   {NAME("lg_dirty_mult"), CTL(arena_i_lg_dirty_mult)},
    {NAME("chunk"),         CHILD(named, chunk)},
};

static const ctl_named_node_t super_arena_i_node[] = {
@@ -337,6 +342,7 @@ static const ctl_indexed_node_t arenas_hchunk_node[] = {
static const ctl_named_node_t arenas_node[] = {
    {NAME("narenas"),       CTL(arenas_narenas)},
    {NAME("initialized"),   CTL(arenas_initialized)},
+   {NAME("lg_dirty_mult"), CTL(arenas_lg_dirty_mult)},
    {NAME("quantum"),       CTL(arenas_quantum)},
    {NAME("page"),          CTL(arenas_page)},
    {NAME("tcache_max"),    CTL(arenas_tcache_max)},
@@ -1617,57 +1623,70 @@ label_return:
}

-static int
-arena_i_chunk_alloc_ctl(const size_t *mib, size_t miblen, void *oldp,
-    size_t *oldlenp, void *newp, size_t newlen)
-{
-   int ret;
-   unsigned arena_ind = mib[1];
-   arena_t *arena;
-
-   malloc_mutex_lock(&ctl_mtx);
-   if (arena_ind < narenas_total_get() && (arena = arena_get(tsd_fetch(),
-       arena_ind, false, true)) != NULL) {
-       malloc_mutex_lock(&arena->lock);
-       READ(arena->chunk_alloc, chunk_alloc_t *);
-       WRITE(arena->chunk_alloc, chunk_alloc_t *);
-   } else {
-       ret = EFAULT;
-       goto label_outer_return;
-   }
-   ret = 0;
-label_return:
-   malloc_mutex_unlock(&arena->lock);
-label_outer_return:
-   malloc_mutex_unlock(&ctl_mtx);
-   return (ret);
-}
-
-static int
-arena_i_chunk_dalloc_ctl(const size_t *mib, size_t miblen, void *oldp,
-    size_t *oldlenp, void *newp, size_t newlen)
-{
-
-   int ret;
-   unsigned arena_ind = mib[1];
-   arena_t *arena;
-
-   malloc_mutex_lock(&ctl_mtx);
-   if (arena_ind < narenas_total_get() && (arena = arena_get(tsd_fetch(),
-       arena_ind, false, true)) != NULL) {
-       malloc_mutex_lock(&arena->lock);
-       READ(arena->chunk_dalloc, chunk_dalloc_t *);
-       WRITE(arena->chunk_dalloc, chunk_dalloc_t *);
-   } else {
-       ret = EFAULT;
-       goto label_outer_return;
-   }
-   ret = 0;
-label_return:
-   malloc_mutex_unlock(&arena->lock);
-label_outer_return:
-   malloc_mutex_unlock(&ctl_mtx);
-   return (ret);
-}
+static int
+arena_i_lg_dirty_mult_ctl(const size_t *mib, size_t miblen, void *oldp,
+    size_t *oldlenp, void *newp, size_t newlen)
+{
+   int ret;
+   unsigned arena_ind = mib[1];
+   arena_t *arena;
+
+   arena = arena_get(tsd_fetch(), arena_ind, false, (arena_ind == 0));
+   if (arena == NULL) {
+       ret = EFAULT;
+       goto label_return;
+   }
+
+   if (oldp != NULL && oldlenp != NULL) {
+       size_t oldval = arena_lg_dirty_mult_get(arena);
+       READ(oldval, ssize_t);
+   }
+   if (newp != NULL) {
+       if (newlen != sizeof(ssize_t)) {
+           ret = EINVAL;
+           goto label_return;
+       }
+       if (arena_lg_dirty_mult_set(arena, *(ssize_t *)newp)) {
+           ret = EFAULT;
+           goto label_return;
+       }
+   }
+
+   ret = 0;
+label_return:
+   return (ret);
+}
+
+#define CHUNK_FUNC(n)                                                  \
+static int                                                             \
+arena_i_chunk_##n##_ctl(const size_t *mib, size_t miblen, void *oldp,  \
+    size_t *oldlenp, void *newp, size_t newlen)                        \
+{                                                                      \
+                                                                       \
+   int ret;                                                            \
+   unsigned arena_ind = mib[1];                                        \
+   arena_t *arena;                                                     \
+                                                                       \
+   malloc_mutex_lock(&ctl_mtx);                                        \
+   if (arena_ind < narenas_total_get() && (arena =                     \
+       arena_get(tsd_fetch(), arena_ind, false, true)) != NULL) {      \
+       malloc_mutex_lock(&arena->lock);                                \
+       READ(arena->chunk_##n, chunk_##n##_t *);                        \
+       WRITE(arena->chunk_##n, chunk_##n##_t *);                       \
+   } else {                                                            \
+       ret = EFAULT;                                                   \
+       goto label_outer_return;                                        \
+   }                                                                   \
+   ret = 0;                                                            \
+label_return:                                                          \
+   malloc_mutex_unlock(&arena->lock);                                  \
+label_outer_return:                                                    \
+   malloc_mutex_unlock(&ctl_mtx);                                      \
+   return (ret);                                                       \
+}
+CHUNK_FUNC(alloc)
+CHUNK_FUNC(dalloc)
+CHUNK_FUNC(purge)
+#undef CHUNK_FUNC

static const ctl_named_node_t *
arena_i_index(const size_t *mib, size_t miblen, size_t i)
@@ -1736,6 +1755,32 @@ label_return:
    return (ret);
}

+static int
+arenas_lg_dirty_mult_ctl(const size_t *mib, size_t miblen, void *oldp,
+    size_t *oldlenp, void *newp, size_t newlen)
+{
+   int ret;
+
+   if (oldp != NULL && oldlenp != NULL) {
+       size_t oldval = arena_lg_dirty_mult_default_get();
+       READ(oldval, ssize_t);
+   }
+   if (newp != NULL) {
+       if (newlen != sizeof(ssize_t)) {
+           ret = EINVAL;
+           goto label_return;
+       }
+       if (arena_lg_dirty_mult_default_set(*(ssize_t *)newp)) {
+           ret = EFAULT;
+           goto label_return;
+       }
+   }
+
+   ret = 0;
+label_return:
+   return (ret);
+}
+
CTL_RO_NL_GEN(arenas_quantum, QUANTUM, size_t)
CTL_RO_NL_GEN(arenas_page, PAGE, size_t)
CTL_RO_NL_CGEN(config_tcache, arenas_tcache_max, tcache_maxclass, size_t)


@@ -124,9 +124,10 @@ huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize,
    size_t size, size_t extra, bool zero)
{
    size_t usize_next;
-   bool zeroed;
    extent_node_t *node;
    arena_t *arena;
+   chunk_purge_t *chunk_purge;
+   bool zeroed;

    /* Increase usize to incorporate extra. */
    while (usize < s2u(size+extra) && (usize_next = s2u(usize+1)) < oldsize)
@@ -135,11 +136,18 @@ huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize,
    if (oldsize == usize)
        return;

+   node = huge_node_get(ptr);
+   arena = extent_node_arena_get(node);
+
+   malloc_mutex_lock(&arena->lock);
+   chunk_purge = arena->chunk_purge;
+   malloc_mutex_unlock(&arena->lock);
+
    /* Fill if necessary (shrinking). */
    if (oldsize > usize) {
        size_t sdiff = CHUNK_CEILING(usize) - usize;
-       zeroed = (sdiff != 0) ? !pages_purge((void *)((uintptr_t)ptr +
-           usize), sdiff) : true;
+       zeroed = (sdiff != 0) ? !chunk_purge_wrapper(arena, chunk_purge,
+           CHUNK_ADDR2BASE(ptr), CHUNK_ADDR2OFFSET(ptr), usize) : true;
        if (config_fill && unlikely(opt_junk_free)) {
            memset((void *)((uintptr_t)ptr + usize), 0x5a, oldsize -
                usize);
@@ -148,8 +156,6 @@ huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize,
    } else
        zeroed = true;

-   node = huge_node_get(ptr);
-   arena = extent_node_arena_get(node);
    malloc_mutex_lock(&arena->huge_mtx);
    /* Update the size of the huge allocation. */
    assert(extent_node_size_get(node) != usize);
@@ -177,22 +183,29 @@ huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize,
static void
huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize)
{
-   size_t sdiff;
-   bool zeroed;
    extent_node_t *node;
    arena_t *arena;
+   chunk_purge_t *chunk_purge;
+   size_t sdiff;
+   bool zeroed;
+
+   node = huge_node_get(ptr);
+   arena = extent_node_arena_get(node);
+
+   malloc_mutex_lock(&arena->lock);
+   chunk_purge = arena->chunk_purge;
+   malloc_mutex_unlock(&arena->lock);

    sdiff = CHUNK_CEILING(usize) - usize;
-   zeroed = (sdiff != 0) ? !pages_purge((void *)((uintptr_t)ptr + usize),
-       sdiff) : true;
+   zeroed = (sdiff != 0) ? !chunk_purge_wrapper(arena, chunk_purge,
+       CHUNK_ADDR2BASE((uintptr_t)ptr + usize),
+       CHUNK_ADDR2OFFSET((uintptr_t)ptr + usize), sdiff) : true;
    if (config_fill && unlikely(opt_junk_free)) {
        huge_dalloc_junk((void *)((uintptr_t)ptr + usize), oldsize -
            usize);
        zeroed = false;
    }

-   node = huge_node_get(ptr);
-   arena = extent_node_arena_get(node);
    malloc_mutex_lock(&arena->huge_mtx);
    /* Update the size of the huge allocation. */
    extent_node_size_set(node, usize);
@@ -291,8 +304,7 @@ huge_ralloc_no_move(void *ptr, size_t oldsize, size_t size, size_t extra,
    }

    /* Attempt to expand the allocation in-place. */
-   if (huge_ralloc_no_move_expand(ptr, oldsize, size + extra,
-       zero)) {
+   if (huge_ralloc_no_move_expand(ptr, oldsize, size + extra, zero)) {
        if (extra == 0)
            return (true);


@@ -264,6 +264,7 @@ stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque,
{
    unsigned nthreads;
    const char *dss;
+   ssize_t lg_dirty_mult;
    size_t page, pactive, pdirty, mapped;
    size_t metadata_mapped, metadata_allocated;
    uint64_t npurge, nmadvise, purged;
@@ -282,6 +283,15 @@ stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque,
    CTL_I_GET("stats.arenas.0.dss", &dss, const char *);
    malloc_cprintf(write_cb, cbopaque, "dss allocation precedence: %s\n",
        dss);
+   CTL_I_GET("stats.arenas.0.lg_dirty_mult", &lg_dirty_mult, ssize_t);
+   if (lg_dirty_mult >= 0) {
+       malloc_cprintf(write_cb, cbopaque,
+           "Min active:dirty page ratio: %u:1\n",
+           (1U << lg_dirty_mult));
+   } else {
+       malloc_cprintf(write_cb, cbopaque,
+           "Min active:dirty page ratio: N/A\n");
+   }
    CTL_I_GET("stats.arenas.0.pactive", &pactive, size_t);
    CTL_I_GET("stats.arenas.0.pdirty", &pdirty, size_t);
    CTL_I_GET("stats.arenas.0.npurge", &npurge, uint64_t);


@@ -2,13 +2,8 @@
chunk_alloc_t *old_alloc;
chunk_dalloc_t *old_dalloc;
+chunk_purge_t *old_purge;

-bool
-chunk_dalloc(void *chunk, size_t size, unsigned arena_ind)
-{
-
-   return (old_dalloc(chunk, size, arena_ind));
-}
+bool purged;

void *
chunk_alloc(void *new_addr, size_t size, size_t alignment, bool *zero,
@@ -18,36 +13,79 @@ chunk_alloc(void *new_addr, size_t size, size_t alignment, bool *zero,
    return (old_alloc(new_addr, size, alignment, zero, arena_ind));
}

+bool
+chunk_dalloc(void *chunk, size_t size, unsigned arena_ind)
+{
+
+   return (old_dalloc(chunk, size, arena_ind));
+}
+
+bool
+chunk_purge(void *chunk, size_t offset, size_t length, unsigned arena_ind)
+{
+
+   purged = true;
+   return (old_purge(chunk, offset, length, arena_ind));
+}
+
TEST_BEGIN(test_chunk)
{
    void *p;
    chunk_alloc_t *new_alloc;
    chunk_dalloc_t *new_dalloc;
-   size_t old_size, new_size;
+   chunk_purge_t *new_purge;
+   size_t old_size, new_size, huge0, huge1, huge2, sz;

    new_alloc = chunk_alloc;
    new_dalloc = chunk_dalloc;
+   new_purge = chunk_purge;
    old_size = sizeof(chunk_alloc_t *);
    new_size = sizeof(chunk_alloc_t *);

-   assert_d_eq(mallctl("arena.0.chunk.alloc", &old_alloc,
-       &old_size, &new_alloc, new_size), 0,
-       "Unexpected alloc error");
-   assert_ptr_ne(old_alloc, new_alloc,
-       "Unexpected alloc error");
+   assert_d_eq(mallctl("arena.0.chunk.alloc", &old_alloc, &old_size,
+       &new_alloc, new_size), 0, "Unexpected alloc error");
+   assert_ptr_ne(old_alloc, new_alloc, "Unexpected alloc error");
    assert_d_eq(mallctl("arena.0.chunk.dalloc", &old_dalloc, &old_size,
        &new_dalloc, new_size), 0, "Unexpected dalloc error");
    assert_ptr_ne(old_dalloc, new_dalloc, "Unexpected dalloc error");
+   assert_d_eq(mallctl("arena.0.chunk.purge", &old_purge, &old_size,
+       &new_purge, new_size), 0, "Unexpected purge error");
+   assert_ptr_ne(old_purge, new_purge, "Unexpected purge error");
+
+   sz = sizeof(size_t);
+   assert_d_eq(mallctl("arenas.hchunk.0.size", &huge0, &sz, NULL, 0), 0,
+       "Unexpected arenas.hchunk.0.size failure");
+   assert_d_eq(mallctl("arenas.hchunk.1.size", &huge1, &sz, NULL, 0), 0,
+       "Unexpected arenas.hchunk.1.size failure");
+   assert_d_eq(mallctl("arenas.hchunk.2.size", &huge2, &sz, NULL, 0), 0,
+       "Unexpected arenas.hchunk.2.size failure");
+   if (huge0 * 2 > huge2) {
+       /*
+        * There are at least four size classes per doubling, so
+        * xallocx() from size=huge2 to size=huge1 is guaranteed to
+        * leave trailing purgeable memory.
+        */
+       p = mallocx(huge2, 0);
+       assert_ptr_not_null(p, "Unexpected mallocx() error");
+       purged = false;
+       assert_zu_eq(xallocx(p, huge1, 0, 0), huge1,
+           "Unexpected xallocx() failure");
+       assert_true(purged, "Unexpected purge");
+       dallocx(p, 0);
+   }

    p = mallocx(42, 0);
-   assert_ptr_ne(p, NULL, "Unexpected alloc error");
+   assert_ptr_not_null(p, "Unexpected mallocx() error");
    free(p);

-   assert_d_eq(mallctl("arena.0.chunk.alloc", NULL,
-       NULL, &old_alloc, old_size), 0,
-       "Unexpected alloc error");
+   assert_d_eq(mallctl("arena.0.chunk.alloc", NULL, NULL, &old_alloc,
+       old_size), 0, "Unexpected alloc error");
    assert_d_eq(mallctl("arena.0.chunk.dalloc", NULL, NULL, &old_dalloc,
        old_size), 0, "Unexpected dalloc error");
+   assert_d_eq(mallctl("arena.0.chunk.purge", NULL, NULL, &old_purge,
+       old_size), 0, "Unexpected purge error");
}
TEST_END


@@ -348,6 +348,38 @@ TEST_BEGIN(test_thread_arena)
}
TEST_END

+TEST_BEGIN(test_arena_i_lg_dirty_mult)
+{
+   ssize_t lg_dirty_mult, orig_lg_dirty_mult, prev_lg_dirty_mult;
+   size_t sz = sizeof(ssize_t);
+
+   assert_d_eq(mallctl("arena.0.lg_dirty_mult", &orig_lg_dirty_mult, &sz,
+       NULL, 0), 0, "Unexpected mallctl() failure");
+
+   lg_dirty_mult = -2;
+   assert_d_eq(mallctl("arena.0.lg_dirty_mult", NULL, NULL,
+       &lg_dirty_mult, sizeof(ssize_t)), EFAULT,
+       "Unexpected mallctl() success");
+
+   lg_dirty_mult = (sizeof(size_t) << 3);
+   assert_d_eq(mallctl("arena.0.lg_dirty_mult", NULL, NULL,
+       &lg_dirty_mult, sizeof(ssize_t)), EFAULT,
+       "Unexpected mallctl() success");
+
+   for (prev_lg_dirty_mult = orig_lg_dirty_mult, lg_dirty_mult = -1;
+       lg_dirty_mult < (sizeof(ssize_t) << 3); prev_lg_dirty_mult =
+       lg_dirty_mult, lg_dirty_mult++) {
+       ssize_t old_lg_dirty_mult;
+
+       assert_d_eq(mallctl("arena.0.lg_dirty_mult", &old_lg_dirty_mult,
+           &sz, &lg_dirty_mult, sizeof(ssize_t)), 0,
+           "Unexpected mallctl() failure");
+       assert_zd_eq(old_lg_dirty_mult, prev_lg_dirty_mult,
+           "Unexpected old arena.0.lg_dirty_mult");
+   }
+}
+TEST_END
+
TEST_BEGIN(test_arena_i_purge)
{
    unsigned narenas;
@@ -427,6 +459,38 @@ TEST_BEGIN(test_arenas_initialized)
}
TEST_END

+TEST_BEGIN(test_arenas_lg_dirty_mult)
+{
+   ssize_t lg_dirty_mult, orig_lg_dirty_mult, prev_lg_dirty_mult;
+   size_t sz = sizeof(ssize_t);
+
+   assert_d_eq(mallctl("arenas.lg_dirty_mult", &orig_lg_dirty_mult, &sz,
+       NULL, 0), 0, "Unexpected mallctl() failure");
+
+   lg_dirty_mult = -2;
+   assert_d_eq(mallctl("arenas.lg_dirty_mult", NULL, NULL,
+       &lg_dirty_mult, sizeof(ssize_t)), EFAULT,
+       "Unexpected mallctl() success");
+
+   lg_dirty_mult = (sizeof(size_t) << 3);
+   assert_d_eq(mallctl("arenas.lg_dirty_mult", NULL, NULL,
+       &lg_dirty_mult, sizeof(ssize_t)), EFAULT,
+       "Unexpected mallctl() success");
+
+   for (prev_lg_dirty_mult = orig_lg_dirty_mult, lg_dirty_mult = -1;
+       lg_dirty_mult < (sizeof(ssize_t) << 3); prev_lg_dirty_mult =
+       lg_dirty_mult, lg_dirty_mult++) {
+       ssize_t old_lg_dirty_mult;
+
+       assert_d_eq(mallctl("arenas.lg_dirty_mult", &old_lg_dirty_mult,
+           &sz, &lg_dirty_mult, sizeof(ssize_t)), 0,
+           "Unexpected mallctl() failure");
+       assert_zd_eq(old_lg_dirty_mult, prev_lg_dirty_mult,
+           "Unexpected old arenas.lg_dirty_mult");
+   }
+}
+TEST_END
+
TEST_BEGIN(test_arenas_constants)
{
@@ -554,9 +618,11 @@ main(void)
        test_tcache_none,
        test_tcache,
        test_thread_arena,
+       test_arena_i_lg_dirty_mult,
        test_arena_i_purge,
        test_arena_i_dss,
        test_arenas_initialized,
+       test_arenas_lg_dirty_mult,
        test_arenas_constants,
        test_arenas_bin_constants,
        test_arenas_lrun_constants,


@@ -22,7 +22,7 @@ TEST_BEGIN(test_rtree_get_empty)
        rtree_t rtree;
        assert_false(rtree_new(&rtree, i, node_alloc, node_dalloc),
            "Unexpected rtree_new() failure");
-       assert_ptr_eq(rtree_get(&rtree, 0), NULL,
+       assert_ptr_null(rtree_get(&rtree, 0),
            "rtree_get() should return NULL for empty tree");
        rtree_delete(&rtree);
    }
@@ -75,8 +75,8 @@ TEST_BEGIN(test_rtree_bits)
                    "get key=%#"PRIxPTR, i, j, k, keys[j],
                    keys[k]);
            }
-           assert_ptr_eq(rtree_get(&rtree,
-               (((uintptr_t)1) << (sizeof(uintptr_t)*8-i))), NULL,
+           assert_ptr_null(rtree_get(&rtree,
+               (((uintptr_t)1) << (sizeof(uintptr_t)*8-i))),
                "Only leftmost rtree leaf should be set; "
                "i=%u, j=%u", i, j);
            rtree_set(&rtree, keys[j], NULL);
@@ -117,11 +117,11 @@ TEST_BEGIN(test_rtree_random)
        for (j = 0; j < NSET; j++) {
            rtree_set(&rtree, keys[j], NULL);
-           assert_ptr_eq(rtree_get(&rtree, keys[j]), NULL,
+           assert_ptr_null(rtree_get(&rtree, keys[j]),
                "rtree_get() should return previously set value");
        }
        for (j = 0; j < NSET; j++) {
-           assert_ptr_eq(rtree_get(&rtree, keys[j]), NULL,
+           assert_ptr_null(rtree_get(&rtree, keys[j]),
                "rtree_get() should return previously set value");
        }