Merge branch 'dev'

Jason Evans, 2016-05-12 14:51:07 -07:00
commit f70a254d44
78 changed files with 4955 additions and 2467 deletions

COPYING

@@ -1,10 +1,10 @@
 Unless otherwise specified, files in the jemalloc source distribution are
 subject to the following license:
 --------------------------------------------------------------------------------
-Copyright (C) 2002-2015 Jason Evans <jasone@canonware.com>.
+Copyright (C) 2002-2016 Jason Evans <jasone@canonware.com>.
 All rights reserved.
 Copyright (C) 2007-2012 Mozilla Foundation. All rights reserved.
-Copyright (C) 2009-2015 Facebook, Inc. All rights reserved.
+Copyright (C) 2009-2016 Facebook, Inc. All rights reserved.
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:

ChangeLog

@@ -4,6 +4,29 @@ brevity. Much more detail can be found in the git revision history:
     https://github.com/jemalloc/jemalloc

+* 4.2.0 (May 12, 2016)
+
+  New features:
+  - Add the arena.<i>.reset mallctl, which makes it possible to discard all of
+    an arena's allocations in a single operation. (@jasone)
+  - Add the stats.retained and stats.arenas.<i>.retained statistics. (@jasone)
+  - Add the --with-version configure option. (@jasone)
+  - Support --with-lg-page values larger than actual page size. (@jasone)
+
+  Optimizations:
+  - Use pairing heaps rather than red-black trees for various hot data
+    structures. (@djwatson, @jasone)
+  - Streamline fast paths of rtree operations. (@jasone)
+  - Optimize the fast paths of calloc() and [m,d,sd]allocx(). (@jasone)
+  - Decommit unused virtual memory if the OS does not overcommit. (@jasone)
+  - Specify MAP_NORESERVE on Linux if [heuristic] overcommit is active, in order
+    to avoid unfortunate interactions during fork(2). (@jasone)
+
+  Bug fixes:
+  - Fix chunk accounting related to triggering gdump profiles. (@jasone)
+  - Link against librt for clock_gettime(2) if glibc < 2.17. (@jasone)
+  - Scale leak report summary according to sampling probability. (@jasone)
+
 * 4.1.1 (May 3, 2016)

   This bugfix release resolves a variety of mostly minor issues, though the
@@ -21,7 +44,7 @@ brevity. Much more detail can be found in the git revision history:
     enabled and active. (@jasone)
   - Fix various chunk leaks in OOM code paths. (@jasone)
   - Fix malloc_stats_print() to print opt.narenas correctly. (@jasone)
-  - Fix MSVC-specific build/test issues. (@rustyx, yuslepukhin)
+  - Fix MSVC-specific build/test issues. (@rustyx, @yuslepukhin)
   - Fix a variety of test failures that were due to test fragility rather than
     core bugs. (@jasone)
@@ -80,14 +103,14 @@ brevity. Much more detail can be found in the git revision history:
   Bug fixes:
   - Fix stats.cactive accounting regression. (@rustyx, @jasone)
   - Handle unaligned keys in hash(). This caused problems for some ARM systems.
-    (@jasone, Christopher Ferris)
+    (@jasone, @cferris1000)
   - Refactor arenas array. In addition to fixing a fork-related deadlock, this
     makes arena lookups faster and simpler. (@jasone)
   - Move retained memory allocation out of the default chunk allocation
     function, to a location that gets executed even if the application installs
     a custom chunk allocation function. This resolves a virtual memory leak.
     (@buchgr)
-  - Fix a potential tsd cleanup leak. (Christopher Ferris, @jasone)
+  - Fix a potential tsd cleanup leak. (@cferris1000, @jasone)
   - Fix run quantization. In practice this bug had no impact unless
     applications requested memory with alignment exceeding one page.
     (@jasone, @djwatson)
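Usage sketch (not part of the diff): the new arena.<i>.reset mallctl above is driven through jemalloc's mallctl interface, paired with arenas.extend. A minimal, hedged example assuming jemalloc 4.2's public <jemalloc/jemalloc.h> API, with error handling trimmed:

    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void)
    {
        unsigned arena_ind;
        size_t sz = sizeof(arena_ind);
        char cmd[64];

        /* Create a fresh arena; only arenas created this way may be reset. */
        if (mallctl("arenas.extend", &arena_ind, &sz, NULL, 0) != 0)
            return (1);

        /* Allocate from it, bypassing thread caches so no flush is needed. */
        void *p = mallocx(4096, MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE);

        /* Discard every extant allocation in the arena in one operation. */
        snprintf(cmd, sizeof(cmd), "arena.%u.reset", arena_ind);
        if (mallctl(cmd, NULL, NULL, NULL, 0) != 0)
            return (1);

        (void)p; /* p, like all allocations in the arena, is now invalid. */
        return (0);
    }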

INSTALL

@@ -35,6 +35,10 @@ any of the following arguments (not a definitive list) to 'configure':
     will cause files to be installed into /usr/local/include, /usr/local/lib,
     and /usr/local/man.

+--with-version=<major>.<minor>.<bugfix>-<nrev>-g<gid>
+    Use the specified version string rather than trying to generate one (if in
+    a git repository) or use the existing VERSION file (if present).
+
 --with-rpath=<colon-separated-rpath>
     Embed one or more library paths, so that libjemalloc can find the libraries
     it is linked to. This works only on ELF-based systems.

Makefile.in

@@ -28,7 +28,6 @@ CFLAGS := @CFLAGS@
 LDFLAGS := @LDFLAGS@
 EXTRA_LDFLAGS := @EXTRA_LDFLAGS@
 LIBS := @LIBS@
-TESTLIBS := @TESTLIBS@
 RPATH_EXTRA := @RPATH_EXTRA@
 SO := @so@
 IMPORTLIB := @importlib@
@@ -103,7 +102,8 @@ C_SRCS := $(srcroot)src/jemalloc.c \
     $(srcroot)src/tcache.c \
     $(srcroot)src/ticker.c \
     $(srcroot)src/tsd.c \
-    $(srcroot)src/util.c
+    $(srcroot)src/util.c \
+    $(srcroot)src/witness.c
 ifeq ($(enable_valgrind), 1)
 C_SRCS += $(srcroot)src/valgrind.c
 endif
@@ -134,7 +134,10 @@ C_TESTLIB_SRCS := $(srcroot)test/src/btalloc.c $(srcroot)test/src/btalloc_0.c \
     $(srcroot)test/src/SFMT.c $(srcroot)test/src/test.c \
     $(srcroot)test/src/thd.c $(srcroot)test/src/timer.c
 C_UTIL_INTEGRATION_SRCS := $(srcroot)src/nstime.c $(srcroot)src/util.c
-TESTS_UNIT := $(srcroot)test/unit/atomic.c \
+TESTS_UNIT := \
+    $(srcroot)test/unit/a0.c \
+    $(srcroot)test/unit/arena_reset.c \
+    $(srcroot)test/unit/atomic.c \
     $(srcroot)test/unit/bitmap.c \
     $(srcroot)test/unit/ckh.c \
     $(srcroot)test/unit/decay.c \
@@ -148,6 +151,7 @@ TESTS_UNIT := $(srcroot)test/unit/atomic.c \
     $(srcroot)test/unit/math.c \
     $(srcroot)test/unit/mq.c \
     $(srcroot)test/unit/mtx.c \
+    $(srcroot)test/unit/ph.c \
     $(srcroot)test/unit/prng.c \
     $(srcroot)test/unit/prof_accum.c \
     $(srcroot)test/unit/prof_active.c \
@@ -169,6 +173,7 @@ TESTS_UNIT := $(srcroot)test/unit/atomic.c \
     $(srcroot)test/unit/nstime.c \
     $(srcroot)test/unit/tsd.c \
     $(srcroot)test/unit/util.c \
+    $(srcroot)test/unit/witness.c \
     $(srcroot)test/unit/zero.c
 TESTS_INTEGRATION := $(srcroot)test/integration/aligned_alloc.c \
     $(srcroot)test/integration/allocated.c \
@@ -290,15 +295,15 @@ $(STATIC_LIBS):
 $(objroot)test/unit/%$(EXE): $(objroot)test/unit/%.$(O) $(TESTS_UNIT_LINK_OBJS) $(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS)
     @mkdir -p $(@D)
-    $(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
+    $(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)

 $(objroot)test/integration/%$(EXE): $(objroot)test/integration/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
     @mkdir -p $(@D)
-    $(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
+    $(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(filter -lpthread,$(LIBS))) -lm $(EXTRA_LDFLAGS)

 $(objroot)test/stress/%$(EXE): $(objroot)test/stress/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_STRESS_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)
     @mkdir -p $(@D)
-    $(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(TESTLIBS) $(EXTRA_LDFLAGS)
+    $(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)

 build_lib_shared: $(DSOS)
 build_lib_static: $(STATIC_LIBS)

configure.ac

@@ -141,6 +141,7 @@ if test "x$CFLAGS" = "x" ; then
     JE_CFLAGS_APPEND([-Wall])
     JE_CFLAGS_APPEND([-Werror=declaration-after-statement])
     JE_CFLAGS_APPEND([-Wshorten-64-to-32])
+    JE_CFLAGS_APPEND([-Wsign-compare])
     JE_CFLAGS_APPEND([-pipe])
     JE_CFLAGS_APPEND([-g3])
 elif test "x$je_cv_msvc" = "xyes" ; then
@@ -304,6 +305,7 @@ case "${host}" in
   *-*-freebsd*)
     CFLAGS="$CFLAGS"
    abi="elf"
+    AC_DEFINE([JEMALLOC_SYSCTL_VM_OVERCOMMIT], [ ])
     AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ])
     force_lazy_lock="1"
     ;;
@@ -328,6 +330,7 @@ case "${host}" in
     CPPFLAGS="$CPPFLAGS -D_GNU_SOURCE"
     abi="elf"
     AC_DEFINE([JEMALLOC_HAS_ALLOCA_H])
+    AC_DEFINE([JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY], [ ])
     AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ])
     AC_DEFINE([JEMALLOC_THREADED_INIT], [ ])
     AC_DEFINE([JEMALLOC_USE_CXX_THROW], [ ])
@@ -1172,27 +1175,36 @@ dnl ============================================================================
 dnl jemalloc configuration.
 dnl
-dnl Set VERSION if source directory is inside a git repository.
-if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
-  dnl Pattern globs aren't powerful enough to match both single- and
-  dnl double-digit version numbers, so iterate over patterns to support up to
-  dnl version 99.99.99 without any accidental matches.
-  rm -f "${objroot}VERSION"
-  for pattern in ['[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
-     '[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
-     '[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
-     '[0-9][0-9].[0-9][0-9].[0-9]' \
-     '[0-9][0-9].[0-9][0-9].[0-9][0-9]']; do
-    if test ! -e "${objroot}VERSION" ; then
-      (test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
-      if test $? -eq 0 ; then
-        mv "${objroot}VERSION.tmp" "${objroot}VERSION"
-        break
-      fi
-    fi
-  done
-fi
-rm -f "${objroot}VERSION.tmp"
+AC_ARG_WITH([version],
+  [AS_HELP_STRING([--with-version=<major>.<minor>.<bugfix>-<nrev>-g<gid>],
+   [Version string])],
+  [
+    echo "${with_version}" | grep ['^[0-9]\+\.[0-9]\+\.[0-9]\+-[0-9]\+-g[0-9a-f]\+$'] 2>&1 1>/dev/null
+    if test $? -ne 0 ; then
+      AC_MSG_ERROR([${with_version} does not match <major>.<minor>.<bugfix>-<nrev>-g<gid>])
+    fi
+    echo "$with_version" > "${objroot}VERSION"
+  ], [
+    dnl Set VERSION if source directory is inside a git repository.
+    if test "x`test ! \"${srcroot}\" && cd \"${srcroot}\"; git rev-parse --is-inside-work-tree 2>/dev/null`" = "xtrue" ; then
+      dnl Pattern globs aren't powerful enough to match both single- and
+      dnl double-digit version numbers, so iterate over patterns to support up
+      dnl to version 99.99.99 without any accidental matches.
+      for pattern in ['[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \
+         '[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \
+         '[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \
+         '[0-9][0-9].[0-9][0-9].[0-9]' \
+         '[0-9][0-9].[0-9][0-9].[0-9][0-9]']; do
+        (test ! "${srcroot}" && cd "${srcroot}"; git describe --long --abbrev=40 --match="${pattern}") > "${objroot}VERSION.tmp" 2>/dev/null
+        if test $? -eq 0 ; then
+          mv "${objroot}VERSION.tmp" "${objroot}VERSION"
+          break
+        fi
+      done
+    fi
+    rm -f "${objroot}VERSION.tmp"
+  ])
 if test ! -e "${objroot}VERSION" ; then
   if test ! -e "${srcroot}VERSION" ; then
     AC_MSG_RESULT(
@@ -1229,13 +1241,8 @@ fi
 CPPFLAGS="$CPPFLAGS -D_REENTRANT"

-dnl Check whether clock_gettime(2) is in libc or librt.  This function is only
-dnl used in test code, so save the result to TESTLIBS to avoid poluting LIBS.
-SAVED_LIBS="${LIBS}"
-LIBS=
-AC_SEARCH_LIBS([clock_gettime], [rt], [TESTLIBS="${LIBS}"])
-AC_SUBST([TESTLIBS])
-LIBS="${SAVED_LIBS}"
+dnl Check whether clock_gettime(2) is in libc or librt.
+AC_SEARCH_LIBS([clock_gettime], [rt])

 dnl Check if the GNU-specific secure_getenv function exists.
 AC_CHECK_FUNC([secure_getenv],
@@ -1741,7 +1748,6 @@ AC_MSG_RESULT([CPPFLAGS : ${CPPFLAGS}])
 AC_MSG_RESULT([LDFLAGS : ${LDFLAGS}])
 AC_MSG_RESULT([EXTRA_LDFLAGS : ${EXTRA_LDFLAGS}])
 AC_MSG_RESULT([LIBS : ${LIBS}])
-AC_MSG_RESULT([TESTLIBS : ${TESTLIBS}])
 AC_MSG_RESULT([RPATH_EXTRA : ${RPATH_EXTRA}])
 AC_MSG_RESULT([])
 AC_MSG_RESULT([XSLTPROC : ${XSLTPROC}])
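Aside (not from the diff): the TESTLIBS removal above implements the changelog's "link against librt for clock_gettime(2) if glibc < 2.17" fix; the AC_SEARCH_LIBS result now lands in LIBS so test binaries link correctly. The call being probed is plain POSIX; a hedged sketch of what the test code relies on:

    #include <stdio.h>
    #include <time.h> /* requires -lrt on glibc < 2.17 */

    int
    main(void)
    {
        struct timespec ts;

        /* Monotonic clock read, as used by jemalloc's timing test code. */
        if (clock_gettime(CLOCK_MONOTONIC, &ts) != 0)
            return (1);
        printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
        return (0);
    }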

doc/jemalloc.xml.in

@@ -540,8 +540,8 @@ for (i = 0; i < nbins; i++) {
 are smaller than four times the page size, large size classes are smaller
 than the chunk size (see the <link
 linkend="opt.lg_chunk"><mallctl>opt.lg_chunk</mallctl></link> option), and
-huge size classes extend from the chunk size up to one size class less than
-the full address space size.</para>
+huge size classes extend from the chunk size up to the largest size class
+that does not exceed <constant>PTRDIFF_MAX</constant>.</para>

 <para>Allocations are packed tightly together, which can be an issue for
 multi-threaded applications. If you need to assure that allocations do not
@@ -659,7 +659,7 @@ for (i = 0; i < nbins; i++) {
 <entry>[1280 KiB, 1536 KiB, 1792 KiB]</entry>
 </row>
 <row>
-<entry morerows="6">Huge</entry>
+<entry morerows="8">Huge</entry>
 <entry>256 KiB</entry>
 <entry>[2 MiB]</entry>
 </row>
@@ -687,6 +687,14 @@ for (i = 0; i < nbins; i++) {
 <entry>...</entry>
 <entry>...</entry>
 </row>
+<row>
+<entry>512 PiB</entry>
+<entry>[2560 PiB, 3 EiB, 3584 PiB, 4 EiB]</entry>
+</row>
+<row>
+<entry>1 EiB</entry>
+<entry>[5 EiB, 6 EiB, 7 EiB]</entry>
+</row>
 </tbody>
 </tgroup>
 </table>
@@ -1550,6 +1558,23 @@ malloc_conf = "xmalloc:true";]]></programlisting>
 details.</para></listitem>
 </varlistentry>

+<varlistentry id="arena.i.reset">
+  <term>
+    <mallctl>arena.&lt;i&gt;.reset</mallctl>
+    (<type>void</type>)
+    <literal>--</literal>
+  </term>
+  <listitem><para>Discard all of the arena's extant allocations. This
+  interface can only be used with arenas created via <link
+  linkend="arenas.extend"><mallctl>arenas.extend</mallctl></link>. None
+  of the arena's discarded/cached allocations may be accessed afterward. As
+  part of this requirement, all thread caches which were used to
+  allocate/deallocate in conjunction with the arena must be flushed
+  beforehand. This interface cannot be used if running inside Valgrind,
+  nor if the <link linkend="opt.quarantine">quarantine</link> size is
+  non-zero.</para></listitem>
+</varlistentry>
+
 <varlistentry id="arena.i.dss">
   <term>
     <mallctl>arena.&lt;i&gt;.dss</mallctl>
@@ -2161,6 +2186,25 @@ typedef struct {
 linkend="stats.resident"><mallctl>stats.resident</mallctl></link>.</para></listitem>
 </varlistentry>

+<varlistentry id="stats.retained">
+  <term>
+    <mallctl>stats.retained</mallctl>
+    (<type>size_t</type>)
+    <literal>r-</literal>
+    [<option>--enable-stats</option>]
+  </term>
+  <listitem><para>Total number of bytes in virtual memory mappings that
+  were retained rather than being returned to the operating system via
+  e.g. <citerefentry><refentrytitle>munmap</refentrytitle>
+  <manvolnum>2</manvolnum></citerefentry>. Retained virtual memory is
+  typically untouched, decommitted, or purged, so it has no strongly
+  associated physical memory (see <link
+  linkend="arena.i.chunk_hooks">chunk hooks</link> for details). Retained
+  memory is excluded from mapped memory statistics, e.g. <link
+  linkend="stats.mapped"><mallctl>stats.mapped</mallctl></link>.
+  </para></listitem>
+</varlistentry>
+
 <varlistentry id="stats.arenas.i.dss">
   <term>
     <mallctl>stats.arenas.&lt;i&gt;.dss</mallctl>
@@ -2241,6 +2285,18 @@ typedef struct {
 <listitem><para>Number of mapped bytes.</para></listitem>
 </varlistentry>

+<varlistentry id="stats.arenas.i.retained">
+  <term>
+    <mallctl>stats.arenas.&lt;i&gt;.retained</mallctl>
+    (<type>size_t</type>)
+    <literal>r-</literal>
+    [<option>--enable-stats</option>]
+  </term>
+  <listitem><para>Number of retained bytes. See <link
+  linkend="stats.retained"><mallctl>stats.retained</mallctl></link> for
+  details.</para></listitem>
+</varlistentry>
+
 <varlistentry id="stats.arenas.i.metadata.mapped">
   <term>
     <mallctl>stats.arenas.&lt;i&gt;.metadata.mapped</mallctl>
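Usage sketch (not part of the diff): the new stats.retained and stats.arenas.<i>.retained leaves read like any other stats mallctl, after an epoch refresh. A hedged example against the documented names; everything else is illustrative:

    #include <stdio.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void)
    {
        uint64_t epoch = 1;
        size_t esz = sizeof(epoch);
        size_t mapped, retained, sz = sizeof(size_t);

        /* Refresh jemalloc's stats snapshot before reading counters. */
        mallctl("epoch", &epoch, &esz, &epoch, esz);

        if (mallctl("stats.mapped", &mapped, &sz, NULL, 0) == 0 &&
            mallctl("stats.retained", &retained, &sz, NULL, 0) == 0) {
            /* Retained bytes are excluded from the mapped statistic. */
            printf("mapped: %zu, retained: %zu\n", mapped, retained);
        }
        return (0);
    }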

include/jemalloc/internal/arena.h

@@ -36,6 +36,7 @@ typedef enum {
 #define DECAY_NTICKS_PER_UPDATE 1000

 typedef struct arena_runs_dirty_link_s arena_runs_dirty_link_t;
+typedef struct arena_avail_links_s arena_avail_links_t;
 typedef struct arena_run_s arena_run_t;
 typedef struct arena_chunk_map_bits_s arena_chunk_map_bits_t;
 typedef struct arena_chunk_map_misc_s arena_chunk_map_misc_t;
@@ -153,13 +154,13 @@ struct arena_runs_dirty_link_s {
  */
 struct arena_chunk_map_misc_s {
     /*
-     * Linkage for run trees.  There are two disjoint uses:
+     * Linkage for run heaps.  There are two disjoint uses:
      *
-     * 1) arena_t's runs_avail tree.
+     * 1) arena_t's runs_avail heaps.
      * 2) arena_run_t conceptually uses this linkage for in-use non-full
      *    runs, rather than directly embedding linkage.
      */
-    rb_node(arena_chunk_map_misc_t) rb_link;
+    phn(arena_chunk_map_misc_t) ph_link;

     union {
         /* Linkage for list of dirty runs. */
@@ -175,7 +176,7 @@ struct arena_chunk_map_misc_s {
         arena_run_t run;
     };
 };
-typedef rb_tree(arena_chunk_map_misc_t) arena_run_tree_t;
+typedef ph(arena_chunk_map_misc_t) arena_run_heap_t;
 #endif /* JEMALLOC_ARENA_STRUCTS_A */

 #ifdef JEMALLOC_ARENA_STRUCTS_B
@@ -272,13 +273,13 @@ struct arena_bin_s {
     arena_run_t *runcur;

     /*
-     * Tree of non-full runs.  This tree is used when looking for an
+     * Heap of non-full runs.  This heap is used when looking for an
      * existing run when runcur is no longer usable.  We choose the
      * non-full run that is lowest in memory; this policy tends to keep
      * objects packed well, and it can also help reduce the number of
      * almost-empty chunks.
      */
-    arena_run_tree_t runs;
+    arena_run_heap_t runs;

     /* Bin statistics. */
     malloc_bin_stats_t stats;
@@ -289,10 +290,18 @@ struct arena_s {
     unsigned ind;

     /*
-     * Number of threads currently assigned to this arena.  This field is
-     * synchronized via atomic operations.
+     * Number of threads currently assigned to this arena, synchronized via
+     * atomic operations.  Each thread has two distinct assignments, one for
+     * application-serving allocation, and the other for internal metadata
+     * allocation.  Internal metadata must not be allocated from arenas
+     * created via the arenas.extend mallctl, because the arena.<i>.reset
+     * mallctl indiscriminately discards all allocations for the affected
+     * arena.
+     *
+     *   0: Application allocation.
+     *   1: Internal metadata allocation.
      */
-    unsigned nthreads;
+    unsigned nthreads[2];
@@ -321,6 +330,10 @@ struct arena_s {
     dss_prec_t dss_prec;

+    /* Extant arena chunks. */
+    ql_head(extent_node_t) achunks;
+
     /*
      * In order to avoid rapid chunk allocation/deallocation when an arena
      * oscillates right on the cusp of needing a new chunk, cache the most
@@ -457,10 +470,10 @@ struct arena_s {
     arena_bin_t bins[NBINS];

     /*
-     * Quantized address-ordered trees of this arena's available runs.  The
-     * trees are used for first-best-fit run allocation.
+     * Quantized address-ordered heaps of this arena's available runs.  The
+     * heaps are used for first-best-fit run allocation.
      */
-    arena_run_tree_t runs_avail[1]; /* Dynamically sized. */
+    arena_run_heap_t runs_avail[1]; /* Dynamically sized. */
 };

 /* Used in conjunction with tsd for fast arena-related context lookup. */
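Background sketch (not jemalloc code): runs_avail and bin->runs now use pairing heaps via the phn()/ph() linkage above, keyed by miscelm address so the run lowest in memory wins. A minimal illustration of why pairing-heap insert/meld are O(1), with hypothetical node_t/meld/insert names:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct node_s node_t;
    struct node_s {
        node_t *child;   /* leftmost child */
        node_t *sibling; /* next sibling */
    };

    /* Meld two heaps: the root with the lower address wins, matching the
     * "lowest in memory" policy described in the comments above. */
    static node_t *
    meld(node_t *a, node_t *b)
    {
        if (a == NULL) return (b);
        if (b == NULL) return (a);
        if ((uintptr_t)b < (uintptr_t)a) { node_t *t = a; a = b; b = t; }
        b->sibling = a->child; /* loser becomes leftmost child of winner */
        a->child = b;
        return (a);
    }

    /* O(1) insert: meld a singleton heap with the root. */
    static node_t *
    insert(node_t *root, node_t *n)
    {
        n->child = n->sibling = NULL;
        return (meld(root, n));
    }

    int
    main(void)
    {
        node_t nodes[3], *root = NULL;
        for (int i = 0; i < 3; i++)
            root = insert(root, &nodes[i]);
        return (root == &nodes[0] ? 0 : 1); /* lowest address is the root */
    }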
@@ -505,25 +518,28 @@ void arena_chunk_cache_maybe_insert(arena_t *arena, extent_node_t *node,
     bool cache);
 void arena_chunk_cache_maybe_remove(arena_t *arena, extent_node_t *node,
     bool cache);
-extent_node_t *arena_node_alloc(arena_t *arena);
-void arena_node_dalloc(arena_t *arena, extent_node_t *node);
-void *arena_chunk_alloc_huge(arena_t *arena, size_t usize, size_t alignment,
-    bool *zero);
-void arena_chunk_dalloc_huge(arena_t *arena, void *chunk, size_t usize);
-void arena_chunk_ralloc_huge_similar(arena_t *arena, void *chunk,
-    size_t oldsize, size_t usize);
-void arena_chunk_ralloc_huge_shrink(arena_t *arena, void *chunk,
-    size_t oldsize, size_t usize);
-bool arena_chunk_ralloc_huge_expand(arena_t *arena, void *chunk,
-    size_t oldsize, size_t usize, bool *zero);
-ssize_t arena_lg_dirty_mult_get(arena_t *arena);
-bool arena_lg_dirty_mult_set(arena_t *arena, ssize_t lg_dirty_mult);
-ssize_t arena_decay_time_get(arena_t *arena);
-bool arena_decay_time_set(arena_t *arena, ssize_t decay_time);
-void arena_maybe_purge(arena_t *arena);
-void arena_purge(arena_t *arena, bool all);
-void arena_tcache_fill_small(tsd_t *tsd, arena_t *arena, tcache_bin_t *tbin,
-    szind_t binind, uint64_t prof_accumbytes);
+extent_node_t *arena_node_alloc(tsdn_t *tsdn, arena_t *arena);
+void arena_node_dalloc(tsdn_t *tsdn, arena_t *arena, extent_node_t *node);
+void *arena_chunk_alloc_huge(tsdn_t *tsdn, arena_t *arena, size_t usize,
+    size_t alignment, bool *zero);
+void arena_chunk_dalloc_huge(tsdn_t *tsdn, arena_t *arena, void *chunk,
+    size_t usize);
+void arena_chunk_ralloc_huge_similar(tsdn_t *tsdn, arena_t *arena,
+    void *chunk, size_t oldsize, size_t usize);
+void arena_chunk_ralloc_huge_shrink(tsdn_t *tsdn, arena_t *arena,
+    void *chunk, size_t oldsize, size_t usize);
+bool arena_chunk_ralloc_huge_expand(tsdn_t *tsdn, arena_t *arena,
+    void *chunk, size_t oldsize, size_t usize, bool *zero);
+ssize_t arena_lg_dirty_mult_get(tsdn_t *tsdn, arena_t *arena);
+bool arena_lg_dirty_mult_set(tsdn_t *tsdn, arena_t *arena,
+    ssize_t lg_dirty_mult);
+ssize_t arena_decay_time_get(tsdn_t *tsdn, arena_t *arena);
+bool arena_decay_time_set(tsdn_t *tsdn, arena_t *arena, ssize_t decay_time);
+void arena_purge(tsdn_t *tsdn, arena_t *arena, bool all);
+void arena_maybe_purge(tsdn_t *tsdn, arena_t *arena);
+void arena_reset(tsd_t *tsd, arena_t *arena);
+void arena_tcache_fill_small(tsdn_t *tsdn, arena_t *arena,
+    tcache_bin_t *tbin, szind_t binind, uint64_t prof_accumbytes);
 void arena_alloc_junk_small(void *ptr, arena_bin_info_t *bin_info,
     bool zero);
 #ifdef JEMALLOC_JET
@@ -536,17 +552,18 @@ extern arena_dalloc_junk_small_t *arena_dalloc_junk_small;
 void arena_dalloc_junk_small(void *ptr, arena_bin_info_t *bin_info);
 #endif
 void arena_quarantine_junk_small(void *ptr, size_t usize);
-void *arena_malloc_large(tsd_t *tsd, arena_t *arena, szind_t ind, bool zero);
-void *arena_malloc_hard(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind,
-    bool zero, tcache_t *tcache);
-void *arena_palloc(tsd_t *tsd, arena_t *arena, size_t usize,
-    size_t alignment, bool zero, tcache_t *tcache);
-void arena_prof_promoted(const void *ptr, size_t size);
-void arena_dalloc_bin_junked_locked(arena_t *arena, arena_chunk_t *chunk,
-    void *ptr, arena_chunk_map_bits_t *bitselm);
-void arena_dalloc_bin(arena_t *arena, arena_chunk_t *chunk, void *ptr,
-    size_t pageind, arena_chunk_map_bits_t *bitselm);
-void arena_dalloc_small(tsd_t *tsd, arena_t *arena, arena_chunk_t *chunk,
-    void *ptr, size_t pageind);
+void *arena_malloc_large(tsdn_t *tsdn, arena_t *arena, szind_t ind,
+    bool zero);
+void *arena_malloc_hard(tsdn_t *tsdn, arena_t *arena, size_t size,
+    szind_t ind, bool zero);
+void *arena_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize,
+    size_t alignment, bool zero, tcache_t *tcache);
+void arena_prof_promoted(tsdn_t *tsdn, const void *ptr, size_t size);
+void arena_dalloc_bin_junked_locked(tsdn_t *tsdn, arena_t *arena,
+    arena_chunk_t *chunk, void *ptr, arena_chunk_map_bits_t *bitselm);
+void arena_dalloc_bin(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk,
+    void *ptr, size_t pageind, arena_chunk_map_bits_t *bitselm);
+void arena_dalloc_small(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk,
+    void *ptr, size_t pageind);
 #ifdef JEMALLOC_JET
 typedef void (arena_dalloc_junk_large_t)(void *, size_t);
@@ -554,70 +571,80 @@ extern arena_dalloc_junk_large_t *arena_dalloc_junk_large;
 #else
 void arena_dalloc_junk_large(void *ptr, size_t usize);
 #endif
-void arena_dalloc_large_junked_locked(arena_t *arena, arena_chunk_t *chunk,
-    void *ptr);
-void arena_dalloc_large(tsd_t *tsd, arena_t *arena, arena_chunk_t *chunk,
-    void *ptr);
+void arena_dalloc_large_junked_locked(tsdn_t *tsdn, arena_t *arena,
+    arena_chunk_t *chunk, void *ptr);
+void arena_dalloc_large(tsdn_t *tsdn, arena_t *arena, arena_chunk_t *chunk,
+    void *ptr);
 #ifdef JEMALLOC_JET
 typedef void (arena_ralloc_junk_large_t)(void *, size_t, size_t);
 extern arena_ralloc_junk_large_t *arena_ralloc_junk_large;
 #endif
-bool arena_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
-    size_t extra, bool zero);
+bool arena_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize,
+    size_t size, size_t extra, bool zero);
 void *arena_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
     size_t size, size_t alignment, bool zero, tcache_t *tcache);
-dss_prec_t arena_dss_prec_get(arena_t *arena);
-bool arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec);
+dss_prec_t arena_dss_prec_get(tsdn_t *tsdn, arena_t *arena);
+bool arena_dss_prec_set(tsdn_t *tsdn, arena_t *arena, dss_prec_t dss_prec);
 ssize_t arena_lg_dirty_mult_default_get(void);
 bool arena_lg_dirty_mult_default_set(ssize_t lg_dirty_mult);
 ssize_t arena_decay_time_default_get(void);
 bool arena_decay_time_default_set(ssize_t decay_time);
-void arena_basic_stats_merge(arena_t *arena, unsigned *nthreads,
-    const char **dss, ssize_t *lg_dirty_mult, ssize_t *decay_time,
-    size_t *nactive, size_t *ndirty);
-void arena_stats_merge(arena_t *arena, unsigned *nthreads, const char **dss,
-    ssize_t *lg_dirty_mult, ssize_t *decay_time, size_t *nactive,
-    size_t *ndirty, arena_stats_t *astats, malloc_bin_stats_t *bstats,
-    malloc_large_stats_t *lstats, malloc_huge_stats_t *hstats);
-unsigned arena_nthreads_get(arena_t *arena);
-void arena_nthreads_inc(arena_t *arena);
-void arena_nthreads_dec(arena_t *arena);
-arena_t *arena_new(unsigned ind);
+void arena_basic_stats_merge(tsdn_t *tsdn, arena_t *arena,
+    unsigned *nthreads, const char **dss, ssize_t *lg_dirty_mult,
+    ssize_t *decay_time, size_t *nactive, size_t *ndirty);
+void arena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads,
+    const char **dss, ssize_t *lg_dirty_mult, ssize_t *decay_time,
+    size_t *nactive, size_t *ndirty, arena_stats_t *astats,
+    malloc_bin_stats_t *bstats, malloc_large_stats_t *lstats,
+    malloc_huge_stats_t *hstats);
+unsigned arena_nthreads_get(arena_t *arena, bool internal);
+void arena_nthreads_inc(arena_t *arena, bool internal);
+void arena_nthreads_dec(arena_t *arena, bool internal);
+arena_t *arena_new(tsdn_t *tsdn, unsigned ind);
 bool arena_boot(void);
-void arena_prefork0(arena_t *arena);
-void arena_prefork1(arena_t *arena);
-void arena_prefork2(arena_t *arena);
-void arena_prefork3(arena_t *arena);
-void arena_postfork_parent(arena_t *arena);
-void arena_postfork_child(arena_t *arena);
+void arena_prefork0(tsdn_t *tsdn, arena_t *arena);
+void arena_prefork1(tsdn_t *tsdn, arena_t *arena);
+void arena_prefork2(tsdn_t *tsdn, arena_t *arena);
+void arena_prefork3(tsdn_t *tsdn, arena_t *arena);
+void arena_postfork_parent(tsdn_t *tsdn, arena_t *arena);
+void arena_postfork_child(tsdn_t *tsdn, arena_t *arena);
 #endif /* JEMALLOC_H_EXTERNS */
 /******************************************************************************/
 #ifdef JEMALLOC_H_INLINES

 #ifndef JEMALLOC_ENABLE_INLINE
-arena_chunk_map_bits_t *arena_bitselm_get(arena_chunk_t *chunk,
-    size_t pageind);
-arena_chunk_map_misc_t *arena_miscelm_get(arena_chunk_t *chunk,
-    size_t pageind);
-size_t arena_miscelm_to_pageind(const arena_chunk_map_misc_t *miscelm);
-void *arena_miscelm_to_rpages(arena_chunk_map_misc_t *miscelm);
-arena_chunk_map_misc_t *arena_rd_to_miscelm(arena_runs_dirty_link_t *rd);
-arena_chunk_map_misc_t *arena_run_to_miscelm(arena_run_t *run);
-size_t *arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind);
-size_t arena_mapbitsp_read(size_t *mapbitsp);
-size_t arena_mapbits_get(arena_chunk_t *chunk, size_t pageind);
-size_t arena_mapbits_size_decode(size_t mapbits);
-size_t arena_mapbits_unallocated_size_get(arena_chunk_t *chunk,
-    size_t pageind);
-size_t arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind);
-size_t arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind);
-szind_t arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind);
-size_t arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind);
-size_t arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind);
-size_t arena_mapbits_decommitted_get(arena_chunk_t *chunk, size_t pageind);
-size_t arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind);
-size_t arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind);
+arena_chunk_map_bits_t *arena_bitselm_get_mutable(arena_chunk_t *chunk,
+    size_t pageind);
+const arena_chunk_map_bits_t *arena_bitselm_get_const(
+    const arena_chunk_t *chunk, size_t pageind);
+arena_chunk_map_misc_t *arena_miscelm_get_mutable(arena_chunk_t *chunk,
+    size_t pageind);
+const arena_chunk_map_misc_t *arena_miscelm_get_const(
+    const arena_chunk_t *chunk, size_t pageind);
+size_t arena_miscelm_to_pageind(const arena_chunk_map_misc_t *miscelm);
+void *arena_miscelm_to_rpages(const arena_chunk_map_misc_t *miscelm);
+arena_chunk_map_misc_t *arena_rd_to_miscelm(arena_runs_dirty_link_t *rd);
+arena_chunk_map_misc_t *arena_run_to_miscelm(arena_run_t *run);
+size_t *arena_mapbitsp_get_mutable(arena_chunk_t *chunk, size_t pageind);
+const size_t *arena_mapbitsp_get_const(const arena_chunk_t *chunk,
+    size_t pageind);
+size_t arena_mapbitsp_read(const size_t *mapbitsp);
+size_t arena_mapbits_get(const arena_chunk_t *chunk, size_t pageind);
+size_t arena_mapbits_size_decode(size_t mapbits);
+size_t arena_mapbits_unallocated_size_get(const arena_chunk_t *chunk,
+    size_t pageind);
+size_t arena_mapbits_large_size_get(const arena_chunk_t *chunk,
+    size_t pageind);
+size_t arena_mapbits_small_runind_get(const arena_chunk_t *chunk,
+    size_t pageind);
+szind_t arena_mapbits_binind_get(const arena_chunk_t *chunk, size_t pageind);
+size_t arena_mapbits_dirty_get(const arena_chunk_t *chunk, size_t pageind);
+size_t arena_mapbits_unzeroed_get(const arena_chunk_t *chunk, size_t pageind);
+size_t arena_mapbits_decommitted_get(const arena_chunk_t *chunk,
+    size_t pageind);
+size_t arena_mapbits_large_get(const arena_chunk_t *chunk, size_t pageind);
+size_t arena_mapbits_allocated_get(const arena_chunk_t *chunk, size_t pageind);
 void arena_mapbitsp_write(size_t *mapbitsp, size_t mapbits);
 size_t arena_mapbits_size_encode(size_t size);
 void arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind,
@@ -637,29 +664,31 @@ void arena_metadata_allocated_sub(arena_t *arena, size_t size);
 size_t arena_metadata_allocated_get(arena_t *arena);
 bool arena_prof_accum_impl(arena_t *arena, uint64_t accumbytes);
 bool arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes);
-bool arena_prof_accum(arena_t *arena, uint64_t accumbytes);
+bool arena_prof_accum(tsdn_t *tsdn, arena_t *arena, uint64_t accumbytes);
 szind_t arena_ptr_small_binind_get(const void *ptr, size_t mapbits);
 szind_t arena_bin_index(arena_t *arena, arena_bin_t *bin);
 size_t arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info,
     const void *ptr);
-prof_tctx_t *arena_prof_tctx_get(const void *ptr);
-void arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
-void arena_prof_tctx_reset(const void *ptr, size_t usize,
-    const void *old_ptr, prof_tctx_t *old_tctx);
-void arena_decay_ticks(tsd_t *tsd, arena_t *arena, unsigned nticks);
-void arena_decay_tick(tsd_t *tsd, arena_t *arena);
-void *arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind,
-    bool zero, tcache_t *tcache, bool slow_path);
-arena_t *arena_aalloc(const void *ptr);
-size_t arena_salloc(const void *ptr, bool demote);
-void arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path);
-void arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
+prof_tctx_t *arena_prof_tctx_get(tsdn_t *tsdn, const void *ptr);
+void arena_prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize,
+    prof_tctx_t *tctx);
+void arena_prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize,
+    const void *old_ptr, prof_tctx_t *old_tctx);
+void arena_decay_ticks(tsdn_t *tsdn, arena_t *arena, unsigned nticks);
+void arena_decay_tick(tsdn_t *tsdn, arena_t *arena);
+void *arena_malloc(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t ind,
+    bool zero, tcache_t *tcache, bool slow_path);
+arena_t *arena_aalloc(const void *ptr);
+size_t arena_salloc(tsdn_t *tsdn, const void *ptr, bool demote);
+void arena_dalloc(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool slow_path);
+void arena_sdalloc(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache,
+    bool slow_path);
 #endif

 #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_ARENA_C_))
 # ifdef JEMALLOC_ARENA_INLINE_A
 JEMALLOC_ALWAYS_INLINE arena_chunk_map_bits_t *
-arena_bitselm_get(arena_chunk_t *chunk, size_t pageind)
+arena_bitselm_get_mutable(arena_chunk_t *chunk, size_t pageind)
 {

     assert(pageind >= map_bias);
@@ -668,8 +697,15 @@ arena_bitselm_get(arena_chunk_t *chunk, size_t pageind)
     return (&chunk->map_bits[pageind-map_bias]);
 }

+JEMALLOC_ALWAYS_INLINE const arena_chunk_map_bits_t *
+arena_bitselm_get_const(const arena_chunk_t *chunk, size_t pageind)
+{
+
+    return (arena_bitselm_get_mutable((arena_chunk_t *)chunk, pageind));
+}
+
 JEMALLOC_ALWAYS_INLINE arena_chunk_map_misc_t *
-arena_miscelm_get(arena_chunk_t *chunk, size_t pageind)
+arena_miscelm_get_mutable(arena_chunk_t *chunk, size_t pageind)
 {

     assert(pageind >= map_bias);
@@ -679,6 +715,13 @@ arena_miscelm_get(arena_chunk_t *chunk, size_t pageind)
     (uintptr_t)map_misc_offset) + pageind-map_bias);
 }

+JEMALLOC_ALWAYS_INLINE const arena_chunk_map_misc_t *
+arena_miscelm_get_const(const arena_chunk_t *chunk, size_t pageind)
+{
+
+    return (arena_miscelm_get_mutable((arena_chunk_t *)chunk, pageind));
+}
+
 JEMALLOC_ALWAYS_INLINE size_t
 arena_miscelm_to_pageind(const arena_chunk_map_misc_t *miscelm)
 {
@@ -693,7 +736,7 @@ arena_miscelm_to_pageind(const arena_chunk_map_misc_t *miscelm)
 }

 JEMALLOC_ALWAYS_INLINE void *
-arena_miscelm_to_rpages(arena_chunk_map_misc_t *miscelm)
+arena_miscelm_to_rpages(const arena_chunk_map_misc_t *miscelm)
 {
     arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(miscelm);
     size_t pageind = arena_miscelm_to_pageind(miscelm);
@@ -726,24 +769,31 @@ arena_run_to_miscelm(arena_run_t *run)
 }

 JEMALLOC_ALWAYS_INLINE size_t *
-arena_mapbitsp_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbitsp_get_mutable(arena_chunk_t *chunk, size_t pageind)
 {

-    return (&arena_bitselm_get(chunk, pageind)->bits);
+    return (&arena_bitselm_get_mutable(chunk, pageind)->bits);
 }

+JEMALLOC_ALWAYS_INLINE const size_t *
+arena_mapbitsp_get_const(const arena_chunk_t *chunk, size_t pageind)
+{
+
+    return (arena_mapbitsp_get_mutable((arena_chunk_t *)chunk, pageind));
+}
+
 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbitsp_read(size_t *mapbitsp)
+arena_mapbitsp_read(const size_t *mapbitsp)
 {

     return (*mapbitsp);
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_get(const arena_chunk_t *chunk, size_t pageind)
 {

-    return (arena_mapbitsp_read(arena_mapbitsp_get(chunk, pageind)));
+    return (arena_mapbitsp_read(arena_mapbitsp_get_const(chunk, pageind)));
 }

 JEMALLOC_ALWAYS_INLINE size_t
@@ -763,7 +813,7 @@ arena_mapbits_size_decode(size_t mapbits)
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_unallocated_size_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
@@ -773,7 +823,7 @@ arena_mapbits_unallocated_size_get(arena_chunk_t *chunk, size_t pageind)
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_large_size_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
@@ -784,7 +834,7 @@ arena_mapbits_large_size_get(arena_chunk_t *chunk, size_t pageind)
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_small_runind_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
@@ -795,7 +845,7 @@ arena_mapbits_small_runind_get(arena_chunk_t *chunk, size_t pageind)
 }

 JEMALLOC_ALWAYS_INLINE szind_t
-arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_binind_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
     szind_t binind;
@@ -807,7 +857,7 @@ arena_mapbits_binind_get(arena_chunk_t *chunk, size_t pageind)
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_dirty_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
@@ -818,7 +868,7 @@ arena_mapbits_dirty_get(arena_chunk_t *chunk, size_t pageind)
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_unzeroed_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
@@ -829,7 +879,7 @@ arena_mapbits_unzeroed_get(arena_chunk_t *chunk, size_t pageind)
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_decommitted_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_decommitted_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
@@ -840,7 +890,7 @@ arena_mapbits_decommitted_get(arena_chunk_t *chunk, size_t pageind)
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_large_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
@@ -849,7 +899,7 @@ arena_mapbits_large_get(arena_chunk_t *chunk, size_t pageind)
 }

 JEMALLOC_ALWAYS_INLINE size_t
-arena_mapbits_allocated_get(arena_chunk_t *chunk, size_t pageind)
+arena_mapbits_allocated_get(const arena_chunk_t *chunk, size_t pageind)
 {
     size_t mapbits;
@@ -885,7 +935,7 @@ JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_unallocated_set(arena_chunk_t *chunk, size_t pageind, size_t size,
     size_t flags)
 {
-    size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
+    size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind);

     assert((size & PAGE_MASK) == 0);
     assert((flags & CHUNK_MAP_FLAGS_MASK) == flags);
@@ -899,7 +949,7 @@ JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind,
     size_t size)
 {
-    size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
+    size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind);
     size_t mapbits = arena_mapbitsp_read(mapbitsp);

     assert((size & PAGE_MASK) == 0);
@@ -911,7 +961,7 @@ arena_mapbits_unallocated_size_set(arena_chunk_t *chunk, size_t pageind,
 JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_internal_set(arena_chunk_t *chunk, size_t pageind, size_t flags)
 {
-    size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
+    size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind);

     assert((flags & CHUNK_MAP_UNZEROED) == flags);
     arena_mapbitsp_write(mapbitsp, flags);
@@ -921,7 +971,7 @@ JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_large_set(arena_chunk_t *chunk, size_t pageind, size_t size,
     size_t flags)
 {
-    size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
+    size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind);

     assert((size & PAGE_MASK) == 0);
     assert((flags & CHUNK_MAP_FLAGS_MASK) == flags);
@@ -936,7 +986,7 @@ JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_large_binind_set(arena_chunk_t *chunk, size_t pageind,
     szind_t binind)
 {
-    size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
+    size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind);
     size_t mapbits = arena_mapbitsp_read(mapbitsp);

     assert(binind <= BININD_INVALID);
@@ -950,7 +1000,7 @@ JEMALLOC_ALWAYS_INLINE void
 arena_mapbits_small_set(arena_chunk_t *chunk, size_t pageind, size_t runind,
     szind_t binind, size_t flags)
 {
-    size_t *mapbitsp = arena_mapbitsp_get(chunk, pageind);
+    size_t *mapbitsp = arena_mapbitsp_get_mutable(chunk, pageind);

     assert(binind < BININD_INVALID);
     assert(pageind - runind >= map_bias);
@@ -1007,7 +1057,7 @@ arena_prof_accum_locked(arena_t *arena, uint64_t accumbytes)
 }

 JEMALLOC_INLINE bool
-arena_prof_accum(arena_t *arena, uint64_t accumbytes)
+arena_prof_accum(tsdn_t *tsdn, arena_t *arena, uint64_t accumbytes)
 {

     cassert(config_prof);
@@ -1018,9 +1068,9 @@ arena_prof_accum(arena_t *arena, uint64_t accumbytes)
     {
         bool ret;

-        malloc_mutex_lock(&arena->lock);
+        malloc_mutex_lock(tsdn, &arena->lock);
         ret = arena_prof_accum_impl(arena, accumbytes);
-        malloc_mutex_unlock(&arena->lock);
+        malloc_mutex_unlock(tsdn, &arena->lock);
         return (ret);
     }
 }
@@ -1038,12 +1088,12 @@ arena_ptr_small_binind_get(const void *ptr, size_t mapbits)
     size_t pageind;
     size_t actual_mapbits;
     size_t rpages_ind;
-    arena_run_t *run;
+    const arena_run_t *run;
     arena_bin_t *bin;
     szind_t run_binind, actual_binind;
     arena_bin_info_t *bin_info;
-    arena_chunk_map_misc_t *miscelm;
-    void *rpages;
+    const arena_chunk_map_misc_t *miscelm;
+    const void *rpages;

     assert(binind != BININD_INVALID);
     assert(binind < NBINS);
@@ -1056,7 +1106,7 @@ arena_ptr_small_binind_get(const void *ptr, size_t mapbits)
     assert(arena_mapbits_allocated_get(chunk, pageind) != 0);
     rpages_ind = pageind - arena_mapbits_small_runind_get(chunk,
         pageind);
-    miscelm = arena_miscelm_get(chunk, rpages_ind);
+    miscelm = arena_miscelm_get_const(chunk, rpages_ind);
     run = &miscelm->run;
     run_binind = run->binind;
     bin = &arena->bins[run_binind];
@@ -1156,7 +1206,7 @@ arena_run_regind(arena_run_t *run, arena_bin_info_t *bin_info, const void *ptr)
 }

 JEMALLOC_INLINE prof_tctx_t *
-arena_prof_tctx_get(const void *ptr)
+arena_prof_tctx_get(tsdn_t *tsdn, const void *ptr)
 {
     prof_tctx_t *ret;
     arena_chunk_t *chunk;
@@ -1172,18 +1222,19 @@ arena_prof_tctx_get(const void *ptr)
         if (likely((mapbits & CHUNK_MAP_LARGE) == 0))
             ret = (prof_tctx_t *)(uintptr_t)1U;
         else {
-            arena_chunk_map_misc_t *elm = arena_miscelm_get(chunk,
-                pageind);
+            arena_chunk_map_misc_t *elm =
+                arena_miscelm_get_mutable(chunk, pageind);
             ret = atomic_read_p(&elm->prof_tctx_pun);
         }
     } else
-        ret = huge_prof_tctx_get(ptr);
+        ret = huge_prof_tctx_get(tsdn, ptr);

     return (ret);
 }
 JEMALLOC_INLINE void
-arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
+arena_prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize,
+    prof_tctx_t *tctx)
 {
     arena_chunk_t *chunk;
@@ -1202,7 +1253,7 @@ arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
             assert(arena_mapbits_large_get(chunk, pageind) != 0);

-            elm = arena_miscelm_get(chunk, pageind);
+            elm = arena_miscelm_get_mutable(chunk, pageind);
             atomic_write_p(&elm->prof_tctx_pun, tctx);
         } else {
             /*
@@ -1214,12 +1265,12 @@ arena_prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx)
             assert(arena_mapbits_large_get(chunk, pageind) == 0);
         }
     } else
-        huge_prof_tctx_set(ptr, tctx);
+        huge_prof_tctx_set(tsdn, ptr, tctx);
 }

 JEMALLOC_INLINE void
-arena_prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
-    prof_tctx_t *old_tctx)
+arena_prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize,
+    const void *old_ptr, prof_tctx_t *old_tctx)
 {

     cassert(config_prof);
@@ -1238,56 +1289,59 @@ arena_prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
                 0);
             assert(arena_mapbits_large_get(chunk, pageind) != 0);

-            elm = arena_miscelm_get(chunk, pageind);
+            elm = arena_miscelm_get_mutable(chunk, pageind);
             atomic_write_p(&elm->prof_tctx_pun,
                 (prof_tctx_t *)(uintptr_t)1U);
         } else
-            huge_prof_tctx_reset(ptr);
+            huge_prof_tctx_reset(tsdn, ptr);
     }
 }

 JEMALLOC_ALWAYS_INLINE void
-arena_decay_ticks(tsd_t *tsd, arena_t *arena, unsigned nticks)
+arena_decay_ticks(tsdn_t *tsdn, arena_t *arena, unsigned nticks)
 {
+    tsd_t *tsd;
     ticker_t *decay_ticker;

-    if (unlikely(tsd == NULL))
+    if (unlikely(tsdn_null(tsdn)))
         return;
+    tsd = tsdn_tsd(tsdn);
     decay_ticker = decay_ticker_get(tsd, arena->ind);
     if (unlikely(decay_ticker == NULL))
         return;
     if (unlikely(ticker_ticks(decay_ticker, nticks)))
-        arena_purge(arena, false);
+        arena_purge(tsdn, arena, false);
 }

 JEMALLOC_ALWAYS_INLINE void
-arena_decay_tick(tsd_t *tsd, arena_t *arena)
+arena_decay_tick(tsdn_t *tsdn, arena_t *arena)
 {

-    arena_decay_ticks(tsd, arena, 1);
+    arena_decay_ticks(tsdn, arena, 1);
 }
 JEMALLOC_ALWAYS_INLINE void *
-arena_malloc(tsd_t *tsd, arena_t *arena, size_t size, szind_t ind, bool zero,
+arena_malloc(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t ind, bool zero,
     tcache_t *tcache, bool slow_path)
 {

+    assert(!tsdn_null(tsdn) || tcache == NULL);
     assert(size != 0);

     if (likely(tcache != NULL)) {
         if (likely(size <= SMALL_MAXCLASS)) {
-            return (tcache_alloc_small(tsd, arena, tcache, size,
-                ind, zero, slow_path));
+            return (tcache_alloc_small(tsdn_tsd(tsdn), arena,
+                tcache, size, ind, zero, slow_path));
         }
         if (likely(size <= tcache_maxclass)) {
-            return (tcache_alloc_large(tsd, arena, tcache, size,
-                ind, zero, slow_path));
+            return (tcache_alloc_large(tsdn_tsd(tsdn), arena,
+                tcache, size, ind, zero, slow_path));
         }
         /* (size > tcache_maxclass) case falls through. */
         assert(size > tcache_maxclass);
     }

-    return (arena_malloc_hard(tsd, arena, size, ind, zero, tcache));
+    return (arena_malloc_hard(tsdn, arena, size, ind, zero));
 }

 JEMALLOC_ALWAYS_INLINE arena_t *
@@ -1304,7 +1358,7 @@ arena_aalloc(const void *ptr)

 /* Return the size of the allocation pointed to by ptr. */
 JEMALLOC_ALWAYS_INLINE size_t
-arena_salloc(const void *ptr, bool demote)
+arena_salloc(tsdn_t *tsdn, const void *ptr, bool demote)
 {
     size_t ret;
     arena_chunk_t *chunk;
@@ -1347,17 +1401,18 @@ arena_salloc(const void *ptr, bool demote)
             ret = index2size(binind);
         }
     } else
-        ret = huge_salloc(ptr);
+        ret = huge_salloc(tsdn, ptr);

     return (ret);
 }
 JEMALLOC_ALWAYS_INLINE void
-arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path)
+arena_dalloc(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool slow_path)
 {
     arena_chunk_t *chunk;
     size_t pageind, mapbits;

+    assert(!tsdn_null(tsdn) || tcache == NULL);
     assert(ptr != NULL);

     chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
@@ -1370,11 +1425,12 @@ arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path)
             if (likely(tcache != NULL)) {
                 szind_t binind = arena_ptr_small_binind_get(ptr,
                     mapbits);
-                tcache_dalloc_small(tsd, tcache, ptr, binind,
-                    slow_path);
+                tcache_dalloc_small(tsdn_tsd(tsdn), tcache, ptr,
+                    binind, slow_path);
             } else {
-                arena_dalloc_small(tsd, extent_node_arena_get(
-                    &chunk->node), chunk, ptr, pageind);
+                arena_dalloc_small(tsdn,
+                    extent_node_arena_get(&chunk->node), chunk,
+                    ptr, pageind);
             }
         } else {
             size_t size = arena_mapbits_large_size_get(chunk,
@ -1385,22 +1441,26 @@ arena_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path)
if (likely(tcache != NULL) && size - large_pad <= if (likely(tcache != NULL) && size - large_pad <=
tcache_maxclass) { tcache_maxclass) {
tcache_dalloc_large(tsd, tcache, ptr, size - tcache_dalloc_large(tsdn_tsd(tsdn), tcache, ptr,
large_pad, slow_path); size - large_pad, slow_path);
} else { } else {
arena_dalloc_large(tsd, extent_node_arena_get( arena_dalloc_large(tsdn,
&chunk->node), chunk, ptr); extent_node_arena_get(&chunk->node), chunk,
ptr);
} }
} }
} else } else
huge_dalloc(tsd, ptr, tcache); huge_dalloc(tsdn, ptr);
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache) arena_sdalloc(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache,
bool slow_path)
{ {
arena_chunk_t *chunk; arena_chunk_t *chunk;
assert(!tsdn_null(tsdn) || tcache == NULL);
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (likely(chunk != ptr)) { if (likely(chunk != ptr)) {
if (config_prof && opt_prof) { if (config_prof && opt_prof) {
@ -1417,34 +1477,36 @@ arena_sdalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache)
pageind) - large_pad; pageind) - large_pad;
} }
} }
assert(s2u(size) == s2u(arena_salloc(ptr, false))); assert(s2u(size) == s2u(arena_salloc(tsdn, ptr, false)));
if (likely(size <= SMALL_MAXCLASS)) { if (likely(size <= SMALL_MAXCLASS)) {
/* Small allocation. */ /* Small allocation. */
if (likely(tcache != NULL)) { if (likely(tcache != NULL)) {
szind_t binind = size2index(size); szind_t binind = size2index(size);
tcache_dalloc_small(tsd, tcache, ptr, binind, tcache_dalloc_small(tsdn_tsd(tsdn), tcache, ptr,
true); binind, slow_path);
} else { } else {
size_t pageind = ((uintptr_t)ptr - size_t pageind = ((uintptr_t)ptr -
(uintptr_t)chunk) >> LG_PAGE; (uintptr_t)chunk) >> LG_PAGE;
arena_dalloc_small(tsd, extent_node_arena_get( arena_dalloc_small(tsdn,
&chunk->node), chunk, ptr, pageind); extent_node_arena_get(&chunk->node), chunk,
ptr, pageind);
} }
} else { } else {
assert(config_cache_oblivious || ((uintptr_t)ptr & assert(config_cache_oblivious || ((uintptr_t)ptr &
PAGE_MASK) == 0); PAGE_MASK) == 0);
if (likely(tcache != NULL) && size <= tcache_maxclass) { if (likely(tcache != NULL) && size <= tcache_maxclass) {
tcache_dalloc_large(tsd, tcache, ptr, size, tcache_dalloc_large(tsdn_tsd(tsdn), tcache, ptr,
true); size, slow_path);
} else { } else {
arena_dalloc_large(tsd, extent_node_arena_get( arena_dalloc_large(tsdn,
&chunk->node), chunk, ptr); extent_node_arena_get(&chunk->node), chunk,
ptr);
} }
} }
} else } else
huge_dalloc(tsd, ptr, tcache); huge_dalloc(tsdn, ptr);
} }
# endif /* JEMALLOC_ARENA_INLINE_B */ # endif /* JEMALLOC_ARENA_INLINE_B */
#endif #endif
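The decay machinery above is driven by a per-thread countdown: each allocation or deallocation calls arena_decay_tick(), and once enough ticks accumulate the arena purges dirty pages. The ticker itself reduces to a small reload counter; a minimal standalone sketch of that pattern (illustrative names, not jemalloc's actual ticker.h):

#include <stdbool.h>
#include <stdint.h>

/* Fires once every nticks events, then reloads. */
typedef struct {
	int32_t tick;	/* Events remaining until the next firing. */
	int32_t nticks;	/* Reload value. */
} toy_ticker_t;

static void
toy_ticker_init(toy_ticker_t *t, int32_t nticks)
{

	t->tick = nticks;
	t->nticks = nticks;
}

static bool
toy_ticker_ticks(toy_ticker_t *t, int32_t nticks)
{

	if ((t->tick -= nticks) <= 0) {
		t->tick = t->nticks;
		return (true);	/* Caller purges, as arena_decay_ticks() does. */
	}
	return (false);
}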

include/jemalloc/internal/base.h (View File)

@@ -9,12 +9,13 @@
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

-void *base_alloc(size_t size);
-void base_stats_get(size_t *allocated, size_t *resident, size_t *mapped);
+void *base_alloc(tsdn_t *tsdn, size_t size);
+void base_stats_get(tsdn_t *tsdn, size_t *allocated, size_t *resident,
+    size_t *mapped);
bool base_boot(void);
-void base_prefork(void);
-void base_postfork_parent(void);
-void base_postfork_child(void);
+void base_prefork(tsdn_t *tsdn);
+void base_postfork_parent(tsdn_t *tsdn);
+void base_postfork_child(tsdn_t *tsdn);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/

include/jemalloc/internal/bitmap.h (View File)

@@ -17,8 +17,8 @@ typedef unsigned long bitmap_t;
/*
 * Do some analysis on how big the bitmap is before we use a tree. For a brute
- * force linear search, if we would have to call ffsl more than 2^3 times, use a
- * tree instead.
+ * force linear search, if we would have to call ffs_lu() more than 2^3 times,
+ * use a tree instead.
 */
#if LG_BITMAP_MAXBITS - LG_BITMAP_GROUP_NBITS > 3
# define USE_TREE
@@ -223,7 +223,7 @@ bitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo)
		i++;
		g = bitmap[i];
	}
-	bit = (bit - 1) + (i << LG_BITMAP_GROUP_NBITS);
+	bit = (i << LG_BITMAP_GROUP_NBITS) + (bit - 1);
#endif
	bitmap_set(bitmap, binfo, bit);
	return (bit);
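Concretely, the cutoff above bounds the linear search at 2^3 = 8 group probes. With 64-bit groups (LG_BITMAP_GROUP_NBITS == 6, illustrative), a bitmap of LG_BITMAP_MAXBITS == 9 (512 bits) needs at most 2^(9-6) = 8 ffs_lu() calls and stays linear, while LG_BITMAP_MAXBITS == 10 (1024 bits) would need 16, so USE_TREE is defined; the actual constants come from the configured page and size-class parameters.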

include/jemalloc/internal/chunk.h (View File)

@@ -48,28 +48,32 @@ extern size_t chunk_npages;

extern const chunk_hooks_t chunk_hooks_default;

-chunk_hooks_t chunk_hooks_get(arena_t *arena);
-chunk_hooks_t chunk_hooks_set(arena_t *arena,
+chunk_hooks_t chunk_hooks_get(tsdn_t *tsdn, arena_t *arena);
+chunk_hooks_t chunk_hooks_set(tsdn_t *tsdn, arena_t *arena,
    const chunk_hooks_t *chunk_hooks);

-bool chunk_register(const void *chunk, const extent_node_t *node);
+bool chunk_register(tsdn_t *tsdn, const void *chunk,
+    const extent_node_t *node);
void chunk_deregister(const void *chunk, const extent_node_t *node);
void *chunk_alloc_base(size_t size);
-void *chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks,
-    void *new_addr, size_t size, size_t alignment, bool *zero,
-    bool dalloc_node);
-void *chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
-    void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit);
-void chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks,
-    void *chunk, size_t size, bool committed);
-void chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
-    void *chunk, size_t size, bool zeroed, bool committed);
-bool chunk_purge_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks,
-    void *chunk, size_t size, size_t offset, size_t length);
+void *chunk_alloc_cache(tsdn_t *tsdn, arena_t *arena,
+    chunk_hooks_t *chunk_hooks, void *new_addr, size_t size, size_t alignment,
+    bool *zero, bool dalloc_node);
+void *chunk_alloc_wrapper(tsdn_t *tsdn, arena_t *arena,
+    chunk_hooks_t *chunk_hooks, void *new_addr, size_t size, size_t alignment,
+    bool *zero, bool *commit);
+void chunk_dalloc_cache(tsdn_t *tsdn, arena_t *arena,
+    chunk_hooks_t *chunk_hooks, void *chunk, size_t size, bool committed);
+void chunk_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena,
+    chunk_hooks_t *chunk_hooks, void *chunk, size_t size, bool zeroed,
+    bool committed);
+bool chunk_purge_wrapper(tsdn_t *tsdn, arena_t *arena,
+    chunk_hooks_t *chunk_hooks, void *chunk, size_t size, size_t offset,
+    size_t length);
bool chunk_boot(void);
-void chunk_prefork(void);
-void chunk_postfork_parent(void);
-void chunk_postfork_child(void);
+void chunk_prefork(tsdn_t *tsdn);
+void chunk_postfork_parent(tsdn_t *tsdn);
+void chunk_postfork_child(tsdn_t *tsdn);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/

include/jemalloc/internal/chunk_dss.h (View File)

@@ -21,15 +21,15 @@ extern const char *dss_prec_names[];
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

-dss_prec_t chunk_dss_prec_get(void);
-bool chunk_dss_prec_set(dss_prec_t dss_prec);
-void *chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size,
-    size_t alignment, bool *zero, bool *commit);
-bool chunk_in_dss(void *chunk);
+dss_prec_t chunk_dss_prec_get(tsdn_t *tsdn);
+bool chunk_dss_prec_set(tsdn_t *tsdn, dss_prec_t dss_prec);
+void *chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr,
+    size_t size, size_t alignment, bool *zero, bool *commit);
+bool chunk_in_dss(tsdn_t *tsdn, void *chunk);
bool chunk_dss_boot(void);
-void chunk_dss_prefork(void);
-void chunk_dss_postfork_parent(void);
-void chunk_dss_postfork_child(void);
+void chunk_dss_prefork(tsdn_t *tsdn);
+void chunk_dss_postfork_parent(tsdn_t *tsdn);
+void chunk_dss_postfork_child(tsdn_t *tsdn);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/

include/jemalloc/internal/ckh.h (View File)

@@ -64,13 +64,13 @@ struct ckh_s {
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

-bool ckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *hash,
+bool ckh_new(tsdn_t *tsdn, ckh_t *ckh, size_t minitems, ckh_hash_t *hash,
    ckh_keycomp_t *keycomp);
-void ckh_delete(tsd_t *tsd, ckh_t *ckh);
+void ckh_delete(tsdn_t *tsdn, ckh_t *ckh);
size_t ckh_count(ckh_t *ckh);
bool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data);
-bool ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data);
-bool ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key,
+bool ckh_insert(tsdn_t *tsdn, ckh_t *ckh, const void *key, const void *data);
+bool ckh_remove(tsdn_t *tsdn, ckh_t *ckh, const void *searchkey, void **key,
    void **data);
bool ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data);
void ckh_string_hash(const void *key, size_t r_hash[2]);

include/jemalloc/internal/ctl.h (View File)

@@ -21,13 +21,14 @@ struct ctl_named_node_s {
	/* If (nchildren == 0), this is a terminal node. */
	unsigned nchildren;
	const ctl_node_t *children;
-	int (*ctl)(const size_t *, size_t, void *, size_t *,
-	    void *, size_t);
+	int (*ctl)(tsd_t *, const size_t *, size_t, void *,
+	    size_t *, void *, size_t);
};

struct ctl_indexed_node_s {
	struct ctl_node_s node;
-	const ctl_named_node_t *(*index)(const size_t *, size_t, size_t);
+	const ctl_named_node_t *(*index)(tsdn_t *, const size_t *, size_t,
+	    size_t);
};

struct ctl_arena_stats_s {
@@ -60,6 +61,7 @@ struct ctl_stats_s {
	size_t metadata;
	size_t resident;
	size_t mapped;
+	size_t retained;
	unsigned narenas;
	ctl_arena_stats_t *arenas; /* (narenas + 1) elements. */
};
@@ -68,16 +70,17 @@ struct ctl_stats_s {
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

-int ctl_byname(const char *name, void *oldp, size_t *oldlenp, void *newp,
-    size_t newlen);
-int ctl_nametomib(const char *name, size_t *mibp, size_t *miblenp);
-int ctl_bymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp,
-    void *newp, size_t newlen);
+int ctl_byname(tsd_t *tsd, const char *name, void *oldp, size_t *oldlenp,
+    void *newp, size_t newlen);
+int ctl_nametomib(tsdn_t *tsdn, const char *name, size_t *mibp,
+    size_t *miblenp);
+int ctl_bymib(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
+    size_t *oldlenp, void *newp, size_t newlen);
bool ctl_boot(void);
-void ctl_prefork(void);
-void ctl_postfork_parent(void);
-void ctl_postfork_child(void);
+void ctl_prefork(tsdn_t *tsdn);
+void ctl_postfork_parent(tsdn_t *tsdn);
+void ctl_postfork_child(tsdn_t *tsdn);

#define xmallctl(name, oldp, oldlenp, newp, newlen) do { \
	if (je_mallctl(name, oldp, oldlenp, newp, newlen) \
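These internal entry points back the public mallctl() namespace, so the new stats.retained counter introduced above is readable like any other statistic. A hedged usage sketch (assumes the standard installed <jemalloc/jemalloc.h>; stats are snapshotted, so the epoch is refreshed first):

#include <stdio.h>
#include <stdint.h>
#include <jemalloc/jemalloc.h>

int
main(void)
{
	uint64_t epoch = 1;
	size_t elen = sizeof(epoch);
	size_t retained, rlen = sizeof(retained);

	/* Refresh the cached stats snapshot. */
	if (mallctl("epoch", &epoch, &elen, &epoch, elen) != 0)
		return (1);
	/* Virtual memory retained (unused, but not returned to the OS). */
	if (mallctl("stats.retained", &retained, &rlen, NULL, 0) != 0)
		return (1);
	printf("stats.retained: %zu bytes\n", retained);
	return (0);
}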

include/jemalloc/internal/extent.h (View File)

@@ -48,7 +48,7 @@ struct extent_node_s {
	/* Linkage for the size/address-ordered tree. */
	rb_node(extent_node_t) szad_link;

-	/* Linkage for arena's huge and node_cache lists. */
+	/* Linkage for arena's achunks, huge, and node_cache lists. */
	ql_elm(extent_node_t) ql_link;
};

include/jemalloc/internal/huge.h (View File)

@@ -9,24 +9,23 @@
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

-void *huge_malloc(tsd_t *tsd, arena_t *arena, size_t usize, bool zero,
-    tcache_t *tcache);
-void *huge_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment,
-    bool zero, tcache_t *tcache);
-bool huge_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize,
+void *huge_malloc(tsdn_t *tsdn, arena_t *arena, size_t usize, bool zero);
+void *huge_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize,
+    size_t alignment, bool zero);
+bool huge_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize,
    size_t usize_min, size_t usize_max, bool zero);
void *huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
    size_t usize, size_t alignment, bool zero, tcache_t *tcache);
#ifdef JEMALLOC_JET
-typedef void (huge_dalloc_junk_t)(void *, size_t);
+typedef void (huge_dalloc_junk_t)(tsdn_t *, void *, size_t);
extern huge_dalloc_junk_t *huge_dalloc_junk;
#endif
-void huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache);
+void huge_dalloc(tsdn_t *tsdn, void *ptr);
arena_t *huge_aalloc(const void *ptr);
-size_t huge_salloc(const void *ptr);
-prof_tctx_t *huge_prof_tctx_get(const void *ptr);
-void huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx);
-void huge_prof_tctx_reset(const void *ptr);
+size_t huge_salloc(tsdn_t *tsdn, const void *ptr);
+prof_tctx_t *huge_prof_tctx_get(tsdn_t *tsdn, const void *ptr);
+void huge_prof_tctx_set(tsdn_t *tsdn, const void *ptr, prof_tctx_t *tctx);
+void huge_prof_tctx_reset(tsdn_t *tsdn, const void *ptr);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/

include/jemalloc/internal/jemalloc_internal.h.in (View File)

@@ -161,6 +161,7 @@ static const bool config_cache_oblivious =
#include <malloc/malloc.h>
#endif

+#include "jemalloc/internal/ph.h"
#define RB_COMPACT
#include "jemalloc/internal/rb.h"
#include "jemalloc/internal/qr.h"
@@ -257,6 +258,9 @@ typedef unsigned szind_t;
#  ifdef __powerpc__
#    define LG_QUANTUM 4
#  endif
+#  ifdef __riscv__
+#    define LG_QUANTUM 4
+#  endif
#  ifdef __s390__
#    define LG_QUANTUM 4
#  endif
@@ -367,6 +371,7 @@ typedef unsigned szind_t;
#include "jemalloc/internal/smoothstep.h"
#include "jemalloc/internal/stats.h"
#include "jemalloc/internal/ctl.h"
+#include "jemalloc/internal/witness.h"
#include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/tsd.h"
#include "jemalloc/internal/mb.h"
@@ -398,6 +403,7 @@ typedef unsigned szind_t;
#include "jemalloc/internal/smoothstep.h"
#include "jemalloc/internal/stats.h"
#include "jemalloc/internal/ctl.h"
+#include "jemalloc/internal/witness.h"
#include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/mb.h"
#include "jemalloc/internal/bitmap.h"
@@ -440,6 +446,9 @@ extern bool in_valgrind;
/* Number of CPUs. */
extern unsigned ncpus;

+/* Number of arenas used for automatic multiplexing of threads and arenas. */
+extern unsigned narenas_auto;
+
/*
 * Arenas that are used to service external requests. Not all elements of the
 * arenas array are necessarily used; arenas are created lazily as needed.
@@ -463,14 +472,14 @@ void a0dalloc(void *ptr);
void *bootstrap_malloc(size_t size);
void *bootstrap_calloc(size_t num, size_t size);
void bootstrap_free(void *ptr);
-arena_t *arenas_extend(unsigned ind);
unsigned narenas_total_get(void);
-arena_t *arena_init(unsigned ind);
+arena_t *arena_init(tsdn_t *tsdn, unsigned ind);
arena_tdata_t *arena_tdata_get_hard(tsd_t *tsd, unsigned ind);
-arena_t *arena_choose_hard(tsd_t *tsd);
+arena_t *arena_choose_hard(tsd_t *tsd, bool internal);
void arena_migrate(tsd_t *tsd, unsigned oldind, unsigned newind);
void thread_allocated_cleanup(tsd_t *tsd);
void thread_deallocated_cleanup(tsd_t *tsd);
+void iarena_cleanup(tsd_t *tsd);
void arena_cleanup(tsd_t *tsd);
void arenas_tdata_cleanup(tsd_t *tsd);
void narenas_tdata_cleanup(tsd_t *tsd);
@@ -490,6 +499,7 @@ void jemalloc_postfork_child(void);
#include "jemalloc/internal/smoothstep.h"
#include "jemalloc/internal/stats.h"
#include "jemalloc/internal/ctl.h"
+#include "jemalloc/internal/witness.h"
#include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/mb.h"
#include "jemalloc/internal/bitmap.h"
@@ -521,8 +531,9 @@ void jemalloc_postfork_child(void);
#include "jemalloc/internal/smoothstep.h"
#include "jemalloc/internal/stats.h"
#include "jemalloc/internal/ctl.h"
-#include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/tsd.h"
+#include "jemalloc/internal/witness.h"
+#include "jemalloc/internal/mutex.h"
#include "jemalloc/internal/mb.h"
#include "jemalloc/internal/extent.h"
#include "jemalloc/internal/base.h"
@@ -542,10 +553,12 @@ size_t s2u_compute(size_t size);
size_t s2u_lookup(size_t size);
size_t s2u(size_t size);
size_t sa2u(size_t size, size_t alignment);
+arena_t *arena_choose_impl(tsd_t *tsd, arena_t *arena, bool internal);
arena_t *arena_choose(tsd_t *tsd, arena_t *arena);
+arena_t *arena_ichoose(tsdn_t *tsdn, arena_t *arena);
arena_tdata_t *arena_tdata_get(tsd_t *tsd, unsigned ind,
    bool refresh_if_missing);
-arena_t *arena_get(unsigned ind, bool init_if_missing);
+arena_t *arena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing);
ticker_t *decay_ticker_get(tsd_t *tsd, unsigned ind);
#endif
@@ -741,7 +754,7 @@ sa2u(size_t size, size_t alignment)
		 * Calculate the size of the over-size run that arena_palloc()
		 * would need to allocate in order to guarantee the alignment.
		 */
-		if (usize + large_pad + alignment - PAGE <= arena_maxrun)
+		if (usize + large_pad + alignment <= arena_maxrun)
			return (usize);
	}
@@ -771,7 +784,7 @@ sa2u(size_t size, size_t alignment)
	 * Calculate the multi-chunk mapping that huge_palloc() would need in
	 * order to guarantee the alignment.
	 */
-	if (usize + alignment - PAGE < usize) {
+	if (usize + alignment < usize) {
		/* size_t overflow. */
		return (0);
	}
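The rewritten guard works because size_t arithmetic wraps modulo 2^N, so an overflowed sum is necessarily smaller than either operand; comparing the sum against usize is therefore a complete overflow test. In isolation:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* True iff usize + alignment wraps around size_t. */
static bool
size_overflows(size_t usize, size_t alignment)
{

	return (usize + alignment < usize);
}
/* e.g. size_overflows(SIZE_MAX - 4095, 8192) is true: the sum wraps to 4096. */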
@@ -780,19 +793,38 @@ sa2u(size_t size, size_t alignment)
/* Choose an arena based on a per-thread value. */
JEMALLOC_INLINE arena_t *
-arena_choose(tsd_t *tsd, arena_t *arena)
+arena_choose_impl(tsd_t *tsd, arena_t *arena, bool internal)
{
	arena_t *ret;

	if (arena != NULL)
		return (arena);

-	if (unlikely((ret = tsd_arena_get(tsd)) == NULL))
-		ret = arena_choose_hard(tsd);
+	ret = internal ? tsd_iarena_get(tsd) : tsd_arena_get(tsd);
+	if (unlikely(ret == NULL))
+		ret = arena_choose_hard(tsd, internal);

	return (ret);
}

+JEMALLOC_INLINE arena_t *
+arena_choose(tsd_t *tsd, arena_t *arena)
+{
+
+	return (arena_choose_impl(tsd, arena, false));
+}
+
+JEMALLOC_INLINE arena_t *
+arena_ichoose(tsdn_t *tsdn, arena_t *arena)
+{
+
+	assert(!tsdn_null(tsdn) || arena != NULL);
+
+	if (!tsdn_null(tsdn))
+		return (arena_choose_impl(tsdn_tsd(tsdn), NULL, true));
+	return (arena);
+}
+
JEMALLOC_INLINE arena_tdata_t *
arena_tdata_get(tsd_t *tsd, unsigned ind, bool refresh_if_missing)
{
@@ -819,7 +851,7 @@ arena_tdata_get(tsd_t *tsd, unsigned ind, bool refresh_if_missing)
}

JEMALLOC_INLINE arena_t *
-arena_get(unsigned ind, bool init_if_missing)
+arena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing)
{
	arena_t *ret;
@@ -829,7 +861,7 @@ arena_get(unsigned ind, bool init_if_missing)
	if (unlikely(ret == NULL)) {
		ret = atomic_read_p((void *)&arenas[ind]);
		if (init_if_missing && unlikely(ret == NULL))
-			ret = arena_init(ind);
+			ret = arena_init(tsdn, ind);
	}
	return (ret);
}
@@ -863,30 +895,27 @@ decay_ticker_get(tsd_t *tsd, unsigned ind)
#ifndef JEMALLOC_ENABLE_INLINE
arena_t *iaalloc(const void *ptr);
-size_t isalloc(const void *ptr, bool demote);
-void *iallocztm(tsd_t *tsd, size_t size, szind_t ind, bool zero,
+size_t isalloc(tsdn_t *tsdn, const void *ptr, bool demote);
+void *iallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero,
    tcache_t *tcache, bool is_metadata, arena_t *arena, bool slow_path);
-void *imalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache,
-    arena_t *arena);
-void *imalloc(tsd_t *tsd, size_t size, szind_t ind, bool slow_path);
-void *icalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache,
-    arena_t *arena);
-void *icalloc(tsd_t *tsd, size_t size, szind_t ind);
-void *ipallocztm(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
+void *ialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero,
+    bool slow_path);
+void *ipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,
    tcache_t *tcache, bool is_metadata, arena_t *arena);
-void *ipalloct(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
+void *ipalloct(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,
    tcache_t *tcache, arena_t *arena);
void *ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero);
-size_t ivsalloc(const void *ptr, bool demote);
+size_t ivsalloc(tsdn_t *tsdn, const void *ptr, bool demote);
size_t u2rz(size_t usize);
-size_t p2rz(const void *ptr);
-void idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata,
+size_t p2rz(tsdn_t *tsdn, const void *ptr);
+void idalloctm(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool is_metadata,
    bool slow_path);
-void idalloct(tsd_t *tsd, void *ptr, tcache_t *tcache);
void idalloc(tsd_t *tsd, void *ptr);
void iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path);
-void isdalloct(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
-void isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache);
+void isdalloct(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache,
+    bool slow_path);
+void isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache,
+    bool slow_path);
void *iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
    size_t extra, size_t alignment, bool zero, tcache_t *tcache,
    arena_t *arena);
@@ -894,7 +923,7 @@ void *iralloct(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
    size_t alignment, bool zero, tcache_t *tcache, arena_t *arena);
void *iralloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
    size_t alignment, bool zero);
-bool ixalloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
+bool ixalloc(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size,
    size_t extra, size_t alignment, bool zero);
#endif
@@ -910,102 +939,85 @@ iaalloc(const void *ptr)
/*
 * Typical usage:
+ *   tsdn_t *tsdn = [...]
 *   void *ptr = [...]
- *   size_t sz = isalloc(ptr, config_prof);
+ *   size_t sz = isalloc(tsdn, ptr, config_prof);
 */
JEMALLOC_ALWAYS_INLINE size_t
-isalloc(const void *ptr, bool demote)
+isalloc(tsdn_t *tsdn, const void *ptr, bool demote)
{

	assert(ptr != NULL);
	/* Demotion only makes sense if config_prof is true. */
	assert(config_prof || !demote);

-	return (arena_salloc(ptr, demote));
+	return (arena_salloc(tsdn, ptr, demote));
}

JEMALLOC_ALWAYS_INLINE void *
-iallocztm(tsd_t *tsd, size_t size, szind_t ind, bool zero, tcache_t *tcache,
+iallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero, tcache_t *tcache,
    bool is_metadata, arena_t *arena, bool slow_path)
{
	void *ret;

	assert(size != 0);
+	assert(!is_metadata || tcache == NULL);
+	assert(!is_metadata || arena == NULL || arena->ind < narenas_auto);

-	ret = arena_malloc(tsd, arena, size, ind, zero, tcache, slow_path);
+	ret = arena_malloc(tsdn, arena, size, ind, zero, tcache, slow_path);
	if (config_stats && is_metadata && likely(ret != NULL)) {
-		arena_metadata_allocated_add(iaalloc(ret), isalloc(ret,
-		    config_prof));
+		arena_metadata_allocated_add(iaalloc(ret),
+		    isalloc(tsdn, ret, config_prof));
	}
	return (ret);
}

JEMALLOC_ALWAYS_INLINE void *
-imalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache, arena_t *arena)
+ialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero, bool slow_path)
{

-	return (iallocztm(tsd, size, ind, false, tcache, false, arena, true));
-}
-
-JEMALLOC_ALWAYS_INLINE void *
-imalloc(tsd_t *tsd, size_t size, szind_t ind, bool slow_path)
-{
-
-	return (iallocztm(tsd, size, ind, false, tcache_get(tsd, true), false,
-	    NULL, slow_path));
-}
-
-JEMALLOC_ALWAYS_INLINE void *
-icalloct(tsd_t *tsd, size_t size, szind_t ind, tcache_t *tcache, arena_t *arena)
-{
-
-	return (iallocztm(tsd, size, ind, true, tcache, false, arena, true));
-}
-
-JEMALLOC_ALWAYS_INLINE void *
-icalloc(tsd_t *tsd, size_t size, szind_t ind)
-{
-
-	return (iallocztm(tsd, size, ind, true, tcache_get(tsd, true), false,
-	    NULL, true));
-}
-
-JEMALLOC_ALWAYS_INLINE void *
-ipallocztm(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
+	return (iallocztm(tsd_tsdn(tsd), size, ind, zero, tcache_get(tsd, true),
+	    false, NULL, slow_path));
+}
+
+JEMALLOC_ALWAYS_INLINE void *
+ipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,
    tcache_t *tcache, bool is_metadata, arena_t *arena)
{
	void *ret;

	assert(usize != 0);
	assert(usize == sa2u(usize, alignment));
+	assert(!is_metadata || tcache == NULL);
+	assert(!is_metadata || arena == NULL || arena->ind < narenas_auto);

-	ret = arena_palloc(tsd, arena, usize, alignment, zero, tcache);
+	ret = arena_palloc(tsdn, arena, usize, alignment, zero, tcache);
	assert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret);
	if (config_stats && is_metadata && likely(ret != NULL)) {
-		arena_metadata_allocated_add(iaalloc(ret), isalloc(ret,
+		arena_metadata_allocated_add(iaalloc(ret), isalloc(tsdn, ret,
		    config_prof));
	}
	return (ret);
}

JEMALLOC_ALWAYS_INLINE void *
-ipalloct(tsd_t *tsd, size_t usize, size_t alignment, bool zero,
+ipalloct(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,
    tcache_t *tcache, arena_t *arena)
{

-	return (ipallocztm(tsd, usize, alignment, zero, tcache, false, arena));
+	return (ipallocztm(tsdn, usize, alignment, zero, tcache, false, arena));
}

JEMALLOC_ALWAYS_INLINE void *
ipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero)
{

-	return (ipallocztm(tsd, usize, alignment, zero, tcache_get(tsd, true),
-	    false, NULL));
+	return (ipallocztm(tsd_tsdn(tsd), usize, alignment, zero,
+	    tcache_get(tsd, true), false, NULL));
}

JEMALLOC_ALWAYS_INLINE size_t
-ivsalloc(const void *ptr, bool demote)
+ivsalloc(tsdn_t *tsdn, const void *ptr, bool demote)
{
	extent_node_t *node;
@@ -1017,7 +1029,7 @@ ivsalloc(const void *ptr, bool demote)
	assert(extent_node_addr_get(node) == ptr ||
	    extent_node_achunk_get(node));

-	return (isalloc(ptr, demote));
+	return (isalloc(tsdn, ptr, demote));
}

JEMALLOC_INLINE size_t
@@ -1035,39 +1047,34 @@ u2rz(size_t usize)
}

JEMALLOC_INLINE size_t
-p2rz(const void *ptr)
+p2rz(tsdn_t *tsdn, const void *ptr)
{
-	size_t usize = isalloc(ptr, false);
+	size_t usize = isalloc(tsdn, ptr, false);

	return (u2rz(usize));
}

JEMALLOC_ALWAYS_INLINE void
-idalloctm(tsd_t *tsd, void *ptr, tcache_t *tcache, bool is_metadata,
+idalloctm(tsdn_t *tsdn, void *ptr, tcache_t *tcache, bool is_metadata,
    bool slow_path)
{

	assert(ptr != NULL);
+	assert(!is_metadata || tcache == NULL);
+	assert(!is_metadata || iaalloc(ptr)->ind < narenas_auto);
	if (config_stats && is_metadata) {
-		arena_metadata_allocated_sub(iaalloc(ptr), isalloc(ptr,
+		arena_metadata_allocated_sub(iaalloc(ptr), isalloc(tsdn, ptr,
		    config_prof));
	}

-	arena_dalloc(tsd, ptr, tcache, slow_path);
-}
-
-JEMALLOC_ALWAYS_INLINE void
-idalloct(tsd_t *tsd, void *ptr, tcache_t *tcache)
-{
-
-	idalloctm(tsd, ptr, tcache, false, true);
+	arena_dalloc(tsdn, ptr, tcache, slow_path);
}

JEMALLOC_ALWAYS_INLINE void
idalloc(tsd_t *tsd, void *ptr)
{

-	idalloctm(tsd, ptr, tcache_get(tsd, false), false, true);
+	idalloctm(tsd_tsdn(tsd), ptr, tcache_get(tsd, false), false, true);
}

JEMALLOC_ALWAYS_INLINE void
@@ -1077,24 +1084,25 @@ iqalloc(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path)
	if (slow_path && config_fill && unlikely(opt_quarantine))
		quarantine(tsd, ptr);
	else
-		idalloctm(tsd, ptr, tcache, false, slow_path);
+		idalloctm(tsd_tsdn(tsd), ptr, tcache, false, slow_path);
}

JEMALLOC_ALWAYS_INLINE void
-isdalloct(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache)
+isdalloct(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache,
+    bool slow_path)
{

-	arena_sdalloc(tsd, ptr, size, tcache);
+	arena_sdalloc(tsdn, ptr, size, tcache, slow_path);
}

JEMALLOC_ALWAYS_INLINE void
-isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache)
+isqalloc(tsd_t *tsd, void *ptr, size_t size, tcache_t *tcache, bool slow_path)
{

-	if (config_fill && unlikely(opt_quarantine))
+	if (slow_path && config_fill && unlikely(opt_quarantine))
		quarantine(tsd, ptr);
	else
-		isdalloct(tsd, ptr, size, tcache);
+		isdalloct(tsd_tsdn(tsd), ptr, size, tcache, slow_path);
}

JEMALLOC_ALWAYS_INLINE void *
@@ -1107,7 +1115,7 @@ iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
	usize = sa2u(size + extra, alignment);
	if (unlikely(usize == 0 || usize > HUGE_MAXCLASS))
		return (NULL);
-	p = ipalloct(tsd, usize, alignment, zero, tcache, arena);
+	p = ipalloct(tsd_tsdn(tsd), usize, alignment, zero, tcache, arena);
	if (p == NULL) {
		if (extra == 0)
			return (NULL);
@@ -1115,7 +1123,8 @@ iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
		usize = sa2u(size, alignment);
		if (unlikely(usize == 0 || usize > HUGE_MAXCLASS))
			return (NULL);
-		p = ipalloct(tsd, usize, alignment, zero, tcache, arena);
+		p = ipalloct(tsd_tsdn(tsd), usize, alignment, zero, tcache,
+		    arena);
		if (p == NULL)
			return (NULL);
	}
@@ -1125,7 +1134,7 @@ iralloct_realign(tsd_t *tsd, void *ptr, size_t oldsize, size_t size,
	 */
	copysize = (size < oldsize) ? size : oldsize;
	memcpy(p, ptr, copysize);
-	isqalloc(tsd, ptr, oldsize, tcache);
+	isqalloc(tsd, ptr, oldsize, tcache, true);
	return (p);
}
@@ -1161,7 +1170,7 @@ iralloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t alignment,
}

JEMALLOC_ALWAYS_INLINE bool
-ixalloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t extra,
+ixalloc(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra,
    size_t alignment, bool zero)
{
@@ -1174,7 +1183,7 @@ ixalloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t extra,
		return (true);
	}

-	return (arena_ralloc_no_move(tsd, ptr, oldsize, size, extra, zero));
+	return (arena_ralloc_no_move(tsdn, ptr, oldsize, size, extra, zero));
}
#endif
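The organizing change in this header is the tsdn_t handle threaded through the allocation paths: a "tsd or NULL" type that lets contexts without initialized thread-specific data (early bootstrap, for instance) call the same code honestly. The three converters used throughout the diff — tsd_tsdn(), tsdn_null(), tsdn_tsd() — reduce to the following; a minimal sketch of the pattern under the assumption that tsdn_t is a type-level view of tsd_t, not jemalloc's literal definitions:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct tsd_s tsd_t;	/* Thread-specific data (never NULL). */
typedef struct tsdn_s tsdn_t;	/* "tsd or NULL" handle. */

/* Widen: a valid tsd_t is always a valid tsdn_t. */
static inline tsdn_t *
tsd_tsdn(tsd_t *tsd)
{

	return ((tsdn_t *)tsd);
}

static inline bool
tsdn_null(const tsdn_t *tsdn)
{

	return (tsdn == NULL);
}

/* Narrow: only legal once the caller has checked tsdn_null(). */
static inline tsd_t *
tsdn_tsd(tsdn_t *tsdn)
{

	assert(!tsdn_null(tsdn));
	return ((tsd_t *)tsdn);
}

Fast paths that need a tcache take tsd (never NULL); pure metadata paths take tsdn and bail out when it is NULL, as arena_decay_ticks() does above.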

include/jemalloc/internal/jemalloc_internal_defs.h.in (View File)

@@ -214,6 +214,15 @@
#undef JEMALLOC_ZONE
#undef JEMALLOC_ZONE_VERSION

+/*
+ * Methods for determining whether the OS overcommits.
+ * JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY: Linux's
+ *                                         /proc/sys/vm.overcommit_memory file.
+ * JEMALLOC_SYSCTL_VM_OVERCOMMIT: FreeBSD's vm.overcommit sysctl.
+ */
+#undef JEMALLOC_SYSCTL_VM_OVERCOMMIT
+#undef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY
+
/*
 * Methods for purging unused pages differ between operating systems.
 *
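On the Linux side, the probe named above can be little more than reading one character of policy from /proc/sys/vm/overcommit_memory (0 = heuristic overcommit, 1 = always, 2 = never). A hedged sketch of such a check (illustrative, not jemalloc's actual os_overcommits logic):

#include <fcntl.h>
#include <stdbool.h>
#include <unistd.h>

/* Returns true if the kernel may overcommit (policy 0 or 1). */
static bool
linux_overcommits(void)
{
	char buf[1];
	bool ret = true;	/* Historical default: heuristic overcommit. */
	int fd = open("/proc/sys/vm/overcommit_memory", O_RDONLY);

	if (fd == -1)
		return (ret);
	if (read(fd, buf, sizeof(buf)) == 1)
		ret = (buf[0] == '0' || buf[0] == '1');
	close(fd);
	return (ret);
}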

include/jemalloc/internal/mb.h (View File)

@@ -42,7 +42,7 @@ mb_write(void)
	    : /* Inputs. */
	    : "memory" /* Clobbers. */
	    );
-#else
+# else
	/*
	 * This is hopefully enough to keep the compiler from reordering
	 * instructions around this one.
@@ -52,7 +52,7 @@ mb_write(void)
	    : /* Inputs. */
	    : "memory" /* Clobbers. */
	    );
-#endif
+# endif
}
#elif (defined(__amd64__) || defined(__x86_64__))
JEMALLOC_INLINE void
@@ -104,9 +104,9 @@ mb_write(void)
{
	malloc_mutex_t mtx;

-	malloc_mutex_init(&mtx);
-	malloc_mutex_lock(&mtx);
-	malloc_mutex_unlock(&mtx);
+	malloc_mutex_init(&mtx, "mb", WITNESS_RANK_OMIT);
+	malloc_mutex_lock(NULL, &mtx);
+	malloc_mutex_unlock(NULL, &mtx);
}
#endif
#endif

include/jemalloc/internal/mutex.h (View File)

@@ -6,17 +6,21 @@ typedef struct malloc_mutex_s malloc_mutex_t;
#ifdef _WIN32
# define MALLOC_MUTEX_INITIALIZER
#elif (defined(JEMALLOC_OSSPIN))
-# define MALLOC_MUTEX_INITIALIZER {0}
+# define MALLOC_MUTEX_INITIALIZER {0, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
#elif (defined(JEMALLOC_MUTEX_INIT_CB))
-# define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER, NULL}
+# define MALLOC_MUTEX_INITIALIZER \
+    {PTHREAD_MUTEX_INITIALIZER, NULL, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
#else
# if (defined(JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP) && \
     defined(PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP))
#  define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_ADAPTIVE_NP
-#  define MALLOC_MUTEX_INITIALIZER {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP}
+#  define MALLOC_MUTEX_INITIALIZER \
+    {PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP, \
+     WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
# else
#  define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_DEFAULT
-#  define MALLOC_MUTEX_INITIALIZER {PTHREAD_MUTEX_INITIALIZER}
+#  define MALLOC_MUTEX_INITIALIZER \
+    {PTHREAD_MUTEX_INITIALIZER, WITNESS_INITIALIZER(WITNESS_RANK_OMIT)}
# endif
#endif
@@ -39,6 +43,7 @@ struct malloc_mutex_s {
#else
	pthread_mutex_t lock;
#endif
+	witness_t witness;
};
#endif /* JEMALLOC_H_STRUCTS */
@@ -52,27 +57,31 @@ extern bool isthreaded;
# define isthreaded true
#endif

-bool malloc_mutex_init(malloc_mutex_t *mutex);
-void malloc_mutex_prefork(malloc_mutex_t *mutex);
-void malloc_mutex_postfork_parent(malloc_mutex_t *mutex);
-void malloc_mutex_postfork_child(malloc_mutex_t *mutex);
-bool mutex_boot(void);
+bool malloc_mutex_init(malloc_mutex_t *mutex, const char *name,
+    witness_rank_t rank);
+void malloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex);
+void malloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex);
+void malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex);
+bool malloc_mutex_boot(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES

#ifndef JEMALLOC_ENABLE_INLINE
-void malloc_mutex_lock(malloc_mutex_t *mutex);
-void malloc_mutex_unlock(malloc_mutex_t *mutex);
+void malloc_mutex_lock(tsdn_t *tsdn, malloc_mutex_t *mutex);
+void malloc_mutex_unlock(tsdn_t *tsdn, malloc_mutex_t *mutex);
+void malloc_mutex_assert_owner(tsdn_t *tsdn, malloc_mutex_t *mutex);
+void malloc_mutex_assert_not_owner(tsdn_t *tsdn, malloc_mutex_t *mutex);
#endif

#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_))
JEMALLOC_INLINE void
-malloc_mutex_lock(malloc_mutex_t *mutex)
+malloc_mutex_lock(tsdn_t *tsdn, malloc_mutex_t *mutex)
{

	if (isthreaded) {
+		witness_assert_not_owner(tsdn, &mutex->witness);
#ifdef _WIN32
# if _WIN32_WINNT >= 0x0600
		AcquireSRWLockExclusive(&mutex->lock);
@@ -84,14 +93,16 @@ malloc_mutex_lock(malloc_mutex_t *mutex)
#else
		pthread_mutex_lock(&mutex->lock);
#endif
+		witness_lock(tsdn, &mutex->witness);
	}
}

JEMALLOC_INLINE void
-malloc_mutex_unlock(malloc_mutex_t *mutex)
+malloc_mutex_unlock(tsdn_t *tsdn, malloc_mutex_t *mutex)
{

	if (isthreaded) {
+		witness_unlock(tsdn, &mutex->witness);
#ifdef _WIN32
# if _WIN32_WINNT >= 0x0600
		ReleaseSRWLockExclusive(&mutex->lock);
@@ -105,6 +116,22 @@ malloc_mutex_unlock(malloc_mutex_t *mutex)
#endif
	}
}
+
+JEMALLOC_INLINE void
+malloc_mutex_assert_owner(tsdn_t *tsdn, malloc_mutex_t *mutex)
+{
+
+	if (isthreaded)
+		witness_assert_owner(tsdn, &mutex->witness);
+}
+
+JEMALLOC_INLINE void
+malloc_mutex_assert_not_owner(tsdn_t *tsdn, malloc_mutex_t *mutex)
+{
+
+	if (isthreaded)
+		witness_assert_not_owner(tsdn, &mutex->witness);
+}
#endif

#endif /* JEMALLOC_H_INLINES */
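Net effect: every mutex now carries a witness with a name and rank, and every lock/unlock threads the tsdn handle so lock order can be checked in debug builds. Adoption inside the codebase looks roughly like the following sketch (the subsystem name and rank choice are illustrative):

static malloc_mutex_t example_mtx;

static bool
example_boot(void)
{

	/* Boot: name the lock and assign a witness rank. */
	return (malloc_mutex_init(&example_mtx, "example", WITNESS_RANK_OMIT));
}

static void
example_work(tsdn_t *tsdn)
{

	/* Asserts non-ownership, acquires, then records the witness. */
	malloc_mutex_lock(tsdn, &example_mtx);
	/* ... critical section; malloc_mutex_assert_owner() now holds ... */
	malloc_mutex_unlock(tsdn, &example_mtx); /* Drops the witness first. */
}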

include/jemalloc/internal/nstime.h (View File)

@@ -1,13 +1,13 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES

#define JEMALLOC_CLOCK_GETTIME defined(_POSIX_MONOTONIC_CLOCK) \
    && _POSIX_MONOTONIC_CLOCK >= 0

typedef struct nstime_s nstime_t;

/* Maximum supported number of seconds (~584 years). */
-#define NSTIME_SEC_MAX 18446744072
+#define NSTIME_SEC_MAX KQU(18446744072)

#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/

include/jemalloc/internal/pages.h (View File)

@@ -9,13 +9,14 @@
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS

-void *pages_map(void *addr, size_t size);
+void *pages_map(void *addr, size_t size, bool *commit);
void pages_unmap(void *addr, size_t size);
void *pages_trim(void *addr, size_t alloc_size, size_t leadsize,
-    size_t size);
+    size_t size, bool *commit);
bool pages_commit(void *addr, size_t size);
bool pages_decommit(void *addr, size_t size);
bool pages_purge(void *addr, size_t size);
+void pages_boot(void);

#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/

include/jemalloc/internal/ph.h (View File)

@@ -0,0 +1,345 @@
/*
* A Pairing Heap implementation.
*
* "The Pairing Heap: A New Form of Self-Adjusting Heap"
* https://www.cs.cmu.edu/~sleator/papers/pairing-heaps.pdf
*
 * With an auxiliary twopass list, described in a follow-on paper.
*
* "Pairing Heaps: Experiments and Analysis"
* http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.2988&rep=rep1&type=pdf
*
*******************************************************************************
*/
#ifndef PH_H_
#define PH_H_
/* Node structure. */
#define phn(a_type) \
struct { \
a_type *phn_prev; \
a_type *phn_next; \
a_type *phn_lchild; \
}
/* Root structure. */
#define ph(a_type) \
struct { \
a_type *ph_root; \
}
/* Internal utility macros. */
#define phn_lchild_get(a_type, a_field, a_phn) \
(a_phn->a_field.phn_lchild)
#define phn_lchild_set(a_type, a_field, a_phn, a_lchild) do { \
a_phn->a_field.phn_lchild = a_lchild; \
} while (0)
#define phn_next_get(a_type, a_field, a_phn) \
(a_phn->a_field.phn_next)
#define phn_prev_set(a_type, a_field, a_phn, a_prev) do { \
a_phn->a_field.phn_prev = a_prev; \
} while (0)
#define phn_prev_get(a_type, a_field, a_phn) \
(a_phn->a_field.phn_prev)
#define phn_next_set(a_type, a_field, a_phn, a_next) do { \
a_phn->a_field.phn_next = a_next; \
} while (0)
#define phn_merge_ordered(a_type, a_field, a_phn0, a_phn1, a_cmp) do { \
a_type *phn0child; \
\
assert(a_phn0 != NULL); \
assert(a_phn1 != NULL); \
assert(a_cmp(a_phn0, a_phn1) <= 0); \
\
phn_prev_set(a_type, a_field, a_phn1, a_phn0); \
phn0child = phn_lchild_get(a_type, a_field, a_phn0); \
phn_next_set(a_type, a_field, a_phn1, phn0child); \
if (phn0child != NULL) \
phn_prev_set(a_type, a_field, phn0child, a_phn1); \
phn_lchild_set(a_type, a_field, a_phn0, a_phn1); \
} while (0)
#define phn_merge(a_type, a_field, a_phn0, a_phn1, a_cmp, r_phn) do { \
if (a_phn0 == NULL) \
r_phn = a_phn1; \
else if (a_phn1 == NULL) \
r_phn = a_phn0; \
else if (a_cmp(a_phn0, a_phn1) < 0) { \
phn_merge_ordered(a_type, a_field, a_phn0, a_phn1, \
a_cmp); \
r_phn = a_phn0; \
} else { \
phn_merge_ordered(a_type, a_field, a_phn1, a_phn0, \
a_cmp); \
r_phn = a_phn1; \
} \
} while (0)
#define ph_merge_siblings(a_type, a_field, a_phn, a_cmp, r_phn) do { \
a_type *head = NULL; \
a_type *tail = NULL; \
a_type *phn0 = a_phn; \
a_type *phn1 = phn_next_get(a_type, a_field, phn0); \
\
/* \
* Multipass merge, wherein the first two elements of a FIFO \
* are repeatedly merged, and each result is appended to the \
* singly linked FIFO, until the FIFO contains only a single \
* element. We start with a sibling list but no reference to \
* its tail, so we do a single pass over the sibling list to \
* populate the FIFO. \
*/ \
if (phn1 != NULL) { \
a_type *phnrest = phn_next_get(a_type, a_field, phn1); \
if (phnrest != NULL) \
phn_prev_set(a_type, a_field, phnrest, NULL); \
phn_prev_set(a_type, a_field, phn0, NULL); \
phn_next_set(a_type, a_field, phn0, NULL); \
phn_prev_set(a_type, a_field, phn1, NULL); \
phn_next_set(a_type, a_field, phn1, NULL); \
phn_merge(a_type, a_field, phn0, phn1, a_cmp, phn0); \
head = tail = phn0; \
phn0 = phnrest; \
while (phn0 != NULL) { \
phn1 = phn_next_get(a_type, a_field, phn0); \
if (phn1 != NULL) { \
phnrest = phn_next_get(a_type, a_field, \
phn1); \
if (phnrest != NULL) { \
phn_prev_set(a_type, a_field, \
phnrest, NULL); \
} \
phn_prev_set(a_type, a_field, phn0, \
NULL); \
phn_next_set(a_type, a_field, phn0, \
NULL); \
phn_prev_set(a_type, a_field, phn1, \
NULL); \
phn_next_set(a_type, a_field, phn1, \
NULL); \
phn_merge(a_type, a_field, phn0, phn1, \
a_cmp, phn0); \
phn_next_set(a_type, a_field, tail, \
phn0); \
tail = phn0; \
phn0 = phnrest; \
} else { \
phn_next_set(a_type, a_field, tail, \
phn0); \
tail = phn0; \
phn0 = NULL; \
} \
} \
phn0 = head; \
phn1 = phn_next_get(a_type, a_field, phn0); \
if (phn1 != NULL) { \
while (true) { \
head = phn_next_get(a_type, a_field, \
phn1); \
assert(phn_prev_get(a_type, a_field, \
phn0) == NULL); \
phn_next_set(a_type, a_field, phn0, \
NULL); \
assert(phn_prev_get(a_type, a_field, \
phn1) == NULL); \
phn_next_set(a_type, a_field, phn1, \
NULL); \
phn_merge(a_type, a_field, phn0, phn1, \
a_cmp, phn0); \
if (head == NULL) \
break; \
phn_next_set(a_type, a_field, tail, \
phn0); \
tail = phn0; \
phn0 = head; \
phn1 = phn_next_get(a_type, a_field, \
phn0); \
} \
} \
} \
r_phn = phn0; \
} while (0)
#define ph_merge_aux(a_type, a_field, a_ph, a_cmp) do { \
a_type *phn = phn_next_get(a_type, a_field, a_ph->ph_root); \
if (phn != NULL) { \
phn_prev_set(a_type, a_field, a_ph->ph_root, NULL); \
phn_next_set(a_type, a_field, a_ph->ph_root, NULL); \
phn_prev_set(a_type, a_field, phn, NULL); \
ph_merge_siblings(a_type, a_field, phn, a_cmp, phn); \
assert(phn_next_get(a_type, a_field, phn) == NULL); \
phn_merge(a_type, a_field, a_ph->ph_root, phn, a_cmp, \
a_ph->ph_root); \
} \
} while (0)
#define ph_merge_children(a_type, a_field, a_phn, a_cmp, r_phn) do { \
a_type *lchild = phn_lchild_get(a_type, a_field, a_phn); \
if (lchild == NULL) \
r_phn = NULL; \
else { \
ph_merge_siblings(a_type, a_field, lchild, a_cmp, \
r_phn); \
} \
} while (0)
/*
* The ph_proto() macro generates function prototypes that correspond to the
* functions generated by an equivalently parameterized call to ph_gen().
*/
#define ph_proto(a_attr, a_prefix, a_ph_type, a_type) \
a_attr void a_prefix##new(a_ph_type *ph); \
a_attr bool a_prefix##empty(a_ph_type *ph); \
a_attr a_type *a_prefix##first(a_ph_type *ph); \
a_attr void a_prefix##insert(a_ph_type *ph, a_type *phn); \
a_attr a_type *a_prefix##remove_first(a_ph_type *ph); \
a_attr void a_prefix##remove(a_ph_type *ph, a_type *phn);
/*
* The ph_gen() macro generates a type-specific pairing heap implementation,
* based on the above cpp macros.
*/
#define ph_gen(a_attr, a_prefix, a_ph_type, a_type, a_field, a_cmp) \
a_attr void \
a_prefix##new(a_ph_type *ph) \
{ \
\
memset(ph, 0, sizeof(ph(a_type))); \
} \
a_attr bool \
a_prefix##empty(a_ph_type *ph) \
{ \
\
return (ph->ph_root == NULL); \
} \
a_attr a_type * \
a_prefix##first(a_ph_type *ph) \
{ \
\
if (ph->ph_root == NULL) \
return (NULL); \
ph_merge_aux(a_type, a_field, ph, a_cmp); \
return (ph->ph_root); \
} \
a_attr void \
a_prefix##insert(a_ph_type *ph, a_type *phn) \
{ \
\
memset(&phn->a_field, 0, sizeof(phn(a_type))); \
\
/* \
* Treat the root as an aux list during insertion, and lazily \
* merge during a_prefix##remove_first(). For elements that \
* are inserted, then removed via a_prefix##remove() before the \
* aux list is ever processed, this makes insert/remove \
* constant-time, whereas eager merging would make insert \
* O(log n). \
*/ \
if (ph->ph_root == NULL) \
ph->ph_root = phn; \
else { \
phn_next_set(a_type, a_field, phn, phn_next_get(a_type, \
a_field, ph->ph_root)); \
if (phn_next_get(a_type, a_field, ph->ph_root) != \
NULL) { \
phn_prev_set(a_type, a_field, \
phn_next_get(a_type, a_field, ph->ph_root), \
phn); \
} \
phn_prev_set(a_type, a_field, phn, ph->ph_root); \
phn_next_set(a_type, a_field, ph->ph_root, phn); \
} \
} \
a_attr a_type * \
a_prefix##remove_first(a_ph_type *ph) \
{ \
a_type *ret; \
\
if (ph->ph_root == NULL) \
return (NULL); \
ph_merge_aux(a_type, a_field, ph, a_cmp); \
\
ret = ph->ph_root; \
\
ph_merge_children(a_type, a_field, ph->ph_root, a_cmp, \
ph->ph_root); \
\
return (ret); \
} \
a_attr void \
a_prefix##remove(a_ph_type *ph, a_type *phn) \
{ \
a_type *replace, *parent; \
\
/* \
* We can delete from aux list without merging it, but we need \
* to merge if we are dealing with the root node. \
*/ \
if (ph->ph_root == phn) { \
ph_merge_aux(a_type, a_field, ph, a_cmp); \
if (ph->ph_root == phn) { \
ph_merge_children(a_type, a_field, ph->ph_root, \
a_cmp, ph->ph_root); \
return; \
} \
} \
\
/* Get parent (if phn is leftmost child) before mutating. */ \
if ((parent = phn_prev_get(a_type, a_field, phn)) != NULL) { \
if (phn_lchild_get(a_type, a_field, parent) != phn) \
parent = NULL; \
} \
/* Find a possible replacement node, and link to parent. */ \
ph_merge_children(a_type, a_field, phn, a_cmp, replace); \
/* Set next/prev for sibling linked list. */ \
if (replace != NULL) { \
if (parent != NULL) { \
phn_prev_set(a_type, a_field, replace, parent); \
phn_lchild_set(a_type, a_field, parent, \
replace); \
} else { \
phn_prev_set(a_type, a_field, replace, \
phn_prev_get(a_type, a_field, phn)); \
if (phn_prev_get(a_type, a_field, phn) != \
NULL) { \
phn_next_set(a_type, a_field, \
phn_prev_get(a_type, a_field, phn), \
replace); \
} \
} \
phn_next_set(a_type, a_field, replace, \
phn_next_get(a_type, a_field, phn)); \
if (phn_next_get(a_type, a_field, phn) != NULL) { \
phn_prev_set(a_type, a_field, \
phn_next_get(a_type, a_field, phn), \
replace); \
} \
} else { \
if (parent != NULL) { \
a_type *next = phn_next_get(a_type, a_field, \
phn); \
phn_lchild_set(a_type, a_field, parent, next); \
if (next != NULL) { \
phn_prev_set(a_type, a_field, next, \
parent); \
} \
} else { \
assert(phn_prev_get(a_type, a_field, phn) != \
NULL); \
phn_next_set(a_type, a_field, \
phn_prev_get(a_type, a_field, phn), \
phn_next_get(a_type, a_field, phn)); \
} \
if (phn_next_get(a_type, a_field, phn) != NULL) { \
phn_prev_set(a_type, a_field, \
phn_next_get(a_type, a_field, phn), \
phn_prev_get(a_type, a_field, phn)); \
} \
} \
}
#endif /* PH_H_ */
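Since ph.h is purely macro-generated, a consumer embeds the phn() linkage in its node type and expands ph_gen() with a comparator to get a concrete heap. A minimal self-contained sketch, with an illustrative node type and comparator:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#include "ph.h" /* Assumed include path for the header above. */

typedef struct toy_node_s toy_node_t;
struct toy_node_s {
	uint64_t key;
	phn(toy_node_t) link;	/* Intrusive pairing-heap linkage. */
};
typedef ph(toy_node_t) toy_heap_t;

/* Three-way comparator over keys. */
static int
toy_cmp(const toy_node_t *a, const toy_node_t *b)
{

	return ((a->key > b->key) - (a->key < b->key));
}

/* Generates toy_heap_new(), toy_heap_insert(), toy_heap_first(), etc. */
ph_gen(static, toy_heap_, toy_heap_t, toy_node_t, link, toy_cmp)

int
main(void)
{
	toy_heap_t heap;
	toy_node_t a, b, *min;

	a.key = 2;
	b.key = 1;
	toy_heap_new(&heap);
	toy_heap_insert(&heap, &a);
	toy_heap_insert(&heap, &b);
	min = toy_heap_remove_first(&heap);	/* Pops the minimum key. */
	assert(min == &b && toy_heap_first(&heap) == &a);
	(void)min;
	return (0);
}

Note how insertion only splices onto the root's auxiliary list; the pairing-heap merges are deferred until first()/remove_first(), which is what makes insert followed by remove constant-time per the comment in ph_gen() above.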

include/jemalloc/internal/private_symbols.txt (View File)

@@ -5,10 +5,12 @@ arena_alloc_junk_small
arena_basic_stats_merge
arena_bin_index
arena_bin_info
-arena_bitselm_get
+arena_bitselm_get_const
+arena_bitselm_get_mutable
arena_boot
arena_choose
arena_choose_hard
+arena_choose_impl
arena_chunk_alloc_huge
arena_chunk_cache_maybe_insert
arena_chunk_cache_maybe_remove
@@ -21,9 +23,7 @@ arena_dalloc
arena_dalloc_bin
arena_dalloc_bin_junked_locked
arena_dalloc_junk_large
-arena_dalloc_junk_large_impl
arena_dalloc_junk_small
-arena_dalloc_junk_small_impl
arena_dalloc_large
arena_dalloc_large_junked_locked
arena_dalloc_small
@@ -36,6 +36,7 @@ arena_decay_time_set
arena_dss_prec_get
arena_dss_prec_set
arena_get
+arena_ichoose
arena_init
arena_lg_dirty_mult_default_get
arena_lg_dirty_mult_default_set
@@ -62,7 +63,8 @@ arena_mapbits_unallocated_set
arena_mapbits_unallocated_size_get
arena_mapbits_unallocated_size_set
arena_mapbits_unzeroed_get
-arena_mapbitsp_get
+arena_mapbitsp_get_const
+arena_mapbitsp_get_mutable
arena_mapbitsp_read
arena_mapbitsp_write
arena_maxrun
@@ -71,7 +73,8 @@ arena_metadata_allocated_add
arena_metadata_allocated_get
arena_metadata_allocated_sub
arena_migrate
-arena_miscelm_get
+arena_miscelm_get_const
+arena_miscelm_get_mutable
arena_miscelm_to_pageind
arena_miscelm_to_rpages
arena_new
@@ -102,6 +105,7 @@ arena_ralloc_junk_large
arena_ralloc_no_move
arena_rd_to_miscelm
arena_redzone_corruption
+arena_reset
arena_run_regind
arena_run_to_miscelm
arena_salloc
@@ -287,14 +291,11 @@ huge_ralloc
huge_ralloc_no_move
huge_salloc
iaalloc
+ialloc
iallocztm
-icalloc
-icalloct
+iarena_cleanup
idalloc
-idalloct
idalloctm
-imalloc
-imalloct
in_valgrind
index2size
index2size_compute
@@ -320,6 +321,9 @@ large_maxclass
lg_floor
lg_prof_sample
malloc_cprintf
+malloc_mutex_assert_not_owner
+malloc_mutex_assert_owner
+malloc_mutex_boot
malloc_mutex_init
malloc_mutex_lock
malloc_mutex_postfork_child
@@ -341,7 +345,7 @@ malloc_write
map_bias
map_misc_offset
mb_write
-mutex_boot
+narenas_auto
narenas_tdata_cleanup
narenas_total_get
ncpus
@@ -361,7 +365,6 @@ nstime_nsec
nstime_sec
nstime_subtract
nstime_update
-nstime_update_impl
opt_abort
opt_decay_time
opt_dss
@@ -391,6 +394,7 @@ opt_utrace
opt_xmalloc
opt_zero
p2rz
+pages_boot
pages_commit
pages_decommit
pages_map
@@ -492,8 +496,6 @@ tcache_alloc_easy
tcache_alloc_large
tcache_alloc_small
tcache_alloc_small_hard
-tcache_arena_associate
-tcache_arena_dissociate
tcache_arena_reassociate
tcache_bin_flush_large
tcache_bin_flush_small
@@ -539,19 +541,23 @@ tsd_boot
tsd_boot0
tsd_boot1
tsd_booted
+tsd_booted_get
tsd_cleanup
tsd_cleanup_wrapper
tsd_fetch
tsd_get
+tsd_iarena_get
+tsd_iarena_set
+tsd_iarenap_get
+tsd_initialized
+tsd_init_check_recursion
+tsd_init_finish
+tsd_init_head
tsd_narenas_tdata_get
tsd_narenas_tdata_set
tsd_narenas_tdatap_get
tsd_wrapper_get
tsd_wrapper_set
-tsd_initialized
-tsd_init_check_recursion
-tsd_init_finish
-tsd_init_head
tsd_nominal
tsd_prof_tdata_get
tsd_prof_tdata_set
@@ -574,8 +580,33 @@ tsd_thread_deallocated_set
tsd_thread_deallocatedp_get
tsd_tls
tsd_tsd
+tsd_tsdn
+tsd_witness_fork_get
+tsd_witness_fork_set
+tsd_witness_forkp_get
+tsd_witnesses_get
+tsd_witnesses_set
+tsd_witnessesp_get
+tsdn_fetch
+tsdn_null
+tsdn_tsd
u2rz
valgrind_freelike_block
valgrind_make_mem_defined
valgrind_make_mem_noaccess
valgrind_make_mem_undefined
+witness_assert_lockless
+witness_assert_not_owner
+witness_assert_owner
+witness_fork_cleanup
+witness_init
+witness_lock
+witness_lock_error
+witness_lockless_error
+witness_not_owner_error
+witness_owner_error
+witness_postfork_child
+witness_postfork_parent
+witness_prefork
+witness_unlock
+witnesses_cleanup

View File

@ -281,7 +281,7 @@ extern uint64_t prof_interval;
extern size_t lg_prof_sample; extern size_t lg_prof_sample;
void prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated); void prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx, bool updated);
void prof_malloc_sample_object(const void *ptr, size_t usize, void prof_malloc_sample_object(tsdn_t *tsdn, const void *ptr, size_t usize,
prof_tctx_t *tctx); prof_tctx_t *tctx);
void prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_tctx_t *tctx); void prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_tctx_t *tctx);
void bt_init(prof_bt_t *bt, void **vec); void bt_init(prof_bt_t *bt, void **vec);
@ -293,33 +293,33 @@ size_t prof_bt_count(void);
const prof_cnt_t *prof_cnt_all(void); const prof_cnt_t *prof_cnt_all(void);
typedef int (prof_dump_open_t)(bool, const char *); typedef int (prof_dump_open_t)(bool, const char *);
extern prof_dump_open_t *prof_dump_open; extern prof_dump_open_t *prof_dump_open;
typedef bool (prof_dump_header_t)(bool, const prof_cnt_t *); typedef bool (prof_dump_header_t)(tsdn_t *, bool, const prof_cnt_t *);
extern prof_dump_header_t *prof_dump_header; extern prof_dump_header_t *prof_dump_header;
#endif #endif
void prof_idump(void); void prof_idump(tsdn_t *tsdn);
bool prof_mdump(const char *filename); bool prof_mdump(tsd_t *tsd, const char *filename);
void prof_gdump(void); void prof_gdump(tsdn_t *tsdn);
prof_tdata_t *prof_tdata_init(tsd_t *tsd); prof_tdata_t *prof_tdata_init(tsdn_t *tsdn);
prof_tdata_t *prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata); prof_tdata_t *prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata);
void prof_reset(tsd_t *tsd, size_t lg_sample); void prof_reset(tsdn_t *tsdn, size_t lg_sample);
void prof_tdata_cleanup(tsd_t *tsd); void prof_tdata_cleanup(tsd_t *tsd);
const char *prof_thread_name_get(void); bool prof_active_get(tsdn_t *tsdn);
bool prof_active_get(void); bool prof_active_set(tsdn_t *tsdn, bool active);
bool prof_active_set(bool active); const char *prof_thread_name_get(tsd_t *tsd);
int prof_thread_name_set(tsd_t *tsd, const char *thread_name); int prof_thread_name_set(tsd_t *tsd, const char *thread_name);
bool prof_thread_active_get(void); bool prof_thread_active_get(tsd_t *tsd);
bool prof_thread_active_set(bool active); bool prof_thread_active_set(tsd_t *tsd, bool active);
bool prof_thread_active_init_get(void); bool prof_thread_active_init_get(tsdn_t *tsdn);
bool prof_thread_active_init_set(bool active_init); bool prof_thread_active_init_set(tsdn_t *tsdn, bool active_init);
bool prof_gdump_get(void); bool prof_gdump_get(tsdn_t *tsdn);
bool prof_gdump_set(bool active); bool prof_gdump_set(tsdn_t *tsdn, bool active);
void prof_boot0(void); void prof_boot0(void);
void prof_boot1(void); void prof_boot1(void);
bool prof_boot2(void); bool prof_boot2(tsdn_t *tsdn);
void prof_prefork0(void); void prof_prefork0(tsdn_t *tsdn);
void prof_prefork1(void); void prof_prefork1(tsdn_t *tsdn);
void prof_postfork_parent(void); void prof_postfork_parent(tsdn_t *tsdn);
void prof_postfork_child(void); void prof_postfork_child(tsdn_t *tsdn);
void prof_sample_threshold_update(prof_tdata_t *tdata); void prof_sample_threshold_update(prof_tdata_t *tdata);
#endif /* JEMALLOC_H_EXTERNS */ #endif /* JEMALLOC_H_EXTERNS */
@ -330,17 +330,17 @@ void prof_sample_threshold_update(prof_tdata_t *tdata);
bool prof_active_get_unlocked(void); bool prof_active_get_unlocked(void);
bool prof_gdump_get_unlocked(void); bool prof_gdump_get_unlocked(void);
prof_tdata_t *prof_tdata_get(tsd_t *tsd, bool create); prof_tdata_t *prof_tdata_get(tsd_t *tsd, bool create);
prof_tctx_t *prof_tctx_get(tsdn_t *tsdn, const void *ptr);
void prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize,
prof_tctx_t *tctx);
void prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize,
const void *old_ptr, prof_tctx_t *tctx);
bool prof_sample_accum_update(tsd_t *tsd, size_t usize, bool commit, bool prof_sample_accum_update(tsd_t *tsd, size_t usize, bool commit,
prof_tdata_t **tdata_out); prof_tdata_t **tdata_out);
prof_tctx_t *prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, prof_tctx_t *prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active,
bool update); bool update);
prof_tctx_t *prof_tctx_get(const void *ptr); void prof_malloc(tsdn_t *tsdn, const void *ptr, size_t usize,
void prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx);
void prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr,
prof_tctx_t *tctx); prof_tctx_t *tctx);
void prof_malloc_sample_object(const void *ptr, size_t usize,
prof_tctx_t *tctx);
void prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx);
void prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, void prof_realloc(tsd_t *tsd, const void *ptr, size_t usize,
prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr, prof_tctx_t *tctx, bool prof_active, bool updated, const void *old_ptr,
size_t old_usize, prof_tctx_t *old_tctx); size_t old_usize, prof_tctx_t *old_tctx);
@ -384,7 +384,7 @@ prof_tdata_get(tsd_t *tsd, bool create)
if (create) { if (create) {
if (unlikely(tdata == NULL)) { if (unlikely(tdata == NULL)) {
if (tsd_nominal(tsd)) { if (tsd_nominal(tsd)) {
tdata = prof_tdata_init(tsd); tdata = prof_tdata_init(tsd_tsdn(tsd));
tsd_prof_tdata_set(tsd, tdata); tsd_prof_tdata_set(tsd, tdata);
} }
} else if (unlikely(tdata->expired)) { } else if (unlikely(tdata->expired)) {
@ -398,34 +398,34 @@ prof_tdata_get(tsd_t *tsd, bool create)
} }
JEMALLOC_ALWAYS_INLINE prof_tctx_t * JEMALLOC_ALWAYS_INLINE prof_tctx_t *
prof_tctx_get(const void *ptr) prof_tctx_get(tsdn_t *tsdn, const void *ptr)
{ {
cassert(config_prof); cassert(config_prof);
assert(ptr != NULL); assert(ptr != NULL);
return (arena_prof_tctx_get(ptr)); return (arena_prof_tctx_get(tsdn, ptr));
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
prof_tctx_set(const void *ptr, size_t usize, prof_tctx_t *tctx) prof_tctx_set(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx)
{ {
cassert(config_prof); cassert(config_prof);
assert(ptr != NULL); assert(ptr != NULL);
arena_prof_tctx_set(ptr, usize, tctx); arena_prof_tctx_set(tsdn, ptr, usize, tctx);
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
prof_tctx_reset(const void *ptr, size_t usize, const void *old_ptr, prof_tctx_reset(tsdn_t *tsdn, const void *ptr, size_t usize, const void *old_ptr,
prof_tctx_t *old_tctx) prof_tctx_t *old_tctx)
{ {
cassert(config_prof); cassert(config_prof);
assert(ptr != NULL); assert(ptr != NULL);
arena_prof_tctx_reset(ptr, usize, old_ptr, old_tctx); arena_prof_tctx_reset(tsdn, ptr, usize, old_ptr, old_tctx);
} }
JEMALLOC_ALWAYS_INLINE bool JEMALLOC_ALWAYS_INLINE bool
@ -480,17 +480,17 @@ prof_alloc_prep(tsd_t *tsd, size_t usize, bool prof_active, bool update)
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
prof_malloc(const void *ptr, size_t usize, prof_tctx_t *tctx) prof_malloc(tsdn_t *tsdn, const void *ptr, size_t usize, prof_tctx_t *tctx)
{ {
cassert(config_prof); cassert(config_prof);
assert(ptr != NULL); assert(ptr != NULL);
assert(usize == isalloc(ptr, true)); assert(usize == isalloc(tsdn, ptr, true));
if (unlikely((uintptr_t)tctx > (uintptr_t)1U)) if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
prof_malloc_sample_object(ptr, usize, tctx); prof_malloc_sample_object(tsdn, ptr, usize, tctx);
else else
prof_tctx_set(ptr, usize, (prof_tctx_t *)(uintptr_t)1U); prof_tctx_set(tsdn, ptr, usize, (prof_tctx_t *)(uintptr_t)1U);
} }
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
@ -504,7 +504,7 @@ prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
assert(ptr != NULL || (uintptr_t)tctx <= (uintptr_t)1U); assert(ptr != NULL || (uintptr_t)tctx <= (uintptr_t)1U);
if (prof_active && !updated && ptr != NULL) { if (prof_active && !updated && ptr != NULL) {
assert(usize == isalloc(ptr, true)); assert(usize == isalloc(tsd_tsdn(tsd), ptr, true));
if (prof_sample_accum_update(tsd, usize, true, NULL)) { if (prof_sample_accum_update(tsd, usize, true, NULL)) {
/* /*
* Don't sample. The usize passed to prof_alloc_prep() * Don't sample. The usize passed to prof_alloc_prep()
@ -521,9 +521,9 @@ prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
old_sampled = ((uintptr_t)old_tctx > (uintptr_t)1U); old_sampled = ((uintptr_t)old_tctx > (uintptr_t)1U);
if (unlikely(sampled)) if (unlikely(sampled))
prof_malloc_sample_object(ptr, usize, tctx); prof_malloc_sample_object(tsd_tsdn(tsd), ptr, usize, tctx);
else else
prof_tctx_reset(ptr, usize, old_ptr, old_tctx); prof_tctx_reset(tsd_tsdn(tsd), ptr, usize, old_ptr, old_tctx);
if (unlikely(old_sampled)) if (unlikely(old_sampled))
prof_free_sampled_object(tsd, old_usize, old_tctx); prof_free_sampled_object(tsd, old_usize, old_tctx);
@ -532,10 +532,10 @@ prof_realloc(tsd_t *tsd, const void *ptr, size_t usize, prof_tctx_t *tctx,
JEMALLOC_ALWAYS_INLINE void JEMALLOC_ALWAYS_INLINE void
prof_free(tsd_t *tsd, const void *ptr, size_t usize) prof_free(tsd_t *tsd, const void *ptr, size_t usize)
{ {
prof_tctx_t *tctx = prof_tctx_get(ptr); prof_tctx_t *tctx = prof_tctx_get(tsd_tsdn(tsd), ptr);
cassert(config_prof); cassert(config_prof);
assert(usize == isalloc(ptr, true)); assert(usize == isalloc(tsd_tsdn(tsd), ptr, true));
if (unlikely((uintptr_t)tctx > (uintptr_t)1U)) if (unlikely((uintptr_t)tctx > (uintptr_t)1U))
prof_free_sampled_object(tsd, usize, tctx); prof_free_sampled_object(tsd, usize, tctx);
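
Note how prof_malloc() and prof_free() above treat the tctx value itself as a tag: NULL and (prof_tctx_t *)1U are sentinels for "not sampled", and any numerically larger value is a real context pointer. A self-contained sketch of that sentinel-pointer convention (types and values hypothetical):

#include <stdint.h>
#include <stdio.h>

typedef struct prof_tctx_s prof_tctx_t;	/* Opaque; only pointers are used. */

/* Sentinel meaning "allocation not sampled"; real contexts compare > 1. */
#define	TCTX_UNSAMPLED	((prof_tctx_t *)(uintptr_t)1U)

static int
tctx_is_sampled(const prof_tctx_t *tctx)
{

	return ((uintptr_t)tctx > (uintptr_t)1U);
}

int
main(void)
{
	prof_tctx_t *real = (prof_tctx_t *)(uintptr_t)0x1000; /* Fake, for illustration. */

	printf("unsampled: %d\n", tctx_is_sampled(TCTX_UNSAMPLED));	/* 0 */
	printf("sampled:   %d\n", tctx_is_sampled(real));		/* 1 */
	return (0);
}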

View File

@ -15,9 +15,10 @@ typedef struct rtree_s rtree_t;
* machine address width. * machine address width.
*/ */
#define LG_RTREE_BITS_PER_LEVEL 4 #define LG_RTREE_BITS_PER_LEVEL 4
#define RTREE_BITS_PER_LEVEL (ZU(1) << LG_RTREE_BITS_PER_LEVEL) #define RTREE_BITS_PER_LEVEL (1U << LG_RTREE_BITS_PER_LEVEL)
/* Maximum rtree height. */
#define RTREE_HEIGHT_MAX \ #define RTREE_HEIGHT_MAX \
((ZU(1) << (LG_SIZEOF_PTR+3)) / RTREE_BITS_PER_LEVEL) ((1U << (LG_SIZEOF_PTR+3)) / RTREE_BITS_PER_LEVEL)
/* Used for two-stage lock-free node initialization. */ /* Used for two-stage lock-free node initialization. */
#define RTREE_NODE_INITIALIZING ((rtree_node_elm_t *)0x1) #define RTREE_NODE_INITIALIZING ((rtree_node_elm_t *)0x1)
@ -111,22 +112,25 @@ unsigned rtree_start_level(rtree_t *rtree, uintptr_t key);
uintptr_t rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level); uintptr_t rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level);
bool rtree_node_valid(rtree_node_elm_t *node); bool rtree_node_valid(rtree_node_elm_t *node);
rtree_node_elm_t *rtree_child_tryread(rtree_node_elm_t *elm); rtree_node_elm_t *rtree_child_tryread(rtree_node_elm_t *elm,
bool dependent);
rtree_node_elm_t *rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm, rtree_node_elm_t *rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm,
unsigned level); unsigned level, bool dependent);
extent_node_t *rtree_val_read(rtree_t *rtree, rtree_node_elm_t *elm, extent_node_t *rtree_val_read(rtree_t *rtree, rtree_node_elm_t *elm,
bool dependent); bool dependent);
void rtree_val_write(rtree_t *rtree, rtree_node_elm_t *elm, void rtree_val_write(rtree_t *rtree, rtree_node_elm_t *elm,
const extent_node_t *val); const extent_node_t *val);
rtree_node_elm_t *rtree_subtree_tryread(rtree_t *rtree, unsigned level); rtree_node_elm_t *rtree_subtree_tryread(rtree_t *rtree, unsigned level,
rtree_node_elm_t *rtree_subtree_read(rtree_t *rtree, unsigned level); bool dependent);
rtree_node_elm_t *rtree_subtree_read(rtree_t *rtree, unsigned level,
bool dependent);
extent_node_t *rtree_get(rtree_t *rtree, uintptr_t key, bool dependent); extent_node_t *rtree_get(rtree_t *rtree, uintptr_t key, bool dependent);
bool rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val); bool rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val);
#endif #endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_RTREE_C_)) #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_RTREE_C_))
JEMALLOC_INLINE unsigned JEMALLOC_ALWAYS_INLINE unsigned
rtree_start_level(rtree_t *rtree, uintptr_t key) rtree_start_level(rtree_t *rtree, uintptr_t key)
{ {
unsigned start_level; unsigned start_level;
@ -140,7 +144,7 @@ rtree_start_level(rtree_t *rtree, uintptr_t key)
return (start_level); return (start_level);
} }
JEMALLOC_INLINE uintptr_t JEMALLOC_ALWAYS_INLINE uintptr_t
rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level) rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level)
{ {
@ -149,37 +153,40 @@ rtree_subkey(rtree_t *rtree, uintptr_t key, unsigned level)
rtree->levels[level].bits) - 1)); rtree->levels[level].bits) - 1));
} }
JEMALLOC_INLINE bool JEMALLOC_ALWAYS_INLINE bool
rtree_node_valid(rtree_node_elm_t *node) rtree_node_valid(rtree_node_elm_t *node)
{ {
return ((uintptr_t)node > (uintptr_t)RTREE_NODE_INITIALIZING); return ((uintptr_t)node > (uintptr_t)RTREE_NODE_INITIALIZING);
} }
JEMALLOC_INLINE rtree_node_elm_t * JEMALLOC_ALWAYS_INLINE rtree_node_elm_t *
rtree_child_tryread(rtree_node_elm_t *elm) rtree_child_tryread(rtree_node_elm_t *elm, bool dependent)
{ {
rtree_node_elm_t *child; rtree_node_elm_t *child;
	/* Double-checked read (first read may be stale). */ /* Double-checked read (first read may be stale). */
child = elm->child; child = elm->child;
if (!rtree_node_valid(child)) if (!dependent && !rtree_node_valid(child))
child = atomic_read_p(&elm->pun); child = atomic_read_p(&elm->pun);
assert(!dependent || child != NULL);
return (child); return (child);
} }
JEMALLOC_INLINE rtree_node_elm_t * JEMALLOC_ALWAYS_INLINE rtree_node_elm_t *
rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level) rtree_child_read(rtree_t *rtree, rtree_node_elm_t *elm, unsigned level,
bool dependent)
{ {
rtree_node_elm_t *child; rtree_node_elm_t *child;
child = rtree_child_tryread(elm); child = rtree_child_tryread(elm, dependent);
if (unlikely(!rtree_node_valid(child))) if (!dependent && unlikely(!rtree_node_valid(child)))
child = rtree_child_read_hard(rtree, elm, level); child = rtree_child_read_hard(rtree, elm, level);
assert(!dependent || child != NULL);
return (child); return (child);
} }
JEMALLOC_INLINE extent_node_t * JEMALLOC_ALWAYS_INLINE extent_node_t *
rtree_val_read(rtree_t *rtree, rtree_node_elm_t *elm, bool dependent) rtree_val_read(rtree_t *rtree, rtree_node_elm_t *elm, bool dependent)
{ {
@ -208,54 +215,119 @@ rtree_val_write(rtree_t *rtree, rtree_node_elm_t *elm, const extent_node_t *val)
atomic_write_p(&elm->pun, val); atomic_write_p(&elm->pun, val);
} }
JEMALLOC_INLINE rtree_node_elm_t * JEMALLOC_ALWAYS_INLINE rtree_node_elm_t *
rtree_subtree_tryread(rtree_t *rtree, unsigned level) rtree_subtree_tryread(rtree_t *rtree, unsigned level, bool dependent)
{ {
rtree_node_elm_t *subtree; rtree_node_elm_t *subtree;
	/* Double-checked read (first read may be stale). */ /* Double-checked read (first read may be stale). */
subtree = rtree->levels[level].subtree; subtree = rtree->levels[level].subtree;
if (!rtree_node_valid(subtree)) if (!dependent && unlikely(!rtree_node_valid(subtree)))
subtree = atomic_read_p(&rtree->levels[level].subtree_pun); subtree = atomic_read_p(&rtree->levels[level].subtree_pun);
assert(!dependent || subtree != NULL);
return (subtree); return (subtree);
} }
JEMALLOC_INLINE rtree_node_elm_t * JEMALLOC_ALWAYS_INLINE rtree_node_elm_t *
rtree_subtree_read(rtree_t *rtree, unsigned level) rtree_subtree_read(rtree_t *rtree, unsigned level, bool dependent)
{ {
rtree_node_elm_t *subtree; rtree_node_elm_t *subtree;
subtree = rtree_subtree_tryread(rtree, level); subtree = rtree_subtree_tryread(rtree, level, dependent);
if (unlikely(!rtree_node_valid(subtree))) if (!dependent && unlikely(!rtree_node_valid(subtree)))
subtree = rtree_subtree_read_hard(rtree, level); subtree = rtree_subtree_read_hard(rtree, level);
assert(!dependent || subtree != NULL);
return (subtree); return (subtree);
} }
JEMALLOC_INLINE extent_node_t * JEMALLOC_ALWAYS_INLINE extent_node_t *
rtree_get(rtree_t *rtree, uintptr_t key, bool dependent) rtree_get(rtree_t *rtree, uintptr_t key, bool dependent)
{ {
uintptr_t subkey; uintptr_t subkey;
unsigned i, start_level; unsigned start_level;
rtree_node_elm_t *node, *child; rtree_node_elm_t *node;
start_level = rtree_start_level(rtree, key); start_level = rtree_start_level(rtree, key);
for (i = start_level, node = rtree_subtree_tryread(rtree, start_level); node = rtree_subtree_tryread(rtree, start_level, dependent);
/**/; i++, node = child) { #define RTREE_GET_BIAS (RTREE_HEIGHT_MAX - rtree->height)
if (!dependent && unlikely(!rtree_node_valid(node))) switch (start_level + RTREE_GET_BIAS) {
return (NULL); #define RTREE_GET_SUBTREE(level) \
subkey = rtree_subkey(rtree, key, i); case level: \
if (i == rtree->height - 1) { assert(level < (RTREE_HEIGHT_MAX-1)); \
/* if (!dependent && unlikely(!rtree_node_valid(node))) \
* node is a leaf, so it contains values rather than return (NULL); \
* child pointers. subkey = rtree_subkey(rtree, key, level - \
*/ RTREE_GET_BIAS); \
return (rtree_val_read(rtree, &node[subkey], node = rtree_child_tryread(&node[subkey], dependent); \
dependent)); /* Fall through. */
} #define RTREE_GET_LEAF(level) \
assert(i < rtree->height - 1); case level: \
child = rtree_child_tryread(&node[subkey]); assert(level == (RTREE_HEIGHT_MAX-1)); \
if (!dependent && unlikely(!rtree_node_valid(node))) \
return (NULL); \
subkey = rtree_subkey(rtree, key, level - \
RTREE_GET_BIAS); \
/* \
* node is a leaf, so it contains values rather than \
* child pointers. \
*/ \
return (rtree_val_read(rtree, &node[subkey], \
dependent));
#if RTREE_HEIGHT_MAX > 1
RTREE_GET_SUBTREE(0)
#endif
#if RTREE_HEIGHT_MAX > 2
RTREE_GET_SUBTREE(1)
#endif
#if RTREE_HEIGHT_MAX > 3
RTREE_GET_SUBTREE(2)
#endif
#if RTREE_HEIGHT_MAX > 4
RTREE_GET_SUBTREE(3)
#endif
#if RTREE_HEIGHT_MAX > 5
RTREE_GET_SUBTREE(4)
#endif
#if RTREE_HEIGHT_MAX > 6
RTREE_GET_SUBTREE(5)
#endif
#if RTREE_HEIGHT_MAX > 7
RTREE_GET_SUBTREE(6)
#endif
#if RTREE_HEIGHT_MAX > 8
RTREE_GET_SUBTREE(7)
#endif
#if RTREE_HEIGHT_MAX > 9
RTREE_GET_SUBTREE(8)
#endif
#if RTREE_HEIGHT_MAX > 10
RTREE_GET_SUBTREE(9)
#endif
#if RTREE_HEIGHT_MAX > 11
RTREE_GET_SUBTREE(10)
#endif
#if RTREE_HEIGHT_MAX > 12
RTREE_GET_SUBTREE(11)
#endif
#if RTREE_HEIGHT_MAX > 13
RTREE_GET_SUBTREE(12)
#endif
#if RTREE_HEIGHT_MAX > 14
RTREE_GET_SUBTREE(13)
#endif
#if RTREE_HEIGHT_MAX > 15
RTREE_GET_SUBTREE(14)
#endif
#if RTREE_HEIGHT_MAX > 16
# error Unsupported RTREE_HEIGHT_MAX
#endif
RTREE_GET_LEAF(RTREE_HEIGHT_MAX-1)
#undef RTREE_GET_SUBTREE
#undef RTREE_GET_LEAF
default: not_reached();
} }
#undef RTREE_GET_BIAS
not_reached(); not_reached();
} }
@ -268,7 +340,7 @@ rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val)
start_level = rtree_start_level(rtree, key); start_level = rtree_start_level(rtree, key);
node = rtree_subtree_read(rtree, start_level); node = rtree_subtree_read(rtree, start_level, false);
if (node == NULL) if (node == NULL)
return (true); return (true);
for (i = start_level; /**/; i++, node = child) { for (i = start_level; /**/; i++, node = child) {
@ -282,7 +354,7 @@ rtree_set(rtree_t *rtree, uintptr_t key, const extent_node_t *val)
return (false); return (false);
} }
assert(i + 1 < rtree->height); assert(i + 1 < rtree->height);
child = rtree_child_read(rtree, &node[subkey], i); child = rtree_child_read(rtree, &node[subkey], i, false);
if (child == NULL) if (child == NULL)
return (true); return (true);
} }
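
The rewritten rtree_get() above trades the lookup loop for a switch whose cases fall through, so each level's work is unrolled at compile time and RTREE_GET_BIAS maps start_level into a dense case range. A self-contained sketch of the same fallthrough-unrolling idiom, assuming a hypothetical fixed four-level walk:

#include <assert.h>

#define	HEIGHT_MAX	4

/*
 * Each case performs one level's work and falls through to the next, so
 * a walk that starts deeper simply skips the earlier cases; the compiler
 * sees straight-line code instead of a loop.
 */
static int
unrolled_walk(const int levels[HEIGHT_MAX], unsigned start_level)
{
	int acc = 0;

	switch (start_level) {
	case 0:
		acc += levels[0];
		/* Fall through. */
	case 1:
		acc += levels[1];
		/* Fall through. */
	case 2:
		acc += levels[2];
		/* Fall through. */
	case 3:
		acc += levels[3];
		return (acc);
	default:
		assert(0);
		return (-1);
	}
}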

View File

@ -102,6 +102,14 @@ struct arena_stats_s {
/* Number of bytes currently mapped. */ /* Number of bytes currently mapped. */
size_t mapped; size_t mapped;
/*
* Number of bytes currently retained as a side effect of munmap() being
* disabled/bypassed. Retained bytes are technically mapped (though
* always decommitted or purged), but they are excluded from the mapped
* statistic (above).
*/
size_t retained;
/* /*
* Total number of purge sweeps, total number of madvise calls made, * Total number of purge sweeps, total number of madvise calls made,
* and total pages purged in order to keep dirty unused memory under * and total pages purged in order to keep dirty unused memory under
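
The retained statistic above is surfaced as the stats.retained mallctl (per the changelog). A small usage sketch against the public API — statistics are cached, so "epoch" must be bumped before reading; error handling and the -ljemalloc link step are elided:

#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int
main(void)
{
	uint64_t epoch = 1;
	size_t sz, mapped, retained;

	/* Statistics are cached; writing "epoch" refreshes them. */
	sz = sizeof(epoch);
	mallctl("epoch", &epoch, &sz, &epoch, sz);

	sz = sizeof(mapped);
	mallctl("stats.mapped", &mapped, &sz, NULL, 0);
	sz = sizeof(retained);
	mallctl("stats.retained", &retained, &sz, NULL, 0);

	printf("mapped: %zu bytes, retained: %zu bytes\n", mapped, retained);
	return (0);
}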

View File

@ -130,27 +130,25 @@ extern size_t tcache_maxclass;
*/ */
extern tcaches_t *tcaches; extern tcaches_t *tcaches;
size_t tcache_salloc(const void *ptr); size_t tcache_salloc(tsdn_t *tsdn, const void *ptr);
void tcache_event_hard(tsd_t *tsd, tcache_t *tcache); void tcache_event_hard(tsd_t *tsd, tcache_t *tcache);
void *tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache, void *tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache,
tcache_bin_t *tbin, szind_t binind, bool *tcache_success); tcache_bin_t *tbin, szind_t binind, bool *tcache_success);
void tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin, void tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
szind_t binind, unsigned rem); szind_t binind, unsigned rem);
void tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind, void tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
unsigned rem, tcache_t *tcache); unsigned rem, tcache_t *tcache);
void tcache_arena_associate(tcache_t *tcache, arena_t *arena); void tcache_arena_reassociate(tsdn_t *tsdn, tcache_t *tcache,
void tcache_arena_reassociate(tcache_t *tcache, arena_t *oldarena, arena_t *oldarena, arena_t *newarena);
arena_t *newarena);
void tcache_arena_dissociate(tcache_t *tcache, arena_t *arena);
tcache_t *tcache_get_hard(tsd_t *tsd); tcache_t *tcache_get_hard(tsd_t *tsd);
tcache_t *tcache_create(tsd_t *tsd, arena_t *arena); tcache_t *tcache_create(tsdn_t *tsdn, arena_t *arena);
void tcache_cleanup(tsd_t *tsd); void tcache_cleanup(tsd_t *tsd);
void tcache_enabled_cleanup(tsd_t *tsd); void tcache_enabled_cleanup(tsd_t *tsd);
void tcache_stats_merge(tcache_t *tcache, arena_t *arena); void tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena);
bool tcaches_create(tsd_t *tsd, unsigned *r_ind); bool tcaches_create(tsdn_t *tsdn, unsigned *r_ind);
void tcaches_flush(tsd_t *tsd, unsigned ind); void tcaches_flush(tsd_t *tsd, unsigned ind);
void tcaches_destroy(tsd_t *tsd, unsigned ind); void tcaches_destroy(tsd_t *tsd, unsigned ind);
bool tcache_boot(void); bool tcache_boot(tsdn_t *tsdn);
#endif /* JEMALLOC_H_EXTERNS */ #endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/ /******************************************************************************/
@ -297,8 +295,8 @@ tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
if (unlikely(arena == NULL)) if (unlikely(arena == NULL))
return (NULL); return (NULL);
ret = tcache_alloc_small_hard(tsd, arena, tcache, tbin, binind, ret = tcache_alloc_small_hard(tsd_tsdn(tsd), arena, tcache,
&tcache_hard_success); tbin, binind, &tcache_hard_success);
if (tcache_hard_success == false) if (tcache_hard_success == false)
return (NULL); return (NULL);
} }
@ -310,7 +308,7 @@ tcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
*/ */
if (config_prof || (slow_path && config_fill) || unlikely(zero)) { if (config_prof || (slow_path && config_fill) || unlikely(zero)) {
usize = index2size(binind); usize = index2size(binind);
assert(tcache_salloc(ret) == usize); assert(tcache_salloc(tsd_tsdn(tsd), ret) == usize);
} }
if (likely(!zero)) { if (likely(!zero)) {
@ -358,7 +356,7 @@ tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
if (unlikely(arena == NULL)) if (unlikely(arena == NULL))
return (NULL); return (NULL);
ret = arena_malloc_large(tsd, arena, binind, zero); ret = arena_malloc_large(tsd_tsdn(tsd), arena, binind, zero);
if (ret == NULL) if (ret == NULL)
return (NULL); return (NULL);
} else { } else {
@ -381,9 +379,10 @@ tcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,
} }
if (likely(!zero)) { if (likely(!zero)) {
if (slow_path && config_fill) { if (slow_path && config_fill) {
if (unlikely(opt_junk_alloc)) if (unlikely(opt_junk_alloc)) {
memset(ret, 0xa5, usize); memset(ret, JEMALLOC_ALLOC_JUNK,
else if (unlikely(opt_zero)) usize);
} else if (unlikely(opt_zero))
memset(ret, 0, usize); memset(ret, 0, usize);
} }
} else } else
@ -406,7 +405,7 @@ tcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind,
tcache_bin_t *tbin; tcache_bin_t *tbin;
tcache_bin_info_t *tbin_info; tcache_bin_info_t *tbin_info;
assert(tcache_salloc(ptr) <= SMALL_MAXCLASS); assert(tcache_salloc(tsd_tsdn(tsd), ptr) <= SMALL_MAXCLASS);
if (slow_path && config_fill && unlikely(opt_junk_free)) if (slow_path && config_fill && unlikely(opt_junk_free))
arena_dalloc_junk_small(ptr, &arena_bin_info[binind]); arena_dalloc_junk_small(ptr, &arena_bin_info[binind]);
@ -433,8 +432,8 @@ tcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, size_t size,
tcache_bin_info_t *tbin_info; tcache_bin_info_t *tbin_info;
assert((size & PAGE_MASK) == 0); assert((size & PAGE_MASK) == 0);
assert(tcache_salloc(ptr) > SMALL_MAXCLASS); assert(tcache_salloc(tsd_tsdn(tsd), ptr) > SMALL_MAXCLASS);
assert(tcache_salloc(ptr) <= tcache_maxclass); assert(tcache_salloc(tsd_tsdn(tsd), ptr) <= tcache_maxclass);
binind = size2index(size); binind = size2index(size);
@ -458,8 +457,10 @@ JEMALLOC_ALWAYS_INLINE tcache_t *
tcaches_get(tsd_t *tsd, unsigned ind) tcaches_get(tsd_t *tsd, unsigned ind)
{ {
tcaches_t *elm = &tcaches[ind]; tcaches_t *elm = &tcaches[ind];
if (unlikely(elm->tcache == NULL)) if (unlikely(elm->tcache == NULL)) {
elm->tcache = tcache_create(tsd, arena_choose(tsd, NULL)); elm->tcache = tcache_create(tsd_tsdn(tsd), arena_choose(tsd,
NULL));
}
return (elm->tcache); return (elm->tcache);
} }
#endif #endif
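
tcaches_get() above creates an explicit cache on first access. A self-contained sketch of that create-on-first-use pattern, single-threaded for clarity (the real code depends on jemalloc's synchronization; these stub types and names are hypothetical):

#include <stdlib.h>

typedef struct tcache_s { int placeholder; } tcache_t;	/* Stub type. */
typedef struct tcaches_s { tcache_t *tcache; } tcaches_t;

static tcaches_t tcaches[8];

static tcache_t *
tcache_create_stub(void)
{

	return (calloc(1, sizeof(tcache_t)));
}

/* Return the explicit cache at ind, creating it lazily on first use. */
static tcache_t *
tcaches_get_stub(unsigned ind)
{
	tcaches_t *elm = &tcaches[ind];

	if (elm->tcache == NULL)
		elm->tcache = tcache_create_stub();
	return (elm->tcache);
}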

View File

@ -13,6 +13,9 @@ typedef struct tsd_init_head_s tsd_init_head_t;
#endif #endif
typedef struct tsd_s tsd_t; typedef struct tsd_s tsd_t;
typedef struct tsdn_s tsdn_t;
#define TSDN_NULL ((tsdn_t *)0)
typedef enum { typedef enum {
tsd_state_uninitialized, tsd_state_uninitialized,
@ -44,6 +47,7 @@ typedef enum {
* The result is a set of generated functions, e.g.: * The result is a set of generated functions, e.g.:
* *
* bool example_tsd_boot(void) {...} * bool example_tsd_boot(void) {...}
* bool example_tsd_booted_get(void) {...}
* example_t *example_tsd_get() {...} * example_t *example_tsd_get() {...}
* void example_tsd_set(example_t *val) {...} * void example_tsd_set(example_t *val) {...}
* *
@ -98,6 +102,8 @@ a_attr void \
a_name##tsd_boot1(void); \ a_name##tsd_boot1(void); \
a_attr bool \ a_attr bool \
a_name##tsd_boot(void); \ a_name##tsd_boot(void); \
a_attr bool \
a_name##tsd_booted_get(void); \
a_attr a_type * \ a_attr a_type * \
a_name##tsd_get(void); \ a_name##tsd_get(void); \
a_attr void \ a_attr void \
@ -201,6 +207,12 @@ a_name##tsd_boot(void) \
\ \
return (a_name##tsd_boot0()); \ return (a_name##tsd_boot0()); \
} \ } \
a_attr bool \
a_name##tsd_booted_get(void) \
{ \
\
return (a_name##tsd_booted); \
} \
/* Get/set. */ \ /* Get/set. */ \
a_attr a_type * \ a_attr a_type * \
a_name##tsd_get(void) \ a_name##tsd_get(void) \
@ -246,6 +258,12 @@ a_name##tsd_boot(void) \
\ \
return (a_name##tsd_boot0()); \ return (a_name##tsd_boot0()); \
} \ } \
a_attr bool \
a_name##tsd_booted_get(void) \
{ \
\
return (a_name##tsd_booted); \
} \
/* Get/set. */ \ /* Get/set. */ \
a_attr a_type * \ a_attr a_type * \
a_name##tsd_get(void) \ a_name##tsd_get(void) \
@ -368,6 +386,12 @@ a_name##tsd_boot(void) \
a_name##tsd_boot1(); \ a_name##tsd_boot1(); \
return (false); \ return (false); \
} \ } \
a_attr bool \
a_name##tsd_booted_get(void) \
{ \
\
return (a_name##tsd_booted); \
} \
/* Get/set. */ \ /* Get/set. */ \
a_attr a_type * \ a_attr a_type * \
a_name##tsd_get(void) \ a_name##tsd_get(void) \
@ -490,6 +514,12 @@ a_name##tsd_boot(void) \
a_name##tsd_boot1(); \ a_name##tsd_boot1(); \
return (false); \ return (false); \
} \ } \
a_attr bool \
a_name##tsd_booted_get(void) \
{ \
\
return (a_name##tsd_booted); \
} \
/* Get/set. */ \ /* Get/set. */ \
a_attr a_type * \ a_attr a_type * \
a_name##tsd_get(void) \ a_name##tsd_get(void) \
@ -536,12 +566,15 @@ struct tsd_init_head_s {
O(thread_allocated, uint64_t) \ O(thread_allocated, uint64_t) \
O(thread_deallocated, uint64_t) \ O(thread_deallocated, uint64_t) \
O(prof_tdata, prof_tdata_t *) \ O(prof_tdata, prof_tdata_t *) \
O(iarena, arena_t *) \
O(arena, arena_t *) \ O(arena, arena_t *) \
O(arenas_tdata, arena_tdata_t *) \ O(arenas_tdata, arena_tdata_t *) \
O(narenas_tdata, unsigned) \ O(narenas_tdata, unsigned) \
O(arenas_tdata_bypass, bool) \ O(arenas_tdata_bypass, bool) \
O(tcache_enabled, tcache_enabled_t) \ O(tcache_enabled, tcache_enabled_t) \
O(quarantine, quarantine_t *) \ O(quarantine, quarantine_t *) \
O(witnesses, witness_list_t) \
O(witness_fork, bool) \
#define TSD_INITIALIZER { \ #define TSD_INITIALIZER { \
tsd_state_uninitialized, \ tsd_state_uninitialized, \
@ -551,10 +584,13 @@ struct tsd_init_head_s {
NULL, \ NULL, \
NULL, \ NULL, \
NULL, \ NULL, \
NULL, \
0, \ 0, \
false, \ false, \
tcache_enabled_default, \ tcache_enabled_default, \
NULL \ NULL, \
ql_head_initializer(witnesses), \
false \
} }
struct tsd_s { struct tsd_s {
@ -565,6 +601,15 @@ MALLOC_TSD
#undef O #undef O
}; };
/*
* Wrapper around tsd_t that makes it possible to avoid implicit conversion
* between tsd_t and tsdn_t, where tsdn_t is "nullable" and has to be
* explicitly converted to tsd_t, which is non-nullable.
*/
struct tsdn_s {
tsd_t tsd;
};
static const tsd_t tsd_initializer = TSD_INITIALIZER; static const tsd_t tsd_initializer = TSD_INITIALIZER;
malloc_tsd_types(, tsd_t) malloc_tsd_types(, tsd_t)
@ -577,7 +622,7 @@ void *malloc_tsd_malloc(size_t size);
void malloc_tsd_dalloc(void *wrapper); void malloc_tsd_dalloc(void *wrapper);
void malloc_tsd_no_cleanup(void *arg); void malloc_tsd_no_cleanup(void *arg);
void malloc_tsd_cleanup_register(bool (*f)(void)); void malloc_tsd_cleanup_register(bool (*f)(void));
bool malloc_tsd_boot0(void); tsd_t *malloc_tsd_boot0(void);
void malloc_tsd_boot1(void); void malloc_tsd_boot1(void);
#if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \ #if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \
!defined(_WIN32)) !defined(_WIN32))
@ -595,6 +640,7 @@ void tsd_cleanup(void *arg);
malloc_tsd_protos(JEMALLOC_ATTR(unused), , tsd_t) malloc_tsd_protos(JEMALLOC_ATTR(unused), , tsd_t)
tsd_t *tsd_fetch(void); tsd_t *tsd_fetch(void);
tsdn_t *tsd_tsdn(tsd_t *tsd);
bool tsd_nominal(tsd_t *tsd); bool tsd_nominal(tsd_t *tsd);
#define O(n, t) \ #define O(n, t) \
t *tsd_##n##p_get(tsd_t *tsd); \ t *tsd_##n##p_get(tsd_t *tsd); \
@ -602,6 +648,9 @@ t tsd_##n##_get(tsd_t *tsd); \
void tsd_##n##_set(tsd_t *tsd, t n); void tsd_##n##_set(tsd_t *tsd, t n);
MALLOC_TSD MALLOC_TSD
#undef O #undef O
tsdn_t *tsdn_fetch(void);
bool tsdn_null(const tsdn_t *tsdn);
tsd_t *tsdn_tsd(tsdn_t *tsdn);
#endif #endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_TSD_C_)) #if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_TSD_C_))
@ -628,6 +677,13 @@ tsd_fetch(void)
return (tsd); return (tsd);
} }
JEMALLOC_ALWAYS_INLINE tsdn_t *
tsd_tsdn(tsd_t *tsd)
{
return ((tsdn_t *)tsd);
}
JEMALLOC_INLINE bool JEMALLOC_INLINE bool
tsd_nominal(tsd_t *tsd) tsd_nominal(tsd_t *tsd)
{ {
@ -659,6 +715,32 @@ tsd_##n##_set(tsd_t *tsd, t n) \
} }
MALLOC_TSD MALLOC_TSD
#undef O #undef O
JEMALLOC_ALWAYS_INLINE tsdn_t *
tsdn_fetch(void)
{
if (!tsd_booted_get())
return (NULL);
return (tsd_tsdn(tsd_fetch()));
}
JEMALLOC_ALWAYS_INLINE bool
tsdn_null(const tsdn_t *tsdn)
{
return (tsdn == NULL);
}
JEMALLOC_ALWAYS_INLINE tsd_t *
tsdn_tsd(tsdn_t *tsdn)
{
assert(!tsdn_null(tsdn));
return (&tsdn->tsd);
}
#endif #endif
#endif /* JEMALLOC_H_INLINES */ #endif /* JEMALLOC_H_INLINES */
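
The tsdn_s wrapper above encodes nullability in the type: code that can run before TSD is booted takes a tsdn_t * and must check tsdn_null() before converting via tsdn_tsd(). A self-contained sketch of a caller honoring that contract, with stub stand-ins for the real types:

#include <stddef.h>

/* Stub stand-ins; in jemalloc these come from tsd.h as diffed above. */
typedef struct tsd_s { int counter; } tsd_t;
typedef struct tsdn_s { tsd_t tsd; } tsdn_t;

static int tsdn_null_stub(const tsdn_t *tsdn) { return (tsdn == NULL); }
static tsd_t *tsdn_tsd_stub(tsdn_t *tsdn) { return (&tsdn->tsd); }

/* A consumer honoring the contract: check for NULL before converting. */
static void
do_work(tsdn_t *tsdn)
{
	tsd_t *tsd;

	if (tsdn_null_stub(tsdn))
		return;	/* TSD not booted; take a stateless fallback path. */
	tsd = tsdn_tsd_stub(tsdn);
	tsd->counter++;	/* Safe: tsdn is non-NULL, so tsd is valid. */
}

int
main(void)
{
	tsdn_t tsdn = {{0}};

	do_work(NULL);	/* Tolerated: falls back. */
	do_work(&tsdn);	/* Uses tsd-backed state. */
	return (0);
}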

View File

@ -40,6 +40,10 @@
*/ */
#define MALLOC_PRINTF_BUFSIZE 4096 #define MALLOC_PRINTF_BUFSIZE 4096
/* Junk fill patterns. */
#define JEMALLOC_ALLOC_JUNK ((uint8_t)0xa5)
#define JEMALLOC_FREE_JUNK ((uint8_t)0x5a)
/* /*
* Wrap a cpp argument that contains commas such that it isn't broken up into * Wrap a cpp argument that contains commas such that it isn't broken up into
* multiple arguments. * multiple arguments.
@ -73,12 +77,12 @@
JEMALLOC_CLANG_HAS_BUILTIN(__builtin_unreachable) JEMALLOC_CLANG_HAS_BUILTIN(__builtin_unreachable)
# define unreachable() __builtin_unreachable() # define unreachable() __builtin_unreachable()
# else # else
# define unreachable() # define unreachable() abort()
# endif # endif
#else #else
# define likely(x) !!(x) # define likely(x) !!(x)
# define unlikely(x) !!(x) # define unlikely(x) !!(x)
# define unreachable() # define unreachable() abort()
#endif #endif
#include "jemalloc/internal/assert.h" #include "jemalloc/internal/assert.h"
@ -106,9 +110,9 @@ void malloc_write(const char *s);
* malloc_vsnprintf() supports a subset of snprintf(3) that avoids floating * malloc_vsnprintf() supports a subset of snprintf(3) that avoids floating
* point math. * point math.
*/ */
int malloc_vsnprintf(char *str, size_t size, const char *format, size_t malloc_vsnprintf(char *str, size_t size, const char *format,
va_list ap); va_list ap);
int malloc_snprintf(char *str, size_t size, const char *format, ...) size_t malloc_snprintf(char *str, size_t size, const char *format, ...)
JEMALLOC_FORMAT_PRINTF(3, 4); JEMALLOC_FORMAT_PRINTF(3, 4);
void malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque, void malloc_vcprintf(void (*write_cb)(void *, const char *), void *cbopaque,
const char *format, va_list ap); const char *format, va_list ap);
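
The named junk bytes above (0xa5 on allocation, 0x5a on free) make stale reads recognizable at a glance in a debugger. A self-contained sketch of what junk filling leaves behind (the buffer and memset calls are illustrative; jemalloc performs this inside the allocator when junking is enabled):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define	JEMALLOC_ALLOC_JUNK	((uint8_t)0xa5)
#define	JEMALLOC_FREE_JUNK	((uint8_t)0x5a)

int
main(void)
{
	uint8_t buf[8];

	/* What alloc-time junking leaves in fresh memory. */
	memset(buf, JEMALLOC_ALLOC_JUNK, sizeof(buf));
	printf("after alloc junk: 0x%02x\n", buf[0]);	/* 0xa5 */

	/* What free-time junking leaves behind after deallocation. */
	memset(buf, JEMALLOC_FREE_JUNK, sizeof(buf));
	printf("after free junk:  0x%02x\n", buf[0]);	/* 0x5a */
	return (0);
}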

View File

@ -30,15 +30,17 @@
* calls must be embedded in macros rather than in functions so that when * calls must be embedded in macros rather than in functions so that when
* Valgrind reports errors, there are no extra stack frames in the backtraces. * Valgrind reports errors, there are no extra stack frames in the backtraces.
*/ */
#define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do { \ #define JEMALLOC_VALGRIND_MALLOC(cond, tsdn, ptr, usize, zero) do { \
if (unlikely(in_valgrind && cond)) \ if (unlikely(in_valgrind && cond)) { \
VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, p2rz(ptr), zero); \ VALGRIND_MALLOCLIKE_BLOCK(ptr, usize, p2rz(tsdn, ptr), \
zero); \
} \
} while (0) } while (0)
#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, ptr, usize, \ #define JEMALLOC_VALGRIND_REALLOC(maybe_moved, tsdn, ptr, usize, \
ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \ ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \
zero) do { \ zero) do { \
if (unlikely(in_valgrind)) { \ if (unlikely(in_valgrind)) { \
size_t rzsize = p2rz(ptr); \ size_t rzsize = p2rz(tsdn, ptr); \
\ \
if (!maybe_moved || ptr == old_ptr) { \ if (!maybe_moved || ptr == old_ptr) { \
VALGRIND_RESIZEINPLACE_BLOCK(ptr, old_usize, \ VALGRIND_RESIZEINPLACE_BLOCK(ptr, old_usize, \
@ -81,8 +83,8 @@
#define JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(ptr, usize) do {} while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(ptr, usize) do {} while (0)
#define JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize) do {} while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(ptr, usize) do {} while (0)
#define JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ptr, usize) do {} while (0) #define JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ptr, usize) do {} while (0)
#define JEMALLOC_VALGRIND_MALLOC(cond, ptr, usize, zero) do {} while (0) #define JEMALLOC_VALGRIND_MALLOC(cond, tsdn, ptr, usize, zero) do {} while (0)
#define JEMALLOC_VALGRIND_REALLOC(maybe_moved, ptr, usize, \ #define JEMALLOC_VALGRIND_REALLOC(maybe_moved, tsdn, ptr, usize, \
ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \ ptr_maybe_null, old_ptr, old_usize, old_rzsize, old_ptr_maybe_null, \
zero) do {} while (0) zero) do {} while (0)
#define JEMALLOC_VALGRIND_FREE(ptr, rzsize) do {} while (0) #define JEMALLOC_VALGRIND_FREE(ptr, rzsize) do {} while (0)
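
As the comment above notes, the Valgrind hooks must be macros so no extra stack frame appears in Valgrind's backtraces; the do { ... } while (0) shell is what lets a multi-statement macro expand as a single statement. A generic, self-contained sketch of that idiom with a hypothetical logging macro:

#include <stdio.h>

/*
 * do { ... } while (0) makes a multi-statement macro behave as exactly
 * one statement, so it composes with if/else even without braces.
 */
#define	LOG_IF(cond, msg) do {						\
	if (cond)							\
		fprintf(stderr, "log: %s\n", (msg));			\
} while (0)

int
main(void)
{
	int verbose = 1;

	if (verbose)
		LOG_IF(verbose, "enabled");	/* Expands as a single statement. */
	else
		LOG_IF(verbose, "never");
	return (0);
}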

View File

@ -0,0 +1,249 @@
/******************************************************************************/
#ifdef JEMALLOC_H_TYPES
typedef struct witness_s witness_t;
typedef unsigned witness_rank_t;
typedef ql_head(witness_t) witness_list_t;
typedef int witness_comp_t (const witness_t *, const witness_t *);
/*
* Lock ranks. Witnesses with rank WITNESS_RANK_OMIT are completely ignored by
* the witness machinery.
*/
#define WITNESS_RANK_OMIT 0U
#define WITNESS_RANK_INIT 1U
#define WITNESS_RANK_CTL 1U
#define WITNESS_RANK_ARENAS 2U
#define WITNESS_RANK_PROF_DUMP 3U
#define WITNESS_RANK_PROF_BT2GCTX 4U
#define WITNESS_RANK_PROF_TDATAS 5U
#define WITNESS_RANK_PROF_TDATA 6U
#define WITNESS_RANK_PROF_GCTX 7U
#define WITNESS_RANK_ARENA 8U
#define WITNESS_RANK_ARENA_CHUNKS 9U
#define WITNESS_RANK_ARENA_NODE_CACHE 10U
#define WITNESS_RANK_BASE 11U
#define WITNESS_RANK_LEAF 0xffffffffU
#define WITNESS_RANK_ARENA_BIN WITNESS_RANK_LEAF
#define WITNESS_RANK_ARENA_HUGE WITNESS_RANK_LEAF
#define WITNESS_RANK_DSS WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_ACTIVE WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_DUMP_SEQ WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_GDUMP WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_NEXT_THR_UID WITNESS_RANK_LEAF
#define WITNESS_RANK_PROF_THREAD_ACTIVE_INIT WITNESS_RANK_LEAF
#define WITNESS_INITIALIZER(rank) {"initializer", rank, NULL, {NULL, NULL}}
#endif /* JEMALLOC_H_TYPES */
/******************************************************************************/
#ifdef JEMALLOC_H_STRUCTS
struct witness_s {
/* Name, used for printing lock order reversal messages. */
const char *name;
/*
* Witness rank, where 0 is lowest and UINT_MAX is highest. Witnesses
* must be acquired in order of increasing rank.
*/
witness_rank_t rank;
/*
 * If two witnesses are of equal rank and they have the same comp
* function pointer, it is called as a last attempt to differentiate
* between witnesses of equal rank.
*/
witness_comp_t *comp;
/* Linkage for thread's currently owned locks. */
ql_elm(witness_t) link;
};
#endif /* JEMALLOC_H_STRUCTS */
/******************************************************************************/
#ifdef JEMALLOC_H_EXTERNS
void witness_init(witness_t *witness, const char *name, witness_rank_t rank,
witness_comp_t *comp);
#ifdef JEMALLOC_JET
typedef void (witness_lock_error_t)(const witness_list_t *, const witness_t *);
extern witness_lock_error_t *witness_lock_error;
#else
void witness_lock_error(const witness_list_t *witnesses,
const witness_t *witness);
#endif
#ifdef JEMALLOC_JET
typedef void (witness_owner_error_t)(const witness_t *);
extern witness_owner_error_t *witness_owner_error;
#else
void witness_owner_error(const witness_t *witness);
#endif
#ifdef JEMALLOC_JET
typedef void (witness_not_owner_error_t)(const witness_t *);
extern witness_not_owner_error_t *witness_not_owner_error;
#else
void witness_not_owner_error(const witness_t *witness);
#endif
#ifdef JEMALLOC_JET
typedef void (witness_lockless_error_t)(const witness_list_t *);
extern witness_lockless_error_t *witness_lockless_error;
#else
void witness_lockless_error(const witness_list_t *witnesses);
#endif
void witnesses_cleanup(tsd_t *tsd);
void witness_fork_cleanup(tsd_t *tsd);
void witness_prefork(tsd_t *tsd);
void witness_postfork_parent(tsd_t *tsd);
void witness_postfork_child(tsd_t *tsd);
#endif /* JEMALLOC_H_EXTERNS */
/******************************************************************************/
#ifdef JEMALLOC_H_INLINES
#ifndef JEMALLOC_ENABLE_INLINE
void witness_assert_owner(tsdn_t *tsdn, const witness_t *witness);
void witness_assert_not_owner(tsdn_t *tsdn, const witness_t *witness);
void witness_assert_lockless(tsdn_t *tsdn);
void witness_lock(tsdn_t *tsdn, witness_t *witness);
void witness_unlock(tsdn_t *tsdn, witness_t *witness);
#endif
#if (defined(JEMALLOC_ENABLE_INLINE) || defined(JEMALLOC_MUTEX_C_))
JEMALLOC_INLINE void
witness_assert_owner(tsdn_t *tsdn, const witness_t *witness)
{
tsd_t *tsd;
witness_list_t *witnesses;
witness_t *w;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
if (witness->rank == WITNESS_RANK_OMIT)
return;
witnesses = tsd_witnessesp_get(tsd);
ql_foreach(w, witnesses, link) {
if (w == witness)
return;
}
witness_owner_error(witness);
}
JEMALLOC_INLINE void
witness_assert_not_owner(tsdn_t *tsdn, const witness_t *witness)
{
tsd_t *tsd;
witness_list_t *witnesses;
witness_t *w;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
if (witness->rank == WITNESS_RANK_OMIT)
return;
witnesses = tsd_witnessesp_get(tsd);
ql_foreach(w, witnesses, link) {
if (w == witness)
witness_not_owner_error(witness);
}
}
JEMALLOC_INLINE void
witness_assert_lockless(tsdn_t *tsdn)
{
tsd_t *tsd;
witness_list_t *witnesses;
witness_t *w;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
witnesses = tsd_witnessesp_get(tsd);
w = ql_last(witnesses, link);
if (w != NULL)
witness_lockless_error(witnesses);
}
JEMALLOC_INLINE void
witness_lock(tsdn_t *tsdn, witness_t *witness)
{
tsd_t *tsd;
witness_list_t *witnesses;
witness_t *w;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
if (witness->rank == WITNESS_RANK_OMIT)
return;
witness_assert_not_owner(tsdn, witness);
witnesses = tsd_witnessesp_get(tsd);
w = ql_last(witnesses, link);
if (w == NULL) {
/* No other locks; do nothing. */
} else if (tsd_witness_fork_get(tsd) && w->rank <= witness->rank) {
/* Forking, and relaxed ranking satisfied. */
} else if (w->rank > witness->rank) {
/* Not forking, rank order reversal. */
witness_lock_error(witnesses, witness);
} else if (w->rank == witness->rank && (w->comp == NULL || w->comp !=
witness->comp || w->comp(w, witness) > 0)) {
/*
* Missing/incompatible comparison function, or comparison
* function indicates rank order reversal.
*/
witness_lock_error(witnesses, witness);
}
ql_elm_new(witness, link);
ql_tail_insert(witnesses, witness, link);
}
JEMALLOC_INLINE void
witness_unlock(tsdn_t *tsdn, witness_t *witness)
{
tsd_t *tsd;
witness_list_t *witnesses;
if (!config_debug)
return;
if (tsdn_null(tsdn))
return;
tsd = tsdn_tsd(tsdn);
if (witness->rank == WITNESS_RANK_OMIT)
return;
witness_assert_owner(tsdn, witness);
witnesses = tsd_witnessesp_get(tsd);
ql_remove(witnesses, witness, link);
}
#endif
#endif /* JEMALLOC_H_INLINES */
/******************************************************************************/
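
witness_lock() above rejects any acquisition whose rank does not exceed the last lock held (equal rank is tolerated only with a compatible comparison function). A self-contained miniature of that reversal check — it tracks only the highest held rank and omits the unlock bookkeeping, fork relaxation, and comp functions of the real code:

#include <assert.h>
#include <stdio.h>

typedef unsigned witness_rank_t;

static witness_rank_t held_max = 0;	/* 0 doubles as "nothing held". */

/*
 * Refuse any lock whose rank does not exceed the highest rank already
 * held -- the same reversal test witness_lock() performs above.
 */
static void
mini_witness_lock(witness_rank_t rank)
{

	if (held_max != 0 && rank <= held_max) {
		fprintf(stderr, "lock order reversal: %u after %u\n", rank,
		    held_max);
		assert(0);
	}
	held_max = rank;
}

int
main(void)
{

	mini_witness_lock(8);	/* WITNESS_RANK_ARENA */
	mini_witness_lock(11);	/* WITNESS_RANK_BASE: increasing, OK. */
	/* mini_witness_lock(9) here would abort: 9 <= 11. */
	return (0);
}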

View File

@ -13,11 +13,11 @@
# define MALLOCX_LG_ALIGN(la) ((int)(la)) # define MALLOCX_LG_ALIGN(la) ((int)(la))
# if LG_SIZEOF_PTR == 2 # if LG_SIZEOF_PTR == 2
# define MALLOCX_ALIGN(a) ((int)(ffs(a)-1)) # define MALLOCX_ALIGN(a) ((int)(ffs((int)(a))-1))
# else # else
# define MALLOCX_ALIGN(a) \ # define MALLOCX_ALIGN(a) \
((int)(((a) < (size_t)INT_MAX) ? ffs((int)(a))-1 : \ ((int)(((size_t)(a) < (size_t)INT_MAX) ? ffs((int)(a))-1 : \
ffs((int)((a)>>32))+31)) ffs((int)(((size_t)(a))>>32))+31))
# endif # endif
# define MALLOCX_ZERO ((int)0x40) # define MALLOCX_ZERO ((int)0x40)
/* /*
@ -29,7 +29,7 @@
/* /*
* Bias arena index bits so that 0 encodes "use an automatically chosen arena". * Bias arena index bits so that 0 encodes "use an automatically chosen arena".
*/ */
# define MALLOCX_ARENA(a) ((int)(((a)+1) << 20)) # define MALLOCX_ARENA(a) ((((int)(a))+1) << 20)
#if defined(__cplusplus) && defined(JEMALLOC_USE_CXX_THROW) #if defined(__cplusplus) && defined(JEMALLOC_USE_CXX_THROW)
# define JEMALLOC_CXX_THROW throw() # define JEMALLOC_CXX_THROW throw()
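
The MALLOCX_* encodings above pack alignment, zeroing, and arena selection into the mallocx() flags argument. A small usage sketch against the public API (link with -ljemalloc; error handling kept minimal):

#include <stdint.h>
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int
main(void)
{
	/* 4 KiB, 64-byte aligned, zero-filled. */
	void *p = mallocx(4096, MALLOCX_ALIGN(64) | MALLOCX_ZERO);

	if (p != NULL) {
		printf("p = %p, 64-byte aligned: %d\n", p,
		    (int)(((uintptr_t)p & 63) == 0));
		dallocx(p, 0);
	}
	return (0);
}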

View File

@ -56,6 +56,7 @@
<ClInclude Include="..\..\..\..\include\jemalloc\internal\mutex.h" /> <ClInclude Include="..\..\..\..\include\jemalloc\internal\mutex.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\nstime.h" /> <ClInclude Include="..\..\..\..\include\jemalloc\internal\nstime.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\pages.h" /> <ClInclude Include="..\..\..\..\include\jemalloc\internal\pages.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ph.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\private_namespace.h" /> <ClInclude Include="..\..\..\..\include\jemalloc\internal\private_namespace.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\private_unnamespace.h" /> <ClInclude Include="..\..\..\..\include\jemalloc\internal\private_unnamespace.h" />
<ClInclude Include="..\..\..\..\include\jemalloc\internal\prng.h" /> <ClInclude Include="..\..\..\..\include\jemalloc\internal\prng.h" />
@ -250,7 +251,7 @@
<Optimization>Disabled</Optimization> <Optimization>Disabled</Optimization>
<PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> <AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<DisableSpecificWarnings>4090;4146;4244;4267;4334</DisableSpecificWarnings> <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName> <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile> </ClCompile>
<Link> <Link>
@ -267,7 +268,7 @@
<PreprocessorDefinitions>JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions> <PreprocessorDefinitions>JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> <AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary> <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
<DisableSpecificWarnings>4090;4146;4244;4267;4334</DisableSpecificWarnings> <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName> <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile> </ClCompile>
<Link> <Link>
@ -283,7 +284,7 @@
<Optimization>Disabled</Optimization> <Optimization>Disabled</Optimization>
<PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> <AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<DisableSpecificWarnings>4090;4146;4244;4267;4334</DisableSpecificWarnings> <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName> <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile> </ClCompile>
<Link> <Link>
@ -300,8 +301,9 @@
<PreprocessorDefinitions>JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions> <PreprocessorDefinitions>JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> <AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary> <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
<DisableSpecificWarnings>4090;4146;4244;4267;4334</DisableSpecificWarnings> <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName> <DebugInformationFormat>OldStyle</DebugInformationFormat>
<MinimalRebuild>false</MinimalRebuild>
</ClCompile> </ClCompile>
<Link> <Link>
<SubSystem>Windows</SubSystem> <SubSystem>Windows</SubSystem>
@ -318,7 +320,7 @@
<IntrinsicFunctions>true</IntrinsicFunctions> <IntrinsicFunctions>true</IntrinsicFunctions>
<PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> <AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<DisableSpecificWarnings>4090;4146;4244;4267;4334</DisableSpecificWarnings> <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName> <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile> </ClCompile>
<Link> <Link>
@ -339,7 +341,7 @@
<PreprocessorDefinitions>_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions> <PreprocessorDefinitions>_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> <AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary> <RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<DisableSpecificWarnings>4090;4146;4244;4267;4334</DisableSpecificWarnings> <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName> <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile> </ClCompile>
<Link> <Link>
@ -359,7 +361,7 @@
<IntrinsicFunctions>true</IntrinsicFunctions> <IntrinsicFunctions>true</IntrinsicFunctions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> <AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions> <PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<DisableSpecificWarnings>4090;4146;4244;4267;4334</DisableSpecificWarnings> <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName> <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>
</ClCompile> </ClCompile>
<Link> <Link>
@ -380,8 +382,8 @@
<PreprocessorDefinitions>_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions> <PreprocessorDefinitions>_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories> <AdditionalIncludeDirectories>..\..\..\..\include;..\..\..\..\include\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary> <RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<DisableSpecificWarnings>4090;4146;4244;4267;4334</DisableSpecificWarnings> <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>
<ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName> <DebugInformationFormat>OldStyle</DebugInformationFormat>
</ClCompile> </ClCompile>
<Link> <Link>
<SubSystem>Windows</SubSystem> <SubSystem>Windows</SubSystem>

View File

@ -107,6 +107,9 @@
<ClInclude Include="..\..\..\..\include\jemalloc\internal\pages.h"> <ClInclude Include="..\..\..\..\include\jemalloc\internal\pages.h">
<Filter>Header Files\internal</Filter> <Filter>Header Files\internal</Filter>
</ClInclude> </ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\ph.h">
<Filter>Header Files\internal</Filter>
</ClInclude>
<ClInclude Include="..\..\..\..\include\jemalloc\internal\private_namespace.h"> <ClInclude Include="..\..\..\..\include\jemalloc\internal\private_namespace.h">
<Filter>Header Files\internal</Filter> <Filter>Header Files\internal</Filter>
</ClInclude> </ClInclude>

File diff suppressed because it is too large.

View File

@ -13,12 +13,13 @@ static size_t base_mapped;
/******************************************************************************/ /******************************************************************************/
/* base_mtx must be held. */
static extent_node_t * static extent_node_t *
base_node_try_alloc(void) base_node_try_alloc(tsdn_t *tsdn)
{ {
extent_node_t *node; extent_node_t *node;
malloc_mutex_assert_owner(tsdn, &base_mtx);
if (base_nodes == NULL) if (base_nodes == NULL)
return (NULL); return (NULL);
node = base_nodes; node = base_nodes;
@ -27,33 +28,34 @@ base_node_try_alloc(void)
return (node); return (node);
} }
/* base_mtx must be held. */
static void static void
base_node_dalloc(extent_node_t *node) base_node_dalloc(tsdn_t *tsdn, extent_node_t *node)
{ {
malloc_mutex_assert_owner(tsdn, &base_mtx);
JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(node, sizeof(extent_node_t)); JEMALLOC_VALGRIND_MAKE_MEM_UNDEFINED(node, sizeof(extent_node_t));
*(extent_node_t **)node = base_nodes; *(extent_node_t **)node = base_nodes;
base_nodes = node; base_nodes = node;
} }
/* base_mtx must be held. */
static extent_node_t * static extent_node_t *
base_chunk_alloc(size_t minsize) base_chunk_alloc(tsdn_t *tsdn, size_t minsize)
{ {
extent_node_t *node; extent_node_t *node;
size_t csize, nsize; size_t csize, nsize;
void *addr; void *addr;
malloc_mutex_assert_owner(tsdn, &base_mtx);
assert(minsize != 0); assert(minsize != 0);
node = base_node_try_alloc(); node = base_node_try_alloc(tsdn);
/* Allocate enough space to also carve a node out if necessary. */ /* Allocate enough space to also carve a node out if necessary. */
nsize = (node == NULL) ? CACHELINE_CEILING(sizeof(extent_node_t)) : 0; nsize = (node == NULL) ? CACHELINE_CEILING(sizeof(extent_node_t)) : 0;
csize = CHUNK_CEILING(minsize + nsize); csize = CHUNK_CEILING(minsize + nsize);
addr = chunk_alloc_base(csize); addr = chunk_alloc_base(csize);
if (addr == NULL) { if (addr == NULL) {
if (node != NULL) if (node != NULL)
base_node_dalloc(node); base_node_dalloc(tsdn, node);
return (NULL); return (NULL);
} }
base_mapped += csize; base_mapped += csize;
@ -76,7 +78,7 @@ base_chunk_alloc(size_t minsize)
* physical memory usage. * physical memory usage.
*/ */
void * void *
base_alloc(size_t size) base_alloc(tsdn_t *tsdn, size_t size)
{ {
void *ret; void *ret;
size_t csize, usize; size_t csize, usize;
@ -91,14 +93,14 @@ base_alloc(size_t size)
usize = s2u(csize); usize = s2u(csize);
extent_node_init(&key, NULL, NULL, usize, false, false); extent_node_init(&key, NULL, NULL, usize, false, false);
malloc_mutex_lock(&base_mtx); malloc_mutex_lock(tsdn, &base_mtx);
node = extent_tree_szad_nsearch(&base_avail_szad, &key); node = extent_tree_szad_nsearch(&base_avail_szad, &key);
if (node != NULL) { if (node != NULL) {
/* Use existing space. */ /* Use existing space. */
extent_tree_szad_remove(&base_avail_szad, node); extent_tree_szad_remove(&base_avail_szad, node);
} else { } else {
/* Try to allocate more space. */ /* Try to allocate more space. */
node = base_chunk_alloc(csize); node = base_chunk_alloc(tsdn, csize);
} }
if (node == NULL) { if (node == NULL) {
ret = NULL; ret = NULL;
@ -111,7 +113,7 @@ base_alloc(size_t size)
extent_node_size_set(node, extent_node_size_get(node) - csize); extent_node_size_set(node, extent_node_size_get(node) - csize);
extent_tree_szad_insert(&base_avail_szad, node); extent_tree_szad_insert(&base_avail_szad, node);
} else } else
base_node_dalloc(node); base_node_dalloc(tsdn, node);
if (config_stats) { if (config_stats) {
base_allocated += csize; base_allocated += csize;
/* /*
@ -123,28 +125,29 @@ base_alloc(size_t size)
} }
JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, csize); JEMALLOC_VALGRIND_MAKE_MEM_DEFINED(ret, csize);
label_return: label_return:
malloc_mutex_unlock(&base_mtx); malloc_mutex_unlock(tsdn, &base_mtx);
return (ret); return (ret);
} }
void void
base_stats_get(size_t *allocated, size_t *resident, size_t *mapped) base_stats_get(tsdn_t *tsdn, size_t *allocated, size_t *resident,
size_t *mapped)
{ {
malloc_mutex_lock(&base_mtx); malloc_mutex_lock(tsdn, &base_mtx);
assert(base_allocated <= base_resident); assert(base_allocated <= base_resident);
assert(base_resident <= base_mapped); assert(base_resident <= base_mapped);
*allocated = base_allocated; *allocated = base_allocated;
*resident = base_resident; *resident = base_resident;
*mapped = base_mapped; *mapped = base_mapped;
malloc_mutex_unlock(&base_mtx); malloc_mutex_unlock(tsdn, &base_mtx);
} }
bool bool
base_boot(void) base_boot(void)
{ {
if (malloc_mutex_init(&base_mtx)) if (malloc_mutex_init(&base_mtx, "base", WITNESS_RANK_BASE))
return (true); return (true);
extent_tree_szad_new(&base_avail_szad); extent_tree_szad_new(&base_avail_szad);
base_nodes = NULL; base_nodes = NULL;
@ -153,22 +156,22 @@ base_boot(void)
} }
void void
base_prefork(void) base_prefork(tsdn_t *tsdn)
{ {
malloc_mutex_prefork(&base_mtx); malloc_mutex_prefork(tsdn, &base_mtx);
} }
void void
base_postfork_parent(void) base_postfork_parent(tsdn_t *tsdn)
{ {
malloc_mutex_postfork_parent(&base_mtx); malloc_mutex_postfork_parent(tsdn, &base_mtx);
} }
void void
base_postfork_child(void) base_postfork_child(tsdn_t *tsdn)
{ {
malloc_mutex_postfork_child(&base_mtx); malloc_mutex_postfork_child(tsdn, &base_mtx);
} }
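
The base.c changes above show a pattern this merge applies throughout: functions that used to document a lock requirement in a comment (/* base_mtx must be held. */) now take an explicit tsdn_t * and assert ownership at run time via malloc_mutex_assert_owner(). A minimal sketch of the idea, using stand-in types rather than jemalloc's real tsdn_t and malloc_mutex_t:

#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Stand-ins for illustration; the real types live in jemalloc's internals. */
typedef struct { int dummy; } tsdn_t;
typedef struct {
	bool		locked;
	pthread_t	owner;
} malloc_mutex_t;

static void
malloc_mutex_assert_owner(tsdn_t *tsdn, malloc_mutex_t *mtx)
{
	(void)tsdn;	/* the real version also consults witness state */
	assert(mtx->locked && pthread_equal(mtx->owner, pthread_self()));
}

static malloc_mutex_t	demo_mtx;
static int		demo_count;	/* protected by demo_mtx */

static int
demo_count_incr(tsdn_t *tsdn)
{
	/* The precondition is checked in debug builds, not just documented. */
	malloc_mutex_assert_owner(tsdn, &demo_mtx);
	return (++demo_count);
}

A violated locking protocol now fails loudly under --enable-debug instead of silently racing.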


@ -74,15 +74,11 @@ bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo)
void void
bitmap_info_init(bitmap_info_t *binfo, size_t nbits) bitmap_info_init(bitmap_info_t *binfo, size_t nbits)
{ {
size_t i;
assert(nbits > 0); assert(nbits > 0);
assert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS)); assert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS));
i = nbits >> LG_BITMAP_GROUP_NBITS; binfo->ngroups = BITMAP_BITS2GROUPS(nbits);
if (nbits % BITMAP_GROUP_NBITS != 0)
i++;
binfo->ngroups = i;
binfo->nbits = nbits; binfo->nbits = nbits;
} }
@ -99,9 +95,10 @@ bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo)
size_t extra; size_t extra;
memset(bitmap, 0xffU, bitmap_size(binfo)); memset(bitmap, 0xffU, bitmap_size(binfo));
extra = (binfo->nbits % (binfo->ngroups * BITMAP_GROUP_NBITS)); extra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK))
& BITMAP_GROUP_NBITS_MASK;
if (extra != 0) if (extra != 0)
bitmap[binfo->ngroups - 1] >>= (BITMAP_GROUP_NBITS - extra); bitmap[binfo->ngroups - 1] >>= extra;
} }
#endif /* USE_TREE */ #endif /* USE_TREE */
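
The rewritten bitmap arithmetic is easier to verify with a worked check than by inspection. Assuming BITMAP_BITS2GROUPS() rounds nbits up to whole groups (the same value the replaced divide-and-bump code computed), the new extra expression counts exactly the unused high bits of the last group. A standalone check under assumed constant values:

#include <assert.h>
#include <stddef.h>

/* Assumed values; jemalloc derives these from the machine word size. */
#define LG_BITMAP_GROUP_NBITS	6	/* 64-bit groups */
#define BITMAP_GROUP_NBITS	((size_t)1 << LG_BITMAP_GROUP_NBITS)
#define BITMAP_GROUP_NBITS_MASK	(BITMAP_GROUP_NBITS - 1)
/* Assumed definition: round up to whole groups. */
#define BITMAP_BITS2GROUPS(nbits)					\
	(((nbits) + BITMAP_GROUP_NBITS_MASK) >> LG_BITMAP_GROUP_NBITS)

int
main(void)
{
	size_t nbits;

	for (nbits = 1; nbits <= 4096; nbits++) {
		/* The replaced code: divide, then bump on a remainder. */
		size_t ngroups = nbits >> LG_BITMAP_GROUP_NBITS;
		if (nbits % BITMAP_GROUP_NBITS != 0)
			ngroups++;
		assert(ngroups == BITMAP_BITS2GROUPS(nbits));

		/* New "extra": the unused high bits in the last group. */
		size_t extra = (BITMAP_GROUP_NBITS -
		    (nbits & BITMAP_GROUP_NBITS_MASK)) &
		    BITMAP_GROUP_NBITS_MASK;
		assert(extra == ngroups * BITMAP_GROUP_NBITS - nbits);
	}
	return (0);
}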


@ -49,9 +49,10 @@ const chunk_hooks_t chunk_hooks_default = {
* definition. * definition.
*/ */
static void chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks, static void chunk_record(tsdn_t *tsdn, arena_t *arena,
extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache, chunk_hooks_t *chunk_hooks, extent_tree_t *chunks_szad,
void *chunk, size_t size, bool zeroed, bool committed); extent_tree_t *chunks_ad, bool cache, void *chunk, size_t size, bool zeroed,
bool committed);
/******************************************************************************/ /******************************************************************************/
@ -63,23 +64,23 @@ chunk_hooks_get_locked(arena_t *arena)
} }
chunk_hooks_t chunk_hooks_t
chunk_hooks_get(arena_t *arena) chunk_hooks_get(tsdn_t *tsdn, arena_t *arena)
{ {
chunk_hooks_t chunk_hooks; chunk_hooks_t chunk_hooks;
malloc_mutex_lock(&arena->chunks_mtx); malloc_mutex_lock(tsdn, &arena->chunks_mtx);
chunk_hooks = chunk_hooks_get_locked(arena); chunk_hooks = chunk_hooks_get_locked(arena);
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
return (chunk_hooks); return (chunk_hooks);
} }
chunk_hooks_t chunk_hooks_t
chunk_hooks_set(arena_t *arena, const chunk_hooks_t *chunk_hooks) chunk_hooks_set(tsdn_t *tsdn, arena_t *arena, const chunk_hooks_t *chunk_hooks)
{ {
chunk_hooks_t old_chunk_hooks; chunk_hooks_t old_chunk_hooks;
malloc_mutex_lock(&arena->chunks_mtx); malloc_mutex_lock(tsdn, &arena->chunks_mtx);
old_chunk_hooks = arena->chunk_hooks; old_chunk_hooks = arena->chunk_hooks;
/* /*
* Copy each field atomically so that it is impossible for readers to * Copy each field atomically so that it is impossible for readers to
@ -104,14 +105,14 @@ chunk_hooks_set(arena_t *arena, const chunk_hooks_t *chunk_hooks)
ATOMIC_COPY_HOOK(split); ATOMIC_COPY_HOOK(split);
ATOMIC_COPY_HOOK(merge); ATOMIC_COPY_HOOK(merge);
#undef ATOMIC_COPY_HOOK #undef ATOMIC_COPY_HOOK
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
return (old_chunk_hooks); return (old_chunk_hooks);
} }
static void static void
chunk_hooks_assure_initialized_impl(arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_hooks_assure_initialized_impl(tsdn_t *tsdn, arena_t *arena,
bool locked) chunk_hooks_t *chunk_hooks, bool locked)
{ {
static const chunk_hooks_t uninitialized_hooks = static const chunk_hooks_t uninitialized_hooks =
CHUNK_HOOKS_INITIALIZER; CHUNK_HOOKS_INITIALIZER;
@ -119,27 +120,28 @@ chunk_hooks_assure_initialized_impl(arena_t *arena, chunk_hooks_t *chunk_hooks,
if (memcmp(chunk_hooks, &uninitialized_hooks, sizeof(chunk_hooks_t)) == if (memcmp(chunk_hooks, &uninitialized_hooks, sizeof(chunk_hooks_t)) ==
0) { 0) {
*chunk_hooks = locked ? chunk_hooks_get_locked(arena) : *chunk_hooks = locked ? chunk_hooks_get_locked(arena) :
chunk_hooks_get(arena); chunk_hooks_get(tsdn, arena);
} }
} }
static void static void
chunk_hooks_assure_initialized_locked(arena_t *arena, chunk_hooks_assure_initialized_locked(tsdn_t *tsdn, arena_t *arena,
chunk_hooks_t *chunk_hooks) chunk_hooks_t *chunk_hooks)
{ {
chunk_hooks_assure_initialized_impl(arena, chunk_hooks, true); chunk_hooks_assure_initialized_impl(tsdn, arena, chunk_hooks, true);
} }
static void static void
chunk_hooks_assure_initialized(arena_t *arena, chunk_hooks_t *chunk_hooks) chunk_hooks_assure_initialized(tsdn_t *tsdn, arena_t *arena,
chunk_hooks_t *chunk_hooks)
{ {
chunk_hooks_assure_initialized_impl(arena, chunk_hooks, false); chunk_hooks_assure_initialized_impl(tsdn, arena, chunk_hooks, false);
} }
bool bool
chunk_register(const void *chunk, const extent_node_t *node) chunk_register(tsdn_t *tsdn, const void *chunk, const extent_node_t *node)
{ {
assert(extent_node_addr_get(node) == chunk); assert(extent_node_addr_get(node) == chunk);
@ -159,7 +161,7 @@ chunk_register(const void *chunk, const extent_node_t *node)
high = atomic_read_z(&highchunks); high = atomic_read_z(&highchunks);
} }
if (cur > high && prof_gdump_get_unlocked()) if (cur > high && prof_gdump_get_unlocked())
prof_gdump(); prof_gdump(tsdn);
} }
return (false); return (false);
@ -197,7 +199,7 @@ chunk_first_best_fit(arena_t *arena, extent_tree_t *chunks_szad,
} }
static void * static void *
chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_recycle(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache,
void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit, void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit,
bool dalloc_node) bool dalloc_node)
@ -219,8 +221,8 @@ chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks,
/* Beware size_t wrap-around. */ /* Beware size_t wrap-around. */
if (alloc_size < size) if (alloc_size < size)
return (NULL); return (NULL);
malloc_mutex_lock(&arena->chunks_mtx); malloc_mutex_lock(tsdn, &arena->chunks_mtx);
chunk_hooks_assure_initialized_locked(arena, chunk_hooks); chunk_hooks_assure_initialized_locked(tsdn, arena, chunk_hooks);
if (new_addr != NULL) { if (new_addr != NULL) {
extent_node_t key; extent_node_t key;
extent_node_init(&key, arena, new_addr, alloc_size, false, extent_node_init(&key, arena, new_addr, alloc_size, false,
@ -232,7 +234,7 @@ chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks,
} }
if (node == NULL || (new_addr != NULL && extent_node_size_get(node) < if (node == NULL || (new_addr != NULL && extent_node_size_get(node) <
size)) { size)) {
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
return (NULL); return (NULL);
} }
leadsize = ALIGNMENT_CEILING((uintptr_t)extent_node_addr_get(node), leadsize = ALIGNMENT_CEILING((uintptr_t)extent_node_addr_get(node),
@ -251,7 +253,7 @@ chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks,
if (leadsize != 0 && if (leadsize != 0 &&
chunk_hooks->split(extent_node_addr_get(node), chunk_hooks->split(extent_node_addr_get(node),
extent_node_size_get(node), leadsize, size, false, arena->ind)) { extent_node_size_get(node), leadsize, size, false, arena->ind)) {
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
return (NULL); return (NULL);
} }
/* Remove node from the tree. */ /* Remove node from the tree. */
@ -271,20 +273,21 @@ chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks,
if (chunk_hooks->split(ret, size + trailsize, size, if (chunk_hooks->split(ret, size + trailsize, size,
trailsize, false, arena->ind)) { trailsize, false, arena->ind)) {
if (dalloc_node && node != NULL) if (dalloc_node && node != NULL)
arena_node_dalloc(arena, node); arena_node_dalloc(tsdn, arena, node);
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
chunk_record(arena, chunk_hooks, chunks_szad, chunks_ad, chunk_record(tsdn, arena, chunk_hooks, chunks_szad,
cache, ret, size + trailsize, zeroed, committed); chunks_ad, cache, ret, size + trailsize, zeroed,
committed);
return (NULL); return (NULL);
} }
/* Insert the trailing space as a smaller chunk. */ /* Insert the trailing space as a smaller chunk. */
if (node == NULL) { if (node == NULL) {
node = arena_node_alloc(arena); node = arena_node_alloc(tsdn, arena);
if (node == NULL) { if (node == NULL) {
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
chunk_record(arena, chunk_hooks, chunks_szad, chunk_record(tsdn, arena, chunk_hooks,
chunks_ad, cache, ret, size + trailsize, chunks_szad, chunks_ad, cache, ret, size +
zeroed, committed); trailsize, zeroed, committed);
return (NULL); return (NULL);
} }
} }
@ -296,16 +299,16 @@ chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks,
node = NULL; node = NULL;
} }
if (!committed && chunk_hooks->commit(ret, size, 0, size, arena->ind)) { if (!committed && chunk_hooks->commit(ret, size, 0, size, arena->ind)) {
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
chunk_record(arena, chunk_hooks, chunks_szad, chunks_ad, cache, chunk_record(tsdn, arena, chunk_hooks, chunks_szad, chunks_ad,
ret, size, zeroed, committed); cache, ret, size, zeroed, committed);
return (NULL); return (NULL);
} }
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
assert(dalloc_node || node != NULL); assert(dalloc_node || node != NULL);
if (dalloc_node && node != NULL) if (dalloc_node && node != NULL)
arena_node_dalloc(arena, node); arena_node_dalloc(tsdn, arena, node);
if (*zero) { if (*zero) {
if (!zeroed) if (!zeroed)
memset(ret, 0, size); memset(ret, 0, size);
@ -328,8 +331,8 @@ chunk_recycle(arena_t *arena, chunk_hooks_t *chunk_hooks,
* them if they are returned. * them if they are returned.
*/ */
static void * static void *
chunk_alloc_core(arena_t *arena, void *new_addr, size_t size, size_t alignment, chunk_alloc_core(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size,
bool *zero, bool *commit, dss_prec_t dss_prec) size_t alignment, bool *zero, bool *commit, dss_prec_t dss_prec)
{ {
void *ret; void *ret;
@ -340,8 +343,8 @@ chunk_alloc_core(arena_t *arena, void *new_addr, size_t size, size_t alignment,
/* "primary" dss. */ /* "primary" dss. */
if (have_dss && dss_prec == dss_prec_primary && (ret = if (have_dss && dss_prec == dss_prec_primary && (ret =
chunk_alloc_dss(arena, new_addr, size, alignment, zero, commit)) != chunk_alloc_dss(tsdn, arena, new_addr, size, alignment, zero,
NULL) commit)) != NULL)
return (ret); return (ret);
/* mmap. */ /* mmap. */
if ((ret = chunk_alloc_mmap(new_addr, size, alignment, zero, commit)) != if ((ret = chunk_alloc_mmap(new_addr, size, alignment, zero, commit)) !=
@ -349,8 +352,8 @@ chunk_alloc_core(arena_t *arena, void *new_addr, size_t size, size_t alignment,
return (ret); return (ret);
/* "secondary" dss. */ /* "secondary" dss. */
if (have_dss && dss_prec == dss_prec_secondary && (ret = if (have_dss && dss_prec == dss_prec_secondary && (ret =
chunk_alloc_dss(arena, new_addr, size, alignment, zero, commit)) != chunk_alloc_dss(tsdn, arena, new_addr, size, alignment, zero,
NULL) commit)) != NULL)
return (ret); return (ret);
/* All strategies for allocation failed. */ /* All strategies for allocation failed. */
@ -380,8 +383,8 @@ chunk_alloc_base(size_t size)
} }
void * void *
chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr, chunk_alloc_cache(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
size_t size, size_t alignment, bool *zero, bool dalloc_node) void *new_addr, size_t size, size_t alignment, bool *zero, bool dalloc_node)
{ {
void *ret; void *ret;
bool commit; bool commit;
@ -392,9 +395,9 @@ chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr,
assert((alignment & chunksize_mask) == 0); assert((alignment & chunksize_mask) == 0);
commit = true; commit = true;
ret = chunk_recycle(arena, chunk_hooks, &arena->chunks_szad_cached, ret = chunk_recycle(tsdn, arena, chunk_hooks,
&arena->chunks_ad_cached, true, new_addr, size, alignment, zero, &arena->chunks_szad_cached, &arena->chunks_ad_cached, true,
&commit, dalloc_node); new_addr, size, alignment, zero, &commit, dalloc_node);
if (ret == NULL) if (ret == NULL)
return (NULL); return (NULL);
assert(commit); assert(commit);
@ -404,11 +407,11 @@ chunk_alloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr,
} }
static arena_t * static arena_t *
chunk_arena_get(unsigned arena_ind) chunk_arena_get(tsdn_t *tsdn, unsigned arena_ind)
{ {
arena_t *arena; arena_t *arena;
arena = arena_get(arena_ind, false); arena = arena_get(tsdn, arena_ind, false);
/* /*
* The arena we're allocating on behalf of must have been initialized * The arena we're allocating on behalf of must have been initialized
* already. * already.
@ -422,11 +425,13 @@ chunk_alloc_default(void *new_addr, size_t size, size_t alignment, bool *zero,
bool *commit, unsigned arena_ind) bool *commit, unsigned arena_ind)
{ {
void *ret; void *ret;
tsdn_t *tsdn;
arena_t *arena; arena_t *arena;
arena = chunk_arena_get(arena_ind); tsdn = tsdn_fetch();
ret = chunk_alloc_core(arena, new_addr, size, alignment, zero, commit, arena = chunk_arena_get(tsdn, arena_ind);
arena->dss_prec); ret = chunk_alloc_core(tsdn, arena, new_addr, size, alignment, zero,
commit, arena->dss_prec);
if (ret == NULL) if (ret == NULL)
return (NULL); return (NULL);
if (config_valgrind) if (config_valgrind)
@ -436,29 +441,35 @@ chunk_alloc_default(void *new_addr, size_t size, size_t alignment, bool *zero,
} }
static void * static void *
chunk_alloc_retained(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr, chunk_alloc_retained(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
size_t size, size_t alignment, bool *zero, bool *commit) void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit)
{ {
void *ret;
assert(size != 0); assert(size != 0);
assert((size & chunksize_mask) == 0); assert((size & chunksize_mask) == 0);
assert(alignment != 0); assert(alignment != 0);
assert((alignment & chunksize_mask) == 0); assert((alignment & chunksize_mask) == 0);
return (chunk_recycle(arena, chunk_hooks, &arena->chunks_szad_retained, ret = chunk_recycle(tsdn, arena, chunk_hooks,
&arena->chunks_ad_retained, false, new_addr, size, alignment, zero, &arena->chunks_szad_retained, &arena->chunks_ad_retained, false,
commit, true)); new_addr, size, alignment, zero, commit, true);
if (config_stats && ret != NULL)
arena->stats.retained -= size;
return (ret);
} }
void * void *
chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr, chunk_alloc_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
size_t size, size_t alignment, bool *zero, bool *commit) void *new_addr, size_t size, size_t alignment, bool *zero, bool *commit)
{ {
void *ret; void *ret;
chunk_hooks_assure_initialized(arena, chunk_hooks); chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks);
ret = chunk_alloc_retained(arena, chunk_hooks, new_addr, size, ret = chunk_alloc_retained(tsdn, arena, chunk_hooks, new_addr, size,
alignment, zero, commit); alignment, zero, commit);
if (ret == NULL) { if (ret == NULL) {
ret = chunk_hooks->alloc(new_addr, size, alignment, zero, ret = chunk_hooks->alloc(new_addr, size, alignment, zero,
@ -473,7 +484,7 @@ chunk_alloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *new_addr,
} }
static void static void
chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks, chunk_record(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache, extent_tree_t *chunks_szad, extent_tree_t *chunks_ad, bool cache,
void *chunk, size_t size, bool zeroed, bool committed) void *chunk, size_t size, bool zeroed, bool committed)
{ {
@ -485,8 +496,8 @@ chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks,
unzeroed = cache || !zeroed; unzeroed = cache || !zeroed;
JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size); JEMALLOC_VALGRIND_MAKE_MEM_NOACCESS(chunk, size);
malloc_mutex_lock(&arena->chunks_mtx); malloc_mutex_lock(tsdn, &arena->chunks_mtx);
chunk_hooks_assure_initialized_locked(arena, chunk_hooks); chunk_hooks_assure_initialized_locked(tsdn, arena, chunk_hooks);
extent_node_init(&key, arena, (void *)((uintptr_t)chunk + size), 0, extent_node_init(&key, arena, (void *)((uintptr_t)chunk + size), 0,
false, false); false, false);
node = extent_tree_ad_nsearch(chunks_ad, &key); node = extent_tree_ad_nsearch(chunks_ad, &key);
@ -511,7 +522,7 @@ chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks,
arena_chunk_cache_maybe_insert(arena, node, cache); arena_chunk_cache_maybe_insert(arena, node, cache);
} else { } else {
/* Coalescing forward failed, so insert a new node. */ /* Coalescing forward failed, so insert a new node. */
node = arena_node_alloc(arena); node = arena_node_alloc(tsdn, arena);
if (node == NULL) { if (node == NULL) {
/* /*
* Node allocation failed, which is an exceedingly * Node allocation failed, which is an exceedingly
@ -520,8 +531,8 @@ chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks,
* a virtual memory leak. * a virtual memory leak.
*/ */
if (cache) { if (cache) {
chunk_purge_wrapper(arena, chunk_hooks, chunk, chunk_purge_wrapper(tsdn, arena, chunk_hooks,
size, 0, size); chunk, size, 0, size);
} }
goto label_return; goto label_return;
} }
@ -557,16 +568,16 @@ chunk_record(arena_t *arena, chunk_hooks_t *chunk_hooks,
extent_tree_szad_insert(chunks_szad, node); extent_tree_szad_insert(chunks_szad, node);
arena_chunk_cache_maybe_insert(arena, node, cache); arena_chunk_cache_maybe_insert(arena, node, cache);
arena_node_dalloc(arena, prev); arena_node_dalloc(tsdn, arena, prev);
} }
label_return: label_return:
malloc_mutex_unlock(&arena->chunks_mtx); malloc_mutex_unlock(tsdn, &arena->chunks_mtx);
} }
void void
chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk, chunk_dalloc_cache(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
size_t size, bool committed) void *chunk, size_t size, bool committed)
{ {
assert(chunk != NULL); assert(chunk != NULL);
@ -574,9 +585,9 @@ chunk_dalloc_cache(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk,
assert(size != 0); assert(size != 0);
assert((size & chunksize_mask) == 0); assert((size & chunksize_mask) == 0);
chunk_record(arena, chunk_hooks, &arena->chunks_szad_cached, chunk_record(tsdn, arena, chunk_hooks, &arena->chunks_szad_cached,
&arena->chunks_ad_cached, true, chunk, size, false, committed); &arena->chunks_ad_cached, true, chunk, size, false, committed);
arena_maybe_purge(arena); arena_maybe_purge(tsdn, arena);
} }
static bool static bool
@ -584,14 +595,14 @@ chunk_dalloc_default(void *chunk, size_t size, bool committed,
unsigned arena_ind) unsigned arena_ind)
{ {
if (!have_dss || !chunk_in_dss(chunk)) if (!have_dss || !chunk_in_dss(tsdn_fetch(), chunk))
return (chunk_dalloc_mmap(chunk, size)); return (chunk_dalloc_mmap(chunk, size));
return (true); return (true);
} }
void void
chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk, chunk_dalloc_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
size_t size, bool zeroed, bool committed) void *chunk, size_t size, bool zeroed, bool committed)
{ {
assert(chunk != NULL); assert(chunk != NULL);
@ -599,7 +610,7 @@ chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk,
assert(size != 0); assert(size != 0);
assert((size & chunksize_mask) == 0); assert((size & chunksize_mask) == 0);
chunk_hooks_assure_initialized(arena, chunk_hooks); chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks);
/* Try to deallocate. */ /* Try to deallocate. */
if (!chunk_hooks->dalloc(chunk, size, committed, arena->ind)) if (!chunk_hooks->dalloc(chunk, size, committed, arena->ind))
return; return;
@ -610,8 +621,11 @@ chunk_dalloc_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk,
} }
zeroed = !committed || !chunk_hooks->purge(chunk, size, 0, size, zeroed = !committed || !chunk_hooks->purge(chunk, size, 0, size,
arena->ind); arena->ind);
chunk_record(arena, chunk_hooks, &arena->chunks_szad_retained, chunk_record(tsdn, arena, chunk_hooks, &arena->chunks_szad_retained,
&arena->chunks_ad_retained, false, chunk, size, zeroed, committed); &arena->chunks_ad_retained, false, chunk, size, zeroed, committed);
if (config_stats)
arena->stats.retained += size;
} }
static bool static bool
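
The two config_stats hooks above implement the new stats.retained counter: chunk_dalloc_wrapper() adds the size of each chunk it stashes in the retained trees, and chunk_alloc_retained() subtracts it when such a chunk is reused, so the counter tracks the total retained (mapped but unreleased) virtual memory. The invariant, as a toy model with illustrative names:

#include <assert.h>
#include <stddef.h>

static size_t retained_bytes;	/* cf. arena->stats.retained */

static void
retain_chunk(size_t size)		/* cf. chunk_dalloc_wrapper() */
{
	retained_bytes += size;
}

static void
reuse_retained_chunk(size_t size)	/* cf. chunk_alloc_retained() */
{
	assert(retained_bytes >= size);
	retained_bytes -= size;
}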
@ -648,11 +662,11 @@ chunk_purge_default(void *chunk, size_t size, size_t offset, size_t length,
} }
bool bool
chunk_purge_wrapper(arena_t *arena, chunk_hooks_t *chunk_hooks, void *chunk, chunk_purge_wrapper(tsdn_t *tsdn, arena_t *arena, chunk_hooks_t *chunk_hooks,
size_t size, size_t offset, size_t length) void *chunk, size_t size, size_t offset, size_t length)
{ {
chunk_hooks_assure_initialized(arena, chunk_hooks); chunk_hooks_assure_initialized(tsdn, arena, chunk_hooks);
return (chunk_hooks->purge(chunk, size, offset, length, arena->ind)); return (chunk_hooks->purge(chunk, size, offset, length, arena->ind));
} }
@ -673,8 +687,11 @@ chunk_merge_default(void *chunk_a, size_t size_a, void *chunk_b, size_t size_b,
if (!maps_coalesce) if (!maps_coalesce)
return (true); return (true);
if (have_dss && chunk_in_dss(chunk_a) != chunk_in_dss(chunk_b)) if (have_dss) {
return (true); tsdn_t *tsdn = tsdn_fetch();
if (chunk_in_dss(tsdn, chunk_a) != chunk_in_dss(tsdn, chunk_b))
return (true);
}
return (false); return (false);
} }
@ -683,7 +700,7 @@ static rtree_node_elm_t *
chunks_rtree_node_alloc(size_t nelms) chunks_rtree_node_alloc(size_t nelms)
{ {
return ((rtree_node_elm_t *)base_alloc(nelms * return ((rtree_node_elm_t *)base_alloc(tsdn_fetch(), nelms *
sizeof(rtree_node_elm_t))); sizeof(rtree_node_elm_t)));
} }
@ -730,22 +747,22 @@ chunk_boot(void)
} }
void void
chunk_prefork(void) chunk_prefork(tsdn_t *tsdn)
{ {
chunk_dss_prefork(); chunk_dss_prefork(tsdn);
} }
void void
chunk_postfork_parent(void) chunk_postfork_parent(tsdn_t *tsdn)
{ {
chunk_dss_postfork_parent(); chunk_dss_postfork_parent(tsdn);
} }
void void
chunk_postfork_child(void) chunk_postfork_child(tsdn_t *tsdn)
{ {
chunk_dss_postfork_child(); chunk_dss_postfork_child(tsdn);
} }
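
Note why chunk_alloc_default(), chunk_dalloc_default(), and chunk_merge_default() call tsdn_fetch() instead of taking a tsdn_t * like the rest of the file: their signatures are fixed by the public chunk_hooks_t interface, so thread state has to be recovered inside the hook. A sketch of that shape, with stub types standing in for the real ones:

#include <stdbool.h>
#include <stddef.h>

typedef struct { int dummy; } tsdn_t;	/* stand-in */

static tsdn_t *
tsdn_fetch(void)	/* the real one does a thread-local lookup */
{
	static tsdn_t tsdn;
	return (&tsdn);
}

static bool
in_dss_stub(tsdn_t *tsdn, void *chunk)	/* cf. chunk_in_dss() */
{
	(void)tsdn; (void)chunk;
	return (false);
}

/* The hook signature is public API, so it cannot grow a tsdn parameter. */
static bool
merge_hook_example(void *chunk_a, size_t size_a, void *chunk_b,
    size_t size_b, bool committed, unsigned arena_ind)
{
	tsdn_t *tsdn = tsdn_fetch();

	(void)size_a; (void)size_b; (void)committed; (void)arena_ind;
	/* Never merge a dss chunk with a non-dss chunk. */
	return (in_dss_stub(tsdn, chunk_a) != in_dss_stub(tsdn, chunk_b));
}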


@ -41,33 +41,33 @@ chunk_dss_sbrk(intptr_t increment)
} }
dss_prec_t dss_prec_t
chunk_dss_prec_get(void) chunk_dss_prec_get(tsdn_t *tsdn)
{ {
dss_prec_t ret; dss_prec_t ret;
if (!have_dss) if (!have_dss)
return (dss_prec_disabled); return (dss_prec_disabled);
malloc_mutex_lock(&dss_mtx); malloc_mutex_lock(tsdn, &dss_mtx);
ret = dss_prec_default; ret = dss_prec_default;
malloc_mutex_unlock(&dss_mtx); malloc_mutex_unlock(tsdn, &dss_mtx);
return (ret); return (ret);
} }
bool bool
chunk_dss_prec_set(dss_prec_t dss_prec) chunk_dss_prec_set(tsdn_t *tsdn, dss_prec_t dss_prec)
{ {
if (!have_dss) if (!have_dss)
return (dss_prec != dss_prec_disabled); return (dss_prec != dss_prec_disabled);
malloc_mutex_lock(&dss_mtx); malloc_mutex_lock(tsdn, &dss_mtx);
dss_prec_default = dss_prec; dss_prec_default = dss_prec;
malloc_mutex_unlock(&dss_mtx); malloc_mutex_unlock(tsdn, &dss_mtx);
return (false); return (false);
} }
void * void *
chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment, chunk_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size,
bool *zero, bool *commit) size_t alignment, bool *zero, bool *commit)
{ {
cassert(have_dss); cassert(have_dss);
assert(size > 0 && (size & chunksize_mask) == 0); assert(size > 0 && (size & chunksize_mask) == 0);
@ -80,7 +80,7 @@ chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment,
if ((intptr_t)size < 0) if ((intptr_t)size < 0)
return (NULL); return (NULL);
malloc_mutex_lock(&dss_mtx); malloc_mutex_lock(tsdn, &dss_mtx);
if (dss_prev != (void *)-1) { if (dss_prev != (void *)-1) {
/* /*
@ -122,7 +122,7 @@ chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment,
if ((uintptr_t)ret < (uintptr_t)dss_max || if ((uintptr_t)ret < (uintptr_t)dss_max ||
(uintptr_t)dss_next < (uintptr_t)dss_max) { (uintptr_t)dss_next < (uintptr_t)dss_max) {
/* Wrap-around. */ /* Wrap-around. */
malloc_mutex_unlock(&dss_mtx); malloc_mutex_unlock(tsdn, &dss_mtx);
return (NULL); return (NULL);
} }
incr = gap_size + cpad_size + size; incr = gap_size + cpad_size + size;
@ -130,11 +130,11 @@ chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment,
if (dss_prev == dss_max) { if (dss_prev == dss_max) {
/* Success. */ /* Success. */
dss_max = dss_next; dss_max = dss_next;
malloc_mutex_unlock(&dss_mtx); malloc_mutex_unlock(tsdn, &dss_mtx);
if (cpad_size != 0) { if (cpad_size != 0) {
chunk_hooks_t chunk_hooks = chunk_hooks_t chunk_hooks =
CHUNK_HOOKS_INITIALIZER; CHUNK_HOOKS_INITIALIZER;
chunk_dalloc_wrapper(arena, chunk_dalloc_wrapper(tsdn, arena,
&chunk_hooks, cpad, cpad_size, &chunk_hooks, cpad, cpad_size,
false, true); false, true);
} }
@ -149,25 +149,25 @@ chunk_alloc_dss(arena_t *arena, void *new_addr, size_t size, size_t alignment,
} }
} while (dss_prev != (void *)-1); } while (dss_prev != (void *)-1);
} }
malloc_mutex_unlock(&dss_mtx); malloc_mutex_unlock(tsdn, &dss_mtx);
return (NULL); return (NULL);
} }
bool bool
chunk_in_dss(void *chunk) chunk_in_dss(tsdn_t *tsdn, void *chunk)
{ {
bool ret; bool ret;
cassert(have_dss); cassert(have_dss);
malloc_mutex_lock(&dss_mtx); malloc_mutex_lock(tsdn, &dss_mtx);
if ((uintptr_t)chunk >= (uintptr_t)dss_base if ((uintptr_t)chunk >= (uintptr_t)dss_base
&& (uintptr_t)chunk < (uintptr_t)dss_max) && (uintptr_t)chunk < (uintptr_t)dss_max)
ret = true; ret = true;
else else
ret = false; ret = false;
malloc_mutex_unlock(&dss_mtx); malloc_mutex_unlock(tsdn, &dss_mtx);
return (ret); return (ret);
} }
@ -178,7 +178,7 @@ chunk_dss_boot(void)
cassert(have_dss); cassert(have_dss);
if (malloc_mutex_init(&dss_mtx)) if (malloc_mutex_init(&dss_mtx, "dss", WITNESS_RANK_DSS))
return (true); return (true);
dss_base = chunk_dss_sbrk(0); dss_base = chunk_dss_sbrk(0);
dss_prev = dss_base; dss_prev = dss_base;
@ -188,27 +188,27 @@ chunk_dss_boot(void)
} }
void void
chunk_dss_prefork(void) chunk_dss_prefork(tsdn_t *tsdn)
{ {
if (have_dss) if (have_dss)
malloc_mutex_prefork(&dss_mtx); malloc_mutex_prefork(tsdn, &dss_mtx);
} }
void void
chunk_dss_postfork_parent(void) chunk_dss_postfork_parent(tsdn_t *tsdn)
{ {
if (have_dss) if (have_dss)
malloc_mutex_postfork_parent(&dss_mtx); malloc_mutex_postfork_parent(tsdn, &dss_mtx);
} }
void void
chunk_dss_postfork_child(void) chunk_dss_postfork_child(tsdn_t *tsdn)
{ {
if (have_dss) if (have_dss)
malloc_mutex_postfork_child(&dss_mtx); malloc_mutex_postfork_child(tsdn, &dss_mtx);
} }
/******************************************************************************/ /******************************************************************************/
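
One subtlety in chunk_alloc_dss() above: before asking sbrk() to extend, it checks (uintptr_t)ret < (uintptr_t)dss_max || (uintptr_t)dss_next < (uintptr_t)dss_max to detect address-space wrap-around. The check works because both values are derived from dss_max by adding non-negative offsets, so either one comparing below dss_max means the unsigned arithmetic wrapped. A standalone illustration with simplified inputs:

#include <assert.h>
#include <stdint.h>

/* True iff extending the break past dss_max would wrap the address space. */
static int
would_wrap(uintptr_t dss_max, uintptr_t gap, uintptr_t cpad, uintptr_t size)
{
	uintptr_t ret = dss_max + gap + cpad;	/* aligned allocation start */
	uintptr_t dss_next = ret + size;	/* prospective new break */

	return (ret < dss_max || dss_next < dss_max);
}

int
main(void)
{
	assert(!would_wrap(0x100000, 0x1000, 0, 0x200000));
	/* Near the top of the address space, the sum wraps past zero. */
	assert(would_wrap(UINTPTR_MAX - 0x1000, 0x800, 0, 0x1000));
	return (0);
}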


@ -9,25 +9,23 @@ chunk_alloc_mmap_slow(size_t size, size_t alignment, bool *zero, bool *commit)
void *ret; void *ret;
size_t alloc_size; size_t alloc_size;
alloc_size = size + alignment - PAGE; alloc_size = size + alignment;
/* Beware size_t wrap-around. */ /* Beware size_t wrap-around. */
if (alloc_size < size) if (alloc_size < size)
return (NULL); return (NULL);
do { do {
void *pages; void *pages;
size_t leadsize; size_t leadsize;
pages = pages_map(NULL, alloc_size); pages = pages_map(NULL, alloc_size, commit);
if (pages == NULL) if (pages == NULL)
return (NULL); return (NULL);
leadsize = ALIGNMENT_CEILING((uintptr_t)pages, alignment) - leadsize = ALIGNMENT_CEILING((uintptr_t)pages, alignment) -
(uintptr_t)pages; (uintptr_t)pages;
ret = pages_trim(pages, alloc_size, leadsize, size); ret = pages_trim(pages, alloc_size, leadsize, size, commit);
} while (ret == NULL); } while (ret == NULL);
assert(ret != NULL); assert(ret != NULL);
*zero = true; *zero = true;
if (!*commit)
*commit = pages_decommit(ret, size);
return (ret); return (ret);
} }
@ -54,7 +52,7 @@ chunk_alloc_mmap(void *new_addr, size_t size, size_t alignment, bool *zero,
assert(alignment != 0); assert(alignment != 0);
assert((alignment & chunksize_mask) == 0); assert((alignment & chunksize_mask) == 0);
ret = pages_map(new_addr, size); ret = pages_map(new_addr, size, commit);
if (ret == NULL || ret == new_addr) if (ret == NULL || ret == new_addr)
return (ret); return (ret);
assert(new_addr == NULL); assert(new_addr == NULL);
@ -66,8 +64,6 @@ chunk_alloc_mmap(void *new_addr, size_t size, size_t alignment, bool *zero,
assert(ret != NULL); assert(ret != NULL);
*zero = true; *zero = true;
if (!*commit)
*commit = pages_decommit(ret, size);
return (ret); return (ret);
} }
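
chunk_alloc_mmap_slow() now over-allocates by the full alignment instead of alignment - PAGE, and commit handling moves into pages_map()/pages_trim(), which gained a commit parameter in this merge. The size + alignment bound is sufficient because wherever the OS places the mapping, the distance to the next alignment boundary is strictly less than alignment, leaving room for an aligned range of length size. A standalone check of the arithmetic, using ALIGNMENT_CEILING() as jemalloc defines it:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ALIGNMENT_CEILING(s, alignment)					\
	(((s) + ((alignment) - 1)) & ((~(alignment)) + 1))

int
main(void)
{
	const uintptr_t size = 8 << 20, alignment = 4 << 20; /* examples */
	const uintptr_t alloc_size = size + alignment;
	uintptr_t pages;

	/* Whatever (page-aligned) address the mapping lands at... */
	for (pages = 0x1000; pages < 0x4000000; pages += 0x1000) {
		uintptr_t lead = ALIGNMENT_CEILING(pages, alignment) - pages;

		/* ...an aligned range of length size always fits. */
		assert(lead < alignment);
		assert(lead + size <= alloc_size);
	}
	return (0);
}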


@ -40,8 +40,8 @@
/******************************************************************************/ /******************************************************************************/
/* Function prototypes for non-inline static functions. */ /* Function prototypes for non-inline static functions. */
static bool ckh_grow(tsd_t *tsd, ckh_t *ckh); static bool ckh_grow(tsdn_t *tsdn, ckh_t *ckh);
static void ckh_shrink(tsd_t *tsd, ckh_t *ckh); static void ckh_shrink(tsdn_t *tsdn, ckh_t *ckh);
/******************************************************************************/ /******************************************************************************/
@ -244,7 +244,7 @@ ckh_rebuild(ckh_t *ckh, ckhc_t *aTab)
} }
static bool static bool
ckh_grow(tsd_t *tsd, ckh_t *ckh) ckh_grow(tsdn_t *tsdn, ckh_t *ckh)
{ {
bool ret; bool ret;
ckhc_t *tab, *ttab; ckhc_t *tab, *ttab;
@ -270,8 +270,8 @@ ckh_grow(tsd_t *tsd, ckh_t *ckh)
ret = true; ret = true;
goto label_return; goto label_return;
} }
tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL, tab = (ckhc_t *)ipallocztm(tsdn, usize, CACHELINE, true, NULL,
true, NULL); true, arena_ichoose(tsdn, NULL));
if (tab == NULL) { if (tab == NULL) {
ret = true; ret = true;
goto label_return; goto label_return;
@ -283,12 +283,12 @@ ckh_grow(tsd_t *tsd, ckh_t *ckh)
ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS; ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;
if (!ckh_rebuild(ckh, tab)) { if (!ckh_rebuild(ckh, tab)) {
idalloctm(tsd, tab, tcache_get(tsd, false), true, true); idalloctm(tsdn, tab, NULL, true, true);
break; break;
} }
/* Rebuilding failed, so back out partially rebuilt table. */ /* Rebuilding failed, so back out partially rebuilt table. */
idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true); idalloctm(tsdn, ckh->tab, NULL, true, true);
ckh->tab = tab; ckh->tab = tab;
ckh->lg_curbuckets = lg_prevbuckets; ckh->lg_curbuckets = lg_prevbuckets;
} }
@ -299,7 +299,7 @@ label_return:
} }
static void static void
ckh_shrink(tsd_t *tsd, ckh_t *ckh) ckh_shrink(tsdn_t *tsdn, ckh_t *ckh)
{ {
ckhc_t *tab, *ttab; ckhc_t *tab, *ttab;
size_t usize; size_t usize;
@ -314,8 +314,8 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE); usize = sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE);
if (unlikely(usize == 0 || usize > HUGE_MAXCLASS)) if (unlikely(usize == 0 || usize > HUGE_MAXCLASS))
return; return;
tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL, true, tab = (ckhc_t *)ipallocztm(tsdn, usize, CACHELINE, true, NULL, true,
NULL); arena_ichoose(tsdn, NULL));
if (tab == NULL) { if (tab == NULL) {
/* /*
* An OOM error isn't worth propagating, since it doesn't * An OOM error isn't worth propagating, since it doesn't
@ -330,7 +330,7 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS; ckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;
if (!ckh_rebuild(ckh, tab)) { if (!ckh_rebuild(ckh, tab)) {
idalloctm(tsd, tab, tcache_get(tsd, false), true, true); idalloctm(tsdn, tab, NULL, true, true);
#ifdef CKH_COUNT #ifdef CKH_COUNT
ckh->nshrinks++; ckh->nshrinks++;
#endif #endif
@ -338,7 +338,7 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
} }
/* Rebuilding failed, so back out partially rebuilt table. */ /* Rebuilding failed, so back out partially rebuilt table. */
idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true); idalloctm(tsdn, ckh->tab, NULL, true, true);
ckh->tab = tab; ckh->tab = tab;
ckh->lg_curbuckets = lg_prevbuckets; ckh->lg_curbuckets = lg_prevbuckets;
#ifdef CKH_COUNT #ifdef CKH_COUNT
@ -347,7 +347,7 @@ ckh_shrink(tsd_t *tsd, ckh_t *ckh)
} }
bool bool
ckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *hash, ckh_new(tsdn_t *tsdn, ckh_t *ckh, size_t minitems, ckh_hash_t *hash,
ckh_keycomp_t *keycomp) ckh_keycomp_t *keycomp)
{ {
bool ret; bool ret;
@ -391,8 +391,8 @@ ckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *hash,
ret = true; ret = true;
goto label_return; goto label_return;
} }
ckh->tab = (ckhc_t *)ipallocztm(tsd, usize, CACHELINE, true, NULL, true, ckh->tab = (ckhc_t *)ipallocztm(tsdn, usize, CACHELINE, true, NULL,
NULL); true, arena_ichoose(tsdn, NULL));
if (ckh->tab == NULL) { if (ckh->tab == NULL) {
ret = true; ret = true;
goto label_return; goto label_return;
@ -404,7 +404,7 @@ label_return:
} }
void void
ckh_delete(tsd_t *tsd, ckh_t *ckh) ckh_delete(tsdn_t *tsdn, ckh_t *ckh)
{ {
assert(ckh != NULL); assert(ckh != NULL);
@ -421,9 +421,9 @@ ckh_delete(tsd_t *tsd, ckh_t *ckh)
(unsigned long long)ckh->nrelocs); (unsigned long long)ckh->nrelocs);
#endif #endif
idalloctm(tsd, ckh->tab, tcache_get(tsd, false), true, true); idalloctm(tsdn, ckh->tab, NULL, true, true);
if (config_debug) if (config_debug)
memset(ckh, 0x5a, sizeof(ckh_t)); memset(ckh, JEMALLOC_FREE_JUNK, sizeof(ckh_t));
} }
size_t size_t
@ -456,7 +456,7 @@ ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data)
} }
bool bool
ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data) ckh_insert(tsdn_t *tsdn, ckh_t *ckh, const void *key, const void *data)
{ {
bool ret; bool ret;
@ -468,7 +468,7 @@ ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data)
#endif #endif
while (ckh_try_insert(ckh, &key, &data)) { while (ckh_try_insert(ckh, &key, &data)) {
if (ckh_grow(tsd, ckh)) { if (ckh_grow(tsdn, ckh)) {
ret = true; ret = true;
goto label_return; goto label_return;
} }
@ -480,7 +480,7 @@ label_return:
} }
bool bool
ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key, ckh_remove(tsdn_t *tsdn, ckh_t *ckh, const void *searchkey, void **key,
void **data) void **data)
{ {
size_t cell; size_t cell;
@ -502,7 +502,7 @@ ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key,
+ LG_CKH_BUCKET_CELLS - 2)) && ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 2)) && ckh->lg_curbuckets
> ckh->lg_minbuckets) { > ckh->lg_minbuckets) {
/* Ignore error due to OOM. */ /* Ignore error due to OOM. */
ckh_shrink(tsd, ckh); ckh_shrink(tsdn, ckh);
} }
return (false); return (false);
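
The ckh changes above, and the huge changes below, both lean on the tsd_t/tsdn_t split introduced in this merge. As used throughout the diff (tsd_tsdn(), tsdn_null(), tsdn_tsd()), a tsdn_t is best read as a nullable tsd_t: NULL when thread-specific data is unavailable, otherwise the calling thread's tsd_t. A sketch of the relationship those conversions assume:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct tsd_s {
	int	state;	/* sketch; the real tsd_t carries much more */
} tsd_t;

/* A tsdn_t is a "nullable tsd_t": same representation, NULL allowed. */
typedef struct tsdn_s tsdn_t;

static tsdn_t *
tsd_tsdn(tsd_t *tsd)
{
	return ((tsdn_t *)tsd);
}

static bool
tsdn_null(const tsdn_t *tsdn)
{
	return (tsdn == NULL);
}

static tsd_t *
tsdn_tsd(tsdn_t *tsdn)
{
	assert(!tsdn_null(tsdn));	/* callers must check first */
	return ((tsd_t *)tsdn);
}

This is why huge_palloc() below asserts !tsdn_null(tsdn) || arena != NULL: a caller without thread state must pass an arena explicitly, because arena_choose() needs a live tsd_t.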

src/ctl.c: 482 changed lines (file diff suppressed because it is too large)


@ -15,12 +15,21 @@ huge_node_get(const void *ptr)
} }
static bool static bool
huge_node_set(const void *ptr, extent_node_t *node) huge_node_set(tsdn_t *tsdn, const void *ptr, extent_node_t *node)
{ {
assert(extent_node_addr_get(node) == ptr); assert(extent_node_addr_get(node) == ptr);
assert(!extent_node_achunk_get(node)); assert(!extent_node_achunk_get(node));
return (chunk_register(ptr, node)); return (chunk_register(tsdn, ptr, node));
}
static void
huge_node_reset(tsdn_t *tsdn, const void *ptr, extent_node_t *node)
{
bool err;
err = huge_node_set(tsdn, ptr, node);
assert(!err);
} }
static void static void
@ -31,18 +40,17 @@ huge_node_unset(const void *ptr, const extent_node_t *node)
} }
void * void *
huge_malloc(tsd_t *tsd, arena_t *arena, size_t usize, bool zero, huge_malloc(tsdn_t *tsdn, arena_t *arena, size_t usize, bool zero)
tcache_t *tcache)
{ {
assert(usize == s2u(usize)); assert(usize == s2u(usize));
return (huge_palloc(tsd, arena, usize, chunksize, zero, tcache)); return (huge_palloc(tsdn, arena, usize, chunksize, zero));
} }
void * void *
huge_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment, huge_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment,
bool zero, tcache_t *tcache) bool zero)
{ {
void *ret; void *ret;
size_t ausize; size_t ausize;
@ -51,14 +59,16 @@ huge_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment,
/* Allocate one or more contiguous chunks for this request. */ /* Allocate one or more contiguous chunks for this request. */
assert(!tsdn_null(tsdn) || arena != NULL);
ausize = sa2u(usize, alignment); ausize = sa2u(usize, alignment);
if (unlikely(ausize == 0 || ausize > HUGE_MAXCLASS)) if (unlikely(ausize == 0 || ausize > HUGE_MAXCLASS))
return (NULL); return (NULL);
assert(ausize >= chunksize); assert(ausize >= chunksize);
/* Allocate an extent node with which to track the chunk. */ /* Allocate an extent node with which to track the chunk. */
node = ipallocztm(tsd, CACHELINE_CEILING(sizeof(extent_node_t)), node = ipallocztm(tsdn, CACHELINE_CEILING(sizeof(extent_node_t)),
CACHELINE, false, tcache, true, arena); CACHELINE, false, NULL, true, arena_ichoose(tsdn, arena));
if (node == NULL) if (node == NULL)
return (NULL); return (NULL);
@ -67,34 +77,35 @@ huge_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment,
* it is possible to make correct junk/zero fill decisions below. * it is possible to make correct junk/zero fill decisions below.
*/ */
is_zeroed = zero; is_zeroed = zero;
arena = arena_choose(tsd, arena); if (likely(!tsdn_null(tsdn)))
if (unlikely(arena == NULL) || (ret = arena_chunk_alloc_huge(arena, arena = arena_choose(tsdn_tsd(tsdn), arena);
usize, alignment, &is_zeroed)) == NULL) { if (unlikely(arena == NULL) || (ret = arena_chunk_alloc_huge(tsdn,
idalloctm(tsd, node, tcache, true, true); arena, usize, alignment, &is_zeroed)) == NULL) {
idalloctm(tsdn, node, NULL, true, true);
return (NULL); return (NULL);
} }
extent_node_init(node, arena, ret, usize, is_zeroed, true); extent_node_init(node, arena, ret, usize, is_zeroed, true);
if (huge_node_set(ret, node)) { if (huge_node_set(tsdn, ret, node)) {
arena_chunk_dalloc_huge(arena, ret, usize); arena_chunk_dalloc_huge(tsdn, arena, ret, usize);
idalloctm(tsd, node, tcache, true, true); idalloctm(tsdn, node, NULL, true, true);
return (NULL); return (NULL);
} }
/* Insert node into huge. */ /* Insert node into huge. */
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
ql_elm_new(node, ql_link); ql_elm_new(node, ql_link);
ql_tail_insert(&arena->huge, node, ql_link); ql_tail_insert(&arena->huge, node, ql_link);
malloc_mutex_unlock(&arena->huge_mtx); malloc_mutex_unlock(tsdn, &arena->huge_mtx);
if (zero || (config_fill && unlikely(opt_zero))) { if (zero || (config_fill && unlikely(opt_zero))) {
if (!is_zeroed) if (!is_zeroed)
memset(ret, 0, usize); memset(ret, 0, usize);
} else if (config_fill && unlikely(opt_junk_alloc)) } else if (config_fill && unlikely(opt_junk_alloc))
memset(ret, 0xa5, usize); memset(ret, JEMALLOC_ALLOC_JUNK, usize);
arena_decay_tick(tsd, arena); arena_decay_tick(tsdn, arena);
return (ret); return (ret);
} }
@ -103,7 +114,7 @@ huge_palloc(tsd_t *tsd, arena_t *arena, size_t usize, size_t alignment,
#define huge_dalloc_junk JEMALLOC_N(huge_dalloc_junk_impl) #define huge_dalloc_junk JEMALLOC_N(huge_dalloc_junk_impl)
#endif #endif
static void static void
huge_dalloc_junk(void *ptr, size_t usize) huge_dalloc_junk(tsdn_t *tsdn, void *ptr, size_t usize)
{ {
if (config_fill && have_dss && unlikely(opt_junk_free)) { if (config_fill && have_dss && unlikely(opt_junk_free)) {
@ -111,8 +122,8 @@ huge_dalloc_junk(void *ptr, size_t usize)
* Only bother junk filling if the chunk isn't about to be * Only bother junk filling if the chunk isn't about to be
* unmapped. * unmapped.
*/ */
if (!config_munmap || (have_dss && chunk_in_dss(ptr))) if (!config_munmap || (have_dss && chunk_in_dss(tsdn, ptr)))
memset(ptr, 0x5a, usize); memset(ptr, JEMALLOC_FREE_JUNK, usize);
} }
} }
#ifdef JEMALLOC_JET #ifdef JEMALLOC_JET
@ -122,8 +133,8 @@ huge_dalloc_junk_t *huge_dalloc_junk = JEMALLOC_N(huge_dalloc_junk_impl);
#endif #endif
static void static void
huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize_min, huge_ralloc_no_move_similar(tsdn_t *tsdn, void *ptr, size_t oldsize,
size_t usize_max, bool zero) size_t usize_min, size_t usize_max, bool zero)
{ {
size_t usize, usize_next; size_t usize, usize_next;
extent_node_t *node; extent_node_t *node;
@ -147,24 +158,28 @@ huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize_min,
if (oldsize > usize) { if (oldsize > usize) {
size_t sdiff = oldsize - usize; size_t sdiff = oldsize - usize;
if (config_fill && unlikely(opt_junk_free)) { if (config_fill && unlikely(opt_junk_free)) {
memset((void *)((uintptr_t)ptr + usize), 0x5a, sdiff); memset((void *)((uintptr_t)ptr + usize),
JEMALLOC_FREE_JUNK, sdiff);
post_zeroed = false; post_zeroed = false;
} else { } else {
post_zeroed = !chunk_purge_wrapper(arena, &chunk_hooks, post_zeroed = !chunk_purge_wrapper(tsdn, arena,
ptr, CHUNK_CEILING(oldsize), usize, sdiff); &chunk_hooks, ptr, CHUNK_CEILING(oldsize), usize,
sdiff);
} }
} else } else
post_zeroed = pre_zeroed; post_zeroed = pre_zeroed;
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
/* Update the size of the huge allocation. */ /* Update the size of the huge allocation. */
huge_node_unset(ptr, node);
assert(extent_node_size_get(node) != usize); assert(extent_node_size_get(node) != usize);
extent_node_size_set(node, usize); extent_node_size_set(node, usize);
huge_node_reset(tsdn, ptr, node);
/* Update zeroed. */ /* Update zeroed. */
extent_node_zeroed_set(node, post_zeroed); extent_node_zeroed_set(node, post_zeroed);
malloc_mutex_unlock(&arena->huge_mtx); malloc_mutex_unlock(tsdn, &arena->huge_mtx);
arena_chunk_ralloc_huge_similar(arena, ptr, oldsize, usize); arena_chunk_ralloc_huge_similar(tsdn, arena, ptr, oldsize, usize);
/* Fill if necessary (growing). */ /* Fill if necessary (growing). */
if (oldsize < usize) { if (oldsize < usize) {
@ -174,14 +189,15 @@ huge_ralloc_no_move_similar(void *ptr, size_t oldsize, size_t usize_min,
usize - oldsize); usize - oldsize);
} }
} else if (config_fill && unlikely(opt_junk_alloc)) { } else if (config_fill && unlikely(opt_junk_alloc)) {
memset((void *)((uintptr_t)ptr + oldsize), 0xa5, usize - memset((void *)((uintptr_t)ptr + oldsize),
oldsize); JEMALLOC_ALLOC_JUNK, usize - oldsize);
} }
} }
} }
static bool static bool
huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize) huge_ralloc_no_move_shrink(tsdn_t *tsdn, void *ptr, size_t oldsize,
size_t usize)
{ {
extent_node_t *node; extent_node_t *node;
arena_t *arena; arena_t *arena;
@ -192,7 +208,7 @@ huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize)
node = huge_node_get(ptr); node = huge_node_get(ptr);
arena = extent_node_arena_get(node); arena = extent_node_arena_get(node);
pre_zeroed = extent_node_zeroed_get(node); pre_zeroed = extent_node_zeroed_get(node);
chunk_hooks = chunk_hooks_get(arena); chunk_hooks = chunk_hooks_get(tsdn, arena);
assert(oldsize > usize); assert(oldsize > usize);
@ -205,42 +221,45 @@ huge_ralloc_no_move_shrink(void *ptr, size_t oldsize, size_t usize)
if (oldsize > usize) { if (oldsize > usize) {
size_t sdiff = oldsize - usize; size_t sdiff = oldsize - usize;
if (config_fill && unlikely(opt_junk_free)) { if (config_fill && unlikely(opt_junk_free)) {
huge_dalloc_junk((void *)((uintptr_t)ptr + usize), huge_dalloc_junk(tsdn, (void *)((uintptr_t)ptr + usize),
sdiff); sdiff);
post_zeroed = false; post_zeroed = false;
} else { } else {
post_zeroed = !chunk_purge_wrapper(arena, &chunk_hooks, post_zeroed = !chunk_purge_wrapper(tsdn, arena,
CHUNK_ADDR2BASE((uintptr_t)ptr + usize), &chunk_hooks, CHUNK_ADDR2BASE((uintptr_t)ptr +
CHUNK_CEILING(oldsize), usize), CHUNK_CEILING(oldsize),
CHUNK_ADDR2OFFSET((uintptr_t)ptr + usize), sdiff); CHUNK_ADDR2OFFSET((uintptr_t)ptr + usize), sdiff);
} }
} else } else
post_zeroed = pre_zeroed; post_zeroed = pre_zeroed;
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
/* Update the size of the huge allocation. */ /* Update the size of the huge allocation. */
huge_node_unset(ptr, node);
extent_node_size_set(node, usize); extent_node_size_set(node, usize);
huge_node_reset(tsdn, ptr, node);
/* Update zeroed. */ /* Update zeroed. */
extent_node_zeroed_set(node, post_zeroed); extent_node_zeroed_set(node, post_zeroed);
malloc_mutex_unlock(&arena->huge_mtx); malloc_mutex_unlock(tsdn, &arena->huge_mtx);
/* Zap the excess chunks. */ /* Zap the excess chunks. */
arena_chunk_ralloc_huge_shrink(arena, ptr, oldsize, usize); arena_chunk_ralloc_huge_shrink(tsdn, arena, ptr, oldsize, usize);
return (false); return (false);
} }
static bool static bool
huge_ralloc_no_move_expand(void *ptr, size_t oldsize, size_t usize, bool zero) { huge_ralloc_no_move_expand(tsdn_t *tsdn, void *ptr, size_t oldsize,
size_t usize, bool zero) {
extent_node_t *node; extent_node_t *node;
arena_t *arena; arena_t *arena;
bool is_zeroed_subchunk, is_zeroed_chunk; bool is_zeroed_subchunk, is_zeroed_chunk;
node = huge_node_get(ptr); node = huge_node_get(ptr);
arena = extent_node_arena_get(node); arena = extent_node_arena_get(node);
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
is_zeroed_subchunk = extent_node_zeroed_get(node); is_zeroed_subchunk = extent_node_zeroed_get(node);
malloc_mutex_unlock(&arena->huge_mtx); malloc_mutex_unlock(tsdn, &arena->huge_mtx);
/* /*
* Copy zero into is_zeroed_chunk and pass the copy to chunk_alloc(), so * Copy zero into is_zeroed_chunk and pass the copy to chunk_alloc(), so
@ -248,14 +267,16 @@ huge_ralloc_no_move_expand(void *ptr, size_t oldsize, size_t usize, bool zero) {
*/ */
is_zeroed_chunk = zero; is_zeroed_chunk = zero;
if (arena_chunk_ralloc_huge_expand(arena, ptr, oldsize, usize, if (arena_chunk_ralloc_huge_expand(tsdn, arena, ptr, oldsize, usize,
&is_zeroed_chunk)) &is_zeroed_chunk))
return (true); return (true);
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
/* Update the size of the huge allocation. */ /* Update the size of the huge allocation. */
huge_node_unset(ptr, node);
extent_node_size_set(node, usize); extent_node_size_set(node, usize);
malloc_mutex_unlock(&arena->huge_mtx); huge_node_reset(tsdn, ptr, node);
malloc_mutex_unlock(tsdn, &arena->huge_mtx);
if (zero || (config_fill && unlikely(opt_zero))) { if (zero || (config_fill && unlikely(opt_zero))) {
if (!is_zeroed_subchunk) { if (!is_zeroed_subchunk) {
@ -268,15 +289,15 @@ huge_ralloc_no_move_expand(void *ptr, size_t oldsize, size_t usize, bool zero) {
CHUNK_CEILING(oldsize)); CHUNK_CEILING(oldsize));
} }
} else if (config_fill && unlikely(opt_junk_alloc)) { } else if (config_fill && unlikely(opt_junk_alloc)) {
memset((void *)((uintptr_t)ptr + oldsize), 0xa5, usize - memset((void *)((uintptr_t)ptr + oldsize), JEMALLOC_ALLOC_JUNK,
oldsize); usize - oldsize);
} }
return (false); return (false);
} }
bool bool
huge_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, size_t usize_min, huge_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t usize_min,
size_t usize_max, bool zero) size_t usize_max, bool zero)
{ {
@ -290,16 +311,16 @@ huge_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, size_t usize_min,
if (CHUNK_CEILING(usize_max) > CHUNK_CEILING(oldsize)) { if (CHUNK_CEILING(usize_max) > CHUNK_CEILING(oldsize)) {
/* Attempt to expand the allocation in-place. */ /* Attempt to expand the allocation in-place. */
if (!huge_ralloc_no_move_expand(ptr, oldsize, usize_max, if (!huge_ralloc_no_move_expand(tsdn, ptr, oldsize, usize_max,
zero)) { zero)) {
arena_decay_tick(tsd, huge_aalloc(ptr)); arena_decay_tick(tsdn, huge_aalloc(ptr));
return (false); return (false);
} }
/* Try again, this time with usize_min. */ /* Try again, this time with usize_min. */
if (usize_min < usize_max && CHUNK_CEILING(usize_min) > if (usize_min < usize_max && CHUNK_CEILING(usize_min) >
CHUNK_CEILING(oldsize) && huge_ralloc_no_move_expand(ptr, CHUNK_CEILING(oldsize) && huge_ralloc_no_move_expand(tsdn,
oldsize, usize_min, zero)) { ptr, oldsize, usize_min, zero)) {
arena_decay_tick(tsd, huge_aalloc(ptr)); arena_decay_tick(tsdn, huge_aalloc(ptr));
return (false); return (false);
} }
} }
@ -310,16 +331,17 @@ huge_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, size_t usize_min,
*/ */
if (CHUNK_CEILING(oldsize) >= CHUNK_CEILING(usize_min) if (CHUNK_CEILING(oldsize) >= CHUNK_CEILING(usize_min)
&& CHUNK_CEILING(oldsize) <= CHUNK_CEILING(usize_max)) { && CHUNK_CEILING(oldsize) <= CHUNK_CEILING(usize_max)) {
huge_ralloc_no_move_similar(ptr, oldsize, usize_min, usize_max, huge_ralloc_no_move_similar(tsdn, ptr, oldsize, usize_min,
zero); usize_max, zero);
arena_decay_tick(tsd, huge_aalloc(ptr)); arena_decay_tick(tsdn, huge_aalloc(ptr));
return (false); return (false);
} }
/* Attempt to shrink the allocation in-place. */ /* Attempt to shrink the allocation in-place. */
if (CHUNK_CEILING(oldsize) > CHUNK_CEILING(usize_max)) { if (CHUNK_CEILING(oldsize) > CHUNK_CEILING(usize_max)) {
if (!huge_ralloc_no_move_shrink(ptr, oldsize, usize_max)) { if (!huge_ralloc_no_move_shrink(tsdn, ptr, oldsize,
arena_decay_tick(tsd, huge_aalloc(ptr)); usize_max)) {
arena_decay_tick(tsdn, huge_aalloc(ptr));
return (false); return (false);
} }
} }
@ -327,18 +349,18 @@ huge_ralloc_no_move(tsd_t *tsd, void *ptr, size_t oldsize, size_t usize_min,
} }
static void * static void *
huge_ralloc_move_helper(tsd_t *tsd, arena_t *arena, size_t usize, huge_ralloc_move_helper(tsdn_t *tsdn, arena_t *arena, size_t usize,
size_t alignment, bool zero, tcache_t *tcache) size_t alignment, bool zero)
{ {
if (alignment <= chunksize) if (alignment <= chunksize)
return (huge_malloc(tsd, arena, usize, zero, tcache)); return (huge_malloc(tsdn, arena, usize, zero));
return (huge_palloc(tsd, arena, usize, alignment, zero, tcache)); return (huge_palloc(tsdn, arena, usize, alignment, zero));
} }
void * void *
huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t usize, huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize,
size_t alignment, bool zero, tcache_t *tcache) size_t usize, size_t alignment, bool zero, tcache_t *tcache)
{ {
void *ret; void *ret;
size_t copysize; size_t copysize;
@ -347,7 +369,8 @@ huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t usize,
assert(usize > 0 && usize <= HUGE_MAXCLASS); assert(usize > 0 && usize <= HUGE_MAXCLASS);
/* Try to avoid moving the allocation. */ /* Try to avoid moving the allocation. */
if (!huge_ralloc_no_move(tsd, ptr, oldsize, usize, usize, zero)) if (!huge_ralloc_no_move(tsd_tsdn(tsd), ptr, oldsize, usize, usize,
zero))
return (ptr); return (ptr);
/* /*
@ -355,19 +378,19 @@ huge_ralloc(tsd_t *tsd, arena_t *arena, void *ptr, size_t oldsize, size_t usize,
* different size class. In that case, fall back to allocating new * different size class. In that case, fall back to allocating new
* space and copying. * space and copying.
*/ */
ret = huge_ralloc_move_helper(tsd, arena, usize, alignment, zero, ret = huge_ralloc_move_helper(tsd_tsdn(tsd), arena, usize, alignment,
tcache); zero);
if (ret == NULL) if (ret == NULL)
return (NULL); return (NULL);
copysize = (usize < oldsize) ? usize : oldsize; copysize = (usize < oldsize) ? usize : oldsize;
memcpy(ret, ptr, copysize); memcpy(ret, ptr, copysize);
isqalloc(tsd, ptr, oldsize, tcache); isqalloc(tsd, ptr, oldsize, tcache, true);
return (ret); return (ret);
} }
void void
huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache) huge_dalloc(tsdn_t *tsdn, void *ptr)
{ {
extent_node_t *node; extent_node_t *node;
arena_t *arena; arena_t *arena;
@ -375,17 +398,17 @@ huge_dalloc(tsd_t *tsd, void *ptr, tcache_t *tcache)
node = huge_node_get(ptr); node = huge_node_get(ptr);
arena = extent_node_arena_get(node); arena = extent_node_arena_get(node);
huge_node_unset(ptr, node); huge_node_unset(ptr, node);
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
ql_remove(&arena->huge, node, ql_link); ql_remove(&arena->huge, node, ql_link);
malloc_mutex_unlock(&arena->huge_mtx); malloc_mutex_unlock(tsdn, &arena->huge_mtx);
huge_dalloc_junk(extent_node_addr_get(node), huge_dalloc_junk(tsdn, extent_node_addr_get(node),
extent_node_size_get(node)); extent_node_size_get(node));
arena_chunk_dalloc_huge(extent_node_arena_get(node), arena_chunk_dalloc_huge(tsdn, extent_node_arena_get(node),
extent_node_addr_get(node), extent_node_size_get(node)); extent_node_addr_get(node), extent_node_size_get(node));
idalloctm(tsd, node, tcache, true, true); idalloctm(tsdn, node, NULL, true, true);
arena_decay_tick(tsd, arena); arena_decay_tick(tsdn, arena);
} }
arena_t * arena_t *
@ -396,7 +419,7 @@ huge_aalloc(const void *ptr)
} }
size_t size_t
huge_salloc(const void *ptr) huge_salloc(tsdn_t *tsdn, const void *ptr)
{ {
size_t size; size_t size;
extent_node_t *node; extent_node_t *node;
@ -404,15 +427,15 @@ huge_salloc(const void *ptr)
node = huge_node_get(ptr); node = huge_node_get(ptr);
arena = extent_node_arena_get(node); arena = extent_node_arena_get(node);
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
size = extent_node_size_get(node); size = extent_node_size_get(node);
malloc_mutex_unlock(&arena->huge_mtx); malloc_mutex_unlock(tsdn, &arena->huge_mtx);
return (size); return (size);
} }
prof_tctx_t * prof_tctx_t *
huge_prof_tctx_get(const void *ptr) huge_prof_tctx_get(tsdn_t *tsdn, const void *ptr)
{ {
prof_tctx_t *tctx; prof_tctx_t *tctx;
extent_node_t *node; extent_node_t *node;
@ -420,29 +443,29 @@ huge_prof_tctx_get(const void *ptr)
node = huge_node_get(ptr); node = huge_node_get(ptr);
arena = extent_node_arena_get(node); arena = extent_node_arena_get(node);
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
tctx = extent_node_prof_tctx_get(node); tctx = extent_node_prof_tctx_get(node);
malloc_mutex_unlock(&arena->huge_mtx); malloc_mutex_unlock(tsdn, &arena->huge_mtx);
return (tctx); return (tctx);
} }
void void
huge_prof_tctx_set(const void *ptr, prof_tctx_t *tctx) huge_prof_tctx_set(tsdn_t *tsdn, const void *ptr, prof_tctx_t *tctx)
{ {
extent_node_t *node; extent_node_t *node;
arena_t *arena; arena_t *arena;
node = huge_node_get(ptr); node = huge_node_get(ptr);
arena = extent_node_arena_get(node); arena = extent_node_arena_get(node);
malloc_mutex_lock(&arena->huge_mtx); malloc_mutex_lock(tsdn, &arena->huge_mtx);
extent_node_prof_tctx_set(node, tctx); extent_node_prof_tctx_set(node, tctx);
malloc_mutex_unlock(&arena->huge_mtx); malloc_mutex_unlock(tsdn, &arena->huge_mtx);
} }
void void
huge_prof_tctx_reset(const void *ptr) huge_prof_tctx_reset(tsdn_t *tsdn, const void *ptr)
{ {
huge_prof_tctx_set(ptr, (prof_tctx_t *)(uintptr_t)1U); huge_prof_tctx_set(tsdn, ptr, (prof_tctx_t *)(uintptr_t)1U);
} }
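Throughout this file, tsd_t is replaced by tsdn_t: a nullable handle that lets these paths run even where thread-specific data is unavailable (early bootstrap, a child after fork). A minimal sketch of the wrapper pattern, with names assumed to match jemalloc's internal headers rather than reproducing the exact definitions:

/*
 * Sketch: tsdn_t is a tsd_t that may be NULL. Full-TSD callers convert
 * with tsd_tsdn(); NULL-tolerant code checks tsdn_null() before
 * converting back.
 */
typedef struct tsdn_s tsdn_t;
#define	TSDN_NULL	((tsdn_t *)0)

JEMALLOC_INLINE tsdn_t *
tsd_tsdn(tsd_t *tsd)
{
	return ((tsdn_t *)tsd);
}

JEMALLOC_INLINE bool
tsdn_null(const tsdn_t *tsdn)
{
	return (tsdn == NULL);
}

JEMALLOC_INLINE tsd_t *
tsdn_tsd(tsdn_t *tsdn)
{
	assert(!tsdn_null(tsdn));
	return ((tsd_t *)tsdn);
}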

File diff suppressed because it is too large


@ -69,7 +69,7 @@ JEMALLOC_EXPORT int _pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,
#endif #endif
bool bool
malloc_mutex_init(malloc_mutex_t *mutex) malloc_mutex_init(malloc_mutex_t *mutex, const char *name, witness_rank_t rank)
{ {
#ifdef _WIN32 #ifdef _WIN32
@ -103,31 +103,34 @@ malloc_mutex_init(malloc_mutex_t *mutex)
} }
pthread_mutexattr_destroy(&attr); pthread_mutexattr_destroy(&attr);
#endif #endif
if (config_debug)
witness_init(&mutex->witness, name, rank, NULL);
return (false); return (false);
} }
void void
malloc_mutex_prefork(malloc_mutex_t *mutex) malloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex)
{ {
malloc_mutex_lock(mutex); malloc_mutex_lock(tsdn, mutex);
} }
void void
malloc_mutex_postfork_parent(malloc_mutex_t *mutex) malloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex)
{ {
malloc_mutex_unlock(mutex); malloc_mutex_unlock(tsdn, mutex);
} }
void void
malloc_mutex_postfork_child(malloc_mutex_t *mutex) malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex)
{ {
#ifdef JEMALLOC_MUTEX_INIT_CB #ifdef JEMALLOC_MUTEX_INIT_CB
malloc_mutex_unlock(mutex); malloc_mutex_unlock(tsdn, mutex);
#else #else
if (malloc_mutex_init(mutex)) { if (malloc_mutex_init(mutex, mutex->witness.name,
mutex->witness.rank)) {
malloc_printf("<jemalloc>: Error re-initializing mutex in " malloc_printf("<jemalloc>: Error re-initializing mutex in "
"child\n"); "child\n");
if (opt_abort) if (opt_abort)
@ -137,7 +140,7 @@ malloc_mutex_postfork_child(malloc_mutex_t *mutex)
} }
bool bool
mutex_boot(void) malloc_mutex_boot(void)
{ {
#ifdef JEMALLOC_MUTEX_INIT_CB #ifdef JEMALLOC_MUTEX_INIT_CB
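Mutex initialization now records a name and a witness rank, and debug builds enforce the rank on every acquisition. A hypothetical call site under the new three-argument signature; the lock and rank constant are illustrative, not necessarily what jemalloc uses here:

/* Hypothetical boot-time initialization with the new signature. */
static malloc_mutex_t	huge_mtx;

static bool
huge_mtx_boot(void)
{
	return (malloc_mutex_init(&huge_mtx, "huge",
	    WITNESS_RANK_ARENA_HUGE));
}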


@ -99,7 +99,7 @@ nstime_divide(const nstime_t *time, const nstime_t *divisor)
#ifdef JEMALLOC_JET #ifdef JEMALLOC_JET
#undef nstime_update #undef nstime_update
#define nstime_update JEMALLOC_N(nstime_update_impl) #define nstime_update JEMALLOC_N(n_nstime_update)
#endif #endif
bool bool
nstime_update(nstime_t *time) nstime_update(nstime_t *time)
@ -144,5 +144,5 @@ nstime_update(nstime_t *time)
#ifdef JEMALLOC_JET #ifdef JEMALLOC_JET
#undef nstime_update #undef nstime_update
#define nstime_update JEMALLOC_N(nstime_update) #define nstime_update JEMALLOC_N(nstime_update)
nstime_update_t *nstime_update = JEMALLOC_N(nstime_update_impl); nstime_update_t *nstime_update = JEMALLOC_N(n_nstime_update);
#endif #endif
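The JEMALLOC_JET wrapping above compiles the real function under an alternate name and re-exports it through a writable function pointer, so unit tests can interpose a mock. A sketch of test-side use under that assumption; the mock body and helper are illustrative only:

/* Swap in a mock clock for the duration of a test body. */
static bool
nstime_update_mock(nstime_t *time)
{
	return (true);	/* Pretend the clock failed; time is unchanged. */
}

static void
with_mock_clock(void (*body)(void))
{
	nstime_update_t *orig = nstime_update;

	nstime_update = nstime_update_mock;
	body();
	nstime_update = orig;
}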


@ -1,29 +1,49 @@
#define JEMALLOC_PAGES_C_ #define JEMALLOC_PAGES_C_
#include "jemalloc/internal/jemalloc_internal.h" #include "jemalloc/internal/jemalloc_internal.h"
#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT
#include <sys/sysctl.h>
#endif
/******************************************************************************/
/* Data. */
#ifndef _WIN32
# define PAGES_PROT_COMMIT (PROT_READ | PROT_WRITE)
# define PAGES_PROT_DECOMMIT (PROT_NONE)
static int mmap_flags;
#endif
static bool os_overcommits;
/******************************************************************************/ /******************************************************************************/
void * void *
pages_map(void *addr, size_t size) pages_map(void *addr, size_t size, bool *commit)
{ {
void *ret; void *ret;
assert(size != 0); assert(size != 0);
if (os_overcommits)
*commit = true;
#ifdef _WIN32 #ifdef _WIN32
/* /*
* If VirtualAlloc can't allocate at the given address when one is * If VirtualAlloc can't allocate at the given address when one is
* given, it fails and returns NULL. * given, it fails and returns NULL.
*/ */
ret = VirtualAlloc(addr, size, MEM_COMMIT | MEM_RESERVE, ret = VirtualAlloc(addr, size, MEM_RESERVE | (*commit ? MEM_COMMIT : 0),
PAGE_READWRITE); PAGE_READWRITE);
#else #else
/* /*
* We don't use MAP_FIXED here, because it can cause the *replacement* * We don't use MAP_FIXED here, because it can cause the *replacement*
* of existing mappings, and we only want to create new mappings. * of existing mappings, and we only want to create new mappings.
*/ */
ret = mmap(addr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANON, {
-1, 0); int prot = *commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;
ret = mmap(addr, size, prot, mmap_flags, -1, 0);
}
assert(ret != NULL); assert(ret != NULL);
if (ret == MAP_FAILED) if (ret == MAP_FAILED)
@ -67,7 +87,8 @@ pages_unmap(void *addr, size_t size)
} }
void * void *
pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size) pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size,
bool *commit)
{ {
void *ret = (void *)((uintptr_t)addr + leadsize); void *ret = (void *)((uintptr_t)addr + leadsize);
@ -77,7 +98,7 @@ pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size)
void *new_addr; void *new_addr;
pages_unmap(addr, alloc_size); pages_unmap(addr, alloc_size);
new_addr = pages_map(ret, size); new_addr = pages_map(ret, size, commit);
if (new_addr == ret) if (new_addr == ret)
return (ret); return (ret);
if (new_addr) if (new_addr)
@ -101,17 +122,17 @@ static bool
pages_commit_impl(void *addr, size_t size, bool commit) pages_commit_impl(void *addr, size_t size, bool commit)
{ {
#ifndef _WIN32 if (os_overcommits)
/* return (true);
* The following decommit/commit implementation is functional, but
* always disabled because it doesn't add value beyond improved #ifdef _WIN32
* debugging (at the cost of extra system calls) on systems that return (commit ? (addr != VirtualAlloc(addr, size, MEM_COMMIT,
* overcommit. PAGE_READWRITE)) : (!VirtualFree(addr, size, MEM_DECOMMIT)));
*/ #else
if (false) { {
int prot = commit ? (PROT_READ | PROT_WRITE) : PROT_NONE; int prot = commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;
void *result = mmap(addr, size, prot, MAP_PRIVATE | MAP_ANON | void *result = mmap(addr, size, prot, mmap_flags | MAP_FIXED,
MAP_FIXED, -1, 0); -1, 0);
if (result == MAP_FAILED) if (result == MAP_FAILED)
return (true); return (true);
if (result != addr) { if (result != addr) {
@ -125,7 +146,6 @@ pages_commit_impl(void *addr, size_t size, bool commit)
return (false); return (false);
} }
#endif #endif
return (true);
} }
bool bool
@ -171,3 +191,63 @@ pages_purge(void *addr, size_t size)
return (unzeroed); return (unzeroed);
} }
#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT
static bool
os_overcommits_sysctl(void)
{
int vm_overcommit;
size_t sz;
sz = sizeof(vm_overcommit);
if (sysctlbyname("vm.overcommit", &vm_overcommit, &sz, NULL, 0) != 0)
return (false); /* Error. */
return ((vm_overcommit & 0x3) == 0);
}
#endif
#ifdef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY
static bool
os_overcommits_proc(void)
{
int fd;
char buf[1];
ssize_t nread;
fd = open("/proc/sys/vm/overcommit_memory", O_RDONLY);
if (fd == -1)
return (false); /* Error. */
nread = read(fd, &buf, sizeof(buf));
if (nread < 1)
return (false); /* Error. */
/*
* /proc/sys/vm/overcommit_memory meanings:
* 0: Heuristic overcommit.
* 1: Always overcommit.
* 2: Never overcommit.
*/
return (buf[0] == '0' || buf[0] == '1');
}
#endif
void
pages_boot(void)
{
#ifndef _WIN32
mmap_flags = MAP_PRIVATE | MAP_ANON;
#endif
#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT
os_overcommits = os_overcommits_sysctl();
#elif defined(JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY)
os_overcommits = os_overcommits_proc();
# ifdef MAP_NORESERVE
if (os_overcommits)
mmap_flags |= MAP_NORESERVE;
# endif
#else
os_overcommits = false;
#endif
}
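pages_map() now takes a commit in/out flag: on overcommitting systems the pages are always committed, otherwise the caller may receive reserved-but-uncommitted (PROT_NONE) memory and must commit it before use. A sketch of the calling convention, assuming pages_commit() keeps its true-on-error return and chunksize names the mapping size:

/* Hypothetical caller honoring the new *commit contract. */
bool commit = false;	/* Prefer reserved-but-uncommitted pages. */
void *addr;

addr = pages_map(NULL, chunksize, &commit);
if (addr != NULL && !commit) {
	/* Commit lazily, immediately before first use. */
	if (pages_commit(addr, chunksize)) {
		pages_unmap(addr, chunksize);
		addr = NULL;	/* Commit failed. */
	}
}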

File diff suppressed because it is too large


@ -13,24 +13,22 @@
/* Function prototypes for non-inline static functions. */ /* Function prototypes for non-inline static functions. */
static quarantine_t *quarantine_grow(tsd_t *tsd, quarantine_t *quarantine); static quarantine_t *quarantine_grow(tsd_t *tsd, quarantine_t *quarantine);
static void quarantine_drain_one(tsd_t *tsd, quarantine_t *quarantine); static void quarantine_drain_one(tsdn_t *tsdn, quarantine_t *quarantine);
static void quarantine_drain(tsd_t *tsd, quarantine_t *quarantine, static void quarantine_drain(tsdn_t *tsdn, quarantine_t *quarantine,
size_t upper_bound); size_t upper_bound);
/******************************************************************************/ /******************************************************************************/
static quarantine_t * static quarantine_t *
quarantine_init(tsd_t *tsd, size_t lg_maxobjs) quarantine_init(tsdn_t *tsdn, size_t lg_maxobjs)
{ {
quarantine_t *quarantine; quarantine_t *quarantine;
size_t size; size_t size;
assert(tsd_nominal(tsd));
size = offsetof(quarantine_t, objs) + ((ZU(1) << lg_maxobjs) * size = offsetof(quarantine_t, objs) + ((ZU(1) << lg_maxobjs) *
sizeof(quarantine_obj_t)); sizeof(quarantine_obj_t));
quarantine = (quarantine_t *)iallocztm(tsd, size, size2index(size), quarantine = (quarantine_t *)iallocztm(tsdn, size, size2index(size),
false, tcache_get(tsd, true), true, NULL, true); false, NULL, true, arena_get(TSDN_NULL, 0, true), true);
if (quarantine == NULL) if (quarantine == NULL)
return (NULL); return (NULL);
quarantine->curbytes = 0; quarantine->curbytes = 0;
@ -49,7 +47,7 @@ quarantine_alloc_hook_work(tsd_t *tsd)
if (!tsd_nominal(tsd)) if (!tsd_nominal(tsd))
return; return;
quarantine = quarantine_init(tsd, LG_MAXOBJS_INIT); quarantine = quarantine_init(tsd_tsdn(tsd), LG_MAXOBJS_INIT);
/* /*
* Check again whether quarantine has been initialized, because * Check again whether quarantine has been initialized, because
* quarantine_init() may have triggered recursive initialization. * quarantine_init() may have triggered recursive initialization.
@ -57,7 +55,7 @@ quarantine_alloc_hook_work(tsd_t *tsd)
if (tsd_quarantine_get(tsd) == NULL) if (tsd_quarantine_get(tsd) == NULL)
tsd_quarantine_set(tsd, quarantine); tsd_quarantine_set(tsd, quarantine);
else else
idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true); idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true);
} }
static quarantine_t * static quarantine_t *
@ -65,9 +63,9 @@ quarantine_grow(tsd_t *tsd, quarantine_t *quarantine)
{ {
quarantine_t *ret; quarantine_t *ret;
ret = quarantine_init(tsd, quarantine->lg_maxobjs + 1); ret = quarantine_init(tsd_tsdn(tsd), quarantine->lg_maxobjs + 1);
if (ret == NULL) { if (ret == NULL) {
quarantine_drain_one(tsd, quarantine); quarantine_drain_one(tsd_tsdn(tsd), quarantine);
return (quarantine); return (quarantine);
} }
@ -89,18 +87,18 @@ quarantine_grow(tsd_t *tsd, quarantine_t *quarantine)
memcpy(&ret->objs[ncopy_a], quarantine->objs, ncopy_b * memcpy(&ret->objs[ncopy_a], quarantine->objs, ncopy_b *
sizeof(quarantine_obj_t)); sizeof(quarantine_obj_t));
} }
idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true); idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true);
tsd_quarantine_set(tsd, ret); tsd_quarantine_set(tsd, ret);
return (ret); return (ret);
} }
static void static void
quarantine_drain_one(tsd_t *tsd, quarantine_t *quarantine) quarantine_drain_one(tsdn_t *tsdn, quarantine_t *quarantine)
{ {
quarantine_obj_t *obj = &quarantine->objs[quarantine->first]; quarantine_obj_t *obj = &quarantine->objs[quarantine->first];
assert(obj->usize == isalloc(obj->ptr, config_prof)); assert(obj->usize == isalloc(tsdn, obj->ptr, config_prof));
idalloctm(tsd, obj->ptr, NULL, false, true); idalloctm(tsdn, obj->ptr, NULL, false, true);
quarantine->curbytes -= obj->usize; quarantine->curbytes -= obj->usize;
quarantine->curobjs--; quarantine->curobjs--;
quarantine->first = (quarantine->first + 1) & ((ZU(1) << quarantine->first = (quarantine->first + 1) & ((ZU(1) <<
@ -108,24 +106,24 @@ quarantine_drain_one(tsd_t *tsd, quarantine_t *quarantine)
} }
static void static void
quarantine_drain(tsd_t *tsd, quarantine_t *quarantine, size_t upper_bound) quarantine_drain(tsdn_t *tsdn, quarantine_t *quarantine, size_t upper_bound)
{ {
while (quarantine->curbytes > upper_bound && quarantine->curobjs > 0) while (quarantine->curbytes > upper_bound && quarantine->curobjs > 0)
quarantine_drain_one(tsd, quarantine); quarantine_drain_one(tsdn, quarantine);
} }
void void
quarantine(tsd_t *tsd, void *ptr) quarantine(tsd_t *tsd, void *ptr)
{ {
quarantine_t *quarantine; quarantine_t *quarantine;
size_t usize = isalloc(ptr, config_prof); size_t usize = isalloc(tsd_tsdn(tsd), ptr, config_prof);
cassert(config_fill); cassert(config_fill);
assert(opt_quarantine); assert(opt_quarantine);
if ((quarantine = tsd_quarantine_get(tsd)) == NULL) { if ((quarantine = tsd_quarantine_get(tsd)) == NULL) {
idalloctm(tsd, ptr, NULL, false, true); idalloctm(tsd_tsdn(tsd), ptr, NULL, false, true);
return; return;
} }
/* /*
@ -135,7 +133,7 @@ quarantine(tsd_t *tsd, void *ptr)
if (quarantine->curbytes + usize > opt_quarantine) { if (quarantine->curbytes + usize > opt_quarantine) {
size_t upper_bound = (opt_quarantine >= usize) ? opt_quarantine size_t upper_bound = (opt_quarantine >= usize) ? opt_quarantine
- usize : 0; - usize : 0;
quarantine_drain(tsd, quarantine, upper_bound); quarantine_drain(tsd_tsdn(tsd), quarantine, upper_bound);
} }
/* Grow the quarantine ring buffer if it's full. */ /* Grow the quarantine ring buffer if it's full. */
if (quarantine->curobjs == (ZU(1) << quarantine->lg_maxobjs)) if (quarantine->curobjs == (ZU(1) << quarantine->lg_maxobjs))
@ -160,11 +158,11 @@ quarantine(tsd_t *tsd, void *ptr)
&& usize <= SMALL_MAXCLASS) && usize <= SMALL_MAXCLASS)
arena_quarantine_junk_small(ptr, usize); arena_quarantine_junk_small(ptr, usize);
else else
memset(ptr, 0x5a, usize); memset(ptr, JEMALLOC_FREE_JUNK, usize);
} }
} else { } else {
assert(quarantine->curbytes == 0); assert(quarantine->curbytes == 0);
idalloctm(tsd, ptr, NULL, false, true); idalloctm(tsd_tsdn(tsd), ptr, NULL, false, true);
} }
} }
@ -178,8 +176,8 @@ quarantine_cleanup(tsd_t *tsd)
quarantine = tsd_quarantine_get(tsd); quarantine = tsd_quarantine_get(tsd);
if (quarantine != NULL) { if (quarantine != NULL) {
quarantine_drain(tsd, quarantine, 0); quarantine_drain(tsd_tsdn(tsd), quarantine, 0);
idalloctm(tsd, quarantine, tcache_get(tsd, false), true, true); idalloctm(tsd_tsdn(tsd), quarantine, NULL, true, true);
tsd_quarantine_set(tsd, NULL); tsd_quarantine_set(tsd, NULL);
} }
} }
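The junk-fill change replaces the bare 0x5a with a named constant. The central definitions are presumably along these lines; the free value is inferred from the literal it replaces, and the alloc value is an assumption:

#define	JEMALLOC_ALLOC_JUNK	((uint8_t)0xa5)
#define	JEMALLOC_FREE_JUNK	((uint8_t)0x5a)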


@ -15,6 +15,8 @@ rtree_new(rtree_t *rtree, unsigned bits, rtree_node_alloc_t *alloc,
{ {
unsigned bits_in_leaf, height, i; unsigned bits_in_leaf, height, i;
assert(RTREE_HEIGHT_MAX == ((ZU(1) << (LG_SIZEOF_PTR+3)) /
RTREE_BITS_PER_LEVEL));
assert(bits > 0 && bits <= (sizeof(uintptr_t) << 3)); assert(bits > 0 && bits <= (sizeof(uintptr_t) << 3));
bits_in_leaf = (bits % RTREE_BITS_PER_LEVEL) == 0 ? RTREE_BITS_PER_LEVEL bits_in_leaf = (bits % RTREE_BITS_PER_LEVEL) == 0 ? RTREE_BITS_PER_LEVEL


@ -259,7 +259,7 @@ stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque,
unsigned nthreads; unsigned nthreads;
const char *dss; const char *dss;
ssize_t lg_dirty_mult, decay_time; ssize_t lg_dirty_mult, decay_time;
size_t page, pactive, pdirty, mapped; size_t page, pactive, pdirty, mapped, retained;
size_t metadata_mapped, metadata_allocated; size_t metadata_mapped, metadata_allocated;
uint64_t npurge, nmadvise, purged; uint64_t npurge, nmadvise, purged;
size_t small_allocated; size_t small_allocated;
@ -349,6 +349,9 @@ stats_arena_print(void (*write_cb)(void *, const char *), void *cbopaque,
CTL_M2_GET("stats.arenas.0.mapped", i, &mapped, size_t); CTL_M2_GET("stats.arenas.0.mapped", i, &mapped, size_t);
malloc_cprintf(write_cb, cbopaque, malloc_cprintf(write_cb, cbopaque,
"mapped: %12zu\n", mapped); "mapped: %12zu\n", mapped);
CTL_M2_GET("stats.arenas.0.retained", i, &retained, size_t);
malloc_cprintf(write_cb, cbopaque,
"retained: %12zu\n", retained);
CTL_M2_GET("stats.arenas.0.metadata.mapped", i, &metadata_mapped, CTL_M2_GET("stats.arenas.0.metadata.mapped", i, &metadata_mapped,
size_t); size_t);
CTL_M2_GET("stats.arenas.0.metadata.allocated", i, &metadata_allocated, CTL_M2_GET("stats.arenas.0.metadata.allocated", i, &metadata_allocated,
@ -597,7 +600,7 @@ stats_print(void (*write_cb)(void *, const char *), void *cbopaque,
if (config_stats) { if (config_stats) {
size_t *cactive; size_t *cactive;
size_t allocated, active, metadata, resident, mapped; size_t allocated, active, metadata, resident, mapped, retained;
CTL_GET("stats.cactive", &cactive, size_t *); CTL_GET("stats.cactive", &cactive, size_t *);
CTL_GET("stats.allocated", &allocated, size_t); CTL_GET("stats.allocated", &allocated, size_t);
@ -605,10 +608,11 @@ stats_print(void (*write_cb)(void *, const char *), void *cbopaque,
CTL_GET("stats.metadata", &metadata, size_t); CTL_GET("stats.metadata", &metadata, size_t);
CTL_GET("stats.resident", &resident, size_t); CTL_GET("stats.resident", &resident, size_t);
CTL_GET("stats.mapped", &mapped, size_t); CTL_GET("stats.mapped", &mapped, size_t);
CTL_GET("stats.retained", &retained, size_t);
malloc_cprintf(write_cb, cbopaque, malloc_cprintf(write_cb, cbopaque,
"Allocated: %zu, active: %zu, metadata: %zu," "Allocated: %zu, active: %zu, metadata: %zu,"
" resident: %zu, mapped: %zu\n", " resident: %zu, mapped: %zu, retained: %zu\n",
allocated, active, metadata, resident, mapped); allocated, active, metadata, resident, mapped, retained);
malloc_cprintf(write_cb, cbopaque, malloc_cprintf(write_cb, cbopaque,
"Current active ceiling: %zu\n", "Current active ceiling: %zu\n",
atomic_read_z(cactive)); atomic_read_z(cactive));
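The new stats.retained key counts virtual memory that jemalloc keeps mapped for reuse rather than returning to the OS. It reads like any other stats key; a sketch, including the epoch refresh the stats interface expects:

/* Refresh and read the new statistic. */
uint64_t epoch = 1;
size_t retained, sz;

sz = sizeof(epoch);
mallctl("epoch", &epoch, &sz, &epoch, sz);	/* Refresh stats. */

sz = sizeof(retained);
if (mallctl("stats.retained", &retained, &sz, NULL, 0) == 0)
	malloc_printf("retained: %zu\n", retained);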


@ -23,10 +23,11 @@ static tcaches_t *tcaches_avail;
/******************************************************************************/ /******************************************************************************/
size_t tcache_salloc(const void *ptr) size_t
tcache_salloc(tsdn_t *tsdn, const void *ptr)
{ {
return (arena_salloc(ptr, false)); return (arena_salloc(tsdn, ptr, false));
} }
void void
@ -70,12 +71,12 @@ tcache_event_hard(tsd_t *tsd, tcache_t *tcache)
} }
void * void *
tcache_alloc_small_hard(tsd_t *tsd, arena_t *arena, tcache_t *tcache, tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache,
tcache_bin_t *tbin, szind_t binind, bool *tcache_success) tcache_bin_t *tbin, szind_t binind, bool *tcache_success)
{ {
void *ret; void *ret;
arena_tcache_fill_small(tsd, arena, tbin, binind, config_prof ? arena_tcache_fill_small(tsdn, arena, tbin, binind, config_prof ?
tcache->prof_accumbytes : 0); tcache->prof_accumbytes : 0);
if (config_prof) if (config_prof)
tcache->prof_accumbytes = 0; tcache->prof_accumbytes = 0;
@ -106,12 +107,13 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
arena_bin_t *bin = &bin_arena->bins[binind]; arena_bin_t *bin = &bin_arena->bins[binind];
if (config_prof && bin_arena == arena) { if (config_prof && bin_arena == arena) {
if (arena_prof_accum(arena, tcache->prof_accumbytes)) if (arena_prof_accum(tsd_tsdn(tsd), arena,
prof_idump(); tcache->prof_accumbytes))
prof_idump(tsd_tsdn(tsd));
tcache->prof_accumbytes = 0; tcache->prof_accumbytes = 0;
} }
malloc_mutex_lock(&bin->lock); malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock);
if (config_stats && bin_arena == arena) { if (config_stats && bin_arena == arena) {
assert(!merged_stats); assert(!merged_stats);
merged_stats = true; merged_stats = true;
@ -128,9 +130,9 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
size_t pageind = ((uintptr_t)ptr - size_t pageind = ((uintptr_t)ptr -
(uintptr_t)chunk) >> LG_PAGE; (uintptr_t)chunk) >> LG_PAGE;
arena_chunk_map_bits_t *bitselm = arena_chunk_map_bits_t *bitselm =
arena_bitselm_get(chunk, pageind); arena_bitselm_get_mutable(chunk, pageind);
arena_dalloc_bin_junked_locked(bin_arena, chunk, arena_dalloc_bin_junked_locked(tsd_tsdn(tsd),
ptr, bitselm); bin_arena, chunk, ptr, bitselm);
} else { } else {
/* /*
* This object was allocated via a different * This object was allocated via a different
@ -142,8 +144,8 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
ndeferred++; ndeferred++;
} }
} }
malloc_mutex_unlock(&bin->lock); malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock);
arena_decay_ticks(tsd, bin_arena, nflush - ndeferred); arena_decay_ticks(tsd_tsdn(tsd), bin_arena, nflush - ndeferred);
} }
if (config_stats && !merged_stats) { if (config_stats && !merged_stats) {
/* /*
@ -151,11 +153,11 @@ tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, tcache_bin_t *tbin,
* arena, so the stats didn't get merged. Manually do so now. * arena, so the stats didn't get merged. Manually do so now.
*/ */
arena_bin_t *bin = &arena->bins[binind]; arena_bin_t *bin = &arena->bins[binind];
malloc_mutex_lock(&bin->lock); malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock);
bin->stats.nflushes++; bin->stats.nflushes++;
bin->stats.nrequests += tbin->tstats.nrequests; bin->stats.nrequests += tbin->tstats.nrequests;
tbin->tstats.nrequests = 0; tbin->tstats.nrequests = 0;
malloc_mutex_unlock(&bin->lock); malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock);
} }
memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem * memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem *
@ -188,7 +190,7 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
if (config_prof) if (config_prof)
idump = false; idump = false;
malloc_mutex_lock(&locked_arena->lock); malloc_mutex_lock(tsd_tsdn(tsd), &locked_arena->lock);
if ((config_prof || config_stats) && locked_arena == arena) { if ((config_prof || config_stats) && locked_arena == arena) {
if (config_prof) { if (config_prof) {
idump = arena_prof_accum_locked(arena, idump = arena_prof_accum_locked(arena,
@ -211,8 +213,8 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr); chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
if (extent_node_arena_get(&chunk->node) == if (extent_node_arena_get(&chunk->node) ==
locked_arena) { locked_arena) {
arena_dalloc_large_junked_locked(locked_arena, arena_dalloc_large_junked_locked(tsd_tsdn(tsd),
chunk, ptr); locked_arena, chunk, ptr);
} else { } else {
/* /*
* This object was allocated via a different * This object was allocated via a different
@ -224,22 +226,23 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
ndeferred++; ndeferred++;
} }
} }
malloc_mutex_unlock(&locked_arena->lock); malloc_mutex_unlock(tsd_tsdn(tsd), &locked_arena->lock);
if (config_prof && idump) if (config_prof && idump)
prof_idump(); prof_idump(tsd_tsdn(tsd));
arena_decay_ticks(tsd, locked_arena, nflush - ndeferred); arena_decay_ticks(tsd_tsdn(tsd), locked_arena, nflush -
ndeferred);
} }
if (config_stats && !merged_stats) { if (config_stats && !merged_stats) {
/* /*
* The flush loop didn't happen to flush to this thread's * The flush loop didn't happen to flush to this thread's
* arena, so the stats didn't get merged. Manually do so now. * arena, so the stats didn't get merged. Manually do so now.
*/ */
malloc_mutex_lock(&arena->lock); malloc_mutex_lock(tsd_tsdn(tsd), &arena->lock);
arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.nrequests_large += tbin->tstats.nrequests;
arena->stats.lstats[binind - NBINS].nrequests += arena->stats.lstats[binind - NBINS].nrequests +=
tbin->tstats.nrequests; tbin->tstats.nrequests;
tbin->tstats.nrequests = 0; tbin->tstats.nrequests = 0;
malloc_mutex_unlock(&arena->lock); malloc_mutex_unlock(tsd_tsdn(tsd), &arena->lock);
} }
memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem * memmove(tbin->avail - rem, tbin->avail - tbin->ncached, rem *
@ -249,34 +252,26 @@ tcache_bin_flush_large(tsd_t *tsd, tcache_bin_t *tbin, szind_t binind,
tbin->low_water = tbin->ncached; tbin->low_water = tbin->ncached;
} }
void static void
tcache_arena_associate(tcache_t *tcache, arena_t *arena) tcache_arena_associate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena)
{ {
if (config_stats) { if (config_stats) {
/* Link into list of extant tcaches. */ /* Link into list of extant tcaches. */
malloc_mutex_lock(&arena->lock); malloc_mutex_lock(tsdn, &arena->lock);
ql_elm_new(tcache, link); ql_elm_new(tcache, link);
ql_tail_insert(&arena->tcache_ql, tcache, link); ql_tail_insert(&arena->tcache_ql, tcache, link);
malloc_mutex_unlock(&arena->lock); malloc_mutex_unlock(tsdn, &arena->lock);
} }
} }
void static void
tcache_arena_reassociate(tcache_t *tcache, arena_t *oldarena, arena_t *newarena) tcache_arena_dissociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena)
{
tcache_arena_dissociate(tcache, oldarena);
tcache_arena_associate(tcache, newarena);
}
void
tcache_arena_dissociate(tcache_t *tcache, arena_t *arena)
{ {
if (config_stats) { if (config_stats) {
/* Unlink from list of extant tcaches. */ /* Unlink from list of extant tcaches. */
malloc_mutex_lock(&arena->lock); malloc_mutex_lock(tsdn, &arena->lock);
if (config_debug) { if (config_debug) {
bool in_ql = false; bool in_ql = false;
tcache_t *iter; tcache_t *iter;
@ -289,11 +284,20 @@ tcache_arena_dissociate(tcache_t *tcache, arena_t *arena)
assert(in_ql); assert(in_ql);
} }
ql_remove(&arena->tcache_ql, tcache, link); ql_remove(&arena->tcache_ql, tcache, link);
tcache_stats_merge(tcache, arena); tcache_stats_merge(tsdn, tcache, arena);
malloc_mutex_unlock(&arena->lock); malloc_mutex_unlock(tsdn, &arena->lock);
} }
} }
void
tcache_arena_reassociate(tsdn_t *tsdn, tcache_t *tcache, arena_t *oldarena,
arena_t *newarena)
{
tcache_arena_dissociate(tsdn, tcache, oldarena);
tcache_arena_associate(tsdn, tcache, newarena);
}
tcache_t * tcache_t *
tcache_get_hard(tsd_t *tsd) tcache_get_hard(tsd_t *tsd)
{ {
@ -307,11 +311,11 @@ tcache_get_hard(tsd_t *tsd)
arena = arena_choose(tsd, NULL); arena = arena_choose(tsd, NULL);
if (unlikely(arena == NULL)) if (unlikely(arena == NULL))
return (NULL); return (NULL);
return (tcache_create(tsd, arena)); return (tcache_create(tsd_tsdn(tsd), arena));
} }
tcache_t * tcache_t *
tcache_create(tsd_t *tsd, arena_t *arena) tcache_create(tsdn_t *tsdn, arena_t *arena)
{ {
tcache_t *tcache; tcache_t *tcache;
size_t size, stack_offset; size_t size, stack_offset;
@ -325,12 +329,12 @@ tcache_create(tsd_t *tsd, arena_t *arena)
/* Avoid false cacheline sharing. */ /* Avoid false cacheline sharing. */
size = sa2u(size, CACHELINE); size = sa2u(size, CACHELINE);
tcache = ipallocztm(tsd, size, CACHELINE, true, false, true, tcache = ipallocztm(tsdn, size, CACHELINE, true, NULL, true,
arena_get(0, false)); arena_get(TSDN_NULL, 0, true));
if (tcache == NULL) if (tcache == NULL)
return (NULL); return (NULL);
tcache_arena_associate(tcache, arena); tcache_arena_associate(tsdn, tcache, arena);
ticker_init(&tcache->gc_ticker, TCACHE_GC_INCR); ticker_init(&tcache->gc_ticker, TCACHE_GC_INCR);
@ -357,7 +361,7 @@ tcache_destroy(tsd_t *tsd, tcache_t *tcache)
unsigned i; unsigned i;
arena = arena_choose(tsd, NULL); arena = arena_choose(tsd, NULL);
tcache_arena_dissociate(tcache, arena); tcache_arena_dissociate(tsd_tsdn(tsd), tcache, arena);
for (i = 0; i < NBINS; i++) { for (i = 0; i < NBINS; i++) {
tcache_bin_t *tbin = &tcache->tbins[i]; tcache_bin_t *tbin = &tcache->tbins[i];
@ -365,9 +369,9 @@ tcache_destroy(tsd_t *tsd, tcache_t *tcache)
if (config_stats && tbin->tstats.nrequests != 0) { if (config_stats && tbin->tstats.nrequests != 0) {
arena_bin_t *bin = &arena->bins[i]; arena_bin_t *bin = &arena->bins[i];
malloc_mutex_lock(&bin->lock); malloc_mutex_lock(tsd_tsdn(tsd), &bin->lock);
bin->stats.nrequests += tbin->tstats.nrequests; bin->stats.nrequests += tbin->tstats.nrequests;
malloc_mutex_unlock(&bin->lock); malloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock);
} }
} }
@ -376,19 +380,19 @@ tcache_destroy(tsd_t *tsd, tcache_t *tcache)
tcache_bin_flush_large(tsd, tbin, i, 0, tcache); tcache_bin_flush_large(tsd, tbin, i, 0, tcache);
if (config_stats && tbin->tstats.nrequests != 0) { if (config_stats && tbin->tstats.nrequests != 0) {
malloc_mutex_lock(&arena->lock); malloc_mutex_lock(tsd_tsdn(tsd), &arena->lock);
arena->stats.nrequests_large += tbin->tstats.nrequests; arena->stats.nrequests_large += tbin->tstats.nrequests;
arena->stats.lstats[i - NBINS].nrequests += arena->stats.lstats[i - NBINS].nrequests +=
tbin->tstats.nrequests; tbin->tstats.nrequests;
malloc_mutex_unlock(&arena->lock); malloc_mutex_unlock(tsd_tsdn(tsd), &arena->lock);
} }
} }
if (config_prof && tcache->prof_accumbytes > 0 && if (config_prof && tcache->prof_accumbytes > 0 &&
arena_prof_accum(arena, tcache->prof_accumbytes)) arena_prof_accum(tsd_tsdn(tsd), arena, tcache->prof_accumbytes))
prof_idump(); prof_idump(tsd_tsdn(tsd));
idalloctm(tsd, tcache, false, true, true); idalloctm(tsd_tsdn(tsd), tcache, NULL, true, true);
} }
void void
@ -412,21 +416,22 @@ tcache_enabled_cleanup(tsd_t *tsd)
/* Do nothing. */ /* Do nothing. */
} }
/* Caller must own arena->lock. */
void void
tcache_stats_merge(tcache_t *tcache, arena_t *arena) tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena)
{ {
unsigned i; unsigned i;
cassert(config_stats); cassert(config_stats);
malloc_mutex_assert_owner(tsdn, &arena->lock);
/* Merge and reset tcache stats. */ /* Merge and reset tcache stats. */
for (i = 0; i < NBINS; i++) { for (i = 0; i < NBINS; i++) {
arena_bin_t *bin = &arena->bins[i]; arena_bin_t *bin = &arena->bins[i];
tcache_bin_t *tbin = &tcache->tbins[i]; tcache_bin_t *tbin = &tcache->tbins[i];
malloc_mutex_lock(&bin->lock); malloc_mutex_lock(tsdn, &bin->lock);
bin->stats.nrequests += tbin->tstats.nrequests; bin->stats.nrequests += tbin->tstats.nrequests;
malloc_mutex_unlock(&bin->lock); malloc_mutex_unlock(tsdn, &bin->lock);
tbin->tstats.nrequests = 0; tbin->tstats.nrequests = 0;
} }
@ -440,13 +445,14 @@ tcache_stats_merge(tcache_t *tcache, arena_t *arena)
} }
bool bool
tcaches_create(tsd_t *tsd, unsigned *r_ind) tcaches_create(tsdn_t *tsdn, unsigned *r_ind)
{ {
arena_t *arena;
tcache_t *tcache; tcache_t *tcache;
tcaches_t *elm; tcaches_t *elm;
if (tcaches == NULL) { if (tcaches == NULL) {
tcaches = base_alloc(sizeof(tcache_t *) * tcaches = base_alloc(tsdn, sizeof(tcache_t *) *
(MALLOCX_TCACHE_MAX+1)); (MALLOCX_TCACHE_MAX+1));
if (tcaches == NULL) if (tcaches == NULL)
return (true); return (true);
@ -454,7 +460,10 @@ tcaches_create(tsd_t *tsd, unsigned *r_ind)
if (tcaches_avail == NULL && tcaches_past > MALLOCX_TCACHE_MAX) if (tcaches_avail == NULL && tcaches_past > MALLOCX_TCACHE_MAX)
return (true); return (true);
tcache = tcache_create(tsd, arena_get(0, false)); arena = arena_ichoose(tsdn, NULL);
if (unlikely(arena == NULL))
return (true);
tcache = tcache_create(tsdn, arena);
if (tcache == NULL) if (tcache == NULL)
return (true); return (true);
@ -500,7 +509,7 @@ tcaches_destroy(tsd_t *tsd, unsigned ind)
} }
bool bool
tcache_boot(void) tcache_boot(tsdn_t *tsdn)
{ {
unsigned i; unsigned i;
@ -518,7 +527,7 @@ tcache_boot(void)
nhbins = size2index(tcache_maxclass) + 1; nhbins = size2index(tcache_maxclass) + 1;
/* Initialize tcache_bin_info. */ /* Initialize tcache_bin_info. */
tcache_bin_info = (tcache_bin_info_t *)base_alloc(nhbins * tcache_bin_info = (tcache_bin_info_t *)base_alloc(tsdn, nhbins *
sizeof(tcache_bin_info_t)); sizeof(tcache_bin_info_t));
if (tcache_bin_info == NULL) if (tcache_bin_info == NULL)
return (true); return (true);


@ -77,7 +77,7 @@ tsd_cleanup(void *arg)
/* Do nothing. */ /* Do nothing. */
break; break;
case tsd_state_nominal: case tsd_state_nominal:
#define O(n, t) \ #define O(n, t) \
n##_cleanup(tsd); n##_cleanup(tsd);
MALLOC_TSD MALLOC_TSD
#undef O #undef O
@ -106,15 +106,17 @@ MALLOC_TSD
} }
} }
bool tsd_t *
malloc_tsd_boot0(void) malloc_tsd_boot0(void)
{ {
tsd_t *tsd;
ncleanups = 0; ncleanups = 0;
if (tsd_boot0()) if (tsd_boot0())
return (true); return (NULL);
*tsd_arenas_tdata_bypassp_get(tsd_fetch()) = true; tsd = tsd_fetch();
return (false); *tsd_arenas_tdata_bypassp_get(tsd) = true;
return (tsd);
} }
void void
@ -169,10 +171,10 @@ tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block)
tsd_init_block_t *iter; tsd_init_block_t *iter;
/* Check whether this thread has already inserted into the list. */ /* Check whether this thread has already inserted into the list. */
malloc_mutex_lock(&head->lock); malloc_mutex_lock(NULL, &head->lock);
ql_foreach(iter, &head->blocks, link) { ql_foreach(iter, &head->blocks, link) {
if (iter->thread == self) { if (iter->thread == self) {
malloc_mutex_unlock(&head->lock); malloc_mutex_unlock(NULL, &head->lock);
return (iter->data); return (iter->data);
} }
} }
@ -180,7 +182,7 @@ tsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block)
ql_elm_new(block, link); ql_elm_new(block, link);
block->thread = self; block->thread = self;
ql_tail_insert(&head->blocks, block, link); ql_tail_insert(&head->blocks, block, link);
malloc_mutex_unlock(&head->lock); malloc_mutex_unlock(NULL, &head->lock);
return (NULL); return (NULL);
} }
@ -188,8 +190,8 @@ void
tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block) tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block)
{ {
malloc_mutex_lock(&head->lock); malloc_mutex_lock(NULL, &head->lock);
ql_remove(&head->blocks, block, link); ql_remove(&head->blocks, block, link);
malloc_mutex_unlock(&head->lock); malloc_mutex_unlock(NULL, &head->lock);
} }
#endif #endif
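malloc_tsd_boot0() now hands back the bootstrapped tsd_t (or NULL on failure) instead of a bare success flag, saving callers a redundant tsd_fetch(). A sketch of the expected call site; the wrapper name is hypothetical:

static bool
bootstrap_tsd(void)
{
	tsd_t *tsd = malloc_tsd_boot0();

	if (tsd == NULL)
		return (true);	/* TSD bootstrap failed. */
	/* ... continue initialization using tsd ... */
	return (false);
}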


@ -14,6 +14,7 @@
malloc_write("<jemalloc>: Unreachable code reached\n"); \ malloc_write("<jemalloc>: Unreachable code reached\n"); \
abort(); \ abort(); \
} \ } \
unreachable(); \
} while (0) } while (0)
#define not_implemented() do { \ #define not_implemented() do { \
@ -314,10 +315,9 @@ x2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p)
return (s); return (s);
} }
int size_t
malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
{ {
int ret;
size_t i; size_t i;
const char *f; const char *f;
@ -408,6 +408,8 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
int prec = -1; int prec = -1;
int width = -1; int width = -1;
unsigned char len = '?'; unsigned char len = '?';
char *s;
size_t slen;
f++; f++;
/* Flags. */ /* Flags. */
@ -498,8 +500,6 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
} }
/* Conversion specifier. */ /* Conversion specifier. */
switch (*f) { switch (*f) {
char *s;
size_t slen;
case '%': case '%':
/* %% */ /* %% */
APPEND_C(*f); APPEND_C(*f);
@ -585,21 +585,19 @@ malloc_vsnprintf(char *str, size_t size, const char *format, va_list ap)
str[i] = '\0'; str[i] = '\0';
else else
str[size - 1] = '\0'; str[size - 1] = '\0';
assert(i < INT_MAX);
ret = (int)i;
#undef APPEND_C #undef APPEND_C
#undef APPEND_S #undef APPEND_S
#undef APPEND_PADDED_S #undef APPEND_PADDED_S
#undef GET_ARG_NUMERIC #undef GET_ARG_NUMERIC
return (ret); return (i);
} }
JEMALLOC_FORMAT_PRINTF(3, 4) JEMALLOC_FORMAT_PRINTF(3, 4)
int size_t
malloc_snprintf(char *str, size_t size, const char *format, ...) malloc_snprintf(char *str, size_t size, const char *format, ...)
{ {
int ret; size_t ret;
va_list ap; va_list ap;
va_start(ap, format); va_start(ap, format);
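Returning size_t from malloc_vsnprintf()/malloc_snprintf() drops the INT_MAX clamp and lets callers chain offsets without casts, as the timer code further down now does. A sketch of the pattern; buf, buflen, whole, and frac are stand-ins:

size_t i = 0;

i += malloc_snprintf(&buf[i], buflen - i, "%"FMTu64, whole);
i += malloc_snprintf(&buf[i], buflen - i, ".%"FMTu64, frac);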

src/witness.c (new file, 136 lines)

@ -0,0 +1,136 @@
#define JEMALLOC_WITNESS_C_
#include "jemalloc/internal/jemalloc_internal.h"
void
witness_init(witness_t *witness, const char *name, witness_rank_t rank,
witness_comp_t *comp)
{
witness->name = name;
witness->rank = rank;
witness->comp = comp;
}
#ifdef JEMALLOC_JET
#undef witness_lock_error
#define witness_lock_error JEMALLOC_N(n_witness_lock_error)
#endif
void
witness_lock_error(const witness_list_t *witnesses, const witness_t *witness)
{
witness_t *w;
malloc_printf("<jemalloc>: Lock rank order reversal:");
ql_foreach(w, witnesses, link) {
malloc_printf(" %s(%u)", w->name, w->rank);
}
malloc_printf(" %s(%u)\n", witness->name, witness->rank);
abort();
}
#ifdef JEMALLOC_JET
#undef witness_lock_error
#define witness_lock_error JEMALLOC_N(witness_lock_error)
witness_lock_error_t *witness_lock_error = JEMALLOC_N(n_witness_lock_error);
#endif
#ifdef JEMALLOC_JET
#undef witness_owner_error
#define witness_owner_error JEMALLOC_N(n_witness_owner_error)
#endif
void
witness_owner_error(const witness_t *witness)
{
malloc_printf("<jemalloc>: Should own %s(%u)\n", witness->name,
witness->rank);
abort();
}
#ifdef JEMALLOC_JET
#undef witness_owner_error
#define witness_owner_error JEMALLOC_N(witness_owner_error)
witness_owner_error_t *witness_owner_error = JEMALLOC_N(n_witness_owner_error);
#endif
#ifdef JEMALLOC_JET
#undef witness_not_owner_error
#define witness_not_owner_error JEMALLOC_N(n_witness_not_owner_error)
#endif
void
witness_not_owner_error(const witness_t *witness)
{
malloc_printf("<jemalloc>: Should not own %s(%u)\n", witness->name,
witness->rank);
abort();
}
#ifdef JEMALLOC_JET
#undef witness_not_owner_error
#define witness_not_owner_error JEMALLOC_N(witness_not_owner_error)
witness_not_owner_error_t *witness_not_owner_error =
JEMALLOC_N(n_witness_not_owner_error);
#endif
#ifdef JEMALLOC_JET
#undef witness_lockless_error
#define witness_lockless_error JEMALLOC_N(n_witness_lockless_error)
#endif
void
witness_lockless_error(const witness_list_t *witnesses)
{
witness_t *w;
malloc_printf("<jemalloc>: Should not own any locks:");
ql_foreach(w, witnesses, link) {
malloc_printf(" %s(%u)", w->name, w->rank);
}
malloc_printf("\n");
abort();
}
#ifdef JEMALLOC_JET
#undef witness_lockless_error
#define witness_lockless_error JEMALLOC_N(witness_lockless_error)
witness_lockless_error_t *witness_lockless_error =
JEMALLOC_N(n_witness_lockless_error);
#endif
void
witnesses_cleanup(tsd_t *tsd)
{
witness_assert_lockless(tsd_tsdn(tsd));
/* Do nothing. */
}
void
witness_fork_cleanup(tsd_t *tsd)
{
/* Do nothing. */
}
void
witness_prefork(tsd_t *tsd)
{
tsd_witness_fork_set(tsd, true);
}
void
witness_postfork_parent(tsd_t *tsd)
{
tsd_witness_fork_set(tsd, false);
}
void
witness_postfork_child(tsd_t *tsd)
{
#ifndef JEMALLOC_MUTEX_INIT_CB
witness_list_t *witnesses;
witnesses = tsd_witnessesp_get(tsd);
ql_new(witnesses);
#endif
tsd_witness_fork_set(tsd, false);
}
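witness.c is new in this commit: every lock registers a rank, and acquiring locks against rank order aborts with the reversal report printed by witness_lock_error(). A usage sketch, assuming witness_lock()/witness_unlock() are the inline acquire/release hooks declared alongside this file and tsdn comes from tsdn_fetch():

/* Hypothetical rank-ordered locking discipline. */
witness_t wa, wb;

witness_init(&wa, "a", (witness_rank_t)1U, NULL);
witness_init(&wb, "b", (witness_rank_t)2U, NULL);

witness_lock(tsdn, &wa);	/* Rank 1 first... */
witness_lock(tsdn, &wb);	/* ...then rank 2: legal. */
witness_unlock(tsdn, &wb);
witness_unlock(tsdn, &wa);
/* Taking &wa while holding &wb would trip witness_lock_error(). */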


@ -56,7 +56,7 @@ zone_size(malloc_zone_t *zone, void *ptr)
* not work in practice, we must check all pointers to assure that they * not work in practice, we must check all pointers to assure that they
* reside within a mapped chunk before determining size. * reside within a mapped chunk before determining size.
*/ */
return (ivsalloc(ptr, config_prof)); return (ivsalloc(tsdn_fetch(), ptr, config_prof));
} }
static void * static void *
@ -87,7 +87,7 @@ static void
zone_free(malloc_zone_t *zone, void *ptr) zone_free(malloc_zone_t *zone, void *ptr)
{ {
if (ivsalloc(ptr, config_prof) != 0) { if (ivsalloc(tsdn_fetch(), ptr, config_prof) != 0) {
je_free(ptr); je_free(ptr);
return; return;
} }
@ -99,7 +99,7 @@ static void *
zone_realloc(malloc_zone_t *zone, void *ptr, size_t size) zone_realloc(malloc_zone_t *zone, void *ptr, size_t size)
{ {
if (ivsalloc(ptr, config_prof) != 0) if (ivsalloc(tsdn_fetch(), ptr, config_prof) != 0)
return (je_realloc(ptr, size)); return (je_realloc(ptr, size));
return (realloc(ptr, size)); return (realloc(ptr, size));
@ -123,7 +123,7 @@ zone_free_definite_size(malloc_zone_t *zone, void *ptr, size_t size)
{ {
size_t alloc_size; size_t alloc_size;
alloc_size = ivsalloc(ptr, config_prof); alloc_size = ivsalloc(tsdn_fetch(), ptr, config_prof);
if (alloc_size != 0) { if (alloc_size != 0) {
assert(alloc_size == size); assert(alloc_size == size);
je_free(ptr); je_free(ptr);
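The OS X zone hooks can run on arbitrary threads, so each now fetches a tsdn on entry. tsdn_fetch() presumably degrades to NULL before TSD is booted rather than forcing initialization; a plausible shape, with helper names assumed:

JEMALLOC_INLINE tsdn_t *
tsdn_fetch(void)
{
	if (!tsd_booted_get())
		return (NULL);
	return (tsd_tsdn(tsd_fetch()));
}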


@ -19,39 +19,6 @@
# include <pthread.h> # include <pthread.h>
#endif #endif
/******************************************************************************/
/*
* Define always-enabled assertion macros, so that test assertions execute even
* if assertions are disabled in the library code. These definitions must
* exist prior to including "jemalloc/internal/util.h".
*/
#define assert(e) do { \
if (!(e)) { \
malloc_printf( \
"<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
__FILE__, __LINE__, #e); \
abort(); \
} \
} while (0)
#define not_reached() do { \
malloc_printf( \
"<jemalloc>: %s:%d: Unreachable code reached\n", \
__FILE__, __LINE__); \
abort(); \
} while (0)
#define not_implemented() do { \
malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
__FILE__, __LINE__); \
abort(); \
} while (0)
#define assert_not_implemented(e) do { \
if (!(e)) \
not_implemented(); \
} while (0)
#include "test/jemalloc_test_defs.h" #include "test/jemalloc_test_defs.h"
#ifdef JEMALLOC_OSSPIN #ifdef JEMALLOC_OSSPIN
@ -86,6 +53,14 @@
# include "jemalloc/internal/jemalloc_internal_defs.h" # include "jemalloc/internal/jemalloc_internal_defs.h"
# include "jemalloc/internal/jemalloc_internal_macros.h" # include "jemalloc/internal/jemalloc_internal_macros.h"
static const bool config_debug =
#ifdef JEMALLOC_DEBUG
true
#else
false
#endif
;
# define JEMALLOC_N(n) @private_namespace@##n # define JEMALLOC_N(n) @private_namespace@##n
# include "jemalloc/internal/private_namespace.h" # include "jemalloc/internal/private_namespace.h"
@ -149,3 +124,40 @@
#include "test/thd.h" #include "test/thd.h"
#define MEXP 19937 #define MEXP 19937
#include "test/SFMT.h" #include "test/SFMT.h"
/******************************************************************************/
/*
* Define always-enabled assertion macros, so that test assertions execute even
* if assertions are disabled in the library code.
*/
#undef assert
#undef not_reached
#undef not_implemented
#undef assert_not_implemented
#define assert(e) do { \
if (!(e)) { \
malloc_printf( \
"<jemalloc>: %s:%d: Failed assertion: \"%s\"\n", \
__FILE__, __LINE__, #e); \
abort(); \
} \
} while (0)
#define not_reached() do { \
malloc_printf( \
"<jemalloc>: %s:%d: Unreachable code reached\n", \
__FILE__, __LINE__); \
abort(); \
} while (0)
#define not_implemented() do { \
malloc_printf("<jemalloc>: %s:%d: Not implemented\n", \
__FILE__, __LINE__); \
abort(); \
} while (0)
#define assert_not_implemented(e) do { \
if (!(e)) \
not_implemented(); \
} while (0)


@ -311,6 +311,9 @@ label_test_end: \
#define test(...) \ #define test(...) \
p_test(__VA_ARGS__, NULL) p_test(__VA_ARGS__, NULL)
#define test_no_malloc_init(...) \
p_test_no_malloc_init(__VA_ARGS__, NULL)
#define test_skip_if(e) do { \ #define test_skip_if(e) do { \
if (e) { \ if (e) { \
test_skip("%s:%s:%d: Test skipped: (%s)", \ test_skip("%s:%s:%d: Test skipped: (%s)", \
@ -324,6 +327,7 @@ void test_fail(const char *format, ...) JEMALLOC_FORMAT_PRINTF(1, 2);
/* For private use by macros. */ /* For private use by macros. */
test_status_t p_test(test_t *t, ...); test_status_t p_test(test_t *t, ...);
test_status_t p_test_no_malloc_init(test_t *t, ...);
void p_test_init(const char *name); void p_test_init(const char *name);
void p_test_fini(void); void p_test_fini(void);
void p_test_fail(const char *prefix, const char *message); void p_test_fail(const char *prefix, const char *message);


@ -1,5 +1,9 @@
#include "test/jemalloc_test.h" #include "test/jemalloc_test.h"
#ifdef JEMALLOC_FILL
const char *malloc_conf = "junk:false";
#endif
static unsigned static unsigned
get_nsizes_impl(const char *cmd) get_nsizes_impl(const char *cmd)
{ {
@ -69,7 +73,7 @@ TEST_END
TEST_BEGIN(test_oom) TEST_BEGIN(test_oom)
{ {
size_t hugemax, size, alignment; size_t hugemax;
bool oom; bool oom;
void *ptrs[3]; void *ptrs[3];
unsigned i; unsigned i;
@ -94,15 +98,16 @@ TEST_BEGIN(test_oom)
} }
#if LG_SIZEOF_PTR == 3 #if LG_SIZEOF_PTR == 3
size = ZU(0x8000000000000000); assert_ptr_null(mallocx(0x8000000000000000ULL,
alignment = ZU(0x8000000000000000); MALLOCX_ALIGN(0x8000000000000000ULL)),
"Expected OOM for mallocx()");
assert_ptr_null(mallocx(0x8000000000000000ULL,
MALLOCX_ALIGN(0x80000000)),
"Expected OOM for mallocx()");
#else #else
size = ZU(0x80000000); assert_ptr_null(mallocx(0x80000000UL, MALLOCX_ALIGN(0x80000000UL)),
alignment = ZU(0x80000000); "Expected OOM for mallocx()");
#endif #endif
assert_ptr_null(mallocx(size, MALLOCX_ALIGN(alignment)),
"Expected OOM for mallocx(size=%#zx, MALLOCX_ALIGN(%#zx)", size,
alignment);
} }
TEST_END TEST_END


@ -1,5 +1,9 @@
#include "test/jemalloc_test.h" #include "test/jemalloc_test.h"
#ifdef JEMALLOC_FILL
const char *malloc_conf = "junk:false";
#endif
/* /*
* Use a separate arena for xallocx() extension/contraction tests so that * Use a separate arena for xallocx() extension/contraction tests so that
* internal allocation e.g. by heap profiling can't interpose allocations where * internal allocation e.g. by heap profiling can't interpose allocations where


@ -60,32 +60,30 @@ p_test_fini(void)
malloc_printf("%s: %s\n", test_name, test_status_string(test_status)); malloc_printf("%s: %s\n", test_name, test_status_string(test_status));
} }
test_status_t static test_status_t
p_test(test_t *t, ...) p_test_impl(bool do_malloc_init, test_t *t, va_list ap)
{ {
test_status_t ret; test_status_t ret;
va_list ap;
/* if (do_malloc_init) {
* Make sure initialization occurs prior to running tests. Tests are /*
* special because they may use internal facilities prior to triggering * Make sure initialization occurs prior to running tests.
* initialization as a side effect of calling into the public API. This * Tests are special because they may use internal facilities
* is a final safety that works even if jemalloc_constructor() doesn't * prior to triggering initialization as a side effect of
* run, as for MSVC builds. * calling into the public API.
*/ */
if (nallocx(1, 0) == 0) { if (nallocx(1, 0) == 0) {
malloc_printf("Initialization error"); malloc_printf("Initialization error");
return (test_status_fail); return (test_status_fail);
}
} }
ret = test_status_pass; ret = test_status_pass;
va_start(ap, t);
for (; t != NULL; t = va_arg(ap, test_t *)) { for (; t != NULL; t = va_arg(ap, test_t *)) {
t(); t();
if (test_status > ret) if (test_status > ret)
ret = test_status; ret = test_status;
} }
va_end(ap);
malloc_printf("--- %s: %u/%u, %s: %u/%u, %s: %u/%u ---\n", malloc_printf("--- %s: %u/%u, %s: %u/%u, %s: %u/%u ---\n",
test_status_string(test_status_pass), test_status_string(test_status_pass),
@ -98,6 +96,34 @@ p_test(test_t *t, ...)
return (ret); return (ret);
} }
test_status_t
p_test(test_t *t, ...)
{
test_status_t ret;
va_list ap;
ret = test_status_pass;
va_start(ap, t);
ret = p_test_impl(true, t, ap);
va_end(ap);
return (ret);
}
test_status_t
p_test_no_malloc_init(test_t *t, ...)
{
test_status_t ret;
va_list ap;
ret = test_status_pass;
va_start(ap, t);
ret = p_test_impl(false, t, ap);
va_end(ap);
return (ret);
}
void void
p_test_fail(const char *prefix, const char *message) p_test_fail(const char *prefix, const char *message)
{ {


@ -32,9 +32,8 @@ timer_ratio(timedelta_t *a, timedelta_t *b, char *buf, size_t buflen)
uint64_t t0 = timer_usec(a); uint64_t t0 = timer_usec(a);
uint64_t t1 = timer_usec(b); uint64_t t1 = timer_usec(b);
uint64_t mult; uint64_t mult;
unsigned i = 0; size_t i = 0;
unsigned j; size_t j, n;
int n;
/* Whole. */ /* Whole. */
n = malloc_snprintf(&buf[i], buflen-i, "%"FMTu64, t0 / t1); n = malloc_snprintf(&buf[i], buflen-i, "%"FMTu64, t0 / t1);


@ -1,7 +1,8 @@
#include "test/jemalloc_test.h" #include "test/jemalloc_test.h"
JEMALLOC_INLINE_C void JEMALLOC_INLINE_C void
time_func(timedelta_t *timer, uint64_t nwarmup, uint64_t niter, void (*func)(void)) time_func(timedelta_t *timer, uint64_t nwarmup, uint64_t niter,
void (*func)(void))
{ {
uint64_t i; uint64_t i;

test/unit/a0.c (new file, 19 lines)

@ -0,0 +1,19 @@
#include "test/jemalloc_test.h"
TEST_BEGIN(test_a0)
{
void *p;
p = a0malloc(1);
assert_ptr_not_null(p, "Unexpected a0malloc() error");
a0dalloc(p);
}
TEST_END
int
main(void)
{
return (test_no_malloc_init(
test_a0));
}

test/unit/arena_reset.c (new file, 159 lines)

@ -0,0 +1,159 @@
#include "test/jemalloc_test.h"
#ifdef JEMALLOC_PROF
const char *malloc_conf = "prof:true,lg_prof_sample:0";
#endif
static unsigned
get_nsizes_impl(const char *cmd)
{
unsigned ret;
size_t z;
z = sizeof(unsigned);
assert_d_eq(mallctl(cmd, &ret, &z, NULL, 0), 0,
"Unexpected mallctl(\"%s\", ...) failure", cmd);
return (ret);
}
static unsigned
get_nsmall(void)
{
return (get_nsizes_impl("arenas.nbins"));
}
static unsigned
get_nlarge(void)
{
return (get_nsizes_impl("arenas.nlruns"));
}
static unsigned
get_nhuge(void)
{
return (get_nsizes_impl("arenas.nhchunks"));
}
static size_t
get_size_impl(const char *cmd, size_t ind)
{
size_t ret;
size_t z;
size_t mib[4];
size_t miblen = 4;
z = sizeof(size_t);
assert_d_eq(mallctlnametomib(cmd, mib, &miblen),
0, "Unexpected mallctlnametomib(\"%s\", ...) failure", cmd);
mib[2] = ind;
z = sizeof(size_t);
assert_d_eq(mallctlbymib(mib, miblen, &ret, &z, NULL, 0),
0, "Unexpected mallctlbymib([\"%s\", %zu], ...) failure", cmd, ind);
return (ret);
}
static size_t
get_small_size(size_t ind)
{
return (get_size_impl("arenas.bin.0.size", ind));
}
static size_t
get_large_size(size_t ind)
{
return (get_size_impl("arenas.lrun.0.size", ind));
}
static size_t
get_huge_size(size_t ind)
{
return (get_size_impl("arenas.hchunk.0.size", ind));
}
TEST_BEGIN(test_arena_reset)
{
#define NHUGE 4
unsigned arena_ind, nsmall, nlarge, nhuge, nptrs, i;
size_t sz, miblen;
void **ptrs;
int flags;
size_t mib[3];
tsdn_t *tsdn;
test_skip_if((config_valgrind && unlikely(in_valgrind)) || (config_fill
&& unlikely(opt_quarantine)));
sz = sizeof(unsigned);
assert_d_eq(mallctl("arenas.extend", &arena_ind, &sz, NULL, 0), 0,
"Unexpected mallctl() failure");
flags = MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE;
nsmall = get_nsmall();
nlarge = get_nlarge();
nhuge = get_nhuge() > NHUGE ? NHUGE : get_nhuge();
nptrs = nsmall + nlarge + nhuge;
ptrs = (void **)malloc(nptrs * sizeof(void *));
assert_ptr_not_null(ptrs, "Unexpected malloc() failure");
/* Allocate objects with a wide range of sizes. */
for (i = 0; i < nsmall; i++) {
sz = get_small_size(i);
ptrs[i] = mallocx(sz, flags);
assert_ptr_not_null(ptrs[i],
"Unexpected mallocx(%zu, %#x) failure", sz, flags);
}
for (i = 0; i < nlarge; i++) {
sz = get_large_size(i);
ptrs[nsmall + i] = mallocx(sz, flags);
assert_ptr_not_null(ptrs[nsmall + i],
"Unexpected mallocx(%zu, %#x) failure", sz, flags);
}
for (i = 0; i < nhuge; i++) {
sz = get_huge_size(i);
ptrs[nsmall + nlarge + i] = mallocx(sz, flags);
assert_ptr_not_null(ptrs[nsmall + nlarge + i],
"Unexpected mallocx(%zu, %#x) failure", sz, flags);
}
tsdn = tsdn_fetch();
/* Verify allocations. */
for (i = 0; i < nptrs; i++) {
assert_zu_gt(ivsalloc(tsdn, ptrs[i], false), 0,
"Allocation should have queryable size");
}
/* Reset. */
miblen = sizeof(mib)/sizeof(size_t);
assert_d_eq(mallctlnametomib("arena.0.reset", mib, &miblen), 0,
"Unexpected mallctlnametomib() failure");
mib[1] = (size_t)arena_ind;
assert_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,
"Unexpected mallctlbymib() failure");
/* Verify allocations no longer exist. */
for (i = 0; i < nptrs; i++) {
assert_zu_eq(ivsalloc(tsdn, ptrs[i], false), 0,
"Allocation should no longer exist");
}
free(ptrs);
}
TEST_END
int
main(void)
{
return (test(
test_arena_reset));
}


@ -101,7 +101,7 @@ TEST_BEGIN(test_bitmap_sfu)
bitmap_info_t binfo; bitmap_info_t binfo;
bitmap_info_init(&binfo, i); bitmap_info_init(&binfo, i);
{ {
ssize_t j; size_t j;
bitmap_t *bitmap = (bitmap_t *)malloc( bitmap_t *bitmap = (bitmap_t *)malloc(
bitmap_size(&binfo)); bitmap_size(&binfo));
bitmap_init(bitmap, &binfo); bitmap_init(bitmap, &binfo);
@ -119,7 +119,7 @@ TEST_BEGIN(test_bitmap_sfu)
* Iteratively unset bits starting at the end, and * Iteratively unset bits starting at the end, and
* verify that bitmap_sfu() reaches the unset bits. * verify that bitmap_sfu() reaches the unset bits.
*/ */
for (j = i - 1; j >= 0; j--) { for (j = i - 1; j < i; j--) { /* (i..0] */
bitmap_unset(bitmap, &binfo, j); bitmap_unset(bitmap, &binfo, j);
assert_zd_eq(bitmap_sfu(bitmap, &binfo), j, assert_zd_eq(bitmap_sfu(bitmap, &binfo), j,
"First unset bit should the bit previously " "First unset bit should the bit previously "


@@ -2,24 +2,24 @@
 TEST_BEGIN(test_new_delete)
 {
-	tsd_t *tsd;
+	tsdn_t *tsdn;
 	ckh_t ckh;
 
-	tsd = tsd_fetch();
+	tsdn = tsdn_fetch();
 
-	assert_false(ckh_new(tsd, &ckh, 2, ckh_string_hash, ckh_string_keycomp),
-	    "Unexpected ckh_new() error");
-	ckh_delete(tsd, &ckh);
+	assert_false(ckh_new(tsdn, &ckh, 2, ckh_string_hash,
+	    ckh_string_keycomp), "Unexpected ckh_new() error");
+	ckh_delete(tsdn, &ckh);
 
-	assert_false(ckh_new(tsd, &ckh, 3, ckh_pointer_hash,
+	assert_false(ckh_new(tsdn, &ckh, 3, ckh_pointer_hash,
 	    ckh_pointer_keycomp), "Unexpected ckh_new() error");
-	ckh_delete(tsd, &ckh);
+	ckh_delete(tsdn, &ckh);
 }
 TEST_END
 
 TEST_BEGIN(test_count_insert_search_remove)
 {
-	tsd_t *tsd;
+	tsdn_t *tsdn;
 	ckh_t ckh;
 	const char *strs[] = {
 	    "a string",
@@ -30,17 +30,17 @@ TEST_BEGIN(test_count_insert_search_remove)
 	const char *missing = "A string not in the hash table.";
 	size_t i;
 
-	tsd = tsd_fetch();
+	tsdn = tsdn_fetch();
 
-	assert_false(ckh_new(tsd, &ckh, 2, ckh_string_hash, ckh_string_keycomp),
-	    "Unexpected ckh_new() error");
+	assert_false(ckh_new(tsdn, &ckh, 2, ckh_string_hash,
+	    ckh_string_keycomp), "Unexpected ckh_new() error");
 	assert_zu_eq(ckh_count(&ckh), 0,
 	    "ckh_count() should return %zu, but it returned %zu", ZU(0),
 	    ckh_count(&ckh));
 
 	/* Insert. */
 	for (i = 0; i < sizeof(strs)/sizeof(const char *); i++) {
-		ckh_insert(tsd, &ckh, strs[i], strs[i]);
+		ckh_insert(tsdn, &ckh, strs[i], strs[i]);
 		assert_zu_eq(ckh_count(&ckh), i+1,
 		    "ckh_count() should return %zu, but it returned %zu", i+1,
 		    ckh_count(&ckh));
@@ -85,7 +85,7 @@ TEST_BEGIN(test_count_insert_search_remove)
 		vp = (i & 2) ? &v.p : NULL;
 		k.p = NULL;
 		v.p = NULL;
-		assert_false(ckh_remove(tsd, &ckh, strs[i], kp, vp),
+		assert_false(ckh_remove(tsdn, &ckh, strs[i], kp, vp),
 		    "Unexpected ckh_remove() error");
 
 		ks = (i & 1) ? strs[i] : (const char *)NULL;
@@ -101,22 +101,22 @@ TEST_BEGIN(test_count_insert_search_remove)
 	    ckh_count(&ckh));
 	}
 
-	ckh_delete(tsd, &ckh);
+	ckh_delete(tsdn, &ckh);
 }
 TEST_END
 
 TEST_BEGIN(test_insert_iter_remove)
 {
 #define NITEMS ZU(1000)
-	tsd_t *tsd;
+	tsdn_t *tsdn;
 	ckh_t ckh;
 	void **p[NITEMS];
 	void *q, *r;
 	size_t i;
 
-	tsd = tsd_fetch();
+	tsdn = tsdn_fetch();
 
-	assert_false(ckh_new(tsd, &ckh, 2, ckh_pointer_hash,
+	assert_false(ckh_new(tsdn, &ckh, 2, ckh_pointer_hash,
 	    ckh_pointer_keycomp), "Unexpected ckh_new() error");
 
 	for (i = 0; i < NITEMS; i++) {
@@ -128,7 +128,7 @@ TEST_BEGIN(test_insert_iter_remove)
 			size_t j;
 
 			for (j = i; j < NITEMS; j++) {
-				assert_false(ckh_insert(tsd, &ckh, p[j], p[j]),
+				assert_false(ckh_insert(tsdn, &ckh, p[j], p[j]),
 				    "Unexpected ckh_insert() failure");
 				assert_false(ckh_search(&ckh, p[j], &q, &r),
 				    "Unexpected ckh_search() failure");
@@ -143,13 +143,13 @@ TEST_BEGIN(test_insert_iter_remove)
 			for (j = i + 1; j < NITEMS; j++) {
 				assert_false(ckh_search(&ckh, p[j], NULL, NULL),
 				    "Unexpected ckh_search() failure");
-				assert_false(ckh_remove(tsd, &ckh, p[j], &q, &r),
+				assert_false(ckh_remove(tsdn, &ckh, p[j], &q, &r),
 				    "Unexpected ckh_remove() failure");
 				assert_ptr_eq(p[j], q, "Key pointer mismatch");
 				assert_ptr_eq(p[j], r, "Value pointer mismatch");
 				assert_true(ckh_search(&ckh, p[j], NULL, NULL),
 				    "Unexpected ckh_search() success");
-				assert_true(ckh_remove(tsd, &ckh, p[j], &q, &r),
+				assert_true(ckh_remove(tsdn, &ckh, p[j], &q, &r),
 				    "Unexpected ckh_remove() success");
 			}
@@ -184,13 +184,13 @@ TEST_BEGIN(test_insert_iter_remove)
 	for (i = 0; i < NITEMS; i++) {
 		assert_false(ckh_search(&ckh, p[i], NULL, NULL),
 		    "Unexpected ckh_search() failure");
-		assert_false(ckh_remove(tsd, &ckh, p[i], &q, &r),
+		assert_false(ckh_remove(tsdn, &ckh, p[i], &q, &r),
 		    "Unexpected ckh_remove() failure");
 		assert_ptr_eq(p[i], q, "Key pointer mismatch");
 		assert_ptr_eq(p[i], r, "Value pointer mismatch");
 		assert_true(ckh_search(&ckh, p[i], NULL, NULL),
 		    "Unexpected ckh_search() success");
-		assert_true(ckh_remove(tsd, &ckh, p[i], &q, &r),
+		assert_true(ckh_remove(tsdn, &ckh, p[i], &q, &r),
 		    "Unexpected ckh_remove() success");
 		dallocx(p[i], 0);
 	}
@@ -198,7 +198,7 @@ TEST_BEGIN(test_insert_iter_remove)
 	assert_zu_eq(ckh_count(&ckh), 0,
 	    "ckh_count() should return %zu, but it returned %zu",
 	    ZU(0), ckh_count(&ckh));
 
-	ckh_delete(tsd, &ckh);
+	ckh_delete(tsdn, &ckh);
 #undef NITEMS
 }
 TEST_END
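Every ckh entry point in this test now threads a tsdn_t*, a wrapper that, unlike tsd_t*, is permitted to represent NULL thread-specific data during bootstrap. The prototypes below are reconstructed from the call sites in the diff for the reader's convenience, not copied from the source; the authoritative declarations live in jemalloc's internal ckh header:

	bool	ckh_new(tsdn_t *tsdn, ckh_t *ckh, size_t minitems,
	    ckh_hash_t *hash, ckh_keycomp_t *keycomp);
	void	ckh_delete(tsdn_t *tsdn, ckh_t *ckh);
	bool	ckh_insert(tsdn_t *tsdn, ckh_t *ckh, const void *key,
	    const void *data);
	bool	ckh_remove(tsdn_t *tsdn, ckh_t *ckh, const void *searchkey,
	    void **key, void **data);

ckh_search() and ckh_count() keep their old signatures; only the paths that may allocate or deallocate need the tsdn.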

View File

@@ -14,6 +14,13 @@ TEST_BEGIN(test_fork)
 	assert_ptr_not_null(p, "Unexpected malloc() failure");
 
 	pid = fork();
+
+	free(p);
+
+	p = malloc(64);
+	assert_ptr_not_null(p, "Unexpected malloc() failure");
+	free(p);
+
 	if (pid == -1) {
 		/* Error. */
 		test_fail("Unexpected fork() failure");
@@ -24,11 +31,23 @@ TEST_BEGIN(test_fork)
 		int status;
 
 		/* Parent. */
-		free(p);
-		do {
+		while (true) {
 			if (waitpid(pid, &status, 0) == -1)
 				test_fail("Unexpected waitpid() failure");
-		} while (!WIFEXITED(status) && !WIFSIGNALED(status));
+			if (WIFSIGNALED(status)) {
+				test_fail("Unexpected child termination due to "
+				    "signal %d", WTERMSIG(status));
+				break;
+			}
+			if (WIFEXITED(status)) {
+				if (WEXITSTATUS(status) != 0) {
+					test_fail(
+					    "Unexpected child exit value %d",
+					    WEXITSTATUS(status));
+				}
+				break;
+			}
+		}
 	}
 #else
 	test_skip("fork(2) is irrelevant to Windows");
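The rewritten wait loop distinguishes death-by-signal from a nonzero exit status instead of merely waiting for either to happen. The same pattern as a standalone sketch, assuming only POSIX <sys/wait.h> and substituting stderr output for the harness's test_fail():

	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/types.h>
	#include <sys/wait.h>

	static void
	wait_for_child(pid_t pid)
	{
		int status;

		for (;;) {
			if (waitpid(pid, &status, 0) == -1) {
				perror("waitpid");
				exit(1);
			}
			if (WIFSIGNALED(status)) {
				/* Child was killed by a signal. */
				fprintf(stderr, "child died on signal %d\n",
				    WTERMSIG(status));
				break;
			}
			if (WIFEXITED(status)) {
				/* Child exited; nonzero status is failure. */
				if (WEXITSTATUS(status) != 0) {
					fprintf(stderr,
					    "child exit status %d\n",
					    WEXITSTATUS(status));
				}
				break;
			}
			/* Neither exited nor signaled; keep waiting. */
		}
	}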

View File

@@ -29,7 +29,7 @@ arena_dalloc_junk_small_intercept(void *ptr, arena_bin_info_t *bin_info)
 
 	arena_dalloc_junk_small_orig(ptr, bin_info);
 	for (i = 0; i < bin_info->reg_size; i++) {
-		assert_c_eq(((char *)ptr)[i], 0x5a,
+		assert_u_eq(((uint8_t *)ptr)[i], JEMALLOC_FREE_JUNK,
 		    "Missing junk fill for byte %zu/%zu of deallocated region",
 		    i, bin_info->reg_size);
 	}
@@ -44,7 +44,7 @@ arena_dalloc_junk_large_intercept(void *ptr, size_t usize)
 
 	arena_dalloc_junk_large_orig(ptr, usize);
 	for (i = 0; i < usize; i++) {
-		assert_c_eq(((char *)ptr)[i], 0x5a,
+		assert_u_eq(((uint8_t *)ptr)[i], JEMALLOC_FREE_JUNK,
 		    "Missing junk fill for byte %zu/%zu of deallocated region",
 		    i, usize);
 	}
@@ -53,10 +53,10 @@ arena_dalloc_junk_large_intercept(void *ptr, size_t usize)
 }
 
 static void
-huge_dalloc_junk_intercept(void *ptr, size_t usize)
+huge_dalloc_junk_intercept(tsdn_t *tsdn, void *ptr, size_t usize)
 {
 
-	huge_dalloc_junk_orig(ptr, usize);
+	huge_dalloc_junk_orig(tsdn, ptr, usize);
 	/*
 	 * The conditions under which junk filling actually occurs are nuanced
 	 * enough that it doesn't make sense to duplicate the decision logic in
@@ -69,7 +69,7 @@ huge_dalloc_junk_intercept(void *ptr, size_t usize)
 static void
 test_junk(size_t sz_min, size_t sz_max)
 {
-	char *s;
+	uint8_t *s;
 	size_t sz_prev, sz, i;
 
 	if (opt_junk_free) {
@@ -82,23 +82,23 @@ test_junk(size_t sz_min, size_t sz_max)
 	}
 
 	sz_prev = 0;
-	s = (char *)mallocx(sz_min, 0);
+	s = (uint8_t *)mallocx(sz_min, 0);
 	assert_ptr_not_null((void *)s, "Unexpected mallocx() failure");
 
 	for (sz = sallocx(s, 0); sz <= sz_max;
 	    sz_prev = sz, sz = sallocx(s, 0)) {
 		if (sz_prev > 0) {
-			assert_c_eq(s[0], 'a',
+			assert_u_eq(s[0], 'a',
 			    "Previously allocated byte %zu/%zu is corrupted",
 			    ZU(0), sz_prev);
-			assert_c_eq(s[sz_prev-1], 'a',
+			assert_u_eq(s[sz_prev-1], 'a',
 			    "Previously allocated byte %zu/%zu is corrupted",
 			    sz_prev-1, sz_prev);
 		}
 
 		for (i = sz_prev; i < sz; i++) {
 			if (opt_junk_alloc) {
-				assert_c_eq(s[i], 0xa5,
+				assert_u_eq(s[i], JEMALLOC_ALLOC_JUNK,
 				    "Newly allocated byte %zu/%zu isn't "
 				    "junk-filled", i, sz);
 			}
@@ -107,7 +107,7 @@ test_junk(size_t sz_min, size_t sz_max)
 
 		if (xallocx(s, sz+1, 0, 0) == sz) {
 			watch_junking(s);
-			s = (char *)rallocx(s, sz+1, 0);
+			s = (uint8_t *)rallocx(s, sz+1, 0);
 			assert_ptr_not_null((void *)s,
 			    "Unexpected rallocx() failure");
 			assert_true(!opt_junk_free || saw_junking,
@@ -244,7 +244,6 @@ int
 main(void)
 {
 
-	assert(!config_fill || opt_junk_alloc || opt_junk_free);
 	return (test(
 	    test_junk_small,
 	    test_junk_large,
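The junk bytes themselves are unchanged; they are simply named now, and the buffers are typed uint8_t so byte comparisons cannot sign-extend. The definitions below restate what the replacements above imply; the canonical ones live in jemalloc's internal headers:

	/* As implied by the 0xa5/0x5a replacements in this diff. */
	#define	JEMALLOC_ALLOC_JUNK	((uint8_t)0xa5)	/* Fill on allocation. */
	#define	JEMALLOC_FREE_JUNK	((uint8_t)0x5a)	/* Fill on deallocation. */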

View File

@@ -1,3 +1,3 @@
 #define JEMALLOC_TEST_JUNK_OPT "junk:alloc"
 #include "junk.c"
 #undef JEMALLOC_TEST_JUNK_OPT

View File

@@ -1,3 +1,3 @@
 #define JEMALLOC_TEST_JUNK_OPT "junk:free"
 #include "junk.c"
 #undef JEMALLOC_TEST_JUNK_OPT

test/unit/ph.c (new file, 290 lines)
View File

@@ -0,0 +1,290 @@
#include "test/jemalloc_test.h"
typedef struct node_s node_t;
struct node_s {
#define NODE_MAGIC 0x9823af7e
uint32_t magic;
phn(node_t) link;
uint64_t key;
};
static int
node_cmp(const node_t *a, const node_t *b)
{
int ret;
ret = (a->key > b->key) - (a->key < b->key);
if (ret == 0) {
/*
* Duplicates are not allowed in the heap, so force an
* arbitrary ordering for non-identical items with equal keys.
*/
ret = (((uintptr_t)a) > ((uintptr_t)b))
- (((uintptr_t)a) < ((uintptr_t)b));
}
return (ret);
}
static int
node_cmp_magic(const node_t *a, const node_t *b) {
assert_u32_eq(a->magic, NODE_MAGIC, "Bad magic");
assert_u32_eq(b->magic, NODE_MAGIC, "Bad magic");
return (node_cmp(a, b));
}
typedef ph(node_t) heap_t;
ph_gen(static, heap_, heap_t, node_t, link, node_cmp_magic);
static void
node_print(const node_t *node, unsigned depth)
{
unsigned i;
node_t *leftmost_child, *sibling;
for (i = 0; i < depth; i++)
malloc_printf("\t");
malloc_printf("%2"FMTu64"\n", node->key);
leftmost_child = phn_lchild_get(node_t, link, node);
if (leftmost_child == NULL)
return;
node_print(leftmost_child, depth + 1);
for (sibling = phn_next_get(node_t, link, leftmost_child); sibling !=
NULL; sibling = phn_next_get(node_t, link, sibling)) {
node_print(sibling, depth + 1);
}
}
static void
heap_print(const heap_t *heap)
{
node_t *auxelm;
malloc_printf("vvv heap %p vvv\n", heap);
if (heap->ph_root == NULL)
goto label_return;
node_print(heap->ph_root, 0);
for (auxelm = phn_next_get(node_t, link, heap->ph_root); auxelm != NULL;
auxelm = phn_next_get(node_t, link, auxelm)) {
assert_ptr_eq(phn_next_get(node_t, link, phn_prev_get(node_t,
link, auxelm)), auxelm,
"auxelm's prev doesn't link to auxelm");
node_print(auxelm, 0);
}
label_return:
malloc_printf("^^^ heap %p ^^^\n", heap);
}
static unsigned
node_validate(const node_t *node, const node_t *parent)
{
unsigned nnodes = 1;
node_t *leftmost_child, *sibling;
if (parent != NULL) {
assert_d_ge(node_cmp_magic(node, parent), 0,
"Child is less than parent");
}
leftmost_child = phn_lchild_get(node_t, link, node);
if (leftmost_child == NULL)
return (nnodes);
assert_ptr_eq((void *)phn_prev_get(node_t, link, leftmost_child),
(void *)node, "Leftmost child does not link to node");
nnodes += node_validate(leftmost_child, node);
for (sibling = phn_next_get(node_t, link, leftmost_child); sibling !=
NULL; sibling = phn_next_get(node_t, link, sibling)) {
assert_ptr_eq(phn_next_get(node_t, link, phn_prev_get(node_t,
link, sibling)), sibling,
"sibling's prev doesn't link to sibling");
nnodes += node_validate(sibling, node);
}
return (nnodes);
}
static unsigned
heap_validate(const heap_t *heap)
{
unsigned nnodes = 0;
node_t *auxelm;
if (heap->ph_root == NULL)
goto label_return;
nnodes += node_validate(heap->ph_root, NULL);
for (auxelm = phn_next_get(node_t, link, heap->ph_root); auxelm != NULL;
auxelm = phn_next_get(node_t, link, auxelm)) {
assert_ptr_eq(phn_next_get(node_t, link, phn_prev_get(node_t,
link, auxelm)), auxelm,
"auxelm's prev doesn't link to auxelm");
nnodes += node_validate(auxelm, NULL);
}
label_return:
if (false)
heap_print(heap);
return (nnodes);
}
TEST_BEGIN(test_ph_empty)
{
heap_t heap;
heap_new(&heap);
assert_true(heap_empty(&heap), "Heap should be empty");
assert_ptr_null(heap_first(&heap), "Unexpected node");
}
TEST_END
static void
node_remove(heap_t *heap, node_t *node)
{
heap_remove(heap, node);
node->magic = 0;
}
static node_t *
node_remove_first(heap_t *heap)
{
node_t *node = heap_remove_first(heap);
node->magic = 0;
return (node);
}
TEST_BEGIN(test_ph_random)
{
#define NNODES 25
#define NBAGS 250
#define SEED 42
sfmt_t *sfmt;
uint64_t bag[NNODES];
heap_t heap;
node_t nodes[NNODES];
unsigned i, j, k;
sfmt = init_gen_rand(SEED);
for (i = 0; i < NBAGS; i++) {
switch (i) {
case 0:
/* Insert in order. */
for (j = 0; j < NNODES; j++)
bag[j] = j;
break;
case 1:
/* Insert in reverse order. */
for (j = 0; j < NNODES; j++)
bag[j] = NNODES - j - 1;
break;
default:
for (j = 0; j < NNODES; j++)
bag[j] = gen_rand64_range(sfmt, NNODES);
}
for (j = 1; j <= NNODES; j++) {
/* Initialize heap and nodes. */
heap_new(&heap);
assert_u_eq(heap_validate(&heap), 0,
"Incorrect node count");
for (k = 0; k < j; k++) {
nodes[k].magic = NODE_MAGIC;
nodes[k].key = bag[k];
}
/* Insert nodes. */
for (k = 0; k < j; k++) {
heap_insert(&heap, &nodes[k]);
if (i % 13 == 12) {
/* Trigger merging. */
assert_ptr_not_null(heap_first(&heap),
"Heap should not be empty");
}
assert_u_eq(heap_validate(&heap), k + 1,
"Incorrect node count");
}
assert_false(heap_empty(&heap),
"Heap should not be empty");
/* Remove nodes. */
switch (i % 4) {
case 0:
for (k = 0; k < j; k++) {
assert_u_eq(heap_validate(&heap), j - k,
"Incorrect node count");
node_remove(&heap, &nodes[k]);
assert_u_eq(heap_validate(&heap), j - k
- 1, "Incorrect node count");
}
break;
case 1:
for (k = j; k > 0; k--) {
node_remove(&heap, &nodes[k-1]);
assert_u_eq(heap_validate(&heap), k - 1,
"Incorrect node count");
}
break;
case 2: {
node_t *prev = NULL;
for (k = 0; k < j; k++) {
node_t *node = node_remove_first(&heap);
assert_u_eq(heap_validate(&heap), j - k
- 1, "Incorrect node count");
if (prev != NULL) {
assert_d_ge(node_cmp(node,
prev), 0,
"Bad removal order");
}
prev = node;
}
break;
} case 3: {
node_t *prev = NULL;
for (k = 0; k < j; k++) {
node_t *node = heap_first(&heap);
assert_u_eq(heap_validate(&heap), j - k,
"Incorrect node count");
if (prev != NULL) {
assert_d_ge(node_cmp(node,
prev), 0,
"Bad removal order");
}
node_remove(&heap, node);
assert_u_eq(heap_validate(&heap), j - k
- 1, "Incorrect node count");
prev = node;
}
break;
} default:
not_reached();
}
assert_ptr_null(heap_first(&heap),
"Heap should be empty");
assert_true(heap_empty(&heap), "Heap should be empty");
}
}
fini_gen_rand(sfmt);
#undef NNODES
#undef NBAGS
#undef SEED
}
TEST_END
int
main(void)
{
return (test(
test_ph_empty,
test_ph_random));
}
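The heap_* functions above are not hand-written; ph_gen() expands a pairing-heap template over the phn(node_t) link field. A minimal usage sketch, assuming the same node_t and the generated heap_t API this test exercises:

	heap_t heap;
	node_t n;

	heap_new(&heap);
	n.magic = NODE_MAGIC;
	n.key = 7;
	heap_insert(&heap, &n);
	assert_ptr_eq(heap_first(&heap), &n, "Expected n at the root");
	assert_ptr_eq(heap_remove_first(&heap), &n, "Expected n to be removed");
	assert_true(heap_empty(&heap), "Heap should be empty");

Because node_cmp() breaks key ties by address, the heap sees a strict total order even when keys collide, which lets heap_remove_first() return duplicates in a deterministic (if arbitrary) order.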

View File

@@ -94,7 +94,8 @@ TEST_END
 bool prof_dump_header_intercepted = false;
 prof_cnt_t cnt_all_copy = {0, 0, 0, 0};
 static bool
-prof_dump_header_intercept(bool propagate_err, const prof_cnt_t *cnt_all)
+prof_dump_header_intercept(tsdn_t *tsdn, bool propagate_err,
+    const prof_cnt_t *cnt_all)
 {
 
 	prof_dump_header_intercepted = true;

View File

@@ -220,11 +220,11 @@ TEST_BEGIN(test_stats_arenas_large)
 
 	if (config_stats) {
 		assert_zu_gt(allocated, 0,
 		    "allocated should be greater than zero");
-		assert_zu_gt(nmalloc, 0,
+		assert_u64_gt(nmalloc, 0,
 		    "nmalloc should be greater than zero");
-		assert_zu_ge(nmalloc, ndalloc,
+		assert_u64_ge(nmalloc, ndalloc,
 		    "nmalloc should be at least as large as ndalloc");
-		assert_zu_gt(nrequests, 0,
+		assert_u64_gt(nrequests, 0,
 		    "nrequests should be greater than zero");
 	}
@@ -262,9 +262,9 @@ TEST_BEGIN(test_stats_arenas_huge)
 
 	if (config_stats) {
 		assert_zu_gt(allocated, 0,
 		    "allocated should be greater than zero");
-		assert_zu_gt(nmalloc, 0,
+		assert_u64_gt(nmalloc, 0,
 		    "nmalloc should be greater than zero");
-		assert_zu_ge(nmalloc, ndalloc,
+		assert_u64_ge(nmalloc, ndalloc,
 		    "nmalloc should be at least as large as ndalloc");
 	}
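The assertion widths now match the counters' declared types: nmalloc, ndalloc, and nrequests are uint64_t, while allocated remains size_t. A sketch of how such a counter is read via the documented mallctl interface (arena index 0 chosen arbitrarily; error handling reduced to the assertion):

	uint64_t nmalloc;
	size_t sz = sizeof(nmalloc);

	assert_d_eq(mallctl("stats.arenas.0.large.nmalloc", &nmalloc, &sz,
	    NULL, 0), 0, "Unexpected mallctl() failure");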

View File

@@ -99,6 +99,11 @@ int
 main(void)
 {
 
+	/* Core tsd bootstrapping must happen prior to data_tsd_boot(). */
+	if (nallocx(1, 0) == 0) {
+		malloc_printf("Initialization error");
+		return (test_status_fail);
+	}
 	data_tsd_boot();
 
 	return (test(

View File

@@ -4,27 +4,27 @@
 	unsigned i, pow2;						\
 	t x;								\
 									\
-	assert_zu_eq(pow2_ceil_##suf(0), 0, "Unexpected result");	\
+	assert_##suf##_eq(pow2_ceil_##suf(0), 0, "Unexpected result");	\
 									\
 	for (i = 0; i < sizeof(t) * 8; i++) {				\
-		assert_zu_eq(pow2_ceil_##suf(((t)1) << i), ((t)1) << i,	\
-		    "Unexpected result");				\
+		assert_##suf##_eq(pow2_ceil_##suf(((t)1) << i), ((t)1)	\
+		    << i, "Unexpected result");				\
 	}								\
 									\
 	for (i = 2; i < sizeof(t) * 8; i++) {				\
-		assert_zu_eq(pow2_ceil_##suf((((t)1) << i) - 1),	\
+		assert_##suf##_eq(pow2_ceil_##suf((((t)1) << i) - 1),	\
 		    ((t)1) << i, "Unexpected result");			\
 	}								\
 									\
 	for (i = 0; i < sizeof(t) * 8 - 1; i++) {			\
-		assert_zu_eq(pow2_ceil_##suf((((t)1) << i) + 1),	\
+		assert_##suf##_eq(pow2_ceil_##suf((((t)1) << i) + 1),	\
 		    ((t)1) << (i+1), "Unexpected result");		\
 	}								\
 									\
 	for (pow2 = 1; pow2 < 25; pow2++) {				\
 		for (x = (((t)1) << (pow2-1)) + 1; x <= ((t)1) << pow2;	\
 		    x++) {						\
-			assert_zu_eq(pow2_ceil_##suf(x),		\
+			assert_##suf##_eq(pow2_ceil_##suf(x),		\
 			    ((t)1) << pow2,				\
 			    "Unexpected result, x=%"pri, x);		\
 		}							\
@@ -160,14 +160,14 @@ TEST_BEGIN(test_malloc_snprintf_truncated)
 {
 #define BUFLEN 15
 	char buf[BUFLEN];
-	int result;
+	size_t result;
 	size_t len;
 #define TEST(expected_str_untruncated, ...) do {			\
 	result = malloc_snprintf(buf, len, __VA_ARGS__);		\
 	assert_d_eq(strncmp(buf, expected_str_untruncated, len-1), 0,	\
 	    "Unexpected string inequality (\"%s\" vs \"%s\")",		\
 	    buf, expected_str_untruncated);				\
-	assert_d_eq(result, strlen(expected_str_untruncated),		\
+	assert_zu_eq(result, strlen(expected_str_untruncated),		\
 	    "Unexpected result");					\
 } while (0)
@@ -193,11 +193,11 @@ TEST_BEGIN(test_malloc_snprintf)
 {
 #define BUFLEN 128
 	char buf[BUFLEN];
-	int result;
+	size_t result;
 
 #define TEST(expected_str, ...) do {					\
 	result = malloc_snprintf(buf, sizeof(buf), __VA_ARGS__);	\
 	assert_str_eq(buf, expected_str, "Unexpected output");		\
-	assert_d_eq(result, strlen(expected_str), "Unexpected result"); \
+	assert_zu_eq(result, strlen(expected_str), "Unexpected result");\
 } while (0)
 
 	TEST("hello", "hello");
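Two independent fixes share this file: the pow2_ceil tests now paste the width suffix into the assertion macro (assert_u32_eq, assert_u64_eq, and so on) so each comparison uses a correctly typed macro, and malloc_snprintf()'s result is held in a size_t, matching its return type. For reference, a standard bit-twiddling formulation of the power-of-two ceiling these tests verify; it is assumed equivalent in effect to jemalloc's pow2_ceil_u64(), not copied from it:

	/* Smallest power of two >= x; maps 0 to 0, as the tests expect. */
	static uint64_t
	pow2_ceil_u64_sketch(uint64_t x)
	{

		x--;
		x |= x >> 1;
		x |= x >> 2;
		x |= x >> 4;
		x |= x >> 8;
		x |= x >> 16;
		x |= x >> 32;
		x++;
		return (x);
	}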

test/unit/witness.c (new file, 278 lines)
View File

@@ -0,0 +1,278 @@
#include "test/jemalloc_test.h"
static witness_lock_error_t *witness_lock_error_orig;
static witness_owner_error_t *witness_owner_error_orig;
static witness_not_owner_error_t *witness_not_owner_error_orig;
static witness_lockless_error_t *witness_lockless_error_orig;
static bool saw_lock_error;
static bool saw_owner_error;
static bool saw_not_owner_error;
static bool saw_lockless_error;
static void
witness_lock_error_intercept(const witness_list_t *witnesses,
const witness_t *witness)
{
saw_lock_error = true;
}
static void
witness_owner_error_intercept(const witness_t *witness)
{
saw_owner_error = true;
}
static void
witness_not_owner_error_intercept(const witness_t *witness)
{
saw_not_owner_error = true;
}
static void
witness_lockless_error_intercept(const witness_list_t *witnesses)
{
saw_lockless_error = true;
}
static int
witness_comp(const witness_t *a, const witness_t *b)
{
assert_u_eq(a->rank, b->rank, "Witnesses should have equal rank");
return (strcmp(a->name, b->name));
}
static int
witness_comp_reverse(const witness_t *a, const witness_t *b)
{
assert_u_eq(a->rank, b->rank, "Witnesses should have equal rank");
return (-strcmp(a->name, b->name));
}
TEST_BEGIN(test_witness)
{
witness_t a, b;
tsdn_t *tsdn;
test_skip_if(!config_debug);
tsdn = tsdn_fetch();
witness_assert_lockless(tsdn);
witness_init(&a, "a", 1, NULL);
witness_assert_not_owner(tsdn, &a);
witness_lock(tsdn, &a);
witness_assert_owner(tsdn, &a);
witness_init(&b, "b", 2, NULL);
witness_assert_not_owner(tsdn, &b);
witness_lock(tsdn, &b);
witness_assert_owner(tsdn, &b);
witness_unlock(tsdn, &a);
witness_unlock(tsdn, &b);
witness_assert_lockless(tsdn);
}
TEST_END
TEST_BEGIN(test_witness_comp)
{
witness_t a, b, c, d;
tsdn_t *tsdn;
test_skip_if(!config_debug);
tsdn = tsdn_fetch();
witness_assert_lockless(tsdn);
witness_init(&a, "a", 1, witness_comp);
witness_assert_not_owner(tsdn, &a);
witness_lock(tsdn, &a);
witness_assert_owner(tsdn, &a);
witness_init(&b, "b", 1, witness_comp);
witness_assert_not_owner(tsdn, &b);
witness_lock(tsdn, &b);
witness_assert_owner(tsdn, &b);
witness_unlock(tsdn, &b);
witness_lock_error_orig = witness_lock_error;
witness_lock_error = witness_lock_error_intercept;
saw_lock_error = false;
witness_init(&c, "c", 1, witness_comp_reverse);
witness_assert_not_owner(tsdn, &c);
assert_false(saw_lock_error, "Unexpected witness lock error");
witness_lock(tsdn, &c);
assert_true(saw_lock_error, "Expected witness lock error");
witness_unlock(tsdn, &c);
saw_lock_error = false;
witness_init(&d, "d", 1, NULL);
witness_assert_not_owner(tsdn, &d);
assert_false(saw_lock_error, "Unexpected witness lock error");
witness_lock(tsdn, &d);
assert_true(saw_lock_error, "Expected witness lock error");
witness_unlock(tsdn, &d);
witness_unlock(tsdn, &a);
witness_assert_lockless(tsdn);
witness_lock_error = witness_lock_error_orig;
}
TEST_END
TEST_BEGIN(test_witness_reversal)
{
witness_t a, b;
tsdn_t *tsdn;
test_skip_if(!config_debug);
witness_lock_error_orig = witness_lock_error;
witness_lock_error = witness_lock_error_intercept;
saw_lock_error = false;
tsdn = tsdn_fetch();
witness_assert_lockless(tsdn);
witness_init(&a, "a", 1, NULL);
witness_init(&b, "b", 2, NULL);
witness_lock(tsdn, &b);
assert_false(saw_lock_error, "Unexpected witness lock error");
witness_lock(tsdn, &a);
assert_true(saw_lock_error, "Expected witness lock error");
witness_unlock(tsdn, &a);
witness_unlock(tsdn, &b);
witness_assert_lockless(tsdn);
witness_lock_error = witness_lock_error_orig;
}
TEST_END
TEST_BEGIN(test_witness_recursive)
{
witness_t a;
tsdn_t *tsdn;
test_skip_if(!config_debug);
witness_not_owner_error_orig = witness_not_owner_error;
witness_not_owner_error = witness_not_owner_error_intercept;
saw_not_owner_error = false;
witness_lock_error_orig = witness_lock_error;
witness_lock_error = witness_lock_error_intercept;
saw_lock_error = false;
tsdn = tsdn_fetch();
witness_assert_lockless(tsdn);
witness_init(&a, "a", 1, NULL);
witness_lock(tsdn, &a);
assert_false(saw_lock_error, "Unexpected witness lock error");
assert_false(saw_not_owner_error, "Unexpected witness not owner error");
witness_lock(tsdn, &a);
assert_true(saw_lock_error, "Expected witness lock error");
assert_true(saw_not_owner_error, "Expected witness not owner error");
witness_unlock(tsdn, &a);
witness_assert_lockless(tsdn);
witness_not_owner_error = witness_not_owner_error_orig;
witness_lock_error = witness_lock_error_orig;
}
TEST_END
TEST_BEGIN(test_witness_unlock_not_owned)
{
witness_t a;
tsdn_t *tsdn;
test_skip_if(!config_debug);
witness_owner_error_orig = witness_owner_error;
witness_owner_error = witness_owner_error_intercept;
saw_owner_error = false;
tsdn = tsdn_fetch();
witness_assert_lockless(tsdn);
witness_init(&a, "a", 1, NULL);
assert_false(saw_owner_error, "Unexpected owner error");
witness_unlock(tsdn, &a);
assert_true(saw_owner_error, "Expected owner error");
witness_assert_lockless(tsdn);
witness_owner_error = witness_owner_error_orig;
}
TEST_END
TEST_BEGIN(test_witness_lockful)
{
witness_t a;
tsdn_t *tsdn;
test_skip_if(!config_debug);
witness_lockless_error_orig = witness_lockless_error;
witness_lockless_error = witness_lockless_error_intercept;
saw_lockless_error = false;
tsdn = tsdn_fetch();
witness_assert_lockless(tsdn);
witness_init(&a, "a", 1, NULL);
assert_false(saw_lockless_error, "Unexpected lockless error");
witness_assert_lockless(tsdn);
witness_lock(tsdn, &a);
witness_assert_lockless(tsdn);
assert_true(saw_lockless_error, "Expected lockless error");
witness_unlock(tsdn, &a);
witness_assert_lockless(tsdn);
witness_lockless_error = witness_lockless_error_orig;
}
TEST_END
int
main(void)
{
return (test(
test_witness,
test_witness_comp,
test_witness_reversal,
test_witness_recursive,
test_witness_unlock_not_owned,
test_witness_lockful));
}
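Taken together, the tests pin down the witness contract: each witness carries a rank; locks must be acquired in nondecreasing rank order, with equal ranks allowed only when a comparison callback orders them; recursive acquisition is an error; and unlocking an unowned witness is an error. A minimal correct-usage sketch built from the same calls the tests make:

	witness_t wa, wb;
	tsdn_t *tsdn = tsdn_fetch();

	witness_init(&wa, "wa", 1, NULL);	/* Rank 1. */
	witness_init(&wb, "wb", 2, NULL);	/* Rank 2. */

	witness_lock(tsdn, &wa);	/* Lower rank first. */
	witness_lock(tsdn, &wb);	/* Higher rank second: no error. */
	witness_unlock(tsdn, &wb);
	witness_unlock(tsdn, &wa);
	witness_assert_lockless(tsdn);	/* Nothing should be held now. */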

View File

@@ -8,39 +8,41 @@ const char *malloc_conf =
 static void
 test_zero(size_t sz_min, size_t sz_max)
 {
-	char *s;
+	uint8_t *s;
 	size_t sz_prev, sz, i;
+#define MAGIC ((uint8_t)0x61)
 
 	sz_prev = 0;
-	s = (char *)mallocx(sz_min, 0);
+	s = (uint8_t *)mallocx(sz_min, 0);
 	assert_ptr_not_null((void *)s, "Unexpected mallocx() failure");
 
 	for (sz = sallocx(s, 0); sz <= sz_max;
 	    sz_prev = sz, sz = sallocx(s, 0)) {
 		if (sz_prev > 0) {
-			assert_c_eq(s[0], 'a',
+			assert_u_eq(s[0], MAGIC,
 			    "Previously allocated byte %zu/%zu is corrupted",
 			    ZU(0), sz_prev);
-			assert_c_eq(s[sz_prev-1], 'a',
+			assert_u_eq(s[sz_prev-1], MAGIC,
 			    "Previously allocated byte %zu/%zu is corrupted",
 			    sz_prev-1, sz_prev);
 		}
 
 		for (i = sz_prev; i < sz; i++) {
-			assert_c_eq(s[i], 0x0,
+			assert_u_eq(s[i], 0x0,
 			    "Newly allocated byte %zu/%zu isn't zero-filled",
 			    i, sz);
-			s[i] = 'a';
+			s[i] = MAGIC;
 		}
 
 		if (xallocx(s, sz+1, 0, 0) == sz) {
-			s = (char *)rallocx(s, sz+1, 0);
+			s = (uint8_t *)rallocx(s, sz+1, 0);
 			assert_ptr_not_null((void *)s,
 			    "Unexpected rallocx() failure");
 		}
 	}
 
 	dallocx(s, 0);
+#undef MAGIC
 }
 
 TEST_BEGIN(test_zero_small)