When multiple threads call stats_print, a race can occur because the counters
are read in separate mallctl calls, and the removed assertion could fail when
other operations happen in between those calls. For simplicity, output "race"
in the utilization field in this case.
This resolves #616.
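A minimal sketch of the racy read pattern, for illustration only: the counters
come from separate mallctl calls, so a concurrent allocation can make the
snapshot inconsistent. Arena/bin index 0 and the utilization formula are
stand-ins, not the actual stats_print code.

    #include <stdio.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);
        mallctl("epoch", &epoch, &sz, &epoch, sz);  /* refresh stats */

        size_t curregs, curruns;
        sz = sizeof(curregs);
        mallctl("stats.arenas.0.bins.0.curregs", &curregs, &sz, NULL, 0);
        /* Another thread may allocate/deallocate here... */
        sz = sizeof(curruns);
        mallctl("stats.arenas.0.bins.0.curruns", &curruns, &sz, NULL, 0);

        uint32_t nregs;  /* regions per run, from the static bin info */
        sz = sizeof(nregs);
        mallctl("arenas.bin.0.nregs", &nregs, &sz, NULL, 0);

        if (curregs > curruns * nregs) {
            /* Inconsistent snapshot; report rather than assert. */
            printf("util: race\n");
        } else if (curruns > 0) {
            printf("util: %.3f\n", (double)curregs / (curruns * nregs));
        }
        return 0;
    }
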
Fix lg_chunk clamping to take into account cache-oblivious large
allocation. This regression only resulted in incorrect behavior if
!config_fill (false unless --disable-fill specified) and
config_cache_oblivious (true unless --disable-cache-oblivious
specified).
This regression was introduced by
8a03cf039cd06f9fa6972711195055d865673966 (Implement cache index
randomization for large allocations.), which was first released in
4.0.0.
This resolves #555.
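A hedged sketch of the clamping idea: cache-index randomization prepends up to
one page of offset to large allocations, so the minimum chunk size must leave
room for that extra page. The constant arithmetic and names below are
illustrative stand-ins, not the actual clamp.

    #define LG_PAGE 12

    /* Illustrative only: the smallest chunk must hold one page-sized
     * large allocation plus the randomization page when
     * config_cache_oblivious is enabled. */
    static unsigned
    clamp_lg_chunk(unsigned opt_lg_chunk, int cache_oblivious) {
        unsigned min_lg = LG_PAGE + 1 + (cache_oblivious ? 1 : 0);
        return (opt_lg_chunk < min_lg) ? min_lg : opt_lg_chunk;
    }
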
Remove obsolete unit test scaffolding for extent quantization. Remove
redundant assertions. Add an assertion to
extents_first_best_fit_locked() that should help prevent aligned
allocation regressions.
This regression was caused by
b9408d77a63a54fd331f9b81c884f68e6d57f2e5 (Fix/simplify chunk_recycle()
allocation size computations.).
This resolves #647.
Implement and test a JSON validation parser. Use the parser to validate
JSON output from malloc_stats_print(), with a significant subset of
supported output options.
This resolves #583.
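A small example of driving malloc_stats_print() through its write callback to
collect the JSON for validation; the buffer size and callback are illustrative,
not the test code itself.

    #include <stdio.h>
    #include <string.h>
    #include <jemalloc/jemalloc.h>

    static char buf[1 << 16];
    static size_t off;

    /* malloc_stats_print() invokes this once per output fragment. */
    static void
    write_cb(void *cbopaque, const char *s) {
        (void)cbopaque;
        size_t len = strlen(s);
        if (off + len < sizeof(buf)) {
            memcpy(buf + off, s, len);
            off += len;
        }
    }

    int main(void) {
        /* "J" selects JSON-formatted output. */
        malloc_stats_print(write_cb, NULL, "J");
        /* buf now holds a JSON document for a validation parser. */
        printf("collected %zu bytes of JSON\n", off);
        return 0;
    }
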
These bugs were introduced by b599b32280e1142856b0b96293a71e1684b1ccfb
(Add "J" (JSON) support to malloc_stats_print().), which was first
released in 4.3.0.
This resolves #615.
Fix chunk_alloc_dss() to account for bytes that are not a multiple of
the chunk size. This regression was introduced by
e2bcf037d445a84a71c7997670819ebd0a893b4a (Make dss operations
lockless.), which was first released in 4.3.0.
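The essence of the accounting fix, as an illustrative sketch (the names and
chunk-size constant are stand-ins): the dss end pointer is not necessarily
chunk-aligned, so the leading pad bytes must be included in the size requested
from sbrk.

    #include <stdint.h>
    #include <stddef.h>

    #define CHUNK_SIZE ((size_t)1 << 21)

    /* Bytes to request so that the allocation starts at the next
     * chunk boundary at or above dss_max. */
    static size_t
    dss_request_size(void *dss_max, size_t size) {
        uintptr_t base = (uintptr_t)dss_max;
        size_t gap = (CHUNK_SIZE - (base & (CHUNK_SIZE - 1))) &
            (CHUNK_SIZE - 1);
        /* Omitting gap under-requests whenever base is misaligned. */
        return gap + size;
    }
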
We don't touch witness at all when config_debug == false, so only pay the
memory cost in malloc_mutex_s when it is needed. Note that when !config_debug,
we keep the field in a union so that we don't have to scatter #ifdefs across
multiple places.
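A simplified sketch of the layout trick (the types and lock member are
stand-ins for the real malloc_mutex_s): in !config_debug builds the witness
field overlays the lock inside a union, so mutex->witness always compiles yet
adds no size; in debug builds it is a real field.

    #include <pthread.h>

    typedef struct witness_s {
        const char *name;
        unsigned rank;
    } witness_t;

    struct malloc_mutex_s {
        union {
            pthread_mutex_t lock;
    #if !defined(JEMALLOC_DEBUG)
            /* Never touched when !config_debug; shares storage
             * with the lock, so it costs no extra memory. */
            witness_t witness;
    #endif
        };
    #if defined(JEMALLOC_DEBUG)
        witness_t witness;  /* real storage, debug builds only */
    #endif
    };
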
In some cases the prof machinery allocates (in order to modify the
bt2gctx hash table), and such operations are synchronized via
bt2gctx_mtx. Rather than asserting that no locks are held on entry
into functions that may call prof_gdump(), make the weaker assertion
that no "core" locks are held. The prof machinery enqueues dumps
triggered by prof_gdump() calls when bt2gctx_mtx is held, so this
weakened assertion avoids false failures in such cases.
This fixes interactions with witness_assert_depth[_to_rank](), which was
added in dad74bd3c811ca2b1af1fd57b28f2456da5ba08b (Convert
witness_assert_lockless() to witness_assert_lock_depth().).
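A self-contained sketch of the weakened precondition; the rank constant, the
held-locks list, and all names are illustrative stand-ins for the internal
witness machinery.

    #include <assert.h>
    #include <stddef.h>

    typedef struct witness_s {
        unsigned rank;
        struct witness_s *link;  /* per-thread list of held locks */
    } witness_t;

    #define RANK_CORE 64u

    /* Too strong: fails when bt2gctx_mtx (rank > RANK_CORE) is held. */
    static void
    assert_lockless(const witness_t *held) {
        assert(held == NULL);
    }

    /* Weaker: only "core" locks must not be held on entry. */
    static void
    assert_no_core_locks(const witness_t *held) {
        for (const witness_t *w = held; w != NULL; w = w->link) {
            assert(w->rank > RANK_CORE);
        }
    }
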
Synchronize tcaches with tcaches_mtx rather than ctl_mtx. Add missing
synchronization for tcache flushing. This bug was introduced by
1cb181ed632e7573fb4eab194e4d216867222d27 (Implement explicit tcache
support.), which was first released in 4.0.0.
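A pthread-based sketch of the intended discipline (illustrative names; the
real code uses jemalloc's internal malloc_mutex_t): the global tcaches list
gets a dedicated mutex instead of piggybacking on ctl_mtx, and flushing takes
it as well.

    #include <pthread.h>
    #include <stddef.h>

    typedef struct tcaches_elm_s {
        struct tcaches_elm_s *next;
        /* ... cached objects ... */
    } tcaches_elm_t;

    static pthread_mutex_t tcaches_mtx = PTHREAD_MUTEX_INITIALIZER;
    static tcaches_elm_t *tcaches_avail;

    static void
    tcaches_elm_flush(tcaches_elm_t *elm) {
        /* Previously unsynchronized; now covered by the list lock. */
        pthread_mutex_lock(&tcaches_mtx);
        /* ... flush elm and relink it onto tcaches_avail ... */
        elm->next = tcaches_avail;
        tcaches_avail = elm;
        pthread_mutex_unlock(&tcaches_mtx);
    }
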
malloc_conf does not reliably work with MSVC, which complains of
"inconsistent dll linkage", i.e. its inability to support the
application overriding malloc_conf when dynamically linking/loading.
Work around this limitation by adding test harness support for per test
shell script sourcing, and converting all tests to use MALLOC_CONF
instead of malloc_conf.
This regression was caused by 8f61fdedb908c29905103b22dda32ceb29cd8ede
(Uniformly cast mallctl[bymib]() oldp/newp arguments to (void *).).
This resolves #538.
Introduces gen_travis.py, which generates .travis.yml, and updates .travis.yml
to the generated version.
The Travis build matrix approach doesn't play well with mixing and matching
various environment settings, so we generate every build explicitly rather
than letting Travis do it for us.
To avoid abusing Travis resources (and to save time waiting for CI results),
we don't test every possible combination of options; we only check up to two
unusual settings at a time.
Extent splitting and coalescing is a major component of large allocation
overhead, and disabling coalescing of cached extents provides a simple
and effective hysteresis mechanism. Once two-phase purging is
implemented, it will probably make sense to leave coalescing disabled
for the first phase, but coalesce during the second phase.
This avoids a gcc diagnostic note:
note: The ABI for passing parameters with 64-byte alignment has
changed in GCC 4.6
This note relates to the cacheline alignment of rtree_ctx_t, which was
introduced by 4a346f55939af4f200121cc4454089592d952f18 (Replace rtree
path cache with LRU cache.).
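For illustration, one common way this note is triggered and avoided (not
necessarily the exact change this commit made): passing an over-aligned
aggregate by value runs into the changed parameter-passing ABI, while passing
a pointer does not.

    #include <stdint.h>

    typedef struct {
        uintptr_t slots[8];
    } __attribute__((aligned(64))) ctx_t;

    uintptr_t
    by_value(ctx_t ctx) {  /* gcc may emit the ABI note here */
        return ctx.slots[0];
    }

    uintptr_t
    by_pointer(const ctx_t *ctx) {  /* only a pointer is passed */
        return ctx->slots[0];
    }
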
Fix extent_alloc_dss() to account for bytes that are not a multiple of
the page size. This regression was introduced by
577d4572b0821a15e5370f9bf566d884b7cf707c (Make dss operations
lockless.), which was first released in 4.3.0.
Fix rtree_subkey() to use uintptr_t rather than unsigned for key
bitmasking. This regression was introduced by
4a346f55939af4f200121cc4454089592d952f18 (Replace rtree path cache with
LRU cache.).
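A sketch of the fix (the level geometry parameters are illustrative): the mask
must be computed at uintptr_t width, because an unsigned mask overflows once a
level spans 32 or more key bits.

    #include <stdint.h>

    static inline uintptr_t
    rtree_subkey(uintptr_t key, unsigned shiftbits, unsigned maskbits) {
        /* Buggy: unsigned mask = (1U << maskbits) - 1;
         * undefined/truncated when maskbits >= 32. */
        uintptr_t mask = ((uintptr_t)1 << maskbits) - 1;
        return (key >> shiftbits) & mask;
    }
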
This fixes interactions with witness_assert_depth[_to_rank](), which was
added in d0e93ada51e20f4ae394ff4dbdcf96182767c89c (Add
witness_assert_depth[_to_rank]().).
Rather than dynamically building a table to aid per level computations,
define a constant table at compile time. Omit both high and low
insignificant bits. Use one to three tree levels, depending on the
number of significant bits.
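An illustrative compile-time table, assuming 48 significant virtual-address
bits with the low 12 bits covered by the page offset; the names and the
two-level split are stand-ins for the generated constants.

    #define LG_VADDR 48  /* significant key bits */
    #define LG_PAGE  12  /* low bits omitted from the tree */
    #define RTREE_NSB (LG_VADDR - LG_PAGE)  /* 36 bits to map */

    typedef struct {
        unsigned bits;     /* key bits consumed at this level */
        unsigned cumbits;  /* cumulative bits through this level */
    } rtree_level_t;

    /* Two 18-bit levels cover the 36 significant bits; a smaller
     * RTREE_NSB would yield one level, a larger one three. */
    static const rtree_level_t rtree_levels[] = {
        {RTREE_NSB / 2, RTREE_NSB / 2},
        {RTREE_NSB - RTREE_NSB / 2, RTREE_NSB}
    };
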
Rework rtree_ctx_t to encapsulate an rtree leaf LRU lookup cache rather
than a single-path element lookup cache. The replacement is logically
much simpler, as well as slightly faster in the fast path case and less
prone to degraded performance during non-trivial sequences of lookups.
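A self-contained sketch of the scheme (cache size, leaf geometry, and the
move-to-front policy are simplified stand-ins): each cache entry pairs a key's
leaf-sized prefix with the leaf it maps to, so the fast path is a single
compare against slot 0.

    #include <stdint.h>

    #define LG_LEAF_KEYS 8  /* low key bits resolved within one leaf */
    #define LEAF_MASK    (((uintptr_t)1 << LG_LEAF_KEYS) - 1)
    #define NCACHE       8

    typedef struct { void *slots[1 << LG_LEAF_KEYS]; } leaf_t;
    typedef struct { uintptr_t leafkey; leaf_t *leaf; } cache_elm_t;
    typedef struct { cache_elm_t cache[NCACHE]; } rtree_ctx_t;

    static void *
    rtree_lookup(rtree_ctx_t *ctx, uintptr_t key,
        leaf_t *(*tree_walk)(uintptr_t)) {
        uintptr_t leafkey = key & ~LEAF_MASK;
        /* Fast path: the most recently used leaf sits in slot 0. */
        if (ctx->cache[0].leafkey == leafkey) {
            return ctx->cache[0].leaf->slots[key & LEAF_MASK];
        }
        for (unsigned i = 1; i < NCACHE; i++) {
            if (ctx->cache[i].leafkey != leafkey) {
                continue;
            }
            /* Deeper hit: swap toward the front so hot leaves
             * stay on the fast path. */
            cache_elm_t hit = ctx->cache[i];
            ctx->cache[i] = ctx->cache[0];
            ctx->cache[0] = hit;
            return hit.leaf->slots[key & LEAF_MASK];
        }
        /* Miss: walk the tree, shift entries down (evicting the
         * least recently used), and install at the front. */
        leaf_t *leaf = tree_walk(leafkey);
        for (unsigned i = NCACHE - 1; i > 0; i--) {
            ctx->cache[i] = ctx->cache[i - 1];
        }
        ctx->cache[0] = (cache_elm_t){leafkey, leaf};
        return leaf->slots[key & LEAF_MASK];
    }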