Bug fix for prof_active switch

The bug is subtle but critical: if the application performs the following
three actions in sequence: (a) turn `prof_active` off, (b) make at
least one allocation that triggers the malloc slow path via the
`if (unlikely(bytes_until_sample < 0))` check, and (c) turn
`prof_active` back on, then the application would never get another
sample (not until the thread had allocated `SSIZE_MAX` more bytes,
i.e. effectively never).

The fix is to properly reset `bytes_until_sample` rather than
throwing it all the way to `SSIZE_MAX`.

A minor side change is to call `prof_active_get_unlocked()` rather
than reading the `prof_active` variable directly - that is the very
reason the `prof_active_get_unlocked()` function was defined.
Yinan Zhang 2019-08-21 16:38:44 -07:00
parent 0043e68d4c
commit 9e031c1d11

@@ -2356,15 +2356,17 @@ je_malloc(size_t size) {
 		if (unlikely(bytes_until_sample < 0)) {
 			/*
 			 * Avoid a prof_active check on the fastpath.
 			 * If prof_active is false, set bytes_until_sample to
-			 * a large value.  If prof_active is set to true,
+			 * sampling interval.  If prof_active is set to true,
 			 * bytes_until_sample will be reset.
 			 */
-			if (!prof_active) {
-				tsd_bytes_until_sample_set(tsd, SSIZE_MAX);
-			}
-			return malloc_default(size);
+			if (!prof_active_get_unlocked()) {
+				tsd_bytes_until_sample_set(tsd,
+				    ((uint64_t)1U << lg_prof_sample));
+			} else {
+				return malloc_default(size);
+			}
 		}
 	}
 	cache_bin_t *bin = tcache_small_bin_get(tcache, ind);
 	bool tcache_success;