Fix numerous arena bugs.

In arena_ralloc_large_grow(), update the map element for the end of the
newly grown run, rather than the interior map element that was the
beginning of the appended run.  This is a long-standing bug, and it had
the potential to cause massive corruption, but triggering it required
roughly the following sequence of events:
  1) Large in-place growing realloc(), with left-over space in the run
     that followed the large object.
  2) Allocation of the remainder run left over from (1).
  3) Deallocation of the remainder run *before* deallocation of the
     large run, with unfortunate interior map state left over from
     previous run allocation/deallocation activity, such that one or
     more pages of allocated memory would be treated as part of the
     remainder run during run coalescing.
In summary, this was a bad bug, but it was difficult to trigger.
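
For illustration, here is a minimal, self-contained model of the invariant involved (all names and the layout are hypothetical; this is not jemalloc's actual chunk map code): a run records its size at both its first and last page map elements so that coalescing can locate run boundaries from either side, so an in-place grow has to rewrite those boundary elements rather than an element that ends up in the interior of the enlarged run.

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy page map: one element per page; run sizes act as boundary tags. */
typedef struct {
    size_t run_pages;   /* Valid at a run's first and last page. */
    bool   allocated;
} map_elm_t;

/*
 * Grow the allocated run that starts at page run_ind by grow_pages, taking
 * the pages from the free run that immediately follows it.
 */
static void
grow_in_place(map_elm_t *map, size_t run_ind, size_t old_pages,
    size_t grow_pages)
{
    size_t follow_ind = run_ind + old_pages;         /* Head of appended run. */
    size_t follow_pages = map[follow_ind].run_pages;
    size_t new_pages = old_pages + grow_pages;
    size_t rem_pages = follow_pages - grow_pages;

    assert(!map[follow_ind].allocated);
    assert(follow_pages >= grow_pages);

    /* Rewrite the boundary tags of the enlarged run: its first and last page. */
    map[run_ind].run_pages = new_pages;
    map[run_ind + new_pages - 1].run_pages = new_pages;
    map[run_ind + new_pages - 1].allocated = true;

    if (rem_pages > 0) {
        /* Tag both ends of the trailing free remainder. */
        size_t rem_ind = run_ind + new_pages;

        map[rem_ind].run_pages = rem_pages;
        map[rem_ind].allocated = false;
        map[rem_ind + rem_pages - 1].run_pages = rem_pages;
        map[rem_ind + rem_pages - 1].allocated = false;
    }
    /*
     * The bug updated map[follow_ind], which is now an interior element of
     * the grown run, instead of map[run_ind + new_pages - 1], so a later
     * coalescing pass could read stale sizes and absorb allocated pages
     * into the remainder run.
     */
}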

In arena_bin_malloc_hard(), if another thread wins the race to allocate
a bin run, dispose of the spare run via arena_bin_lower_run() rather
than arena_run_dalloc(), since the run has already been prepared for use
as a bin run.  This bug has existed since March 14, 2010:
    e00572b384
    mmap()/munmap() without arena->lock or bin->lock.
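
The shape of that race, as a hedged sketch with hypothetical names and simplified types (this is not the actual arena code): the bin's lock is dropped while a fresh run is obtained, so by the time the lock is re-acquired another thread may already have installed a current run, and the loser's spare, which has already been initialized as a bin run, must be retired through the bin-aware path.

#include <stddef.h>

typedef struct run_s run_t;
typedef struct {
    run_t *runcur;  /* Current run, or NULL. */
} bin_t;

/* Hypothetical helpers standing in for the real bin/run internals. */
run_t *bin_nonfull_run_get(bin_t *bin);       /* Drops, then re-acquires, the bin's lock. */
void  *run_reg_alloc(run_t *run);             /* Carve one region out of a bin run. */
void   bin_lower_run(bin_t *bin, run_t *run); /* Retire an already-initialized bin run. */

/* Called with the bin's lock held and no usable current run. */
void *
bin_malloc_hard(bin_t *bin)
{
    run_t *run = bin_nonfull_run_get(bin);

    if (bin->runcur != NULL) {
        /*
         * Another thread won the race while the lock was dropped and
         * installed a current run (the real code also checks that it still
         * has free regions).  Our run is now a spare; it has already been
         * set up as a bin run, so it must be retired via bin_lower_run()
         * (the fix), not the raw run-deallocation path (the bug).
         */
        if (run != NULL)
            bin_lower_run(bin, run);
        return (run_reg_alloc(bin->runcur));
    }
    bin->runcur = run;
    return (run != NULL ? run_reg_alloc(run) : NULL);
}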

Fix bugs in arena_dalloc_bin_run(), arena_trim_head(),
arena_trim_tail(), and arena_ralloc_large_grow() that could cause the
CHUNK_MAP_UNZEROED map bit to become corrupted.  These are all
long-standing bugs, but the chances of them actually causing problems
were much lower before the CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED
conversion.
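
A minimal sketch of the update these functions need to get right, using a hypothetical map-bits layout rather than the real CHUNK_MAP_* definitions: each map element packs a per-page unzeroed flag next to run metadata, so any rewrite of the element must carry the existing flag forward instead of overwriting it.

#include <stddef.h>

/* Toy map-bits layout: run size in the high bits, per-page flags in the low bits. */
#define MAP_SIZE_MASK  (~(size_t)0xf)
#define MAP_UNZEROED   ((size_t)0x8)
#define MAP_DIRTY      ((size_t)0x4)
#define MAP_ALLOCATED  ((size_t)0x1)

/*
 * Build new map bits for a page while carrying its existing unzeroed flag
 * forward.  Clobbering that flag (the bug) can make a page that still holds
 * stale data look pre-zeroed, or force needless re-zeroing of a clean page.
 */
static inline size_t
map_bits_rewrite(size_t old_bits, size_t new_size, size_t new_flags)
{
    return ((new_size & MAP_SIZE_MASK) | new_flags |
        (old_bits & MAP_UNZEROED));
}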

Fix a large run statistics regression in arena_ralloc_large_grow() that
was introduced on September 17, 2010:
    8e3c3c61b5
    Add {,r,s,d}allocm().
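
Conceptually, the missing bookkeeping looks like the following sketch (a toy stats structure, not jemalloc's actual arena stats): an in-place grow moves the allocation from one large size class to another, so it has to be counted as a deallocation at the old class plus an allocation at the new one.

#include <stddef.h>

/* Toy per-size-class counters for large runs. */
typedef struct {
    size_t nmalloc;
    size_t ndalloc;
    size_t curruns;
} large_stats_t;

/*
 * Account for an in-place grow from size class oldclass to newclass: it is a
 * deallocation at the old class plus an allocation at the new one.  Skipping
 * this (the regression) leaves the old class's current-run count permanently
 * inflated and the new class's counters too low.
 */
static void
stats_large_grow(large_stats_t *stats, size_t oldclass, size_t newclass)
{
    stats[oldclass].ndalloc++;
    stats[oldclass].curruns--;
    stats[newclass].nmalloc++;
    stats[newclass].curruns++;
}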

Add debug code to validate that supposedly pre-zeroed memory really is.
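
That validation amounts to something like this standalone sketch (not the actual debug hook added by the commit): whenever a region is handed out with the zeroed flag set, walk it and assert that every byte really is zero.

#include <assert.h>
#include <stddef.h>

/*
 * Debug-only check: assert that a region claimed to be pre-zeroed contains
 * only zero bytes.  The checks compile away when NDEBUG is defined.
 */
static void
validate_zeroed(const void *ptr, size_t size)
{
#ifndef NDEBUG
    const unsigned char *p = ptr;
    size_t i;

    for (i = 0; i < size; i++)
        assert(p[i] == 0);
#else
    (void)ptr;
    (void)size;
#endif
}
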
commit 940a2e02b2 (parent 397e5111b5)
Author: Jason Evans
Date:   2010-10-17 17:51:37 -07:00

2 files changed, 171 insertions(+), 79 deletions(-)

@@ -280,8 +280,8 @@ tcache_alloc_large(tcache_t *tcache, size_t size, bool zero)
         } else {
 #ifdef JEMALLOC_PROF
                 arena_chunk_t *chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ret);
-                size_t pageind = (unsigned)(((uintptr_t)ret - (uintptr_t)chunk)
-                    >> PAGE_SHIFT);
+                size_t pageind = (((uintptr_t)ret - (uintptr_t)chunk) >>
+                    PAGE_SHIFT);
                 chunk->map[pageind-map_bias].bits |=
                     CHUNK_MAP_CLASS_MASK;
 #endif
@@ -362,7 +362,6 @@ tcache_dalloc_large(tcache_t *tcache, void *ptr, size_t size)
         arena_chunk_t *chunk;
         size_t pageind, binind;
         tcache_bin_t *tbin;
-        arena_chunk_map_t *mapelm;
 
         assert((size & PAGE_MASK) == 0);
         assert(arena_salloc(ptr) > small_maxclass);
@@ -371,7 +370,6 @@ tcache_dalloc_large(tcache_t *tcache, void *ptr, size_t size)
         chunk = (arena_chunk_t *)CHUNK_ADDR2BASE(ptr);
         arena = chunk->arena;
         pageind = ((uintptr_t)ptr - (uintptr_t)chunk) >> PAGE_SHIFT;
-        mapelm = &chunk->map[pageind-map_bias];
         binind = nbins + (size >> PAGE_SHIFT) - 1;
 
 #ifdef JEMALLOC_FILL