Preserve CHUNK_MAP_UNZEROED for small runs.

Preserve CHUNK_MAP_UNZEROED when allocating small runs, because
untouched pages may later be returned to the tree of clean runs, where
the CHUNK_MAP_UNZEROED flag matters.  Prior to the conversion of
CHUNK_MAP_ZEROED into its inverse, CHUNK_MAP_UNZEROED, this was already
a bug, but its worst-case effect was merely redundant zeroing, since a
dropped flag could only understate how many pages were known to be
zeroed.  After the conversion, a dropped flag overstates it, making it
possible to incorrectly treat unzeroed pages as pre-zeroed.
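
The fix below is a read-modify-write on the per-page map word: rather
than storing a fresh value outright, each store first masks the old
word down to its CHUNK_MAP_UNZEROED bit and ORs that bit back into the
new value.  As a minimal, self-contained sketch of that pattern, with
simplified stand-in names (MAP_UNZEROED, MAP_ALLOCATED, map_bits_lossy,
and map_bits_preserving are hypothetical, not jemalloc's definitions):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for jemalloc's chunk map flag bits. */
#define MAP_UNZEROED    ((uint32_t)0x1) /* page may hold stale data */
#define MAP_ALLOCATED   ((uint32_t)0x2) /* page belongs to a live run */

/* Buggy form: stores a fresh word, silently clearing MAP_UNZEROED. */
static uint32_t
map_bits_lossy(uint32_t old_bits, uint32_t flag_dirty)
{
        (void)old_bits; /* the bug: the old word is never consulted */
        return (MAP_ALLOCATED | flag_dirty);
}

/* Fixed form: mask the old word so MAP_UNZEROED survives the store. */
static uint32_t
map_bits_preserving(uint32_t old_bits, uint32_t flag_dirty)
{
        return ((old_bits & MAP_UNZEROED) | MAP_ALLOCATED | flag_dirty);
}

int
main(void)
{
        uint32_t bits = MAP_UNZEROED;   /* an untouched, stale page */

        printf("lossy:      unzeroed = %d\n",
            (map_bits_lossy(bits, 0) & MAP_UNZEROED) != 0);
        printf("preserving: unzeroed = %d\n",
            (map_bits_preserving(bits, 0) & MAP_UNZEROED) != 0);
        return (0);
}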
commit 397e5111b5
parent 004ed142a6
Author: Jason Evans
Date:   2010-10-16 16:10:40 -07:00

@@ -389,14 +389,18 @@ arena_run_split(arena_t *arena, arena_run_t *run, size_t size, bool large,
                 * arena_dalloc_bin_run() has the ability to conditionally trim
                 * clean pages.
                 */
-               chunk->map[run_ind-map_bias].bits = CHUNK_MAP_ALLOCATED |
-                   flag_dirty;
+               chunk->map[run_ind-map_bias].bits =
+                   (chunk->map[run_ind-map_bias].bits & CHUNK_MAP_UNZEROED) |
+                   CHUNK_MAP_ALLOCATED | flag_dirty;
                for (i = 1; i < need_pages - 1; i++) {
                        chunk->map[run_ind+i-map_bias].bits = (i << PAGE_SHIFT)
-                           | CHUNK_MAP_ALLOCATED;
+                           | (chunk->map[run_ind+i-map_bias].bits &
+                           CHUNK_MAP_UNZEROED) | CHUNK_MAP_ALLOCATED;
                }
                chunk->map[run_ind+need_pages-1-map_bias].bits = ((need_pages
-                   - 1) << PAGE_SHIFT) | CHUNK_MAP_ALLOCATED | flag_dirty;
+                   - 1) << PAGE_SHIFT) |
+                   (chunk->map[run_ind+need_pages-1-map_bias].bits &
+                   CHUNK_MAP_UNZEROED) | CHUNK_MAP_ALLOCATED | flag_dirty;
        }
 }
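
To see the consequence of the lost bit, here is a hedged sketch of the
failure mode the commit message describes, reusing hypothetical names
in the style of the sketch above (struct page, alloc_zeroed, and
MAP_UNZEROED are illustrative, not jemalloc's real API): a zeroed
allocation trusts the map word, so once the flag has been dropped it
skips the memset the page actually needs.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAP_UNZEROED    ((uint32_t)0x1)

/* Hypothetical page: one map word plus its backing bytes. */
struct page {
        uint32_t bits;
        unsigned char mem[16];
};

/* Zeroed allocation: zero the page only when the map word demands it. */
static void
alloc_zeroed(struct page *p)
{
        if (p->bits & MAP_UNZEROED)
                memset(p->mem, 0, sizeof(p->mem));
        /* Otherwise the page is assumed to be pre-zeroed. */
}

int
main(void)
{
        struct page p = { MAP_UNZEROED, { 0xAB } };     /* one stale byte */

        p.bits = 0;             /* the bug: a store dropped MAP_UNZEROED */
        alloc_zeroed(&p);       /* fast path taken; no memset happens */
        printf("first byte after \"zeroed\" alloc: 0x%02X\n",
            (unsigned)p.mem[0]);
        return (0);
}

Leaving p.bits at MAP_UNZEROED instead takes the memset path and prints
0x00, which is the behavior the masked stores in the diff restore.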