Fix chunk_alloc_mmap() bugs.

Simplify chunk_alloc_mmap() to no longer attempt map extension.  The
extra complexity isn't warranted, because although in the success case
it saves one system call as compared to immediately falling back to
chunk_alloc_mmap_slow(), it also makes the failure case even more
expensive.  This simplification removes two bugs:

- For Windows platforms, pages_unmap() wasn't being called for unaligned
  mappings prior to falling back to chunk_alloc_mmap_slow().  This
  caused permanent virtual memory leaks.
- For non-Windows platforms, alignment greater than chunksize caused
  pages_map() to be called with size 0 when attempting map extension.
  This always resulted in an mmap() error, and subsequent fallback to
  chunk_alloc_mmap_slow().
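As a rough illustration of the simplified flow (a sketch only, using raw mmap()/munmap() rather than jemalloc's pages_map()/pages_unmap() wrappers, and an invented helper name): map exactly the requested size, and if the result is misaligned, unmap it unconditionally and signal the caller to fall back to the slow over-allocate-and-trim method.

/*
 * Sketch of the simplified optimistic path; not the actual jemalloc code.
 * Assumes a POSIX system and a power-of-two, page-multiple alignment.
 */
#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

static void *
optimistic_aligned_map(size_t size, size_t alignment)
{
	void *ret = mmap(NULL, size, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ret == MAP_FAILED)
		return NULL;
	/* Offset of ret above the nearest alignment boundary at or below it. */
	size_t offset = (uintptr_t)ret & (alignment - 1);
	if (offset != 0) {
		/*
		 * Misaligned: unmap on every platform (skipping this on
		 * Windows was the leak), then let the caller fall back to
		 * the slow method.
		 */
		munmap(ret, size);
		return NULL;
	}
	return ret;
}

A NULL return here would tell the caller to retry via the over-allocate-and-trim approach sketched after the diff below.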
Jason Evans 2012-05-09 13:05:04 -07:00
parent 34a8cf6c40
commit de6fbdb72c
2 changed files with 11 additions and 35 deletions

src/chunk.c

@@ -134,6 +134,7 @@ chunk_alloc(size_t size, size_t alignment, bool base, bool *zero)
 
 	assert(size != 0);
 	assert((size & chunksize_mask) == 0);
+	assert(alignment != 0);
 	assert((alignment & chunksize_mask) == 0);
 
 	ret = chunk_recycle(size, alignment, base, zero);

src/chunk_mmap.c

@@ -16,6 +16,8 @@ pages_map(void *addr, size_t size)
 {
 	void *ret;
 
+	assert(size != 0);
+
 #ifdef _WIN32
 	/*
 	 * If VirtualAlloc can't allocate at the given address when one is
@@ -164,51 +166,24 @@ chunk_alloc_mmap(size_t size, size_t alignment, bool *zero)
 	 * NetBSD has), but in the absence of such a feature, we have to work
 	 * hard to efficiently create aligned mappings.  The reliable, but
 	 * slow method is to create a mapping that is over-sized, then trim the
-	 * excess.  However, that always results in at least one call to
+	 * excess.  However, that always results in one or two calls to
 	 * pages_unmap().
 	 *
-	 * A more optimistic approach is to try mapping precisely the right
-	 * amount, then try to append another mapping if alignment is off.  In
-	 * practice, this works out well as long as the application is not
-	 * interleaving mappings via direct mmap() calls.  If we do run into a
-	 * situation where there is an interleaved mapping and we are unable to
-	 * extend an unaligned mapping, our best option is to switch to the
-	 * slow method until mmap() returns another aligned mapping.  This will
-	 * tend to leave a gap in the memory map that is too small to cause
-	 * later problems for the optimistic method.
-	 *
-	 * Another possible confounding factor is address space layout
-	 * randomization (ASLR), which causes mmap(2) to disregard the
-	 * requested address.  As such, repeatedly trying to extend unaligned
-	 * mappings could result in an infinite loop, so if extension fails,
-	 * immediately fall back to the reliable method of over-allocation
-	 * followed by trimming.
+	 * Optimistically try mapping precisely the right amount before falling
+	 * back to the slow method, with the expectation that the optimistic
+	 * approach works most of the time.
 	 */
 
+	assert(alignment != 0);
+	assert((alignment & chunksize_mask) == 0);
+
 	ret = pages_map(NULL, size);
 	if (ret == NULL)
 		return (NULL);
 	offset = ALIGNMENT_ADDR2OFFSET(ret, alignment);
 	if (offset != 0) {
-#ifdef _WIN32
+		pages_unmap(ret, size);
 		return (chunk_alloc_mmap_slow(size, alignment, zero));
-#else
-		/* Try to extend chunk boundary. */
-		if (pages_map((void *)((uintptr_t)ret + size), chunksize -
-		    offset) == NULL) {
-			/*
-			 * Extension failed.  Clean up, then fall back to the
-			 * reliable-but-expensive method.
-			 */
-			pages_unmap(ret, size);
-			return (chunk_alloc_mmap_slow(size, alignment, zero));
-		} else {
-			/* Clean up unneeded leading space. */
-			pages_unmap(ret, chunksize - offset);
-			ret = (void *)((uintptr_t)ret + (chunksize - offset));
-		}
-#endif
 	}
 
 	assert(ret != NULL);
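For completeness, a similarly hedged sketch of the "reliable, but slow" over-allocate-and-trim method the comment above refers to (again using raw mmap()/munmap() and an invented helper name, not chunk_alloc_mmap_slow()'s actual body): map size plus alignment bytes so an aligned region of the requested size must exist inside, then unmap the leading and trailing excess, which is where the one or two pages_unmap() calls come from.

/*
 * Sketch only: over-allocate so an aligned, size-byte region is guaranteed
 * to exist inside the mapping, then trim the excess.  Assumes size and
 * alignment are nonzero page-size multiples and alignment is a power of two.
 */
#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

static void *
aligned_map_slow(size_t size, size_t alignment)
{
	size_t alloc_size = size + alignment;
	if (alloc_size < size)
		return NULL;			/* size_t overflow */
	char *pages = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
	    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (pages == MAP_FAILED)
		return NULL;
	/* Bytes from the mapping start up to the first aligned address. */
	size_t lead = (alignment - ((uintptr_t)pages & (alignment - 1))) &
	    (alignment - 1);
	size_t trail = alloc_size - lead - size;
	if (lead != 0)
		munmap(pages, lead);		/* first trim */
	if (trail != 0)
		munmap(pages + lead + size, trail);	/* possible second trim */
	return pages + lead;
}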