Commit Graph

67 Commits

Jason Evans
374d26a43b Fix chunk_recycle() to stop leaking trailing chunks.
Fix chunk_recycle() to correctly compute trailsize and re-insert
trailing chunks.  This fixes a major virtual memory leak.

Simplify chunk_record() to avoid dropping/re-acquiring chunks_mtx.
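
A minimal sketch of the split arithmetic involved (hypothetical names, not
the commit's code): when a recycled run is larger or less aligned than the
request, both the leading pad and the trailing remainder must be re-inserted
into the recycler, or they leak.

  #include <stddef.h>
  #include <stdint.h>

  #define ALIGNMENT_CEILING(s, a) (((s) + ((a) - 1)) & ~((a) - 1))

  /* Split a recycled run at [node_addr, node_addr + node_size) to carve
   * out an aligned allocation of `size' bytes; `record' re-inserts
   * leftover runs into the recycler. */
  static void
  recycle_split(void *node_addr, size_t node_size, size_t size,
      size_t alignment, void (*record)(void *, size_t))
  {
      uintptr_t base = (uintptr_t)node_addr;
      uintptr_t ret = ALIGNMENT_CEILING(base, (uintptr_t)alignment);
      size_t leadsize = (size_t)(ret - base);
      /* trailsize must account for both the request and the leading
       * pad; dropping the re-insert below leaks the trailing run. */
      size_t trailsize = node_size - leadsize - size;

      if (leadsize != 0)
          record((void *)base, leadsize);
      if (trailsize != 0)
          record((void *)(ret + size), trailsize);
  }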
2012-05-09 14:48:35 -07:00
Jason Evans
de6fbdb72c Fix chunk_alloc_mmap() bugs.
Simplify chunk_alloc_mmap() to no longer attempt map extension.  The
extra complexity isn't warranted, because although in the success case
it saves one system call as compared to immediately falling back to
chunk_alloc_mmap_slow(), it also makes the failure case even more
expensive.  This simplification removes two bugs:

- For Windows platforms, pages_unmap() wasn't being called for unaligned
  mappings prior to falling back to chunk_alloc_mmap_slow().  This
  caused permanent virtual memory leaks.
- For non-Windows platforms, alignment greater than chunksize caused
  pages_map() to be called with size 0 when attempting map extension.
  This always resulted in an mmap() error, and subsequent fallback to
  chunk_alloc_mmap_slow().
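
For context, a sketch of the slow path that the simplified code now always
falls back to (illustrative, not jemalloc's exact code): over-allocate by
the alignment, then trim both ends.  Alignment is guaranteed, at the cost of
up to two extra munmap() calls.

  #include <sys/mman.h>
  #include <stddef.h>
  #include <stdint.h>

  static void *
  alloc_mmap_slow(size_t size, size_t alignment)
  {
      /* Enough slack to find an aligned block inside the mapping
       * (overflow checks omitted for brevity). */
      size_t alloc_size = size + alignment;
      void *pages = mmap(NULL, alloc_size, PROT_READ | PROT_WRITE,
          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (pages == MAP_FAILED)
          return NULL;

      uintptr_t base = (uintptr_t)pages;
      uintptr_t ret = (base + alignment - 1) & ~((uintptr_t)alignment - 1);
      size_t leadsize = (size_t)(ret - base);
      size_t trailsize = alloc_size - leadsize - size;

      if (leadsize != 0)
          munmap(pages, leadsize);                 /* trim leading pad */
      if (trailsize != 0)
          munmap((void *)(ret + size), trailsize); /* trim trailing pad */
      return (void *)ret;
  }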
2012-05-09 13:05:04 -07:00
Jason Evans
34a8cf6c40 Fix a base allocator deadlock.
Fix a base allocator deadlock due to chunk_recycle() calling back into
the base allocator.
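
The deadlock has a simple shape, sketched below with hypothetical names: a
thread that already holds a non-recursive mutex re-acquires it through a
callback, and the second lock never returns.

  #include <pthread.h>
  #include <stddef.h>
  #include <stdlib.h>

  static pthread_mutex_t base_mtx = PTHREAD_MUTEX_INITIALIZER;

  static void *
  base_node_alloc(void)
  {
      pthread_mutex_lock(&base_mtx); /* deadlocks if already held */
      void *node = malloc(64);
      pthread_mutex_unlock(&base_mtx);
      return node;
  }

  static void *
  base_alloc_chunk(size_t size)
  {
      (void)size;
      pthread_mutex_lock(&base_mtx);
      /* The chunk layer calls back into the base allocator for a tree
       * node while base_mtx is still held -- the bug class fixed here. */
      void *node = base_node_alloc();
      pthread_mutex_unlock(&base_mtx);
      return node;
  }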
2012-05-02 20:41:42 -07:00
Jason Evans
f54166e7ef Add missing Valgrind annotations. 2012-04-23 22:41:36 -07:00
Jason Evans
a8f8d7540d Remove mmap_unaligned.
Remove mmap_unaligned, which was used to heuristically decide whether to
optimistically call mmap() in such a way that could reduce the total
number of system calls.  If I remember correctly, the intention of
mmap_unaligned was to avoid always executing the slow path in the
presence of ASLR.  However, that reasoning seems to have been based on a
flawed understanding of how ASLR actually works.  Although ASLR
apparently causes mmap() to ignore address requests, it does not cause
total placement randomness, so there is a reasonable expectation that
iterative mmap() calls will start returning chunk-aligned mappings once
the first chunk has been properly aligned.
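
A sketch of the optimistic path this reasoning justifies (illustrative
names; the chunk size shown is an assumption):

  #include <sys/mman.h>
  #include <stddef.h>
  #include <stdint.h>

  #define CHUNK_SIZE ((size_t)1 << 22) /* assumed 4 MiB chunks */

  /* Try a plain mmap() first; only when the kernel returns an unaligned
   * mapping does the caller take the slow over-allocate-and-trim path. */
  static void *
  chunk_alloc_optimistic(size_t size)
  {
      void *ret = mmap(NULL, size, PROT_READ | PROT_WRITE,
          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (ret == MAP_FAILED)
          return NULL;
      if (((uintptr_t)ret & (CHUNK_SIZE - 1)) != 0) {
          /* Unaligned: undo and let the caller fall back.  Once one
           * chunk is aligned, later plain mmap() calls tend to return
           * adjacent, already-aligned addresses. */
          munmap(ret, size);
          return NULL;
      }
      return ret;
  }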
2012-04-21 19:17:21 -07:00
Jason Evans
7ad54c1c30 Fix chunk allocation/deallocation bugs.
Fix chunk_alloc_dss() to zero memory when requested.

Fix chunk_dealloc() to avoid chunk_dealloc_mmap() for dss-allocated
memory.

Fix huge_palloc() to always junk fill when requested.

Improve chunk_recycle() to report that memory is zeroed as a side effect
of pages_purge().
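
The zeroed-reporting side effect rests on madvise() semantics for private
anonymous mappings; a sketch with a hypothetical helper name:

  #include <sys/mman.h>
  #include <stdbool.h>
  #include <stddef.h>

  /* On Linux, MADV_DONTNEED drops the pages of a private anonymous
   * mapping, and the next touch faults in zero-filled ones, so a purged
   * run can satisfy a zero request without an explicit memset(). */
  static bool
  pages_purge_zeroes(void *addr, size_t length)
  {
      return (madvise(addr, length, MADV_DONTNEED) == 0);
  }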
2012-04-21 16:04:51 -07:00
Jason Evans
8f0e0eb1c0 Fix a memory corruption bug in chunk_alloc_dss().
Fix a memory corruption bug in chunk_alloc_dss() that was due to
incorrectly claiming that newly allocated memory is zeroed.

Reverse order of preference between mmap() and sbrk() to prefer mmap().

Clean up management of 'zero' parameter in chunk_alloc*().
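
A sketch of the zero-handling distinction at issue (illustrative, much
simpler than the real function):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <unistd.h>

  static void *
  chunk_alloc_dss_sketch(size_t size, bool *zero)
  {
      void *ret = sbrk((intptr_t)size);
      if (ret == (void *)-1)
          return NULL;
      /* A fresh break extension really is zeroed by the kernel, so
       * reporting it is safe; memory recycled from a previously used
       * run is not, and claiming otherwise corrupts callers that trust
       * the report and skip their own zeroing. */
      *zero = true;
      return ret;
  }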
2012-04-21 13:33:48 -07:00
Mike Hommey
666c5bf7a8 Add a pages_purge function to wrap madvise(JEMALLOC_MADV_PURGE) calls
This will be used to implement purging on mingw, which doesn't have
madvise.
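
A sketch of what such a wrapper can look like (illustrative; the MEM_RESET
branch is an assumption based on common practice where madvise() does not
exist):

  #ifdef _WIN32
  #  include <windows.h>
  #else
  #  include <sys/mman.h>
  #endif
  #include <stddef.h>

  static void
  pages_purge(void *addr, size_t length)
  {
  #ifdef _WIN32
      /* mingw has no madvise(); MEM_RESET tells the kernel the contents
       * are disposable without unmapping the range. */
      VirtualAlloc(addr, length, MEM_RESET, PAGE_READWRITE);
  #elif defined(MADV_FREE)
      madvise(addr, length, MADV_FREE);     /* e.g. *BSD, Darwin */
  #else
      madvise(addr, length, MADV_DONTNEED); /* e.g. Linux */
  #endif
  }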
2012-04-18 18:57:48 -07:00
Jason Evans
7ca0fdfb85 Disable munmap() if it causes VM map holes.
Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.

Unify the chunk caching for mmap and dss.

Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
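
The rough shape of such a configure-time probe (illustrative only, not the
actual test): exercise the mmap()/munmap() pattern the allocator relies on,
then report the result via exit status so configure can branch on it.

  #include <sys/mman.h>
  #include <stddef.h>

  int
  main(void)
  {
      size_t sz = 2 * 4096;
      char *a = mmap(NULL, sz, PROT_READ | PROT_WRITE,
          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (a == MAP_FAILED || munmap(a, sz) != 0)
          return 1;
      /* If unmapping left the region unusable, re-mapping it at the
       * same fixed address fails. */
      char *b = mmap(a, sz, PROT_READ | PROT_WRITE,
          MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
      return (b == MAP_FAILED || b != a) ? 1 : 0;
  }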
2012-04-12 20:20:58 -07:00
Jason Evans
a1ee7838e1 Rename labels.
Rename labels from FOO to label_foo in order to avoid system macro
definitions, in particular OUT and ERROR on mingw.

Reported by Mike Hommey.
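
A small example of the collision being avoided (illustrative): on mingw,
windows.h defines OUT as a macro (and ERROR likewise), so a bare uppercase
goto label gets macro-expanded and no longer parses, while a lowercase
prefixed label cannot collide.

  #include <stddef.h>
  #include <stdlib.h>

  void *
  safe_alloc(size_t size)
  {
      void *ret = NULL;
      if (size == 0)
          goto label_return; /* was a bare uppercase label before */
      ret = malloc(size);
  label_return:
      return ret;
  }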
2012-04-10 15:07:44 -07:00
Mike Hommey
eae269036c Add alignment support to chunk_alloc(). 2012-04-10 14:51:39 -07:00
Jason Evans
ae4c7b4b40 Clean up *PAGE* macros.
s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.

Remove remnants of the dynamic-page-shift code.

Rename the "arenas.pagesize" mallctl to "arenas.page".

Remove the "arenas.chunksize" mallctl, which is redundant with
"opt.lg_chunk".
2012-04-02 07:04:34 -07:00
Jason Evans
cd9a1346e9 Implement tsd.
Implement tsd, which is a TLS/TSD abstraction that uses one or both
internally.  Modify bootstrapping such that no tsds are used until
allocation is safe.

Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.

Fix %p argument size handling in malloc_vsnprintf().

Fix a long-standing statistics-related bug in the "thread.arena"
mallctl that could cause crashes due to linked list corruption.
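
A minimal sketch of the TLS/TSD layering (illustrative; the real malloc_tsd
machinery is considerably more involved):

  #include <pthread.h>
  #include <stdbool.h>

  typedef struct {
      bool initialized;
  } tsd_t;

  #ifdef JEMALLOC_TLS            /* assumed configure result: __thread works */
  static __thread tsd_t tsd_tls; /* fast path: direct TLS access */
  static pthread_key_t tsd_key;  /* kept so a destructor can still run
                                  * at thread exit */

  static tsd_t *
  tsd_get(void)
  {
      return &tsd_tls;
  }
  #else
  static pthread_key_t tsd_key;  /* portable fallback: pthread TSD only */

  static tsd_t *
  tsd_get(void)
  {
      return pthread_getspecific(tsd_key);
  }
  #endif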
2012-03-23 15:14:55 -07:00
Jason Evans
4162627757 Remove the swap feature.
Remove the swap feature, which enabled per-application swap files.  In
practice this feature has not proven itself useful to users.
2012-02-13 10:56:17 -08:00
Jason Evans
7372b15a31 Reduce cpp conditional logic complexity.
Convert configuration-related cpp conditional logic to use static
constant variables, e.g.:

  #ifdef JEMALLOC_DEBUG
    [...]
  #endif

becomes:

  if (config_debug) {
    [...]
  }

The advantage is clearer, more concise code.  The main disadvantage is
that data structures no longer have conditionally defined fields, so
they pay the cost of all fields regardless of whether they are used.  In
practice, this is only a minor concern; config_stats will go away in an
upcoming change, and config_prof is the only other major feature that
depends on more than a few special-purpose fields.
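
A sketch of how such constants can be defined so that disabled branches
still type-check but compile away (names as in the message above):

  #include <stdbool.h>

  static const bool config_debug =
  #ifdef JEMALLOC_DEBUG
      true
  #else
      false
  #endif
      ;

Because config_debug is a compile-time constant, "if (config_debug) { ... }"
is dead-code-eliminated in non-debug builds, while the code inside the
branch is still parsed and type-checked in every build.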
2012-02-10 20:22:09 -08:00
Jason Evans
12a4887826 Fix huge_ralloc to maintain chunk statistics.
Fix huge_ralloc() to properly maintain chunk statistics when using
mremap(2).
2011-11-11 14:41:59 -08:00
Jason Evans
7427525c28 Move repo contents in jemalloc/ to top level. 2011-03-31 20:36:17 -07:00