Commit Graph

28 Commits

Jason Evans
3377ffa1f4 Change CHUNK_MAP_ZEROED to CHUNK_MAP_UNZEROED.
Invert the chunk map bit that tracks whether a page is zeroed, so that
for zeroed arena chunks, the interior of the page map does not need to
be initialized (as it consists entirely of zero bytes).
2010-10-01 17:53:37 -07:00
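
The inversion matters because a fresh chunk's map can now be left exactly as the zero-filled memory mmap() returns. A minimal sketch of the idea, with a made-up flag value and map layout rather than jemalloc's actual definitions:

```c
#include <assert.h>
#include <string.h>

#define CHUNK_MAP_UNZEROED ((size_t)0x1U) /* set only for dirtied pages */

typedef struct { size_t bits; } chunk_map_elm_t;

int main(void) {
    chunk_map_elm_t map[8];
    /* For a freshly mmap()ed (zeroed) chunk, zero-filling the map is
     * sufficient: bits == 0 already means "this page is zeroed". */
    memset(map, 0, sizeof(map));
    assert((map[0].bits & CHUNK_MAP_UNZEROED) == 0);
    return 0;
}
```
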
Jason Evans
7393f44ff0 Omit chunk header in arena chunk map.
Omit the first map_bias elements of the map in arena_chunk_t.  This
avoids the header barely spilling over into an extra page for common
chunk sizes.
2010-10-01 17:35:43 -07:00
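
map_bias has to be computed to a fixed point, since omitting map elements shrinks the header, which can in turn free up a header page. A back-of-the-envelope sketch; every constant below is illustrative, not jemalloc's:

```c
#include <stdio.h>

#define PAGE_SIZE    4096u
#define CHUNK_NPAGES 512u  /* a 2 MiB chunk, assumed */
#define MAP_ELM_SIZE 16u   /* size of a per-page map element, assumed */
#define HEADER_FIXED 128u  /* fixed part of arena_chunk_t, assumed */

int main(void) {
    /* Iterate to a fixed point: omitting the first map_bias map
     * elements shrinks the header, which may in turn reduce the
     * number of pages the header occupies. */
    unsigned map_bias = 0, prev;
    do {
        prev = map_bias;
        unsigned hdr = HEADER_FIXED +
            (CHUNK_NPAGES - map_bias) * MAP_ELM_SIZE;
        map_bias = (hdr + PAGE_SIZE - 1) / PAGE_SIZE;
    } while (map_bias != prev);
    printf("map_bias = %u header pages\n", map_bias);
    return 0;
}
```
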
Jason Evans
6005f0710c Add the "arenas.purge" mallctl. 2010-09-30 16:55:08 -07:00
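
A hedged usage sketch, assuming the stock mallctl() entry point from <jemalloc/jemalloc.h> (build-time symbol prefixing may rename it):

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    /* Writing to "arenas.purge" forces arenas to purge unused dirty
     * pages via the ctl interface. */
    if (mallctl("arenas.purge", NULL, NULL, NULL, 0) != 0)
        fprintf(stderr, "arenas.purge failed\n");
    return 0;
}
```
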
Jason Evans
8e3c3c61b5 Add {,r,s,d}allocm().
Add allocm(), rallocm(), sallocm(), and dallocm(), which are a
functional superset of malloc(), calloc(), posix_memalign(),
malloc_usable_size(), and free().
2010-09-17 15:46:18 -07:00
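
A usage sketch of this experimental API, with signatures as described in the commit-era manual page (the interface may not match later releases):

```c
#include <stdio.h>
#include <jemalloc/jemalloc.h>

int main(void) {
    void *p;
    size_t usable;

    /* allocm(): malloc() + malloc_usable_size() in one call, zeroed. */
    if (allocm(&p, &usable, 4096, ALLOCM_ZERO) != ALLOCM_SUCCESS)
        return 1;
    printf("got %zu usable bytes\n", usable);

    /* rallocm(): ask for at least 8192 bytes, tolerating up to 4096
     * bytes of extra slack if that makes resizing cheaper. */
    if (rallocm(&p, &usable, 8192, 4096, 0) != ALLOCM_SUCCESS)
        return 1;

    dallocm(p, 0); /* the free() analogue */
    return 0;
}
```
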
Jason Evans
2dbecf1f62 Port to Mac OS X.
Add Mac OS X support, based in large part on the OS X support in
Mozilla's version of jemalloc.
2010-09-11 18:20:16 -07:00
Jason Evans
dcd15098a8 Move assert() calls up in arena_run_reg_alloc().
Move assert() calls up in arena_run_reg_alloc(), so that a corrupt
pointer will likely be caught by an assertion *before* it is
dereferenced.
2010-08-05 12:13:42 -07:00
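
The shape of the change, with a hypothetical run structure and sentinel (jemalloc's actual fields differ):

```c
#include <assert.h>
#include <stdint.h>

#define RUN_MAGIC 0x384adf93u /* arbitrary sentinel for the sketch */

typedef struct { uint32_t magic; unsigned nfree; } run_t;

static void *reg_alloc(run_t *run) {
    /* Validate first, so a corrupt pointer trips an assertion... */
    assert(run != NULL);
    assert(run->magic == RUN_MAGIC);
    assert(run->nfree > 0);
    /* ...before the dereference-heavy bookkeeping below. */
    run->nfree--;
    return run; /* placeholder for the carved-out region */
}
```
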
Jason Evans
8d4203c72d Fix arena chunk purge/dealloc race conditions.
Fix arena_chunk_dealloc() to put the new spare in a consistent state before
dropping the arena mutex to deallocate the previous spare.

Fix arena_run_dalloc() to insert a newly dirtied chunk into the
chunks_dirty list before potentially deallocating the chunk, so that dirty
page accounting is self-consistent.
2010-04-13 21:17:18 -07:00
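
Both fixes share a shape: leave shared state consistent before dropping the lock to do slow work. A generic sketch with hypothetical names:

```c
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t arena_lock = PTHREAD_MUTEX_INITIALIZER;
static void *spare; /* one cached chunk, protected by arena_lock */

static void chunk_dealloc(void *chunk) {
    void *old;
    pthread_mutex_lock(&arena_lock);
    old = spare;
    spare = chunk; /* new spare installed while still locked */
    pthread_mutex_unlock(&arena_lock);
    free(old);     /* slow deallocation happens unlocked */
}
```
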
Jason Evans
5065156f3f Fix threads-related profiling bugs.
Initialize bt2cnt_tsd so that cleanup at thread exit actually happens.

Associate (prof_ctx_t *) with allocated objects, rather than
(prof_thr_cnt_t *).  Each thread must always operate on its own
(prof_thr_cnt_t *), and an object may outlive the thread that allocated it.
2010-04-13 21:17:11 -07:00
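
The first fix is the classic pthread TSD gotcha: a key's destructor only runs for threads that actually called pthread_setspecific(). A sketch with hypothetical helper names:

```c
#include <pthread.h>
#include <stdlib.h>

static pthread_key_t bt2cnt_tsd;

static void bt2cnt_cleanup(void *arg) {
    free(arg); /* tear down the per-thread table at thread exit */
}

static void prof_boot(void) {
    pthread_key_create(&bt2cnt_tsd, bt2cnt_cleanup);
}

static void prof_thread_init(void) {
    /* Without this setspecific(), bt2cnt_cleanup() never fires. */
    pthread_setspecific(bt2cnt_tsd, calloc(1, 64));
}
```
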
Jason Evans
799ca0b68d Revert re-addition of purge_lock.
Linux kernels have been capable of concurrent page table access since
2.6.27, so this hack is not necessary for modern kernels.
2010-04-08 20:31:58 -07:00
Jason Evans
0b270a991d Reduce statistical heap sampling memory overhead.
If the mean heap sampling interval is larger than one page, simulate
sampled small objects with large objects.  This allows profiling context
pointers to be omitted for small objects.  As a result, the memory
overhead for sampling decreases as the sampling interval is increased.

Fix a compilation error in the profiling code.
2010-03-31 16:45:04 -07:00
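
A sketch of the size promotion being described; the constants and function name are assumptions:

```c
#include <stddef.h>

#define PAGE_SIZE      4096u
#define SMALL_MAXCLASS 3840u /* largest small size class, assumed */

static size_t prof_sample_size(size_t usize, size_t sample_interval) {
    /* Large (page-sized) allocations can carry a profiling context
     * pointer in the chunk map; small regions cannot.  Promote a
     * sampled small request to a large one. */
    if (sample_interval > PAGE_SIZE && usize <= SMALL_MAXCLASS)
        return PAGE_SIZE;
    return usize;
}
```
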
Jason Evans
169cbc1ef7 Re-add purge_lock to funnel madvise(2) calls. 2010-03-26 18:10:19 -07:00
Jason Evans
c03a63d68d Set/clear CHUNK_MAP_ZEROED in arena_chunk_purge().
Properly set/clear CHUNK_MAP_ZEROED for all purged pages, according to
whether the pages are (potentially) file-backed or anonymous.  This was
merely a performance pessimization for the anonymous mapping case, but
was a calloc()-related bug for the swap_enabled case.
2010-03-22 11:45:01 -07:00
Jason Evans
19b3d61892 Track dirty and clean runs separately.
Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and
preferentially allocate dirty runs.
2010-03-18 20:36:40 -07:00
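
A sketch of the policy, with flat lists standing in for jemalloc's red-black trees and all names hypothetical:

```c
#include <stddef.h>

typedef struct run_s { struct run_s *next; unsigned npages; } run_t;

static run_t *runs_avail_dirty, *runs_avail_clean;

static run_t *first_fit(run_t *head, unsigned npages) {
    for (run_t *r = head; r != NULL; r = r->next)
        if (r->npages >= npages)
            return r;
    return NULL;
}

static run_t *run_alloc(unsigned npages) {
    /* Prefer dirty runs, so their pages are reused before they are
     * ever purged; fall back to clean runs. */
    run_t *run = first_fit(runs_avail_dirty, npages);
    if (run == NULL)
        run = first_fit(runs_avail_clean, npages);
    return run;
}
```
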
Jason Evans
dafde14e08 Remove medium size classes.
Remove medium size classes, because concurrent dirty page purging is
no longer capable of purging inactive dirty pages inside active runs
(due to recent arena/bin locking changes).

Enhance tcache to support caching large objects, so that the same range
of size classes is still cached, despite the removal of medium size
class support.
2010-03-17 16:27:39 -07:00
Jason Evans
e69bee01de Fix a run initialization race condition.
Initialize the small run header before dropping arena->lock, because
arena_chunk_purge() relies on valid small run headers during run
iteration.

Add some assertions.
2010-03-15 22:25:23 -07:00
Jason Evans
f00bb7f132 Add assertions.
Check for interior pointers in arena_[ds]alloc().

Check for corrupt pointers in tcache_alloc().
2010-03-15 16:44:12 -07:00
Jason Evans
d9ef75fed4 Fix arena->nactive accounting in arena_chunk_purge().
Update arena->nactive when pseudo-allocating runs in
arena_chunk_purge(), since arena_run_dalloc() subtracts from
arena->nactive.
2010-03-15 12:43:07 -07:00
Jason Evans
e00572b384 mmap()/munmap() without arena->lock or bin->lock. 2010-03-14 19:43:56 -07:00
Jason Evans
05b21be347 Purge dirty pages without arena->lock. 2010-03-14 19:41:18 -07:00
Jason Evans
86815df9dc Push locks into arena bins.
For bin-related allocation, protect data structures with bin locks
rather than arena locks.  Arena locks remain for run
allocation/deallocation and other miscellaneous operations.

Restructure statistics counters to maintain per bin
allocated/nmalloc/ndalloc, but continue to provide arena-wide statistics
via aggregation in the ctl code.
2010-03-14 17:38:09 -07:00
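
The resulting locking split, in an illustrative structure layout (field names and the bin count are assumptions):

```c
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;   /* protects this bin's runs and regions */
    /* Per-bin statistics, aggregated arena-wide in the ctl code. */
    unsigned long allocated, nmalloc, ndalloc;
} bin_t;

typedef struct {
    pthread_mutex_t lock;   /* now only run alloc/dalloc and misc ops */
    bin_t bins[36];         /* bin count is an assumption */
} arena_t;
```
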
Jason Evans
1e0a636c11 Simplify small object allocation/deallocation.
Use chained run free lists instead of bitmaps to track free objects
within small runs.

Remove reference counting for small object run pages.
2010-03-13 20:38:29 -08:00
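
A toy version of the bitmap-to-free-list change: free regions store the link in place, so allocation is a pop and deallocation a push (not jemalloc's actual run layout):

```c
#include <stddef.h>

typedef struct region_s { struct region_s *next; } region_t;

typedef struct { region_t *avail; unsigned nfree; } run_t;

static void *reg_alloc(run_t *run) {
    region_t *r = run->avail;
    if (r == NULL)
        return NULL;
    run->avail = r->next;   /* pop the head of the chain */
    run->nfree--;
    return r;
}

static void reg_dalloc(run_t *run, void *p) {
    region_t *r = p;
    r->next = run->avail;   /* push onto the chain */
    run->avail = r;
    run->nfree++;
}
```
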
Jason Evans
3fa9a2fad8 Simplify tcache object caching.
Use chains of cached objects, rather than using arrays of pointers.

Since tcache_bin_t is no longer dynamically sized, convert tcache_t's
tbin to an array of structures, rather than an array of pointers.  This
implicitly removes tcache_bin_{create,destroy}(), which further
simplifies the fast path for malloc/free.

Use cacheline alignment for tcache_t allocations.

Remove runtime configuration option for number of tcache bin slots, and
replace it with a boolean option for enabling/disabling tcache.

Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX
and 2X the number of regions per run for the size class.

For GC-triggered flush, discard 3/4 of the objects below the low water
mark, rather than 1/2.
2010-03-13 20:38:18 -08:00
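
A sketch of the revised GC arithmetic; the structure fields are illustrative, not the actual tcache_bin_t layout:

```c
typedef struct {
    void *head;         /* chain of cached objects (no pointer array) */
    unsigned ncached;   /* current number of cached objects */
    unsigned low_water; /* minimum ncached since the last GC pass */
} tcache_bin_t;

/* Objects below the low-water mark sat unused for a whole GC epoch;
 * discard 3/4 of them (previously 1/2). */
static unsigned gc_flush_count(const tcache_bin_t *tbin) {
    return tbin->low_water - (tbin->low_water / 4);
}
```
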
Jason Evans
2caa4715ed Modify dirty page purging algorithm.
Convert chunks_dirty from a red-black tree to a doubly linked list,
and use it to purge dirty pages from chunks in FIFO order.

Add a lock around the code that purges dirty pages via madvise(2), in
order to avoid kernel contention.  If lock acquisition fails,
indefinitely postpone purging dirty pages.

Add a lower limit of one chunk worth of dirty pages per arena for
purging, in addition to the active:dirty ratio.

When purging, purge all dirty pages from at least one chunk, but only
drop to the purging threshold rather than to half of it.
2010-03-04 22:49:59 -08:00
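
The contention-avoidance portion in miniature, with hypothetical names:

```c
#include <stddef.h>
#include <pthread.h>
#include <sys/mman.h>

static pthread_mutex_t purge_lock = PTHREAD_MUTEX_INITIALIZER;

static void purge_dirty(void *addr, size_t len) {
    /* If another thread is already in madvise(2), postpone purging
     * rather than piling up on kernel-side contention. */
    if (pthread_mutex_trylock(&purge_lock) != 0)
        return;
    madvise(addr, len, MADV_DONTNEED);
    pthread_mutex_unlock(&purge_lock);
}
```
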
Jason Evans
698805c525 Simplify malloc_message().
Rather than passing four strings to malloc_message(), malloc_write4(),
and all the functions that use them, only pass one string.
2010-03-03 17:45:38 -08:00
Jason Evans
f3ff75289b Rewrite red-black trees.
Use left-leaning 2-3 red-black trees instead of left-leaning 2-3-4
red-black trees.  This reduces maximum tree height from (3 lg n) to
(2 lg n).

Do lazy balance fixup, rather than transforming the tree during the down
pass.  This improves insert/remove speed by ~30%.

Use callback-based iteration rather than macros.
2010-02-28 15:00:18 -08:00
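
A toy callback-based walker in the spirit of the new iteration interface (not the actual rb.h API):

```c
#include <stddef.h>

typedef struct node_s { struct node_s *left, *right; int key; } node_t;

/* In-order walk; the callback returns nonzero to stop early, and the
 * node that stopped the walk is returned. */
static node_t *tree_iter(node_t *n, int (*cb)(node_t *, void *), void *arg) {
    node_t *r;
    if (n == NULL)
        return NULL;
    if ((r = tree_iter(n->left, cb, arg)) != NULL)
        return r;
    if (cb(n, arg) != 0)
        return n;
    return tree_iter(n->right, cb, arg);
}
```
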
Jason Evans
f894f74d36 Fix a bug in nmalloc stats. 2010-02-12 14:46:37 -08:00
Jason Evans
59e9be0f5f Avoid extra dumping for JEMALLOC_OPTIONS=L. 2010-02-11 15:18:17 -08:00
Jason Evans
376b1529a3 Restructure source tree. 2010-02-11 14:45:59 -08:00