Commit Graph

2251 Commits

Author SHA1 Message Date
Jason Evans
c03a63d68d Set/clear CHUNK_MAP_ZEROED in arena_chunk_purge().
Properly set/clear CHUNK_MAP_ZEROED for all purged pages, according to
whether the pages are (potentially) file-backed or anonymous.  This was
merely a performance pessimization for the anonymous mapping case, but
was a calloc()-related bug for the swap_enabled case.
2010-03-22 11:45:01 -07:00
Jason Evans
19b3d61892 Track dirty and clean runs separately.
Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and
preferentially allocate dirty runs.
2010-03-18 20:36:40 -07:00
Jason Evans
dafde14e08 Remove medium size classes.
Remove medium size classes, because concurrent dirty page purging is
no longer capable of purging inactive dirty pages inside active runs
(due to recent arena/bin locking changes).

Enhance tcache to support caching large objects, so that the same range
of size classes is still cached, despite the removal of medium size
class support.
2010-03-17 16:27:39 -07:00
Jason Evans
e69bee01de Fix a run initialization race condition.
Initialize the small run header before dropping arena->lock, since
arena_chunk_purge() relies on valid small run headers during run
iteration.

Add some assertions.
2010-03-15 22:25:23 -07:00
Jason Evans
f00bb7f132 Add assertions.
Check for interior pointers in arena_[ds]alloc().

Check for corrupt pointers in tcache_alloc().
2010-03-15 16:44:12 -07:00
Jason Evans
6b5974403b Widen malloc_stats_print() output columns. 2010-03-15 15:50:48 -07:00
Jason Evans
d9ef75fed4 arena_chunk_purge() arena->nactive fix.
Update arena->nactive when pseudo-allocating runs in
arena_chunk_purge(), since arena_run_dalloc() subtracts from
arena->nactive.
2010-03-15 12:43:07 -07:00
Jason Evans
992242c545 Change xmallctl() --> CTL_GET() where possible. 2010-03-14 19:55:32 -07:00
Jason Evans
19b6a5537d Fix malloc_stats_print() man page prototype. 2010-03-14 19:52:26 -07:00
Jason Evans
e00572b384 mmap()/munmap() without arena->lock or bin->lock. 2010-03-14 19:43:56 -07:00
Jason Evans
05b21be347 Purge dirty pages without arena->lock. 2010-03-14 19:41:18 -07:00
Jason Evans
86815df9dc Push locks into arena bins.
For bin-related allocation, protect data structures with bin locks
rather than arena locks.  Arena locks remain for run
allocation/deallocation and other miscellaneous operations.

Restructure statistics counters to maintain per bin
allocated/nmalloc/ndalloc, but continue to provide arena-wide statistics
via aggregation in the ctl code.
2010-03-14 17:38:09 -07:00
Jason Evans
1e0a636c11 Simplify small object allocation/deallocation.
Use chained run free lists instead of bitmaps to track free objects
within small runs.

Remove reference counting for small object run pages.
2010-03-13 20:38:29 -08:00
Jason Evans
3fa9a2fad8 Simplify tcache object caching.
Use chains of cached objects, rather than using arrays of pointers.

Since tcache_bin_t is no longer dynamically sized, convert tcache_t's
tbin to an array of structures, rather than an array of pointers.  This
implicitly removes tcache_bin_{create,destroy}(), which further
simplifies the fast path for malloc/free.

Use cacheline alignment for tcache_t allocations.

Remove runtime configuration option for number of tcache bin slots, and
replace it with a boolean option for enabling/disabling tcache.

Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX
and 2X the number of regions per run for the size class.

For GC-triggered flush, discard 3/4 of the objects below the low water
mark, rather than 1/2.
2010-03-13 20:38:18 -08:00
Jason Evans
2caa4715ed Modify dirty page purging algorithm.
Convert chunks_dirty from a red-black tree to a doubly linked list,
and use it to purge dirty pages from chunks in FIFO order.

Add a lock around the code that purges dirty pages via madvise(2), in
order to avoid kernel contention.  If lock acquisition fails,
indefinitely postpone purging dirty pages.

Add a lower limit of one chunk worth of dirty pages per arena for
purging, in addition to the active:dirty ratio.

When purging, purge all dirty pages from at least one chunk, but rather
than purging enough pages to drop to half the purging threshold, merely
drop to the threshold.
2010-03-04 22:49:59 -08:00
Jason Evans
3c2d9c899c Print version in malloc_stats_print(). 2010-03-03 17:55:03 -08:00
Jason Evans
698805c525 Simplify malloc_message().
Rather than passing four strings to malloc_message(), malloc_write4(),
and all the functions that use them, only pass one string.
2010-03-03 17:45:38 -08:00
Jason Evans
cfeccd34a3 Fix various config/build issues.
Don't look for a shared libunwind if --with-static-libunwind is
specified.

Set SONAME when linking the shared libjemalloc.

Add DESTDIR support.

Add install_{include,lib/man} build targets.

Clean up compiler flag configuration.
2010-03-03 16:38:07 -08:00
Jason Evans
9df0215f9b Move sampling init into prof_alloc_prep().
Move prof_sample_threshold initialization into prof_alloc_prep(),
before using it to decide whether to capture a backtrace.
2010-03-03 12:08:45 -08:00
Jason Evans
ca6bd4f1c8 Add the --with-static-libunwind configure option. 2010-03-02 14:12:58 -08:00
Jason Evans
a40bc7afe8 Add release versioning support.
Base version string on 'git describe --long', and provide cpp
macros in jemalloc.h.

Add the version mallctl.
2010-03-02 13:01:16 -08:00
Jason Evans
22ca855e8f Allow prof.dump mallctl to specify filename. 2010-03-02 12:11:35 -08:00
Jason Evans
74025c85bf Edit rb documentation. 2010-03-02 12:10:52 -08:00
Jason Evans
b9477e782b Implement sampling for heap profiling. 2010-03-01 20:15:26 -08:00
Jason Evans
f3ff75289b Rewrite red-black trees.
Use left-leaning 2-3 red-black trees instead of left-leaning 2-3-4
red-black trees.  This reduces maximum tree height from (3 lg n) to
(2 lg n).

Do lazy balance fixup, rather than transforming the tree during the down
pass.  This improves insert/remove speed by ~30%.

Use callback-based iteration rather than macros.
2010-02-28 15:00:18 -08:00
Jason Evans
fbb504def6 Don't implicitly enable interval-based profiling. 2010-02-16 15:46:57 -08:00
Jason Evans
f894f74d36 Fix a bug in nmalloc stats. 2010-02-12 14:46:37 -08:00
Jason Evans
65aae2cf57 Fix a man page typo. 2010-02-11 16:46:42 -08:00
Jason Evans
3b5ee5e857 Fix #include ordering for mb.h.
Include mb.h after mutex.h, in case it actually has to use the
mutex-based memory barrier implementation.
2010-02-11 15:56:23 -08:00
Jason Evans
59e9be0f5f Avoid extra dumping for JEMALLOC_OPTIONS=L. 2010-02-11 15:18:17 -08:00
Jason Evans
cd90fca928 Wrap mallctl* references with JEMALLOC_P(). 2010-02-11 14:55:25 -08:00
Jason Evans
376b1529a3 Restructure source tree. 2010-02-11 14:45:59 -08:00
Jason Evans
fe5faa2cc5 Remove tracing (--enable-trace).
Remove all functionality related to tracing.  This functionality was
useful for understanding memory fragmentation during early algorithmic
design of jemalloc, but it had little utility for non-trivial
applications, due to the sheer volume of data written to disk.
2010-02-11 13:38:12 -08:00
Jason Evans
d34f9e7e93 Implement interval-based heap profile dumping.
Add mallctl interfaces for profiling parameters.

Fix a file descriptor leak in heap profile dumping.
2010-02-11 13:19:21 -08:00
Jason Evans
b01a6c2057 Add JEMALLOC_PROF_PREFIX support.
If JEMALLOC_PROF_PREFIX is set in the environment, use it as the
filename prefix when dumping heap profiles, rather than "jeprof".
2010-02-11 10:25:36 -08:00
Jason Evans
c717718115 Dump /proc/<pid>/maps in heap profiles. 2010-02-11 09:25:56 -08:00
Jason Evans
3383af6c2d Fix a profiling bootstrap bug.
Bootstrap profiling in three stages, so that it is usable by the time
the first application allocation occurs.
2010-02-11 08:59:06 -08:00
Jason Evans
b27805b363 Various heap profiling improvements.
Add the --disable-prof-libgcc configure option, and add backtracing
based on libgcc, which is used by default.

Fix a bug in hash().

Fix various configuration-dependent compilation errors.
2010-02-10 18:20:38 -08:00
Jason Evans
6109fe07a1 Implement allocation profiling and leak checking.
Add the --enable-prof and --enable-prof-libunwind configure options.

Add the B/b, F/f, I/i, L/l, and U/u JEMALLOC_OPTIONS.

Interval-based profile dump triggering is not yet implemented.

Add supporting generic code:
* Add memory barriers.
* Add prn (LCG PRNG).
* Add hash (Murmur hash function).
* Add ckh (cuckoo hash tables).
2010-02-10 10:37:57 -08:00
Jason Evans
13668262d1 Fix some comments and whitespace. 2010-01-31 03:57:29 -08:00
Jason Evans
990d10cefb Fix large object stats collection bugs. 2010-01-31 03:49:35 -08:00
Jason Evans
a0bf242230 Fix bootstrapping crash.
If a custom small_size2bin table was required due to non-default size
class settings, memory allocation prior to initializing chunk parameters
would cause a crash due to division by 0.  The fix re-orders the various
*_boot() function calls.

Bootstrapping is simpler now than it was before the base allocator
started just using the chunk allocator directly.  This allows
arena_boot[01]() to be combined.

Add error detection for pthread_atfork() and atexit() function calls.
2010-01-29 14:30:41 -08:00
Jason Evans
d8f565f239 Remove tcache bin sorting during flush.
This feature caused significant performance degradation, and the
fragmentation reduction benefits were difficult to quantify.
2010-01-29 13:37:31 -08:00
Jason Evans
c66aaf1476 Statistics fixes and cleanup.
Fix a type mismatch for "arenas.nlruns" mallctl access.  This bug caused
a crash during statistics printing on 64-bit systems.

Fix the "stats.active" mallctl to include active memory in huge objects.

Report active bytes for the whole application, as well as per arena.

Remove several unused variables.
2010-01-29 11:24:19 -08:00
Jason Evans
4fb7f51337 Fix a chunk leak in chunk_alloc_mmap().
A missing 'else' in chunk_alloc_mmap() caused an extra chunk to be
allocated every time the optimistic alignment path was entered, since
the following block would always be executed immediately afterward.
This chunk leak caused no increase in physical memory usage, but virtual
memory could grow until resource exhaustion caused allocation failures.
2010-01-27 18:27:09 -08:00
Jason Evans
95833311f1 madvise(..., MADV_{RANDOM,NOSYNC}) swap files.
Initialize malloc before calling into the ctl_*() functions.
2010-01-27 13:47:28 -08:00
Jason Evans
3c2343518c Implement mallctl{nametomib,bymib}().
Replace chunk stats code that was missing locking; this fixes a race
condition that could corrupt chunk statistics.

Convert malloc_stats_print() to use mallctl*().

Add a missing semicolon in the DSS code.

Convert malloc_tcache_flush() to a mallctl.

Convert malloc_swap_enable() to a set of mallctl's.
2010-01-27 13:10:56 -08:00
Jason Evans
fbbb624fc1 Simplify malloc_{pre,post}fork().
Revert to simpler lock acquisition/release code in
malloc_{pre,post}fork(), since dynamic arena rebalancing is no longer
implemented.
2010-01-24 17:56:48 -08:00
Jason Evans
68ddb6736d Print merged arena stats iff multiple arenas. 2010-01-24 17:21:47 -08:00
Jason Evans
41631d0061 Modify chunk_alloc() to support optional zeroing.
Use optional zeroing in arena_chunk_alloc() to avoid needless zeroing of
chunks.  This is particularly important in the context of swapfile and
DSS allocation, since a long-lived application may commonly recycle
chunks.
2010-01-24 17:13:07 -08:00