Commit Graph

46 Commits

Jason Evans
81b4e6eb6f Fix a heap profiling regression.
Call prof_ctx_set() in all paths through prof_{m,re}alloc().

Inline arena_prof_ctx_get().
2010-10-20 20:52:00 -07:00
Jason Evans
4d6a134e13 Inline the fast path for heap sampling.
Inline the heap sampling code that is executed for every allocation
event (regardless of whether a sample is taken).

Combine all prof TLS data into a single data structure, in order to
reduce the TLS lookup volume.
2010-10-20 19:05:59 -07:00
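
For the commit above, a schematic of combining the prof TLS data, with illustrative names only (the actual struct in the tree may differ); the point is that the per-allocation fast path costs a single TLS lookup:

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Illustrative only: one thread-local struct instead of several separate
     * __thread variables, so the sampling fast path performs one TLS lookup.
     */
    typedef struct prof_tls_s {
        uint64_t prng_state;          /* PRNG state for drawing sample intervals */
        uint64_t bytes_until_sample;  /* countdown to the next sampled allocation */
        void     *bt_cache;           /* per thread backtrace cache (opaque here) */
    } prof_tls_t;

    static __thread prof_tls_t prof_tls;

    /* Executed for every allocation event; sampling work happens only when
     * the running byte counter crosses the threshold. */
    static int
    prof_sample_check(size_t size)
    {
        if (prof_tls.bytes_until_sample > size) {
            prof_tls.bytes_until_sample -= size;
            return 0;   /* no sample for this allocation */
        }
        return 1;       /* take a sample, then redraw bytes_until_sample */
    }
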
Jason Evans
93443689a4 Add per thread allocation counters, and enhance heap sampling.
Add the "thread.allocated" and "thread.deallocated" mallctls, which can
be used to query the total number of bytes ever allocated/deallocated by
the calling thread.

Add s2u() and sa2u(), which can be used to compute the usable size that
will result from an allocation request of a particular size/alignment.

Re-factor ipalloc() to use sa2u().

Enhance the heap profiler to trigger samples based on usable size,
rather than request size.  This has a subtle, but important, impact on
the accuracy of heap sampling.  For example, previous to this change,
16- and 17-byte objects were sampled at nearly the same rate, but
17-byte objects actually consume 32 bytes each.  Therefore it was
possible for the sample to be somewhat skewed compared to actual memory
usage of the allocated objects.
2010-10-20 17:39:18 -07:00
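
A minimal sketch of reading the counters added above via the control interface (the unprefixed mallctl() name is assumed for brevity; at this point in the tree the public symbols are wrapped with JEMALLOC_P()):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* jemalloc control interface; unprefixed name assumed for this sketch. */
    int mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp,
        size_t newlen);

    int
    main(void)
    {
        uint64_t allocated, deallocated;
        size_t sz = sizeof(uint64_t);
        void *p = malloc(4096);

        /* Total bytes ever allocated/deallocated by the calling thread. */
        if (mallctl("thread.allocated", &allocated, &sz, NULL, 0) == 0 &&
            mallctl("thread.deallocated", &deallocated, &sz, NULL, 0) == 0) {
            printf("allocated: %llu, deallocated: %llu\n",
                (unsigned long long)allocated,
                (unsigned long long)deallocated);
        }
        free(p);
        return 0;
    }
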
Jason Evans
940a2e02b2 Fix numerous arena bugs.
In arena_ralloc_large_grow(), update the map element for the end of the
newly grown run, rather than the interior map element that was the
beginning of the appended run.  This is a long-standing bug, and it had
the potential to cause massive corruption, but triggering it required
roughly the following sequence of events:
  1) Large in-place growing realloc(), with left-over space in the run
     that followed the large object.
  2) Allocation of the remainder run left over from (1).
  3) Deallocation of the remainder run *before* deallocation of the
     large run, with unfortunate interior map state left over from
     previous run allocation/deallocation activity, such that one or
     more pages of allocated memory would be treated as part of the
     remainder run during run coalescing.
In summary, this was a bad bug, but it was difficult to trigger.

In arena_bin_malloc_hard(), if another thread wins the race to allocate
a bin run, dispose of the spare run via arena_bin_lower_run() rather
than arena_run_dalloc(), since the run has already been prepared for use
as a bin run.  This bug has existed since March 14, 2010:
    e00572b384
    mmap()/munmap() without arena->lock or bin->lock.

Fix bugs in arena_dalloc_bin_run(), arena_trim_head(),
arena_trim_tail(), and arena_ralloc_large_grow() that could cause the
CHUNK_MAP_UNZEROED map bit to become corrupted.  These are all
long-standing bugs, but the chances of them actually causing problems
were much lower before the CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED
conversion.

Fix a large run statistics regression in arena_ralloc_large_grow() that
was introduced on September 17, 2010:
    8e3c3c61b5
    Add {,r,s,d}allocm().

Add debug code to validate that supposedly pre-zeroed memory really is.
2010-10-17 17:52:14 -07:00
Jason Evans
c6e950665c Increase PRN 'a' and 'c' constants.
Increase PRN 'a' and 'c' constants, so that high bits tend to cascade
more.
2010-10-03 00:22:46 -07:00
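
For context, the generator in question is a simple linear congruential generator; the sketch below uses placeholder constants (not the values chosen by the commit above) and takes the result from the high bits, which are the ones that mix well:

    #include <stdint.h>

    static uint32_t prn_state = 42;   /* seed; illustrative only */

    /* Linear congruential step: state' = a*state + c (mod 2^32).  The 'a'
     * and 'c' values below are placeholders, not the commit's constants. */
    static uint32_t
    prn(unsigned lg_range)
    {
        const uint32_t a = 1103515241U;
        const uint32_t c = 12347U;

        prn_state = a * prn_state + c;
        /* Return the high lg_range bits; the low bits of an LCG have short
         * periods, which is why high bits cascading well matters. */
        return prn_state >> (32 - lg_range);
    }
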
Jason Evans
588a32cd84 Increase default backtrace depth from 4 to 128.
Increase the default backtrace depth, because shallow backtraces tend to
result in confusing pprof output graphs.
2010-10-02 22:38:14 -07:00
Jason Evans
a881cd2c61 Make cumulative heap profile data optional.
Add the R option to control whether cumulative heap profile data
are maintained.  Add the T option to control the size of per thread
backtrace caches, primarily because when the R option is specified,
backtraces that no longer have allocations associated with them are
discarded as soon as no thread caches refer to them.
2010-10-02 21:40:26 -07:00
Jason Evans
3377ffa1f4 Change CHUNK_MAP_ZEROED to CHUNK_MAP_UNZEROED.
Invert the chunk map bit that tracks whether a page is zeroed, so that
for zeroed arena chunks, the interior of the page map does not need to
be initialized (as it consists entirely of zero bytes).
2010-10-01 17:53:37 -07:00
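
The payoff of the inverted sense is that a freshly mapped, zero-filled chunk's map already reads as "all pages zeroed" with no initialization pass; a rough sketch, with the flag value assumed for illustration:

    #include <stdbool.h>
    #include <stddef.h>

    /* One flag bit per page in the arena chunk map; the value is assumed
     * here for illustration. */
    #define CHUNK_MAP_UNZEROED ((size_t)0x80U)

    /* With the inverted sense, a map entry that is still all zero bytes
     * (as it is right after the chunk is mmap()ed) already means "this
     * page is zeroed", so the interior of the map needs no initialization. */
    static bool
    page_is_zeroed(size_t mapbits)
    {
        return ((mapbits & CHUNK_MAP_UNZEROED) == 0);
    }
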
Jason Evans
7393f44ff0 Omit chunk header in arena chunk map.
Omit the first map_bias elements of the map in arena_chunk_t.  This
avoids barely spilling over into an extra chunk header page for common
chunk sizes.
2010-10-01 17:35:43 -07:00
Jason Evans
37dab02e52 Disable interval-based profile dumps by default.
It is common to have to specify something like JEMALLOC_OPTIONS=F31i,
because interval-based dumps are often not useful, or are too expensive.
Therefore, disable interval-based dumps by default.  To get the previous
default behavior it is now necessary to specify 31I as part of the
options.
2010-09-30 17:10:17 -07:00
Jason Evans
6005f0710c Add the "arenas.purge" mallctl. 2010-09-30 16:55:08 -07:00
Jason Evans
075e77cad4 Fix compiler warnings and errors.
Use INT_MAX instead of MAX_INT in ALLOCM_ALIGN(), and #include
<limits.h> in order to get its definition.

Modify prof code related to hash tables to avoid aliasing warnings from
gcc 4.1.2 (gcc 4.4.0 and 4.4.3 do not warn).
2010-09-20 19:53:25 -07:00
Jason Evans
355b438c85 Fix compiler warnings.
Add --enable-cc-silence, which can be used to silence harmless warnings.

Fix an aliasing bug in ckh_pointer_hash().
2010-09-20 19:20:48 -07:00
Jason Evans
6a0d2918ce Add memalign() and valloc() overrides.
If memalign() and/or valloc() are present on the system, override them
in order to avoid mixed allocator usage.
2010-09-20 16:52:41 -07:00
Jason Evans
a09f55c87d Wrap strerror_r().
Create the buferror() function, which wraps strerror_r().  This is
necessary because glibc provides a non-standard strerror_r().
2010-09-20 16:05:41 -07:00
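
The wrinkle being worked around: with glibc and _GNU_SOURCE, strerror_r() returns a char * that may point at a static string rather than filling the caller's buffer, whereas the XSI version fills the buffer and returns an error code. A hedged sketch of such a wrapper follows; buferror()'s actual signature in the tree may differ, and real feature-test detection is more involved than this #ifdef:

    #include <string.h>

    static int
    buferror_sketch(int err, char *buf, size_t buflen)
    {
    #ifdef _GNU_SOURCE
        /* GNU variant: returns a pointer that need not be buf. */
        char *b = strerror_r(err, buf, buflen);
        if (b != buf) {
            strncpy(buf, b, buflen);
            buf[buflen - 1] = '\0';
        }
        return 0;
    #else
        /* XSI variant: fills buf and returns 0 or an error number. */
        return strerror_r(err, buf, buflen);
    #endif
    }
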
Jason Evans
a094babe33 Add gcc attributes for *allocm() prototypes. 2010-09-17 17:35:42 -07:00
Jason Evans
8e3c3c61b5 Add {,r,s,d}allocm().
Add allocm(), rallocm(), sallocm(), and dallocm(), which are a
functional superset of malloc(), calloc(), posix_memalign(),
malloc_usable_size(), and free().
2010-09-17 15:46:18 -07:00
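
A rough usage sketch of the new experimental calls; the prototypes below follow the experimental interface as commonly documented, and the flag macros and JEMALLOC_P() symbol wrapping are glossed over:

    #include <stddef.h>
    #include <stdio.h>

    /* Experimental interface, prototypes as assumed for this sketch. */
    int allocm(void **ptr, size_t *rsize, size_t size, int flags);
    int sallocm(const void *ptr, size_t *rsize, int flags);
    int dallocm(void *ptr, int flags);

    int
    main(void)
    {
        void *p;
        size_t usable;

        /* Allocate at least 100 bytes and learn the usable size in one call. */
        if (allocm(&p, &usable, 100, 0) != 0)
            return 1;
        printf("usable size: %zu\n", usable);

        /* Re-query the usable size of an existing allocation. */
        sallocm(p, &usable, 0);

        dallocm(p, 0);
        return 0;
    }
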
Jason Evans
8d7a94b275 Fix porting regressions.
Fix new build failures and test failures on Linux that were introduced
by the port to OS X.
2010-09-11 23:38:12 -07:00
Jason Evans
2dbecf1f62 Port to Mac OS X.
Add Mac OS X support, based in large part on the OS X support in
Mozilla's version of jemalloc.
2010-09-11 18:20:16 -07:00
Jordan DeLong
2206e1acc1 Add MAP_NORESERVE support.
Add MAP_NORESERVE to the chunk_mmap() case being used by
chunk_swap_enable(), if the system supports it.
2010-05-11 11:46:53 -07:00
Jason Evans
ecea0f6125 Fix junk filling of cached large objects.
Use the size argument to tcache_dalloc_large() to control the number of
bytes set to 0x5a when junk filling is enabled, rather than accessing a
non-existent arena bin.  This bug was capable of corrupting an
arbitrarily large memory region, depending on what followed the arena
data structure in memory (typically zeroed memory, another arena_t, or a
red-black tree node for a huge object).
2010-04-28 12:00:59 -07:00
Jason Evans
5065156f3f Fix threads-related profiling bugs.
Initialize bt2cnt_tsd so that cleanup at thread exit actually happens.

Associate (prof_ctx_t *) with allocated objects, rather than
(prof_thr_cnt_t *).  Each thread must always operate on its own
(prof_thr_cnt_t *), and an object may outlive the thread that allocated it.
2010-04-13 21:17:11 -07:00
Jason Evans
1bb602125c Update stale JEMALLOC_FILL code.
Fix a compilation error due to stale data structure access code in
tcache_dalloc_large() for junk filling.
2010-04-13 21:17:02 -07:00
Jason Evans
799ca0b68d Revert re-addition of purge_lock.
Linux kernels have been capable of concurrent page table access since
2.6.27, so this hack is not necessary for modern kernels.
2010-04-08 20:31:58 -07:00
Jason Evans
0656ec0eb4 Fix build system problems.
Split library build rules up so that parallel building works.

Fix autoconf-related dependencies.

Remove obsolete JEMALLOC_VERSION definition.
2010-04-07 23:37:35 -07:00
Jason Evans
f18c982001 Add sampling activation/deactivation control.
Add the E/e options to control whether the application starts with
sampling active/inactive (secondary control to F/f).  Add the
prof.active mallctl so that the application can activate/deactivate
sampling on the fly.
2010-03-31 18:43:24 -07:00
Jason Evans
a02fc08ec9 Make interval-triggered profile dumping optional.
Make it possible to disable interval-triggered profile dumping, even if
profiling is enabled.  This is useful if the user only wants a single
dump at exit, or if the application manually triggers profile dumps.
2010-03-31 17:35:51 -07:00
Jason Evans
0b270a991d Reduce statistical heap sampling memory overhead.
If the mean heap sampling interval is larger than one page, simulate
sampled small objects with large objects.  This allows profiling context
pointers to be omitted for small objects.  As a result, the memory
overhead for sampling decreases as the sampling interval is increased.

Fix a compilation error in the profiling code.
2010-03-31 16:45:04 -07:00
Jason Evans
169cbc1ef7 Re-add purge_lock to funnel madvise(2) calls. 2010-03-26 18:10:19 -07:00
Jason Evans
19b3d61892 Track dirty and clean runs separately.
Split arena->runs_avail into arena->runs_avail_{clean,dirty}, and
preferentially allocate dirty runs.
2010-03-18 20:36:40 -07:00
Jason Evans
dafde14e08 Remove medium size classes.
Remove medium size classes, because concurrent dirty page purging is
no longer capable of purging inactive dirty pages inside active runs
(due to recent arena/bin locking changes).

Enhance tcache to support caching large objects, so that the same range
of size classes is still cached, despite the removal of medium size
class support.
2010-03-17 16:27:39 -07:00
Jason Evans
f00bb7f132 Add assertions.
Check for interior pointers in arena_[ds]alloc().

Check for corrupt pointers in tcache_alloc().
2010-03-15 16:44:12 -07:00
Jason Evans
05b21be347 Purge dirty pages without arena->lock. 2010-03-14 19:41:18 -07:00
Jason Evans
86815df9dc Push locks into arena bins.
For bin-related allocation, protect data structures with bin locks
rather than arena locks.  Arena locks remain for run
allocation/deallocation and other miscellaneous operations.

Restructure statistics counters to maintain per bin
allocated/nmalloc/ndalloc, but continue to provide arena-wide statistics
via aggregation in the ctl code.
2010-03-14 17:38:09 -07:00
Jason Evans
1e0a636c11 Simplify small object allocation/deallocation.
Use chained run free lists instead of bitmaps to track free objects
within small runs.

Remove reference counting for small object run pages.
2010-03-13 20:38:29 -08:00
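
The idea behind chained free lists is that each free region stores a pointer to the next free region inside itself, so no side bitmap or per-page reference count is needed; a generic sketch (not the actual arena_run_t layout):

    #include <stddef.h>

    /* Each free region doubles as a list node; field names are illustrative. */
    typedef struct region_s {
        struct region_s *next_free;
    } region_t;

    typedef struct run_s {
        region_t *free_head;   /* head of the chained free list */
        unsigned  nfree;       /* number of free regions in this run */
    } run_t;

    static void *
    run_alloc_region(run_t *run)
    {
        region_t *reg = run->free_head;

        if (reg == NULL)
            return NULL;       /* run is full */
        run->free_head = reg->next_free;
        run->nfree--;
        return (void *)reg;
    }

    static void
    run_dalloc_region(run_t *run, void *ptr)
    {
        region_t *reg = ptr;

        reg->next_free = run->free_head;
        run->free_head = reg;
        run->nfree++;
    }
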
Jason Evans
3fa9a2fad8 Simplify tcache object caching.
Use chains of cached objects, rather than using arrays of pointers.

Since tcache_bin_t is no longer dynamically sized, convert tcache_t's
tbin to an array of structures, rather than an array of pointers.  This
implicitly removes tcache_bin_{create,destroy}(), which further
simplifies the fast path for malloc/free.

Use cacheline alignment for tcache_t allocations.

Remove runtime configuration option for number of tcache bin slots, and
replace it with a boolean option for enabling/disabling tcache.

Limit the number of tcache objects to the lesser of TCACHE_NSLOTS_MAX
and 2X the number of regions per run for the size class.

For GC-triggered flush, discard 3/4 of the objects below the low water
mark, rather than 1/2.
2010-03-13 20:38:18 -08:00
Jason Evans
2caa4715ed Modify dirty page purging algorithm.
Convert chunks_dirty from a red-black tree to a doubly linked list,
and use it to purge dirty pages from chunks in FIFO order.

Add a lock around the code that purges dirty pages via madvise(2), in
order to avoid kernel contention.  If lock acquisition fails,
indefinitely postpone purging dirty pages.

Add a lower limit of one chunk worth of dirty pages per arena for
purging, in addition to the active:dirty ratio.

When purging, purge all dirty pages from at least one chunk, but rather
than purging enough pages to drop to half the purging threshold, merely
drop to the threshold.
2010-03-04 22:49:59 -08:00
Jason Evans
698805c525 Simplify malloc_message().
Rather than passing four strings to malloc_message(), malloc_write4(),
and all the functions that use them, only pass one string.
2010-03-03 17:45:38 -08:00
Jason Evans
a40bc7afe8 Add release versioning support.
Base version string on 'git describe --long', and provide cpp
macros in jemalloc.h.

Add the version mallctl.
2010-03-02 13:01:16 -08:00
Jason Evans
22ca855e8f Allow prof.dump mallctl to specify filename. 2010-03-02 12:11:35 -08:00
Jason Evans
74025c85bf Edit rb documentation. 2010-03-02 12:10:52 -08:00
Jason Evans
b9477e782b Implement sampling for heap profiling. 2010-03-01 20:15:26 -08:00
Jason Evans
f3ff75289b Rewrite red-black trees.
Use left-leaning 2-3 red-black trees instead of left-leaning 2-3-4
red-black trees.  This reduces maximum tree height from (3 lg n) to
(2 lg n).

Do lazy balance fixup, rather than transforming the tree during the down
pass.  This improves insert/remove speed by ~30%.

Use callback-based iteration rather than macros.
2010-02-28 15:00:18 -08:00
Jason Evans
3b5ee5e857 Fix #include ordering for mb.h.
Include mb.h after mutex.h, in case it actually has to use the
mutex-based memory barrier implementation.
2010-02-11 15:56:23 -08:00
Jason Evans
cd90fca928 Wrap mallctl* references with JEMALLOC_P(). 2010-02-11 14:55:25 -08:00
Jason Evans
376b1529a3 Restructure source tree. 2010-02-11 14:45:59 -08:00