Commit Graph

1314 Commits

Jason Evans
a7153a0d7d Fix a "thread.arena" mallctl bug.
Fix a variable reversal bug in mallctl("thread.arena", ...).
2011-03-14 11:43:54 -07:00
Jason Evans
814b9bda7f Fix a cpp logic regression.
Fix a cpp logic error that was introduced by the recent commit:
	Fix "thread.{de,}allocatedp" mallctl.
2011-03-06 23:03:33 -08:00
Jason Evans
e27d134efc Merge branch 'dev' 2011-03-02 12:19:58 -08:00
je
6e56e5ec6a Update ChangeLog for 2.1.2. 2011-03-02 11:23:41 -08:00
Arun Sharma
af5d6987f8 Build both PIC and no PIC static libraries
When jemalloc is linked into an executable (as opposed to a shared
library), compiling with -fno-pic can have significant advantages,
mainly because we don't have to go through the GOT (global offset
table).

Users who want to link jemalloc into a shared library that could
be dlopened need to link with libjemalloc_pic.a or libjemalloc.so.
2011-03-02 11:14:50 -08:00
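The two link modes the commit describes can be sketched as follows (file names and paths are illustrative, not from the commit):

```shell
# Static-link jemalloc into an executable: the no-PIC archive avoids
# GOT indirection on symbol references (file names are hypothetical).
cc -o myapp myapp.o libjemalloc.a -lpthread

# Link jemalloc into a shared library that may later be dlopen()ed:
# use the PIC archive (or libjemalloc.so) instead.
cc -shared -o libmylib.so mylib.o libjemalloc_pic.a
```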
Jason Evans
655f04a5a4 Fix style nits. 2011-02-13 18:44:59 -08:00
Jason Evans
9dcad2dfd1 Fix "thread.{de,}allocatedp" mallctl.
For the non-TLS case (as on OS X), if the "thread.{de,}allocatedp"
mallctl was called before any allocation occurred for that thread, the
TSD was still NULL, thus putting the application at risk of
dereferencing NULL.  Fix this by refactoring the initialization code,
and making it part of the conditional logic for all per thread
allocation counter accesses.
2011-02-13 18:11:54 -08:00
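The hazard and its fix can be modeled in a few lines (a simplified sketch of the pattern, not jemalloc's actual code; all identifiers here are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified model of the fix: initialize the thread's counter storage
 * on first access, instead of handing a NULL TSD pointer back to the
 * application before any allocation has occurred. */
static _Thread_local uint64_t *thread_counters = NULL;

static uint64_t *
thread_counters_get(void)
{
	if (thread_counters == NULL) {
		/* First access by this thread: set up storage now. */
		static _Thread_local uint64_t storage[2]; /* allocated, deallocated */
		thread_counters = storage;
	}
	return thread_counters;	/* never NULL once returned */
}
```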
Jason Evans
6369286f83 Add release dates to ChangeLog. 2011-02-07 22:48:35 -08:00
Jason Evans
a73ebd946a Merge branch 'dev' 2011-01-31 20:12:32 -08:00
Jason Evans
ada55b2e92 Update ChangeLog for 2.1.1. 2011-01-31 20:08:56 -08:00
Jason Evans
31bfb3e7b0 Fix an alignment-related bug in huge_ralloc().
Fix huge_ralloc() to call huge_palloc() only if alignment requires it.
This bug caused under-sized allocation for aligned huge reallocation
(via rallocm()) if the requested alignment was less than the chunk size
(4 MiB by default).
2011-01-31 19:58:22 -08:00
Jason Evans
f256680f87 Fix ALLOCM_LG_ALIGN definition.
Fix ALLOCM_LG_ALIGN to take a parameter and use it.  Apparently, an
editing error left ALLOCM_LG_ALIGN with the same definition as
ALLOCM_LG_ALIGN_MASK.
2011-01-26 08:24:24 -08:00
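The bug class is easy to reproduce in miniature (only the macro names come from the commit; the mask value and the masking in the fixed macro are assumptions):

```c
/* The editing error in miniature: ALLOCM_LG_ALIGN had been left with
 * the same definition as the mask, ignoring the lg(alignment) argument
 * it was meant to encode. */
#define ALLOCM_LG_ALIGN_MASK ((int)0x3f)

/* Fixed: take the parameter and use it, keeping it within the mask. */
#define ALLOCM_LG_ALIGN(la)  ((int)((la) & ALLOCM_LG_ALIGN_MASK))
```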
Jason Evans
dbd3832d20 Fix assertion typos.
s/=/==/ in several assertions, as well as fixing spelling errors.
2011-01-14 17:37:27 -08:00
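The s/=/== class of typo is worth spelling out, because `assert(x = 1)` compiles cleanly and (for a nonzero right-hand side) always passes. A generic illustration, unrelated to jemalloc's actual assertions:

```c
#include <assert.h>

/* An assertion with the typo: it assigns instead of comparing. */
static int
buggy_check(void)
{
	int x = 0;
	assert(x = 1);	/* typo: always passes, and clobbers x */
	return x;	/* now 1, even though it started as 0 */
}

/* The corrected form actually tests the value. */
static int
fixed_check(int x)
{
	assert(x == 1);
	return x;
}
```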
Jason Evans
10e4523094 Fix a heap dumping deadlock.
Restructure the ctx initialization code such that the ctx isn't locked
across portions of the initialization code where allocation could occur.
Instead, artificially inflate the cnt_merged.curobjs field, just as is
done elsewhere to avoid similar races to the one that would otherwise be
created by the reduction in locking scope.

This bug affected interval- and growth-triggered heap dumping, but not
manual heap dumping.
2011-01-14 17:27:44 -08:00
Jason Evans
624f2f3cc9 Fix a "thread.arena" mallctl bug.
When setting a new arena association for the calling thread, also update
the tcache's cached arena pointer, primarily so that
tcache_alloc_small_hard() uses the intended arena.
2010-12-29 12:21:05 -08:00
Jason Evans
8ad0eacfb3 Update various comments. 2010-12-17 18:07:53 -08:00
Jason Evans
2a6f2af6e4 Remove an arena_bin_run_size_calc() constraint.
Remove the constraint that small run headers fit in one page.  This
constraint was necessary to avoid dirty page purging issues for unused
pages within runs for medium size classes (which no longer exist).
2010-12-16 14:23:32 -08:00
Jason Evans
2b769797ce Edit INSTALL. 2010-12-16 14:13:46 -08:00
Jason Evans
50ac670d09 Remove high_water from tcache_bin_t.
Remove the high_water field from tcache_bin_t, since it is not useful
for anything.
2010-12-16 14:12:48 -08:00
Jason Evans
1c4b088b08 Merge branch 'dev' 2010-12-03 17:05:01 -08:00
Jason Evans
0e8d3d2cb9 Updated ChangeLog for 2.1.0. 2010-12-03 17:02:16 -08:00
Jason Evans
ecf229a39f Add the "thread.[de]allocatedp" mallctls. 2010-12-03 15:55:47 -08:00
Jason Evans
cfdc8cfbd6 Use mremap(2) for huge realloc().
If mremap(2) is available and supports MREMAP_FIXED, use it for huge
realloc().

Initialize rtree later during bootstrapping, so that --enable-debug
--enable-dss works.

Fix a minor swap_avail stats bug.
2010-11-30 16:50:58 -08:00
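The mremap(2) technique can be sketched in isolation (Linux-specific; jemalloc's real huge_ralloc() additionally uses MREMAP_FIXED to land at chunk-aligned addresses, which is omitted here):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <stddef.h>

/* Grow an existing anonymous mapping in place if possible, or let the
 * kernel relocate it, moving page tables rather than copying data. */
static void *
grow_mapping(void *old, size_t oldsize, size_t newsize)
{
	return mremap(old, oldsize, newsize, MREMAP_MAYMOVE);
}
```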
Jason Evans
aee7fd2b70 Convert man page from roff to DocBook.
Convert the man page source from roff to DocBook, and generate html and
roff output.  Modify the build system such that the documentation can be
built as part of the release process, so that users need not have
DocBook tools installed.
2010-11-26 19:32:22 -08:00
Jason Evans
fc4dcfa2f5 Push down ctl_mtx.
Many mallctl*() end points require no locking, so push the locking down
to just the functions that need it.  This is of particular import for
"thread.allocated" and "thread.deallocated", which are intended as a
low-overhead way to introspect per thread allocation activity.
2010-11-24 15:44:21 -08:00
Jason Evans
1f17bd9395 Fix mallctlnametomib() documentation.
Fix the prototype for mallctlnametomib() in the manual page to
correspond to reality.
2010-11-05 15:53:34 -07:00
Jason Evans
0a36622dd1 Merge branch 'dev' 2010-10-29 20:21:45 -07:00
Jason Evans
53806fef53 Update ChangeLog for 2.0.1. 2010-10-29 20:16:39 -07:00
Jason Evans
b04a940ee5 Fix prof bugs.
Fix a race condition in ctx destruction that could cause undefined
behavior (deadlock observed).

Add mutex unlocks to some OOM error paths.
2010-10-27 19:47:40 -07:00
Jason Evans
d4bab21756 Fix compilation error.
Don't declare loop variable inside for (...) clause.
2010-10-24 20:08:37 -07:00
Jason Evans
b059a534f7 Re-indent ChangeLog.
Fix indentation inconsistencies in ChangeLog.
2010-10-24 16:54:40 -07:00
Jason Evans
a39d5b6ef2 Merge branch 'dev' 2010-10-24 16:51:13 -07:00
Jason Evans
3af83344a5 Document groff commands for manpage formatting.
Document how to format the manpage for the terminal, pdf, and html.
2010-10-24 16:48:52 -07:00
Jason Evans
0176e3057d Bump library version number. 2010-10-24 16:32:13 -07:00
Jason Evans
379f847f44 Add ChangeLog.
Add ChangeLog, which briefly summarizes releases.

Edit README and INSTALL.
2010-10-24 16:18:29 -07:00
Jason Evans
ce93055c49 Use madvise(..., MADV_FREE) on OS X.
Use madvise(..., MADV_FREE) rather than msync(..., MS_KILLPAGES) on OS
X, since it works for at least OS X 10.5 and 10.6.
2010-10-24 13:03:07 -07:00
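The purge call itself is small enough to show (a sketch: MADV_FREE here is the OS X flag the commit adopts; Linux only gained a flag of the same name in kernel 4.5, hence the guarded fallback):

```c
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <stddef.h>

/* Tell the kernel a range's contents are disposable without unmapping
 * it; the pages remain valid for reuse by the allocator. */
static int
purge_pages(void *addr, size_t length)
{
#ifdef MADV_FREE
	return madvise(addr, length, MADV_FREE);
#else
	return madvise(addr, length, MADV_DONTNEED);
#endif
}
```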
Jason Evans
0d38791e7a Edit manpage.
Make various minor edits to the manpage.
2010-10-24 12:51:38 -07:00
Jason Evans
8da141f47a Re-format size class table.
Use a more compact layout for the size class table in the man page.
This avoids layout glitches due to approaching the single-page table
size limit.
2010-10-24 11:34:50 -07:00
Jason Evans
49d0293c88 Add missing #ifdef JEMALLOC_PROF.
Only call prof_boot0() if profiling is enabled.
2010-10-23 23:43:37 -07:00
Jason Evans
e73397062a Replace JEMALLOC_OPTIONS with MALLOC_CONF.
Replace the single-character run-time flags with key/value pairs, which
can be set via the malloc_conf global, /etc/malloc.conf, and the
MALLOC_CONF environment variable.

Replace the JEMALLOC_PROF_PREFIX environment variable with the
"opt.prof_prefix" option.

Replace umax2s() with u2s().
2010-10-23 18:37:06 -07:00
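Of the three configuration channels the commit names, the compiled-in global is the easiest to show (the option string is an example; the same "key:value" syntax also works via the MALLOC_CONF environment variable and the /etc/malloc.conf symlink when linked against jemalloc of this era):

```c
#include <stddef.h>

/* Compiled-in jemalloc options: a comma-separated key/value list.
 * "narenas" and "prof_prefix" are example option names; jemalloc reads
 * this global at startup when it is linked into the program. */
const char *malloc_conf = "narenas:2,prof_prefix:jeprof.out";
```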
Jason Evans
e4f7846f1f Fix heap profiling bugs.
Fix a regression due to the recent heap profiling accuracy improvements:
prof_{m,re}alloc() must set the object's profiling context regardless of
whether it is sampled.

Fix management of the CHUNK_MAP_CLASS chunk map bits, such that all
large object (re-)allocation paths correctly initialize the bits.  Prior
to this fix, in-place realloc() cleared the bits, resulting in incorrect
reported object size from arena_salloc_demote().  After this fix the
non-demoted bit pattern is all zeros (instead of all ones), which makes
it easier to assure that the bits are properly set.
2010-10-22 10:45:59 -07:00
Jason Evans
81b4e6eb6f Fix a heap profiling regression.
Call prof_ctx_set() in all paths through prof_{m,re}alloc().

Inline arena_prof_ctx_get().
2010-10-20 20:52:00 -07:00
Jason Evans
4d6a134e13 Inline the fast path for heap sampling.
Inline the heap sampling code that is executed for every allocation
event (regardless of whether a sample is taken).

Combine all prof TLS data into a single data structure, in order to
reduce the TLS lookup volume.
2010-10-20 19:05:59 -07:00
Jason Evans
93443689a4 Add per thread allocation counters, and enhance heap sampling.
Add the "thread.allocated" and "thread.deallocated" mallctls, which can
be used to query the total number of bytes ever allocated/deallocated by
the calling thread.

Add s2u() and sa2u(), which can be used to compute the usable size that
will result from an allocation request of a particular size/alignment.

Re-factor ipalloc() to use sa2u().

Enhance the heap profiler to trigger samples based on usable size,
rather than request size.  This has a subtle, but important, impact on
the accuracy of heap sampling.  For example, previous to this change,
16- and 17-byte objects were sampled at nearly the same rate, but
17-byte objects actually consume 32 bytes each.  Therefore it was
possible for the sample to be somewhat skewed compared to actual memory
usage of the allocated objects.
2010-10-20 17:39:18 -07:00
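The idea behind sampling on usable size can be modeled with a toy version of s2u() (a simplified sketch, not jemalloc's actual size-class logic, which also has sub-quantum classes and large-size spacing):

```c
#include <stddef.h>

/* Toy model of s2u(): round a request up to the size that will
 * actually be consumed.  With quantum-spaced (16-byte) classes, a
 * 17-byte request consumes 32 bytes, as the commit message notes. */
static size_t
s2u_sketch(size_t size)
{
	const size_t quantum = 16;
	if (size <= quantum)
		return quantum;
	/* Round up to the next multiple of the quantum. */
	return (size + quantum - 1) & ~(quantum - 1);
}
```

Sampling on the rounded size, rather than the request size, keeps 16- and 17-byte objects from being sampled at nearly the same rate despite consuming different amounts of memory.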
Jason Evans
21fb95bba6 Fix a bug in arena_dalloc_bin_run().
Fix the newsize argument to arena_run_trim_tail() that
arena_dalloc_bin_run() passes.  Previously, oldsize-newsize (i.e. the
complement) was passed, which could erroneously cause dirty pages to be
returned to the clean available runs tree.  Prior to the
CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED conversion, this bug merely
caused dirty pages to be unaccounted for (and therefore never get
purged), but with CHUNK_MAP_UNZEROED, this could cause dirty pages to be
treated as zeroed (i.e. memory corruption).
2010-10-18 17:45:40 -07:00
Jason Evans
088e6a0a37 Fix arena bugs.
Split arena_dissociate_bin_run() out of arena_dalloc_bin_run(), so that
arena_bin_malloc_hard() can avoid dissociation when recovering from
losing a race.  This fixes a bug introduced by a recent attempted fix.

Fix a regression in arena_ralloc_large_grow() that was introduced by
recent fixes.
2010-10-18 00:04:44 -07:00
Jason Evans
8de6a02823 Fix arena bugs.
Move part of arena_bin_lower_run() into the callers, since the
conditions under which it should be called differ slightly between
callers.

Fix arena_chunk_purge() to omit run size in the last map entry for each
run it temporarily allocates.
2010-10-17 20:57:30 -07:00
Jason Evans
12ca91402b Add assertions to run coalescing.
Assert that the chunk map bits at the ends of the runs that participate
in coalescing are self-consistent.
2010-10-17 19:56:09 -07:00
Jason Evans
940a2e02b2 Fix numerous arena bugs.
In arena_ralloc_large_grow(), update the map element for the end of the
newly grown run, rather than the interior map element that was the
beginning of the appended run.  This is a long-standing bug, and it had
the potential to cause massive corruption, but triggering it required
roughly the following sequence of events:
  1) Large in-place growing realloc(), with left-over space in the run
     that followed the large object.
  2) Allocation of the remainder run left over from (1).
  3) Deallocation of the remainder run *before* deallocation of the
     large run, with unfortunate interior map state left over from
     previous run allocation/deallocation activity, such that one or
     more pages of allocated memory would be treated as part of the
     remainder run during run coalescing.
In summary, this was a bad bug, but it was difficult to trigger.

In arena_bin_malloc_hard(), if another thread wins the race to allocate
a bin run, dispose of the spare run via arena_bin_lower_run() rather
than arena_run_dalloc(), since the run has already been prepared for use
as a bin run.  This bug has existed since March 14, 2010:
    e00572b384
    mmap()/munmap() without arena->lock or bin->lock.

Fix bugs in arena_dalloc_bin_run(), arena_trim_head(),
arena_trim_tail(), and arena_ralloc_large_grow() that could cause the
CHUNK_MAP_UNZEROED map bit to become corrupted.  These are all
long-standing bugs, but the chances of them actually causing problems
was much lower before the CHUNK_MAP_ZEROED --> CHUNK_MAP_UNZEROED
conversion.

Fix a large run statistics regression in arena_ralloc_large_grow() that
was introduced on September 17, 2010:
    8e3c3c61b5
    Add {,r,s,d}allocm().

Add debug code to validate that supposedly pre-zeroed memory really is.
2010-10-17 17:52:14 -07:00
Jason Evans
397e5111b5 Preserve CHUNK_MAP_UNZEROED for small runs.
Preserve CHUNK_MAP_UNZEROED when allocating small runs, because it is
possible that untouched pages will be returned to the tree of clean
runs, where the CHUNK_MAP_UNZEROED flag matters.  Prior to the
conversion from CHUNK_MAP_ZEROED, this was already a bug, but in the
worst case extra zeroing occurred.  After the conversion, this bug made
it possible to incorrectly treat pages as pre-zeroed.
2010-10-16 16:19:10 -07:00