Commit Graph

1335 Commits

Jason Evans
339c2b23b2 Fix chunk_unmap() to propagate dirty state.
Fix chunk_unmap() to propagate whether a chunk is dirty, and modify
dirty chunk purging to record this information so it can be passed to
chunk_unmap().  Since the broken version of chunk_unmap() claimed that
all chunks were clean, this resulted in potential memory corruption for
purging implementations that do not zero (e.g. MADV_FREE).

This regression was introduced by
ee41ad409a (Integrate whole chunks into
unused dirty page purging machinery.).
2015-02-17 22:25:56 -08:00
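
To illustrate why the dirty state must survive the call, here is a
minimal sketch (purge_is_zeroing is a hypothetical name, not jemalloc's
code): purging primitives differ in whether purged pages read back as
zero, so a non-zeroing purge must leave the chunk marked dirty.

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static bool
    purge_is_zeroing(void *addr, size_t length)
    {
    #if defined(MADV_FREE)
        /* Pages may keep their old contents until the kernel reclaims
         * them, so the chunk is still dirty afterwards. */
        madvise(addr, length, MADV_FREE);
        return (false);
    #elif defined(MADV_DONTNEED)
        /* On Linux, subsequent reads of anonymous memory see zeroes. */
        madvise(addr, length, MADV_DONTNEED);
        return (true);
    #else
        (void)addr; (void)length;
        return (false);
    #endif
    }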
Jason Evans
47701b22ee arena_chunk_dirty_node_init() --> extent_node_dirty_linkage_init() 2015-02-17 22:23:10 -08:00
Jason Evans
eafebfdfbe Remove obsolete type arena_chunk_miscelms_t. 2015-02-17 16:12:31 -08:00
Jason Evans
a4e1888d1a Simplify extent_node_t and add extent_node_init(). 2015-02-17 15:13:52 -08:00
Jason Evans
ee41ad409a Integrate whole chunks into unused dirty page purging machinery.
Extend per arena unused dirty page purging to manage unused dirty chunks
in addition to unused dirty runs.  Rather than immediately unmapping
deallocated chunks (or purging them in the --disable-munmap case), store
them in a separate set of trees, chunks_[sz]ad_dirty.  Preferentially
allocate dirty chunks.  When excessive unused dirty pages accumulate,
purge runs and chunks in integrated LRU order (and unmap chunks in the
--enable-munmap case).

Refactor extent_node_t to provide accessor functions.
2015-02-16 21:02:17 -08:00
Jason Evans
40ab8f98e4 Remove more obsolete (incorrect) assertions.
This regression was introduced by
88fef7ceda (Refactor huge_*() calls into
arena internals.), and went undetected because of the --enable-debug
regression.
2015-02-15 20:26:45 -08:00
Jason Evans
cb9b44914e Remove obsolete (incorrect) assertions.
This regression was introduced by
88fef7ceda (Refactor huge_*() calls into
arena internals.), and went undetected because of the --enable-debug
regression.
2015-02-15 20:13:28 -08:00
Jason Evans
02e5dcf39d Fix --enable-debug regression.
Fix --enable-debug to actually enable debug mode.  This regression was
introduced by cbf3a6d703 (Move centralized
chunk management into arenas.).
2015-02-15 20:12:06 -08:00
Jason Evans
2195ba4e1f Normalize *_link and link_* fields to all be *_link. 2015-02-15 16:43:52 -08:00
Jason Evans
b01186cebd Remove redundant tcache_boot() call. 2015-02-15 14:04:55 -08:00
Jason Evans
41cfe03f39 If MALLOCX_ARENA(a) is specified, use it during tcache fill. 2015-02-13 15:28:56 -08:00
Abhishek Kulkarni
feaaa3df0d In the pkg-config file, take into account the install suffix that jemalloc was built with.
Signed-off-by: Abhishek Kulkarni <adkulkar@umail.iu.edu>
2015-02-13 12:46:19 -08:00
Dan McGregor
f8880310eb Put VERSION file in object directory
Also allow for a VERSION file in the srcroot, in case of building
out of tree from a release tarball.
2015-02-13 12:36:14 -08:00
Dan McGregor
ab5e3790f6 Build docs in object directory 2015-02-13 12:14:34 -08:00
Jason Evans
5f7140b045 Make prof_tctx accesses atomic.
Although exceedingly unlikely, it appears that writes to the prof_tctx
field of arena_chunk_map_misc_t could be reordered such that a stale
value could be read during deallocation, with profiler metadata
corruption and invalid pointer dereferences being the most likely
effects.
2015-02-12 15:54:53 -08:00
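
The idea behind the fix, sketched with C11 atomics standing in for
jemalloc's own atomic_* wrappers (the struct and function names here are
illustrative):

    #include <stdatomic.h>

    typedef struct prof_tctx_s prof_tctx_t;

    typedef struct {
        _Atomic(prof_tctx_t *) prof_tctx; /* was a plain pointer field */
    } map_misc_sketch_t;

    static inline prof_tctx_t *
    prof_tctx_get(map_misc_sketch_t *m)
    {
        /* Acquire load: never observes a stale, half-published value. */
        return (atomic_load_explicit(&m->prof_tctx, memory_order_acquire));
    }

    static inline void
    prof_tctx_set(map_misc_sketch_t *m, prof_tctx_t *tctx)
    {
        /* Release store: metadata written before this store is visible
         * to any thread that subsequently reads the pointer. */
        atomic_store_explicit(&m->prof_tctx, tctx, memory_order_release);
    }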
Jason Evans
88fef7ceda Refactor huge_*() calls into arena internals.
Make redirects to the huge_*() API the arena code's responsibility,
since arenas now take responsibility for all allocation sizes.
2015-02-12 14:06:37 -08:00
Daniel Micay
1eaf3b6f34 add missing check for new_addr chunk size
8ddc93293c switched this over to using the
address tree in order to avoid false negatives, so it now needs to check
that the size of the free extent is large enough to satisfy the request.
2015-02-12 15:46:30 -05:00
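
A hedged sketch of the corrected logic (illustrative types, not
jemalloc's extent code): an extent found by address alone must still be
checked for adequate size.

    #include <stddef.h>

    typedef struct {
        void   *addr;
        size_t  size;
    } extent_sketch_t;

    static extent_sketch_t *
    recycle_check(extent_sketch_t *found, void *new_addr, size_t size)
    {
        if (found == NULL)
            return (NULL);
        /* The address tree can return an extent at exactly new_addr
         * that is nonetheless too small to satisfy the request. */
        if (new_addr != NULL && found->size < size)
            return (NULL);
        return (found);
    }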
Jason Evans
cbf3a6d703 Move centralized chunk management into arenas.
Migrate all centralized data structures related to huge allocations and
recyclable chunks into arena_t, so that each arena can manage huge
allocations and recyclable virtual memory completely independently of
other arenas.

Add chunk node caching to arenas, in order to avoid contention on the
base allocator.

Use chunks_rtree to look up huge allocations rather than a red-black
tree.  Maintain a per arena unsorted list of huge allocations (which
will be needed to enumerate huge allocations during arena reset).

Remove the --enable-ivsalloc option, make ivsalloc() always available,
and use it for size queries if --enable-debug is enabled.  The only
practical implications of this removal are that 1) ivsalloc() is now
always available during live debugging (and the underlying radix tree is
available during core-based debugging), and 2) size query validation can
no longer be enabled independent of --enable-debug.

Remove the stats.chunks.{current,total,high} mallctls, and replace their
underlying statistics with simpler atomically updated counters used
exclusively for gdump triggering.  These statistics are no longer very
useful because each arena manages chunks independently, and per arena
statistics provide similar information.

Simplify chunk synchronization code, now that base chunk allocation
cannot cause recursive lock acquisition.
2015-02-12 00:15:56 -08:00
Jason Evans
f30e261c5b Update ckh to support metadata allocation tracking. 2015-02-12 00:15:24 -08:00
Jason Evans
064dbfbaf7 Fix a regression in tcache_bin_flush_small().
Fix a serious regression in tcache_bin_flush_small() that was introduced
by 1cb181ed63 (Implement explicit tcache
support.).
2015-02-12 00:15:16 -08:00
Jason Evans
051eae8cc5 Remove unnecessary xchg* lock prefixes. 2015-02-10 16:05:52 -08:00
Jason Evans
9e561e8d3f Test and fix tcache ID recycling. 2015-02-10 09:03:48 -08:00
Jason Evans
1cb181ed63 Implement explicit tcache support.
Add the MALLOCX_TCACHE() and MALLOCX_TCACHE_NONE macros, which can be
used in conjunction with the *allocx() API.

Add the tcache.create, tcache.flush, and tcache.destroy mallctls.

This resolves #145.
2015-02-09 17:44:48 -08:00
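
A usage sketch of the new API (the flags and mallctl names are the ones
this commit introduces):

    #include <stddef.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void)
    {
        unsigned tc;
        size_t sz = sizeof(tc);

        /* Create an explicit tcache; its ID is returned in tc. */
        if (mallctl("tcache.create", &tc, &sz, NULL, 0) != 0)
            return (1);

        /* Allocate and deallocate through the explicit tcache. */
        void *p = mallocx(4096, MALLOCX_TCACHE(tc));
        dallocx(p, MALLOCX_TCACHE(tc));

        /* Or bypass thread caching entirely. */
        void *q = mallocx(4096, MALLOCX_TCACHE_NONE);
        dallocx(q, MALLOCX_TCACHE_NONE);

        /* Flush, then destroy, the explicit tcache. */
        mallctl("tcache.flush", NULL, NULL, &tc, sizeof(tc));
        mallctl("tcache.destroy", NULL, NULL, &tc, sizeof(tc));
        return (0);
    }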
Jason Evans
23694b0745 Fix arena_get() for (!init_if_missing && refresh_if_missing) case.
Fix arena_get() to refresh the cache as needed in the (!init_if_missing
&& refresh_if_missing) case.

This flaw was introduced by the initial arena_get() implementation,
which was part of 8bb3198f72 (Refactor/fix
arenas manipulation.).
2015-02-09 17:43:10 -08:00
Jason Evans
8d0e04d42f Refactor rtree to be lock-free.
Recent huge allocation refactoring associates huge allocations with
arenas, but it remains necessary to quickly look up huge allocation
metadata during reallocation/deallocation.  A global radix tree remains
a good solution to this problem, but locking would have become the
primary bottleneck after (upcoming) migration of chunk management from
global to per arena data structures.

This lock-free implementation uses double-checked reads to traverse the
tree, so that in the steady state, each read or write requires only a
single atomic operation.

This implementation also ensures that no more than two tree levels
actually exist, through a combination of careful virtual memory
allocation which makes large sparse nodes cheap, and skipping the root
node on x64 (possible because the top 16 bits are all 0 in practice).
2015-02-04 16:51:53 -08:00
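
A simplified sketch of the double-checked read pattern (not jemalloc's
actual rtree code): the fast path is a single atomic load, and only a
NULL miss takes the create-and-publish slow path.

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdlib.h>

    typedef struct rtree_node_s rtree_node_t;
    struct rtree_node_s {
        _Atomic(rtree_node_t *) child[256];
    };

    static rtree_node_t *
    rtree_child_slow(rtree_node_t *node, size_t i)
    {
        /* calloc stands in for demand-zeroed pages; zeroed slots read
         * as NULL children (a sketch-level simplification). */
        rtree_node_t *new_child = calloc(1, sizeof(*new_child));
        rtree_node_t *expected = NULL;
        if (new_child != NULL && !atomic_compare_exchange_strong(
            &node->child[i], &expected, new_child)) {
            free(new_child);    /* lost the race; adopt the winner */
            new_child = expected;
        }
        return (new_child);
    }

    static inline rtree_node_t *
    rtree_child_read(rtree_node_t *node, size_t i)
    {
        rtree_node_t *child =
            atomic_load_explicit(&node->child[i], memory_order_acquire);
        if (child == NULL)
            child = rtree_child_slow(node, i);
        return (child);
    }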
Jason Evans
c810fcea1f Add (x != 0) assertion to lg_floor(x).
lg_floor(0) is undefined, but depending on compiler options may not
cause a crash.  This assertion makes it harder to accidentally abuse
lg_floor().
2015-02-04 16:51:53 -08:00
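
A minimal sketch of the guarded helper (assuming GCC/Clang builtins;
jemalloc's real lg_floor has several platform-specific variants):

    #include <assert.h>

    static inline unsigned
    lg_floor(unsigned long x)
    {
        assert(x != 0); /* __builtin_clzl(0) is likewise undefined */
        return ((unsigned)(sizeof(unsigned long) * 8 - 1) -
            (unsigned)__builtin_clzl(x));
    }
    /* lg_floor(1) == 0, lg_floor(8) == 3, lg_floor(9) == 3. */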
Jason Evans
f500a10b2e Refactor base_alloc() to guarantee demand-zeroed memory.
Refactor base_alloc() to guarantee that allocations are carved from
demand-zeroed virtual memory.  This supports sparse data structures such
as multi-page radix tree nodes.

Enhance base_alloc() to keep track of fragments which were too small to
support previous allocation requests, and try to consume them during
subsequent requests.  This becomes important when request sizes commonly
approach or exceed the chunk size (as could radix tree node
allocations).
2015-02-04 16:51:53 -08:00
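
The underlying property, sketched with a hypothetical helper: anonymous
mappings are demand-zeroed by the kernel, so memory carved from them
starts out zeroed, and untouched pages of a large sparse node consume no
physical memory.

    #include <stddef.h>
    #include <sys/mman.h>

    static void *
    base_pages_alloc(size_t size)
    {
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return (p == MAP_FAILED ? NULL : p);
    }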
Jason Evans
918a1a5b3f Reduce extent_node_t size to fit in one cache line. 2015-02-04 16:51:53 -08:00
Jason Evans
a55dfa4b0a Implement more atomic operations.
- atomic_*_p().
- atomic_cas_*().
- atomic_write_*().
2015-02-04 16:50:05 -08:00
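
A usage sketch of the compare-and-swap pattern these operations enable,
written with C11 atomics in place of jemalloc's atomic_* API:

    #include <stdatomic.h>
    #include <stdint.h>

    /* Atomically add delta to *ctr unless that would exceed cap. */
    static uint64_t
    counter_add_capped(_Atomic uint64_t *ctr, uint64_t delta, uint64_t cap)
    {
        uint64_t old = atomic_load(ctr);
        do {
            if (old + delta > cap)
                return (old);   /* would exceed cap; leave unchanged */
            /* On failure, the CAS reloads old and the loop retries. */
        } while (!atomic_compare_exchange_weak(ctr, &old, old + delta));
        return (old + delta);
    }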
Jason Evans
8ddc93293c Fix chunk_recycle()'s new_addr functionality.
Fix chunk_recycle()'s new_addr functionality to search by address rather
than just size if new_addr is specified.  The functionality added by
a95018ee81 (Attempt to expand huge
allocations in-place.) only worked if the two search orders happened to
return the same results (e.g. in simple test cases).
2015-02-04 16:50:04 -08:00
Jason Evans
f8723572d8 Add missing prototypes for bootstrap_{malloc,calloc,free}(). 2015-02-04 16:50:04 -08:00
Jason Evans
b0808d5f63 Fix shell test to use = instead of ==. 2015-02-04 16:50:04 -08:00
Mike Hommey
6505733012 Make opt.lg_dirty_mult work as documented
The documentation for opt.lg_dirty_mult says:
    Per-arena minimum ratio (log base 2) of active to dirty
    pages.  Some dirty unused pages may be allowed to accumulate,
    within the limit set by the ratio (or one chunk worth of dirty
    pages, whichever is greater) (...)

The restriction in parentheses currently isn't enforced. This makes
jemalloc madvise() aggressively, which in turn increases the number
of page faults significantly.

For instance, this resulted in a several(!) hundred(!) millisecond
startup regression on Firefox for Android.

This may require further tweaking, but starting with actually doing
what the documentation says is a good start.
2015-02-04 07:16:55 +09:00
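
A worked sketch of the documented threshold (illustrative names; the
default lg_dirty_mult of 3 corresponds to an 8:1 active:dirty ratio):

    #include <stddef.h>

    static size_t
    dirty_page_limit(size_t nactive, size_t lg_dirty_mult,
        size_t chunk_npages)
    {
        size_t limit = nactive >> lg_dirty_mult; /* active / 2^mult */
        /* ... or one chunk worth of dirty pages, whichever is greater. */
        return (limit > chunk_npages ? limit : chunk_npages);
    }
    /* e.g. nactive = 10000, lg_dirty_mult = 3, chunk_npages = 512:
     * max(10000/8, 512) = 1250 dirty pages tolerated before purging. */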
Felix Janda
008267b9f6 util.c: strerror_r returns char* only on glibc 2015-02-03 18:58:02 +01:00
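For context, a sketch of the portability hazard (a common guard, not
necessarily the exact one used in util.c): with _GNU_SOURCE, glibc's
strerror_r returns char * and may ignore the buffer, while the POSIX
variant returns int and always fills the buffer.

    #include <stdio.h>
    #include <string.h>

    static void
    format_errno(int err, char *buf, size_t buflen)
    {
    #if defined(__GLIBC__) && defined(_GNU_SOURCE)
        char *m = strerror_r(err, buf, buflen); /* GNU: returns string */
        if (m != buf)
            snprintf(buf, buflen, "%s", m);
    #else
        if (strerror_r(err, buf, buflen) != 0)  /* POSIX: returns int */
            snprintf(buf, buflen, "Unknown error %d", err);
    #endif
    }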
Jason Evans
5b8ed5b7c9 Implement the prof.gdump mallctl.
This feature makes it possible to toggle the gdump feature on/off during
program execution, whereas the opt.prof_gdump mallctl value can only be
set during program startup.

This resolves #72.
2015-01-25 21:21:35 -08:00
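
A usage sketch (requires a build with --enable-prof and profiling
enabled at startup):

    #include <stdbool.h>
    #include <jemalloc/jemalloc.h>

    static int
    set_gdump(bool enable)
    {
        /* Toggle high-water-triggered profile dumps at runtime. */
        return (mallctl("prof.gdump", NULL, NULL, &enable,
            sizeof(enable)));
    }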
Jason Evans
41f2e692f6 Fix quoting for CONFIG-related sed expression. 2015-01-25 20:15:13 -08:00
Jason Evans
0fd663e9c5 Avoid pointless chunk_recycle() call.
Avoid calling chunk_recycle() for mmap()ed chunks if config_munmap is
disabled, in which case there are never any recyclable chunks.

This resolves #164.
2015-01-25 17:31:24 -08:00
Sébastien Marie
77d597ebb2 add OpenBSD support 2015-01-25 13:00:42 -08:00
Sébastien Marie
eee27b2a38 huge_node_locked() doesn't have to unlock huge_mtx
In src/huge.c, huge_mtx is already unlocked after each call to
huge_node_locked(). Don't unlock it twice (doing so is undefined
behaviour).
2015-01-25 15:12:28 +01:00
Jason Evans
4581b97809 Implement metadata statistics.
There are three categories of metadata:

- Base allocations are used for bootstrap-sensitive internal allocator
  data structures.
- Arena chunk headers comprise pages which track the states of the
  non-metadata pages.
- Internal allocations differ from application-originated allocations
  in that they are for internal use, and that they are omitted from heap
  profiles.

The metadata statistics comprise the metadata categories as follows:

- stats.metadata: All metadata -- base + arena chunk headers + internal
  allocations.
- stats.arenas.<i>.metadata.mapped: Arena chunk headers.
- stats.arenas.<i>.metadata.allocated: Internal allocations.  This is
  reported separately from the other metadata statistics because it
  overlaps with the allocated and active statistics, whereas the other
  metadata statistics do not.

Base allocations are not reported separately, though their magnitude can
be computed by subtracting the arena-specific metadata.

This resolves #163.
2015-01-23 23:34:43 -08:00
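
A usage sketch of reading the new statistic (refreshing the epoch is
standard mallctl practice for up-to-date stats):

    #include <stdint.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int
    main(void)
    {
        uint64_t epoch = 1;
        size_t sz = sizeof(epoch);
        mallctl("epoch", &epoch, &sz, &epoch, sizeof(epoch));

        size_t metadata;
        sz = sizeof(metadata);
        if (mallctl("stats.metadata", &metadata, &sz, NULL, 0) == 0)
            printf("metadata: %zu bytes\n", metadata);
        return (0);
    }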
Guilherme Goncalves
ec98a44662 Use the correct type for opt.junk when printing stats. 2015-01-23 11:01:42 -02:00
Jason Evans
bec6a8da39 Implement the jemalloc-config script.
This resolves #133.
2015-01-22 17:55:58 -08:00
Jason Evans
8afcaa9d81 Update copyright dates for 2015. 2015-01-22 16:03:00 -08:00
Jason Evans
228b2e9242 Document under what circumstances in-place resizing succeeds.
This resolves #100.
2015-01-22 15:28:25 -08:00
Jason Evans
10aff3f3e1 Refactor bootstrapping to delay tsd initialization.
Refactor bootstrapping to delay tsd initialization, primarily to support
integration with FreeBSD's libc.

Refactor a0*() for internal-only use, and add the
bootstrap_{malloc,calloc,free}() API for use by FreeBSD's libc.  This
separation limits use of the a0*() functions to metadata allocation,
which doesn't require malloc/calloc/free API compatibility.

This resolves #170.
2015-01-22 14:04:27 -08:00
Jason Evans
bc96876f99 Fix arenas_cache_cleanup().
Fix arenas_cache_cleanup() to check whether arenas_cache is NULL before
deallocation, rather than checking arenas.
2015-01-22 14:02:56 -08:00
Abhishek Kulkarni
b617df81bb Add missing symbols to private_symbols.txt.
This resolves #185.
2015-01-21 12:44:35 -08:00
Jason Evans
44b57b8e8b Fix OOM handling in memalign() and valloc().
Fix memalign() and valloc() to heed imemalign()'s return value.

Reported by Kurt Wampler.
2015-01-16 18:04:17 -08:00
Jason Evans
24057f3da8 Fix an infinite recursion bug related to a0/tsd bootstrapping.
This resolves #184.
2015-01-14 16:27:31 -08:00
Guilherme Goncalves
51f86346c0 Add an isblank definition for MSVC < 2013 2015-01-09 14:33:46 -08:00
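
A sketch of such a shim (MSVC 2013 corresponds to _MSC_VER 1800; the
exact guard in the commit may differ):

    #include <ctype.h>

    #if defined(_MSC_VER) && _MSC_VER < 1800
    /* C99 isblank: true for space and horizontal tab. */
    static int
    isblank(int c)
    {
        return (c == '\t' || c == ' ');
    }
    #endif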