Commit Graph

488 Commits

Jason Evans
10aff3f3e1 Refactor bootstrapping to delay tsd initialization.
Refactor bootstrapping to delay tsd initialization, primarily to support
integration with FreeBSD's libc.

Refactor a0*() for internal-only use, and add the
bootstrap_{malloc,calloc,free}() API for use by FreeBSD's libc.  This
separation limits use of the a0*() functions to metadata allocation,
which doesn't require malloc/calloc/free API compatibility.

This resolves #170.
2015-01-22 14:04:27 -08:00
Jason Evans
bc96876f99 Fix arenas_cache_cleanup().
Fix arenas_cache_cleanup() to check whether arenas_cache is NULL before
deallocation, rather than checking arenas.
2015-01-22 14:02:56 -08:00
Jason Evans
44b57b8e8b Fix OOM handling in memalign() and valloc().
Fix memalign() and valloc() to heed imemalign()'s return value.

Reported by Kurt Wampler.
2015-01-16 18:04:17 -08:00
Guilherme Goncalves
2c5cb613df Introduce two new modes of junk filling: "alloc" and "free".
In addition to true/false, opt.junk can now be either "alloc" or "free",
giving applications the possibility of junking memory only on allocation
or deallocation.

This resolves #172.
2014-12-14 17:07:26 -08:00
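
A minimal sketch of selecting the new junk-on-free mode, assuming the usual
configuration channels (the MALLOC_CONF environment variable, or an
application-defined malloc_conf string as below):

    /* Request junk filling only on deallocation; "alloc", "true", and
     * "false" are the other accepted values for opt.junk. */
    const char *malloc_conf = "junk:free";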
Daniel Micay
b74041fb6e Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
This eliminates the malloc tunables as tools for an attacker.

Closes #173
2014-12-14 15:36:15 -08:00
Jason Evans
e12eaf93dc Style and spelling fixes. 2014-12-08 16:34:04 -08:00
Daniel Micay
dc65213111 rm unused arena wrangling from xallocx
It has no use for the arena_t since unlike rallocx it never makes a new
memory allocation. It's just an unused parameter in ixalloc_helper.
2014-10-30 23:19:34 -07:00
Jason Evans
cfc5706f69 Miscellaneous cleanups. 2014-10-30 23:18:45 -07:00
Daniel Micay
d33f834591 avoid redundant chunk header reads
* use sized deallocation in iralloct_realign
* iralloc and ixalloc always need the old size, so pass it in from the
  caller where it's often already calculated
2014-10-30 17:06:38 -07:00
Daniel Micay
809b0ac391 mark huge allocations as unlikely
This cleans up the fast path a bit more by moving more code out of it.
2014-10-30 17:06:38 -07:00
Jason Evans
81e547566e Add --with-lg-tiny-min, generalize --with-lg-quantum. 2014-10-10 22:35:07 -07:00
Jason Evans
9b75677e53 Don't fetch tsd in a0{d,}alloc().
Don't fetch tsd in a0{d,}alloc(), because doing so can cause infinite
recursion on systems that require an allocated tsd wrapper.
2014-10-10 18:19:20 -07:00
Jason Evans
fc0b3b7383 Add configure options.
Add:
  --with-lg-page
  --with-lg-page-sizes
  --with-lg-size-class-group
  --with-lg-quantum

Get rid of STATIC_PAGE_SHIFT, in favor of directly setting LG_PAGE.

Fix various edge conditions exposed by the configure options.
2014-10-09 22:44:37 -07:00
Jason Evans
3a8b9b1fd9 Fix a recursive lock acquisition regression.
Fix a recursive lock acquisition regression, which was introduced by
8bb3198f72 (Refactor/fix arenas
manipulation.).
2014-10-08 00:54:16 -07:00
Daniel Micay
f22214a29d Use regular arena allocation for huge tree nodes.
This avoids grabbing the base mutex, as a step towards fine-grained
locking for huge allocations. The thread cache also provides a tiny
(~3%) improvement for serial huge allocations.
2014-10-07 23:57:09 -07:00
Jason Evans
8bb3198f72 Refactor/fix arenas manipulation.
Abstract arenas access to use arena_get() (or a0get() where appropriate)
rather than directly reading e.g. arenas[ind].  Prior to the addition of
the arenas.extend mallctl, the worst possible outcome of directly
accessing arenas was a stale read, but arenas.extend may allocate and
assign a new array to arenas.

Add a tsd-based arenas_cache, which amortizes arenas reads.  This
introduces some subtle bootstrapping issues, with tsd_boot() now being
split into tsd_boot[01]() to support tsd wrapper allocation
bootstrapping, as well as an arenas_cache_bypass tsd variable which
dynamically terminates allocation of arenas_cache itself.

Promote a0malloc(), a0calloc(), and a0free() to be generally useful for
internal allocation, and use them in several places (more may be
appropriate).

Abstract arena->nthreads management and fix a missing decrement during
thread destruction (recent tsd refactoring left arenas_cleanup()
unused).

Change arena_choose() to propagate OOM, and handle OOM in all callers.
This is important for providing consistent allocation behavior when the
MALLOCX_ARENA() flag is being used.  Prior to this fix, an OOM could
silently result in allocation from a different arena than the one
specified.
2014-10-07 23:14:57 -07:00
Jason Evans
155bfa7da1 Normalize size classes.
Normalize size classes to use the same number of size classes per size
doubling (currently hard coded to 4), across the entire range of size
classes.  Small size classes already used this spacing, but in order to
support this change, additional small size classes now fill [4 KiB .. 16
KiB).  Large size classes range from [16 KiB .. 4 MiB).  Huge size
classes now support non-multiples of the chunk size in order to fill (4
MiB .. 16 MiB).
2014-10-06 01:45:13 -07:00
Jason Evans
0800afd03f Silence a compiler warning. 2014-10-04 14:59:17 -07:00
Jason Evans
029d44cf8b Fix tsd cleanup regressions.
Fix tsd cleanup regressions that were introduced in
5460aa6f66 (Convert all tsd variables to
reside in a single tsd structure.).  These regressions were twofold:

1) tsd_tryget() should never (and need never) return NULL.  Rename it to
   tsd_fetch() and simplify all callers.
2) tsd_*_set() must only be called when tsd is in the nominal state,
   because cleanup happens during the nominal-->purgatory transition,
   and re-initialization must not happen while in the purgatory state.
   Add tsd_nominal() and use it as needed.  Note that tsd_*{p,}_get()
   can still be used as long as no re-initialization that would require
   cleanup occurs.  This means that e.g. the thread_allocated counter
   can be updated unconditionally.
2014-10-04 11:22:55 -07:00
Jason Evans
fc12c0b8bc Implement/test/fix prof-related mallctl's.
Implement/test/fix the opt.prof_thread_active_init,
prof.thread_active_init, and thread.prof.active mallctl's.

Test/fix the thread.prof.name mallctl.

Refactor opt_prof_active to be read-only and move mutable state into the
prof_active variable.  Stop leaning on ctl-related locking for
protection.
2014-10-03 23:25:30 -07:00
Jason Evans
551ebc4364 Convert to uniform style: cond == false --> !cond 2014-10-03 10:16:09 -07:00
Dave Rigby
e3a16fce5e Mark malloc_conf as a weak symbol
This fixes issue #113 - je_malloc_conf is not respected on OS X
2014-09-29 15:05:55 -07:00
Jason Evans
5460aa6f66 Convert all tsd variables to reside in a single tsd structure. 2014-09-23 02:36:08 -07:00
Jason Evans
c3e9e7b041 Fix irallocx_prof() sample logic.
Fix irallocx_prof() sample logic to only update the threshold counter
after it knows what size the allocation ended up being.  This regression
was caused by 6e73dc194e (Fix a profile
sampling race.), which did not make it into any releases prior to this
fix.
2014-09-11 17:04:03 -07:00
Jason Evans
9c640bfdd4 Apply likely()/unlikely() to allocation/deallocation fast paths. 2014-09-11 17:01:58 -07:00
Jason Evans
91566fc079 Fix mallocx() to always honor MALLOCX_ARENA() when profiling. 2014-09-11 13:15:33 -07:00
Daniel Micay
23fdf8b359 mark some conditions as unlikely
* assertion failure
* malloc_init failure
* malloc not already initialized (in malloc_init)
* running in valgrind
* thread cache disabled at runtime

Clang and GCC already consider a comparison with NULL or -1 to be cold,
so many branches (out-of-memory) are already correctly considered as
cold and marking them is not important.
2014-09-10 21:49:42 -04:00
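
For reference, a sketch of the usual __builtin_expect idiom behind such
branch hints; jemalloc's own macro names and fallbacks may differ, and the
condition shown is only illustrative:

    /* Hint the compiler that a condition is almost always false (or true),
     * so the cold branch is laid out off the hot path. */
    #define unlikely(x) __builtin_expect(!!(x), 0)
    #define likely(x)   __builtin_expect(!!(x), 1)

    if (unlikely(initialization_failed))
        return (NULL);   /* cold path */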
Jason Evans
6e73dc194e Fix a profile sampling race.
Fix a profile sampling race that was due to preparing to sample, yet
doing nothing to assure that the context remains valid until the stats
are updated.

These regressions were caused by
602c8e0971 (Implement per thread heap
profiling.), which did not make it into any releases prior to these
fixes.
2014-09-09 19:47:09 -07:00
Jason Evans
a2260c95cd Fix sdallocx() assertion.
Refactor sdallocx() and nallocx() to share inallocx(), and fix an
sdallocx() assertion to check usize rather than size.
2014-09-09 10:39:15 -07:00
Daniel Micay
4cfe55166e Add support for sized deallocation.
This adds a new `sdallocx` function to the external API, allowing the
size to be passed by the caller.  It avoids some extra reads in the
thread cache fast path.  In the case where stats are enabled, this
avoids the work of calculating the size from the pointer.

An assertion validates the size that's passed in, so enabling debugging
will allow users of the API to debug cases where an incorrect size is
passed in.

The performance win for a contrived microbenchmark doing an allocation
and immediately freeing it is ~10%.  It may have a different impact on a
real workload.

Closes #28
2014-09-08 17:34:24 -07:00
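
A minimal sketch of the new entry point, assuming the signature introduced
here (the size passed to sdallocx must correspond to the original request):

    #include <jemalloc/jemalloc.h>

    void example(void)
    {
        void *p = mallocx(128, 0);
        if (p != NULL) {
            /* The caller already knows the request size, so the free path
             * can skip re-deriving it from the pointer. */
            sdallocx(p, 128, 0);
        }
    }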
Jason Evans
b718cf77e9 Optimize [nmd]alloc() fast paths.
Optimize [nmd]alloc() fast paths such that the (flags == 0) case is
streamlined, flags decoding only happens to the minimum degree
necessary, and no conditionals are repeated.
2014-09-07 14:40:19 -07:00
Sara Golemon
3e24afa28e Test for availability of malloc hooks via autoconf
__*_hook() is glibc, but on at least one glibc platform (homebrew),
the __GLIBC__ define isn't set correctly and we miss being able to
use these hooks.

Do a feature test for it during configuration so that we enable it
anywhere the hooks are actually available.
2014-08-22 15:19:21 -07:00
Jason Evans
602c8e0971 Implement per thread heap profiling.
Rename data structures (prof_thr_cnt_t-->prof_tctx_t,
prof_ctx_t-->prof_gctx_t), and convert to storing a prof_tctx_t for
sampled objects.

Convert PROF_ALLOC_PREP() to prof_alloc_prep(), since precise backtrace
depth within jemalloc functions is no longer an issue (pprof prunes
irrelevant frames).

Implement mallctl's:
- prof.reset implements full sample data reset, and optional change of
  sample interval.
- prof.lg_sample reads the current sample interval (opt.lg_prof_sample
  was the permanent source of truth prior to prof.reset).
- thread.prof.name provides naming capability for threads within heap
  profile dumps.
- thread.prof.active makes it possible to activate/deactivate heap
  profiling for individual threads.

Modify the heap dump files to contain per thread heap profile data.
This change is incompatible with the existing pprof, which will require
enhancements to read and process the enriched data.
2014-08-19 21:31:16 -07:00
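
A sketch of driving the new per-thread controls through mallctl(); the names
are the ones listed above, and error handling is elided:

    #include <stdbool.h>
    #include <jemalloc/jemalloc.h>

    void profile_this_thread(void)
    {
        /* Label the calling thread in heap profile dumps. */
        const char *name = "worker-3";
        mallctl("thread.prof.name", NULL, NULL, &name, sizeof(name));

        /* Toggle sampling for just this thread. */
        bool active = true;
        mallctl("thread.prof.active", NULL, NULL, &active, sizeof(active));
    }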
Richard Diamond
94ed6812bc Don't catch fork()ing events for Native Client.
Native Client doesn't allow forking, thus there is no need to catch
fork()ing events for Native Client.

Additionally, without this commit, jemalloc will introduce an unresolved
pthread_atfork() in PNaCl Rust bins.
2014-06-02 07:45:33 -07:00
Jason Evans
e2deab7a75 Refactor huge allocation to be managed by arenas.
Refactor huge allocation to be managed by arenas (though the global
red-black tree of huge allocations remains for lookup during
deallocation).  This is the logical conclusion of recent changes that 1)
made per arena dss precedence apply to huge allocation, and 2) made it
possible to replace the per arena chunk allocation/deallocation
functions.

Remove the top level huge stats, and replace them with per arena huge
stats.

Normalize function names and types to *dalloc* (some were *dealloc*).

Remove the --enable-mremap option.  As jemalloc currently operates, this
is a performance regression for some applications, but planned work to
logarithmically space huge size classes should provide similar amortized
performance.  The motivation for this change was that mremap-based huge
reallocation forced leaky abstractions that prevented refactoring.
2014-05-15 22:36:41 -07:00
aravind
fb7fe50a88 Add support for user-specified chunk allocators/deallocators.
Add new mallctl endpoints "arena<i>.chunk.alloc" and
"arena<i>.chunk.dealloc" to allow userspace to configure
jemalloc's chunk allocator and deallocator on a per-arena
basis.
2014-05-12 10:46:03 -07:00
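
A sketch of installing a custom chunk allocator through these endpoints.
Both the callback signature and the exact mallctl spelling below are
assumptions based on this development snapshot and may differ in released
versions:

    #include <stdbool.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    /* Hypothetical replacement allocator; returning NULL reports failure. */
    static void *
    my_chunk_alloc(size_t size, size_t alignment, bool *zero, unsigned arena_ind)
    {
        return (NULL);
    }

    void install_chunk_alloc(unsigned arena_ind)
    {
        void *(*alloc)(size_t, size_t, bool *, unsigned) = my_chunk_alloc;
        char name[64];
        snprintf(name, sizeof(name), "arena.%u.chunk.alloc", arena_ind);
        mallctl(name, NULL, NULL, &alloc, sizeof(alloc));
    }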
Jason Evans
a344dd01c7 Fix coding style nits. 2014-05-01 15:51:30 -07:00
Jason Evans
6f001059aa Simplify backtracing.
Simplify backtracing to not ignore any frames, and compensate for this
in pprof in order to increase flexibility with respect to function-based
refactoring even in the presence of non-deterministic inlining.  Modify
pprof to blacklist all jemalloc allocation entry points including
non-standard ones like mallocx(), and ignore all allocator-internal
frames.  Prior to this change, pprof excluded the specifically
blacklisted functions from backtraces, but it left allocator-internal
frames intact.
2014-04-22 20:55:09 -07:00
Jason Evans
bd87b01999 Optimize Valgrind integration.
Forcefully disable tcache if running inside Valgrind, and remove
Valgrind calls in tcache-specific code.

Restructure Valgrind-related code to move most Valgrind calls out of the
fast path functions.

Take advantage of static knowledge to elide some branches in
JEMALLOC_VALGRIND_REALLOC().
2014-04-15 16:49:57 -07:00
Jason Evans
ecd3e59ca3 Remove the "opt.valgrind" mallctl.
Remove the "opt.valgrind" mallctl because it is unnecessary -- jemalloc
automatically detects whether it is running inside valgrind.
2014-04-15 14:33:50 -07:00
Jason Evans
9790b9667f Remove the *allocm() API, which is superseded by the *allocx() API. 2014-04-14 22:32:31 -07:00
Jason Evans
9b0cbf0850 Remove support for non-prof-promote heap profiling metadata.
Make promotion of sampled small objects to large objects mandatory, so
that profiling metadata can always be stored in the chunk map, rather
than requiring one pointer per small region in each small-region page
run.  In practice the non-prof-promote code was only useful when using
jemalloc to track all objects and report them as leaks at program exit.
However, Valgrind is at least as good a tool for this particular use
case.

Furthermore, the non-prof-promote code is getting in the way of
some optimizations that will make heap profiling much cheaper for the
predominant use case (sampling a small representative proportion of all
allocations).
2014-04-11 14:24:51 -07:00
Ben Maurer
be8e59f5a6 Don't dereference chunk->arena in free() hot path
When you call free() we load chunk->arena even though that
data isn't used on the tcache hot path.

In profiling some FB applications, I found that ~30% of the
dTLB misses in the free() function come from this line. With
4 MB chunks, the arena_chunk_t->map is ~32 KB (1024 pages
in the chunk, four 8-byte pointers per arena_chunk_map_t). This
means there's only a 1/8 chance of the page containing
chunk->arena also containing the map bits.
2014-04-05 15:59:08 -07:00
Max Wang
fbb31029a5 Use arena dss prec instead of default for huge allocs.
Pass a dss_prec_t parameter to huge_{m,p,r}alloc instead of defaulting
to the chunk dss prec.
2014-03-28 13:43:58 -07:00
Jason Evans
e2206edebc Fix unused variable warnings. 2014-01-21 14:59:13 -08:00
Jason Evans
b2c31660be Extract profiling code from [re]allocation functions.
Extract profiling code from malloc(), imemalign(), calloc(), realloc(),
mallocx(), rallocx(), and xallocx().  This slightly reduces the amount
of code compiled into the fast paths, but the primary benefit is the
combinatorial complexity reduction.

Simplify iralloc[t]() by creating a separate ixalloc() that handles the
no-move cases.

Further simplify [mrxn]allocx() (and by implication [mrn]allocm()) to
make request size overflows due to size class and/or alignment
constraints trigger undefined behavior (detected by debug-only
assertions).

Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling
backtrace creation in imemalign().  This bug impacted posix_memalign()
and aligned_alloc().
2014-01-12 15:41:05 -08:00
Jason Evans
0405312921 Fix an uninitialized variable read in xallocx(). 2013-12-20 15:52:01 -08:00
Jason Evans
665769357c Optimize arena_prof_ctx_set().
Refactor such that arena_prof_ctx_set() receives usize as an argument,
and use it to determine whether to handle ptr as a small region, rather
than reading the chunk page map.
2013-12-15 21:57:02 -08:00
Jason Evans
d82a5e6a34 Implement the *allocx() API.
Implement the *allocx() API, which is a successor to the *allocm() API.
The *allocx() functions are slightly simpler to use because they have
fewer parameters, they directly return the results of primary interest,
and mallocx()/rallocx() avoid the strict aliasing pitfall that
allocm()/rallocm() share with posix_memalign().  The following code
violates strict aliasing rules:

    foo_t *foo;
    allocm((void **)&foo, NULL, 42, 0);

whereas the following is safe:

    foo_t *foo;
    void *p;
    allocm(&p, NULL, 42, 0);
    foo = (foo_t *)p;

mallocx() does not have this problem:

    foo_t *foo = (foo_t *)mallocx(42, 0);
2013-12-12 22:35:52 -08:00
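
For completeness, a sketch of how the rest of the family described above fits
together (error handling elided; foo_t is the same placeholder type used in
the commit message):

    size_t usize = nallocx(42, 0);          /* size mallocx(42, 0) would yield */
    foo_t *foo = (foo_t *)mallocx(42, 0);   /* no void ** out-parameter */
    foo = (foo_t *)rallocx(foo, 100, 0);    /* resize, possibly moving */
    usize = sallocx(foo, 0);                /* usable size of live allocation */
    dallocx(foo, 0);                        /* deallocate */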
Jason Evans
7369232544 Silence some unused variable warnings. 2013-12-10 13:51:52 -08:00
Jason Evans
52b30691f9 Remove unused variable. 2013-12-02 15:16:39 -08:00
Jason Evans
addad093f8 Clean up malloc_ncpus().
Clean up malloc_ncpus() by replacing incorrectly indented if..else
branches with a ?: expression.

Submitted by Igor Podlesny.
2013-11-29 16:19:44 -08:00
Jason Evans
39e7fd0580 Fix ALLOCM_ARENA(a) handling in rallocm().
Fix rallocm() to use the specified arena for allocation, not just
deallocation.

Clarify ALLOCM_ARENA(a) documentation.
2013-11-25 18:02:35 -08:00
Leonard Crestez
ac4403cacb Delay pthread_atfork registering.
This function causes recursive allocation on LinuxThreads.

Signed-off-by: Crestez Dan Leonard <lcrestez@ixiacom.com>
2013-10-24 16:40:31 -07:00
Jason Evans
1d1cee127a Add a missing mutex unlock in malloc_init_hard() error path.
Add a missing mutex unlock in a malloc_init_hard() error path (failed
mutex initialization).  In practice this bug was very unlikely to ever
trigger, but if it did, application deadlock would likely result.

Reported by Pat Lynch.
2013-10-21 15:04:12 -07:00
Jason Evans
e2985a2381 Avoid (x < 0) comparison for unsigned x.
Avoid (min < 0) comparison for unsigned min in malloc_conf_init().  This
bug had no practical consequences.

Reported by Pat Lynch.
2013-10-21 15:01:44 -07:00
Jason Evans
6556e28be1 Prefer not_reached() over assert(false) where appropriate. 2013-10-21 14:56:27 -07:00
Jason Evans
543abf7e6c Fix inlining warning.
Add the JEMALLOC_ALWAYS_INLINE_C macro and use it for always-inlined
functions declared in .c files.  This fixes a function attribute
inconsistency for debug builds that resulted in (harmless) compiler
warnings about functions not being inlinable.

Reported by Ricardo Nabinger Sanchez.
2013-10-19 17:26:00 -07:00
Alexandre Perrin
dd6ef0302f malloc_conf_init: revert errno value when readlink(2) fails. 2013-10-13 15:33:15 -07:00
Jason Evans
88c222c8e9 Fix a prof-related locking order bug.
Fix a locking order bug that could cause deadlock during fork if heap
profiling were enabled.
2013-02-06 11:59:30 -08:00
Jason Evans
bbe29d374d Fix potential TLS-related memory corruption.
Avoid writing to uninitialized TLS as a side effect of deallocation.
Initializing TLS during deallocation is unsafe because it is possible
that a thread never did any allocation, and that TLS has already been
deallocated by the threads library, resulting in write-after-free
corruption.  These fixes affect prof_tdata and quarantine; all other
uses of TLS are already safe, whether intentionally (as for tcache) or
unintentionally (as for arenas).
2013-01-31 14:23:48 -08:00
Jason Evans
d1b6e18a99 Revert opt_abort and opt_junk refactoring.
Revert refactoring of opt_abort and opt_junk declarations.  clang
accepts the config_*-based declarations (and generates correct code),
but gcc complains with:

  error: initializer element is not constant
2013-01-22 16:54:26 -08:00
Jason Evans
ba175a2bfb Use config_* instead of JEMALLOC_*.
Convert a couple of stragglers from JEMALLOC_* to use config_*.
2013-01-22 12:14:45 -08:00
Jason Evans
88393cb0eb Add and use JEMALLOC_ALWAYS_INLINE.
Add JEMALLOC_ALWAYS_INLINE and use it to guarantee that the entire fast
paths of the primary allocation/deallocation functions are inlined.
2013-01-22 08:45:43 -08:00
Garrett Cooper
6e6164ae15 Don't mangle errno with free(3) if utrace(2) fails
This ensures POLA on FreeBSD (at least) as free(3) is generally assumed
to not fiddle around with errno.

Signed-off-by: Garrett Cooper <yanegomi@gmail.com>
2012-12-24 10:30:57 -08:00
Jason Evans
1bf2743e08 Add clipping support to lg_chunk option processing.
Modify processing of the lg_chunk option so that it clips an
out-of-range input to the edge of the valid range.  This makes it
possible to request the minimum possible chunk size without intimate
knowledge of allocator internals.

Submitted by Ian Lepore (see FreeBSD PR bin/174641).
2012-12-23 08:51:48 -08:00
Jason Evans
609ae595f0 Add arena-specific and selective dss allocation.
Add the "arenas.extend" mallctl, so that it is possible to create new
arenas that are outside the set that jemalloc automatically multiplexes
threads onto.

Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
to explicitly allocate from a particular arena.

Add the "opt.dss" mallctl, which controls the default precedence of dss
allocation relative to mmap allocation.

Add the "arena.<i>.dss" mallctl, which makes it possible to set the
default dss precedence on a per arena or global basis.

Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".

Add the "stats.arenas.<i>.dss" mallctl.
2012-10-12 18:26:16 -07:00
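
A sketch of the combined usage: create an arena with "arenas.extend", then
allocate from it explicitly via the new flag (experimental allocm() API as it
existed at this point; error handling elided):

    unsigned arena_ind;
    size_t sz = sizeof(arena_ind);
    mallctl("arenas.extend", &arena_ind, &sz, NULL, 0);

    void *p;
    /* Allocate 4096 bytes from the newly created arena. */
    allocm(&p, NULL, 4096, ALLOCM_ARENA(arena_ind));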
Jason Evans
2cc11ff837 Make malloc_usable_size() implementation consistent with prototype.
Use JEMALLOC_USABLE_SIZE_CONST for the malloc_usable_size()
implementation as well as the prototype, for consistency's sake.
2012-10-09 16:29:21 -07:00
Jason Evans
b5225928fe Fix fork(2)-related mutex acquisition order.
Fix mutex acquisition order inversion for the chunks rtree and the base
mutex.  Chunks rtree acquisition was introduced by the previous commit,
so this bug was short-lived.
2012-10-09 16:16:00 -07:00
Jason Evans
20f1fc95ad Fix fork(2)-related deadlocks.
Add a library constructor for jemalloc that initializes the allocator.
This fixes a race that could occur if threads were created by the main
thread prior to any memory allocation, followed by fork(2), and then
memory allocation in the child process.

Fix the prefork/postfork functions to acquire/release the ctl, prof, and
rtree mutexes.  This fixes various fork() child process deadlocks, but
one possible deadlock remains (intentionally) unaddressed: prof
backtracing can acquire runtime library mutexes, so deadlock is still
possible if heap profiling is enabled during fork().  This deadlock is
known to be a real issue in at least the case of libgcc-based
backtracing.

Reported by tfengjun.
2012-10-09 15:21:46 -07:00
Corey Richardson
1d553f72cb If sysconf() fails, the number of CPUs is reported as UINT_MAX, not 1 as it should be 2012-10-08 15:45:38 -07:00
Jason Evans
5c710cee78 Remove const from __*_hook variable declarations.
Remove const from __*_hook variable declarations, so that glibc can
modify them during process forking.
2012-05-23 16:09:22 -07:00
Jason Evans
174b70efb4 Disable tcache by default if running inside Valgrind.
Disable tcache by default if running inside Valgrind, in order to avoid
making unallocated objects appear reachable to Valgrind.
2012-05-15 23:31:53 -07:00
Jason Evans
781fe75e0a Auto-detect whether running inside Valgrind.
Auto-detect whether running inside Valgrind, thus removing the need to
manually specify MALLOC_CONF=valgrind:true.
2012-05-15 14:48:14 -07:00
Jason Evans
58ad1e4956 Return early in _malloc_{pre,post}fork() if uninitialized.
Avoid mutex operations in _malloc_{pre,post}fork() unless jemalloc has
been initialized.

Reported by David Xu.
2012-05-11 17:40:16 -07:00
Mike Hommey
fd97b1dfc7 Add support for MSVC
Tested with MSVC 8 32 and 64 bits.
2012-05-01 11:32:11 -07:00
Mike Hommey
da99e31105 Replace JEMALLOC_ATTR with various different macros when it makes sense
These newly added macros will be used to implement the equivalent under
MSVC. Also, move the definitions to headers, where they make more sense,
and for some, are even more useful there (e.g. malloc).
2012-04-30 17:57:31 -07:00
Mike Hommey
a14bce85e8 Use Get/SetLastError on Win32
Using errno on win32 doesn't quite work, because the value set in a shared
library can't be read from e.g. an executable calling the function setting
errno.

At the same time, since buferror always uses errno/GetLastError, don't pass
it.
2012-04-30 16:50:55 -07:00
Jason Evans
3fb50b0407 Fix a PROF_ALLOC_PREP() error path.
Fix a PROF_ALLOC_PREP() error path to initialize the return value to
NULL.
2012-04-25 13:13:44 -07:00
Jason Evans
8694e2e7b9 Silence compiler warnings. 2012-04-23 13:05:32 -07:00
Mike Hommey
a19e87fbad Add support for Mingw 2012-04-21 21:27:46 -07:00
Jason Evans
a8f8d7540d Remove mmap_unaligned.
Remove mmap_unaligned, which was used to heuristically decide whether to
optimistically call mmap() in such a way that could reduce the total
number of system calls.  If I remember correctly, the intention of
mmap_unaligned was to avoid always executing the slow path in the
presence of ASLR.  However, that reasoning seems to have been based on a
flawed understanding of how ASLR actually works.  Although ASLR
apparently causes mmap() to ignore address requests, it does not cause
total placement randomness, so there is a reasonable expectation that
iterative mmap() calls will start returning chunk-aligned mappings once
the first chunk has been properly aligned.
2012-04-21 19:17:21 -07:00
Jason Evans
606f1fdc3c Put CONF_HANDLE_*() keys in quotes.
Put CONF_HANDLE_*() keys in quotes, so that they aren't mangled when
--with-private-namespace is used.
2012-04-20 21:39:14 -07:00
Jason Evans
86e58583bb Make special FreeBSD function overrides visible.
Make special FreeBSD libc/libthr function overrides for
_malloc_prefork(), _malloc_postfork(), and _malloc_thread_cleanup()
visible.
2012-04-18 19:01:00 -07:00
Jason Evans
0b25fe79aa Update prof defaults to match common usage.
Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).

Change the "opt.prof_accum" default from true to false.

Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be
abused to disable final profile dumping.
2012-04-17 16:39:33 -07:00
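
A sketch of a config string that makes these settings explicit instead of
relying on the new defaults (profiling must also be enabled at build time):

    /* Sample roughly every 512 KiB of allocation activity and skip the final
     * dump at exit, which previously required abusing opt.prof_prefix. */
    const char *malloc_conf = "prof:true,lg_prof_sample:19,prof_final:false";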
Jason Evans
7ca0fdfb85 Disable munmap() if it causes VM map holes.
Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.

Unify the chunk caching for mmap and dss.

Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
2012-04-12 20:20:58 -07:00
Jason Evans
d6abcbb14b Always disable redzone by default.
Always disable redzone by default, even when --enable-debug is
specified.  The memory overhead for redzones can be substantial, which
makes this feature something that should only be opted into.
2012-04-12 17:09:54 -07:00
Mike Hommey
b8325f9cb0 Call base_boot before chunk_boot0
Chunk_boot0 calls rtree_new, which calls base_alloc, which locks the
base_mtx mutex. That mutex is initialized in base_boot.
2012-04-12 11:42:20 -07:00
Jason Evans
5ff709c264 Normalize aligned allocation algorithms.
Normalize arena_palloc(), chunk_alloc_mmap_slow(), and
chunk_recycle_dss() to use the same algorithm for trimming
over-allocation.

Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and
ALIGNMENT_CEILING() macros, and use them where appropriate.

Remove the run_size_p parameter from sa2u().

Fix a potential deadlock in chunk_recycle_dss() that was introduced by
eae269036c (Add alignment support to
chunk_alloc()).
2012-04-11 18:13:45 -07:00
Jason Evans
122449b073 Implement Valgrind support, redzones, and quarantine.
Implement Valgrind support, as well as the redzone and quarantine
features, which help Valgrind detect memory errors.  Redzones are only
implemented for small objects because the changes necessary to support
redzones around large and huge objects are complicated by in-place
reallocation, to the point that it isn't clear that the maintenance
burden is worth the incremental improvement to Valgrind support.

Merge arena_salloc() and arena_salloc_demote().

Refactor i[v]salloc() to expose the 'demote' option.
2012-04-11 11:46:18 -07:00
Jason Evans
a1ee7838e1 Rename labels.
Rename labels from FOO to label_foo in order to avoid system macro
definitions, in particular OUT and ERROR on mingw.

Reported by Mike Hommey.
2012-04-10 15:07:44 -07:00
Jason Evans
b147611b52 Add utrace(2)-based tracing (--enable-utrace). 2012-04-05 13:36:17 -07:00
Jason Evans
02b231205e Fix threaded initialization and enable it on Linux.
Reported by Mike Hommey.
2012-04-05 11:06:23 -07:00
Jason Evans
01b3fe55ff Add a0malloc(), a0calloc(), and a0free().
Add a0malloc(), a0calloc(), and a0free(), which are used by FreeBSD's
libc to allocate/deallocate TLS in static binaries.
2012-04-03 19:25:48 -07:00
Jason Evans
633aaff967 Postpone mutex initialization on FreeBSD.
Postpone mutex initialization on FreeBSD until after base allocation is
safe.
2012-04-03 19:25:30 -07:00
Jason Evans
ae4c7b4b40 Clean up *PAGE* macros.
s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.

Remove remnants of the dynamic-page-shift code.

Rename the "arenas.pagesize" mallctl to "arenas.page".

Remove the "arenas.chunksize" mallctl, which is redundant with
"opt.lg_chunk".
2012-04-02 07:04:34 -07:00
Jason Evans
f004737267 Revert "Avoid NULL check in free() and malloc_usable_size()."
This reverts commit 96d4120ac0.

ivsalloc() depends on chunks_rtree being initialized.  This can be
worked around via a NULL pointer check.  However,
thread_allocated_tsd_get() also depends on initialization having
occurred, and there is no way to guard its call in free() that is
cheaper than checking whether ptr is NULL.
2012-04-02 15:18:24 -07:00
Jason Evans
96d4120ac0 Avoid NULL check in free() and malloc_usable_size().
Generalize isalloc() to handle NULL pointers in such a way that the NULL
checking overhead is only paid when introspecting huge allocations (or
NULL).  This allows free() and malloc_usable_size() to no longer check
for NULL.

Submitted by Igor Bukanov and Mike Hommey.
2012-04-02 14:50:03 -07:00
Mike Hommey
80b25932ca Move last bit of zone initialization in zone.c, and lazy-initialize 2012-04-02 14:15:20 -07:00
Jason Evans
4eeb52f080 Remove vsnprintf() and strtoumax() validation.
Remove code that validates malloc_vsnprintf() and malloc_strtoumax()
against their namesakes.  The validation code has adequately served its
usefulness at this point, and it isn't worth dealing with the different
formatting for %p with glibc versus other implementations for NULL
pointers ("(nil)" vs. "0x0").

Reported by Mike Hommey.
2012-04-02 02:30:24 -07:00
Mike Hommey
3c2ba0dcbc Avoid crashes when system libraries use the purgeable zone allocator 2012-03-30 11:03:20 -07:00
Mike Hommey
71a93b8725 Move zone registration to zone.c 2012-03-30 10:53:00 -07:00
Mike Hommey
e77fa59ece Don't use pthread_atfork to register prefork/postfork handlers on OSX
OSX libc calls zone allocators' force_lock/force_unlock already.
2012-03-28 16:17:21 -07:00
Jason Evans
2465bdf493 Check for NULL ptr in malloc_usable_size().
Check for NULL ptr in malloc_usable_size(), rather than just asserting
that ptr is non-NULL.  This matches behavior of other implementations
(e.g., glibc and tcmalloc).
2012-03-26 13:13:55 -07:00
Mike Hommey
5c89c50d18 Fix glibc hooks when using both --with-jemalloc-prefix and --with-mangling 2012-03-26 12:43:05 -07:00
Jason Evans
41b6afb834 Port to FreeBSD.
Use FreeBSD-specific functions (_pthread_mutex_init_calloc_cb(),
_malloc_{pre,post}fork()) to avoid bootstrapping issues due to
allocation in libc and libthr.

Add malloc_strtoumax() and use it instead of strtoul().  Disable
validation code in malloc_vsnprintf() and malloc_strtoumax() until
jemalloc is initialized.  This is necessary because locale
initialization causes allocation for both vsnprintf() and strtoumax().

Force the lazy-lock feature on in order to avoid pthread_self(),
because it causes allocation.

Use syscall(SYS_write, ...) rather than write(...), because libthr wraps
write() and causes allocation.  Without this workaround, it would not be
possible to print error messages in malloc_conf_init() without
substantially reworking bootstrapping.

Fix choose_arena_hard() to look at how many threads are assigned to the
candidate choice, rather than checking whether the arena is
uninitialized.  This bug potentially caused more arenas to be
initialized than necessary.
2012-02-02 23:09:53 -08:00
Jason Evans
6da5418ded Remove ephemeral mutexes.
Remove ephemeral mutexes from the prof machinery, and remove
malloc_mutex_destroy().  This simplifies mutex management on systems
that call malloc()/free() inside pthread_mutex_{create,destroy}().

Add atomic_*_u() for operation on unsigned values.

Fix prof_printf() to call malloc_vsnprintf() rather than
malloc_snprintf().
2012-03-23 18:05:51 -07:00
Jason Evans
9225a1991a Add JEMALLOC_CC_SILENCE_INIT().
Add JEMALLOC_CC_SILENCE_INIT(), which provides succinct syntax for
initializing a variable to avoid a spurious compiler warning.
2012-03-23 15:39:07 -07:00
Jason Evans
cd9a1346e9 Implement tsd.
Implement tsd, which is a TLS/TSD abstraction that uses one or both
internally.  Modify bootstrapping such that no tsd's are utilized until
allocation is safe.

Remove malloc_[v]tprintf(), and use malloc_snprintf() instead.

Fix %p argument size handling in malloc_vsnprintf().

Fix a long-standing statistics-related bug in the "thread.arena"
mallctl that could cause crashes due to linked list corruption.
2012-03-23 15:14:55 -07:00
Mike Hommey
154829d256 Improve zone support for OSX
I tested a build from 10.7 run on 10.7 and 10.6, and a build from 10.6
run on 10.6.  The AC_COMPILE_IFELSE limbo is to avoid running a program
during configure, which presumably makes it work when cross compiling
for iOS.
2012-03-20 11:52:50 -07:00
Jason Evans
e24c7af35d Invert NO_TLS to JEMALLOC_TLS. 2012-03-19 10:21:17 -07:00
Jason Evans
4e2e3dd9cf Fix fork-related bugs.
Acquire/release arena bin locks as part of the prefork/postfork.  This
bug made deadlock in the child between fork and exec a possibility.

Split jemalloc_postfork() into jemalloc_postfork_{parent,child}() so
that the child can reinitialize mutexes rather than unlocking them.  In
practice, this bug tended not to cause problems.
2012-03-13 16:31:41 -07:00
Jason Evans
0a0bbf63e5 Implement aligned_alloc().
Implement aligned_alloc(), which was added in the C11 standard.  The
function is weakly specified to the point that a minimally compliant
implementation would be painful to use (size must be an integral
multiple of alignment!), which in practice makes posix_memalign() a
safer choice.
2012-03-13 12:55:21 -07:00
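
A short illustration of the constraint described above:

    #include <stdlib.h>

    void example(void)
    {
        void *ok = aligned_alloc(64, 128);   /* 128 is a multiple of 64: valid */
        void *iffy = aligned_alloc(64, 100); /* 100 is not: a minimally
                                                compliant implementation may
                                                reject this request */

        /* posix_memalign() imposes no such size restriction, which is why it
         * is called the safer choice above. */
        void *p;
        int err = posix_memalign(&p, 64, 100);
        (void)ok; (void)iffy; (void)err;
    }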
Jason Evans
4c2faa8a7c Fix a regression in JE_COMPILABLE().
Revert JE_COMPILABLE() so that it detects link errors.  Cross-compiling
should still work as long as a valid configure cache is provided.

Clean up some comments/whitespace.
2012-03-13 11:09:23 -07:00
Jason Evans
d81e4bdd5c Implement malloc_vsnprintf().
Implement malloc_vsnprintf() (a subset of vsnprintf(3)) as well as
several other printing functions based on it, so that formatted printing
can be relied upon without concern for inducing a dependency on floating
point runtime support.  Replace malloc_write() calls with
malloc_*printf() where doing so simplifies the code.

Add name mangling for library-private symbols in the data and BSS
sections.  Adjust CONF_HANDLE_*() macros in malloc_conf_init() to expose
all opt_* variable use to cpp so that proper mangling occurs.
2012-03-07 16:19:19 -08:00
Jason Evans
4507f34628 Remove the lg_tcache_gc_sweep option.
Remove the lg_tcache_gc_sweep option, because it is no longer
very useful.  Prior to the addition of dynamic adjustment of tcache fill
count, it was possible for fill/flush overhead to be a problem, but this
problem no longer occurs.
2012-03-05 14:34:37 -08:00
Jason Evans
7e77eaffff Add the --disable-experimental option. 2012-03-02 17:47:37 -08:00
Jason Evans
0a5489e37d Add --with-mangling.
Add the --with-mangling configure option, which can be used to specify
name mangling on a per public symbol basis that takes precedence over
--with-jemalloc-prefix.

Expose the memalign() and valloc() overrides even if
--with-jemalloc-prefix is specified.  This change does no real harm, and
simplifies the code.
2012-03-01 17:19:20 -08:00
Jason Evans
7e15dab94d Add nallocm().
Add nallocm(), which computes the real allocation size that would result
from the corresponding allocm() call.  nallocm() is a functional
superset of OS X's malloc_good_size(), in that it takes alignment
constraints into account.
2012-02-29 12:56:37 -08:00
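
A sketch of the intended usage, with the experimental-API names as they
existed at this point:

    /* Compute the real allocation size for a 16-byte-aligned, 100-byte
     * request without performing any allocation. */
    size_t usize;
    if (nallocm(&usize, 100, ALLOCM_ALIGN(16)) == ALLOCM_SUCCESS) {
        /* usize accounts for size class rounding and the alignment
         * constraint. */
    }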
Jason Evans
4bb0983013 Use glibc allocator hooks.
When jemalloc is used as a libc malloc replacement (i.e. not prefixed),
some particular setups may end up inconsistently calling malloc from
libc and free from jemalloc, or the other way around.

glibc provides hooks to make its functions use alternative
implementations.  Use them.

Submitted by Karl Tomlinson and Mike Hommey.
2012-02-29 10:37:27 -08:00
Jason Evans
5965631636 Do not enforce minimum alignment in memalign().
Do not enforce minimum alignment in memalign().  This is a non-standard
function, and there is disagreement over whether to enforce minimum
alignment.  Solaris documentation (whence memalign() originated) says
that minimum alignment is required:

  The value of alignment must be a power of two and must be greater than
  or equal  to  the size of a word.

However, Linux's manual page says in its NOTES section:

  memalign() may not check that the boundary parameter is correct.

This is descriptive rather than prescriptive, but applications with
bad assumptions about memalign() exist, so be as forgiving as possible.

Reported by Mike Hommey.
2012-02-28 21:37:38 -08:00
Jason Evans
d073a32109 Enable the stats configuration option by default. 2012-02-28 20:41:16 -08:00
Jason Evans
c90ad71237 Remove the sysv option. 2012-02-28 20:31:37 -08:00
Jason Evans
f081b88dfb Fix realloc(p, 0) to act like free(p).
Reported by Yoni Londer.
2012-02-28 20:24:05 -08:00
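
A short illustration of the fixed behavior:

    void *p = malloc(16);
    /* With this fix, a zero-size realloc releases p rather than leaving a
     * live allocation behind; the return value may be NULL. */
    void *q = realloc(p, 0);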
Jason Evans
b172610317 Simplify small size class infrastructure.
Program-generate small size class tables for all valid combinations of
LG_TINY_MIN, LG_QUANTUM, and PAGE_SHIFT.  Use the appropriate table to generate
all relevant data structures, and remove the distinction between
tiny/quantum/cacheline/subpage bins.

Remove --enable-dynamic-page-shift.  This option didn't prove useful in
practice, and it prevented optimizations.

Add Tilera architecture support.
2012-02-28 16:50:47 -08:00
Jason Evans
5389146191 Remove the opt.lg_prof_bt_max option.
Remove opt.lg_prof_bt_max, and hard code it to 7.  The original
intention of this option was to enable faster backtracing by limiting
backtrace depth.  However, this makes graphical pprof output very
difficult to interpret.  In practice, decreasing sampling frequency is a
better mechanism for limiting profiling overhead.
2012-02-13 18:41:36 -08:00
Jason Evans
0b526ff94d Remove the opt.lg_prof_tcmax option.
Remove the opt.lg_prof_tcmax option and hard-code a cache size of 1024.
This setting is something that users just shouldn't have to worry about.
If lock contention actually ends up being a problem, the simple solution
available to the user is to reduce sampling frequency.
2012-02-13 18:04:26 -08:00
Jason Evans
6ffbbeb5d6 Silence compiler warnings. 2012-02-13 12:31:30 -08:00
Jason Evans
4162627757 Remove the swap feature.
Remove the swap feature, which enabled per application swap files.  In
practice this feature has not proven itself useful to users.
2012-02-13 10:56:17 -08:00
Jason Evans
7372b15a31 Reduce cpp conditional logic complexity.
Convert configuration-related cpp conditional logic to use static
constant variables, e.g.:

  #ifdef JEMALLOC_DEBUG
    [...]
  #endif

becomes:

  if (config_debug) {
    [...]
  }

The advantage is clearer, more concise code.  The main disadvantage is
that data structures no longer have conditionally defined fields, so
they pay the cost of all fields regardless of whether they are used.  In
practice, this is only a minor concern; config_stats will go away in an
upcoming change, and config_prof is the only other major feature that
depends on more than a few special-purpose fields.
2012-02-10 20:22:09 -08:00
Jason Evans
30fbef8aea Fix rallocm() test to support >4KiB pages. 2011-11-05 21:06:55 -07:00
Jason Evans
8e6f8b490d Initialize arenas_tsd before setting it.
Reported by: Ethan Burns, Rich Prohaska, Tudor Bosman
2011-11-03 18:40:03 -07:00
Jason Evans
46405e670f Fix a prof-related bug in realloc().
Fix realloc() such that it only records the object passed in as freed if
no OOM error occurs.
2011-08-30 23:37:29 -07:00
Jason Evans
749c2a0ab6 Add missing prof_malloc() call in allocm().
Add a missing prof_malloc() call in allocm().  Before this fix, negative
object/byte counts could be observed in heap profiles for applications
that use allocm().
2011-08-12 18:37:54 -07:00
Jason Evans
a507004d29 Fix off-by-one backtracing issues.
Rewrite prof_alloc_prep() as a cpp macro, PROF_ALLOC_PREP(), in order to
remove any doubt as to whether an additional stack frame is created.
Prior to this change, it was assumed that inlining would reduce the
total number of frames in the backtrace, but in practice behavior wasn't
completely predictable.

Create imemalign() and call it from posix_memalign(), memalign(), and
valloc(), so that all entry points require the same number of stack
frames to be ignored during backtracing.
2011-08-12 13:48:27 -07:00
Jason Evans
b493ce22a4 Conditionalize an isalloc() call in rallocm().
Conditionalize an isalloc() call in rallocm() that may be unnecessary.
2011-08-12 11:28:47 -07:00
Jason Evans
183ba50c19 Fix two prof-related bugs in rallocm().
Properly handle boundary conditions for sampled region promotion in
rallocm().  Prior to this fix, some combinations of 'size' and 'extra'
values could cause erroneous behavior.  Additionally, size class
recording for promoted regions was incorrect.
2011-08-11 23:00:25 -07:00
Jason Evans
7427525c28 Move repo contents in jemalloc/ to top level. 2011-03-31 20:36:17 -07:00