Commit Graph

214 Commits

Author SHA1 Message Date
Jason Evans
bbe29d374d Fix potential TLS-related memory corruption.
Avoid writing to uninitialized TLS as a side effect of deallocation.
Initializing TLS during deallocation is unsafe because it is possible
that a thread never did any allocation, and that TLS has already been
deallocated by the threads library, resulting in write-after-free
corruption.  These fixes affect prof_tdata and quarantine; all other
uses of TLS are already safe, whether intentionally (as for tcache) or
unintentionally (as for arenas).
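
A hedged sketch of the safe pattern, assuming GCC-style __thread TLS; the
names and the counter are illustrative, not the actual jemalloc code:

  #include <stddef.h>

  typedef struct { size_t cnt_objs; } prof_tdata_t;

  /* NULL until the thread's first allocation initializes it. */
  static __thread prof_tdata_t *prof_tdata;

  static void
  prof_on_dealloc(void)
  {
      /* The deallocation path must not lazily create TLS: a thread that
       * never allocated may already have had its TLS torn down by the
       * threads library, so initializing it here would be write-after-free
       * corruption. */
      if (prof_tdata == NULL)
          return;
      prof_tdata->cnt_objs--;
  }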
2013-01-31 14:23:48 -08:00
Jason Evans
d1b6e18a99 Revert opt_abort and opt_junk refactoring.
Revert refactoring of opt_abort and opt_junk declarations.  clang
accepts the config_*-based declarations (and generates correct code),
but gcc complains with:

  error: initializer element is not constant
2013-01-22 16:54:26 -08:00
Jason Evans
ba175a2bfb Use config_* instead of JEMALLOC_*.
Convert a couple of stragglers from JEMALLOC_* to use config_*.
2013-01-22 12:14:45 -08:00
Jason Evans
ae03bf6a57 Update hash from MurmurHash2 to MurmurHash3.
Update hash from MurmurHash2 to MurmurHash3, primarily because the
latter generates 128 bits in a single call for no extra cost, which
simplifies integration with cuckoo hashing.
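
A hedged usage sketch against the reference MurmurHash3 API from SMHasher
(jemalloc wraps the hash internally; the seed value and helper name here are
illustrative):

  #include <stdint.h>

  /* Reference prototype; the implementation comes from MurmurHash3.c. */
  void MurmurHash3_x64_128(const void *key, int len, uint32_t seed, void *out);

  static void
  hash_key(const void *key, int len, uint64_t out[2])
  {
      /* One call yields all 128 bits, e.g. the two 64-bit values a cuckoo
       * hash needs for its two candidate buckets. */
      MurmurHash3_x64_128(key, len, 0x9e3779b9U, out);
  }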
2013-01-22 12:02:08 -08:00
Jason Evans
88393cb0eb Add and use JEMALLOC_ALWAYS_INLINE.
Add JEMALLOC_ALWAYS_INLINE and use it to guarantee that the entire fast
paths of the primary allocation/deallocation functions are inlined.
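
A minimal, compilable sketch of such a macro (assuming GCC/clang; the exact
jemalloc definition and the rounding helper below are illustrative):

  #include <stddef.h>

  #if defined(__GNUC__)
  #  define JEMALLOC_ALWAYS_INLINE static inline __attribute__((always_inline))
  #else
  #  define JEMALLOC_ALWAYS_INLINE static inline
  #endif

  /* The attribute forces inlining even when the compiler would otherwise
   * decline, keeping the allocation fast path free of call overhead. */
  JEMALLOC_ALWAYS_INLINE size_t
  round_up_8(size_t size)
  {
      return ((size + 7) & ~(size_t)7);
  }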
2013-01-22 08:45:43 -08:00
Jason Evans
38067483c5 Tighten valgrind integration.
Tighten valgrind integration such that immediately after memory is
validated or zeroed, valgrind is told to forget the memory's 'defined'
state.  The only place newly allocated memory should be left marked as
'defined' is in the public functions (e.g. calloc() and realloc()).
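
A hedged sketch of the pattern, using the standard memcheck client macros
(the internal_zero() helper is hypothetical):

  #include <string.h>
  #include <valgrind/memcheck.h>

  static void
  internal_zero(void *ptr, size_t size)
  {
      memset(ptr, 0, size);
      /* Internally zeroed memory should not look 'defined' to Valgrind;
       * only the public entry points (e.g. calloc()) leave it defined for
       * the application. */
      VALGRIND_MAKE_MEM_UNDEFINED(ptr, size);
  }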
2013-01-21 20:04:42 -08:00
Jason Evans
14a2c6a698 Avoid validating freshly mapped memory.
Move validation of supposedly zeroed pages from chunk_alloc() to
chunk_recycle().  There is little point to validating newly mapped
memory returned by chunk_alloc_mmap(), and memory that comes from sbrk()
is explicitly zeroed, so there is little risk to assuming that
chunk_alloc_dss() actually does the zeroing properly.

This relaxation of validation can make a big difference to application
startup time and overall system usage on platforms that use jemalloc as
the system allocator (namely FreeBSD).

Submitted by Ian Lepore <ian@FreeBSD.org>.
2013-01-21 19:56:34 -08:00
Garrett Cooper
6e6164ae15 Don't mangle errno with free(3) if utrace(2) fails
This preserves POLA (principle of least astonishment) on FreeBSD (at
least), since free(3) is generally assumed not to touch errno.

Signed-off-by: Garrett Cooper <yanegomi@gmail.com>
2012-12-24 10:30:57 -08:00
Jason Evans
1bf2743e08 Add clipping support to lg_chunk option processing.
Modify processing of the lg_chunk option so that it clips an
out-of-range input to the edge of the valid range.  This makes it
possible to request the minimum possible chunk size without intimate
knowledge of allocator internals.

Submitted by Ian Lepore (see FreeBSD PR bin/174641).
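
A hedged sketch of the clipping behavior (the bounds are hypothetical; the
real limits come from allocator internals):

  #include <stddef.h>

  #define LG_CHUNK_MIN 14   /* illustrative lower bound */
  #define LG_CHUNK_MAX 30   /* illustrative upper bound */

  static size_t
  clip_lg_chunk(size_t requested)
  {
      if (requested < LG_CHUNK_MIN)
          return LG_CHUNK_MIN;
      if (requested > LG_CHUNK_MAX)
          return LG_CHUNK_MAX;
      return requested;
  }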
2012-12-23 08:51:48 -08:00
Jason Evans
1271185b87 Fix chunk_recycle() Valgrind integration.
Fix chunk_recycle() to unconditionally inform Valgrind that returned
memory is undefined.  This fixes Valgrind warnings that would result
from a huge allocation being freed, then recycled for use as an arena
chunk.  The arena code would write metadata to the chunk header, and
Valgrind would consider these invalid writes.
2012-12-12 10:12:18 -08:00
Jason Evans
6eb84fbe31 Fix "arenas.extend" mallctl to return the number of arenas.
Reported by Mike Hommey.
2012-11-29 22:13:04 -08:00
Jason Evans
a3b3386ddd Avoid arena_prof_accum()-related locking when possible.
Refactor arena_prof_accum() and its callers to avoid arena locking when
prof_interval is 0 (as when profiling is disabled).

Reported by Ben Maurer.
2012-11-13 13:47:53 -08:00
Jason Evans
abf6739317 Tweak chunk purge order according to fragmentation.
Tweak chunk purge order to purge unfragmented chunks from high to low
memory.  This facilitates dirty run reuse.
2012-11-07 10:08:34 -08:00
Mike Hommey
847ff223de Don't register jemalloc's zone allocator if something else already replaced the system default zone 2012-11-06 16:06:59 -08:00
Jason Evans
e3d13060c8 Purge unused dirty pages in a fragmentation-reducing order.
Purge unused dirty pages in an order that first performs clean/dirty run
defragmentation, in order to mitigate available run fragmentation.

Remove the limitation that prevented purging unless at least one chunk
worth of dirty pages had accumulated in an arena.  This limitation was
intended to avoid excessive purging for small applications, but the
threshold was arbitrary and its effect was of questionable utility.

Relax opt_lg_dirty_mult from 5 to 3.  This compensates for increased
likelihood of allocating clean runs, given the same ratio of clean:dirty
runs, and reduces the potential for repeated purging in pathological
large malloc/free loops that push the active:dirty page ratio just over
the purge threshold.
2012-11-06 00:59:53 -08:00
Jason Evans
34457f5144 Fix deadlock in the arenas.purge mallctl.
Fix deadlock in the arenas.purge mallctl due to recursive mutex
acquisition.
2012-11-03 21:18:28 -07:00
Jason Evans
12efefb195 Fix dss/mmap allocation precedence code.
Fix dss/mmap allocation precedence code to use recyclable mmap memory
only after primary dss allocation fails.
2012-10-16 22:06:56 -07:00
Jason Evans
a5c80f893e Add ctl_mutex protection to arena_i_dss_ctl().
Add ctl_mutex protection to arena_i_dss_ctl(), since ctl_stats.narenas is
accessed.
2012-10-15 12:48:59 -07:00
Jason Evans
609ae595f0 Add arena-specific and selective dss allocation.
Add the "arenas.extend" mallctl, so that it is possible to create new
arenas that are outside the set that jemalloc automatically multiplexes
threads onto.

Add the ALLOCM_ARENA() flag for {,r,d}allocm(), so that it is possible
to explicitly allocate from a particular arena.

Add the "opt.dss" mallctl, which controls the default precedence of dss
allocation relative to mmap allocation.

Add the "arena.<i>.dss" mallctl, which makes it possible to set the
default dss precedence on a per arena or global basis.

Add the "arena.<i>.purge" mallctl, which obsoletes "arenas.purge".

Add the "stats.arenas.<i>.dss" mallctl.
2012-10-12 18:26:16 -07:00
Jan Beich
d0ffd8ed4f mark _pthread_mutex_init_calloc_cb as public explicitly
The Mozilla build hides everything by default using a visibility pragma and
unhides only symbols declared in explicitly listed headers. But this doesn't
work on FreeBSD
because _pthread_mutex_init_calloc_cb is neither documented nor exposed
via any header.
2012-10-10 09:10:37 -07:00
Jason Evans
2cc11ff837 Make malloc_usable_size() implementation consistent with prototype.
Use JEMALLOC_USABLE_SIZE_CONST for the malloc_usable_size()
implementation as well as the prototype, for consistency's sake.
2012-10-09 16:29:21 -07:00
Jason Evans
b5225928fe Fix fork(2)-related mutex acquisition order.
Fix mutex acquisition order inversion for the chunks rtree and the base
mutex.  Chunks rtree acquisition was introduced by the previous commit,
so this bug was short-lived.
2012-10-09 16:16:00 -07:00
Jason Evans
20f1fc95ad Fix fork(2)-related deadlocks.
Add a library constructor for jemalloc that initializes the allocator.
This fixes a race that could occur if threads were created by the main
thread prior to any memory allocation, followed by fork(2), and then
memory allocation in the child process.

Fix the prefork/postfork functions to acquire/release the ctl, prof, and
rtree mutexes.  This fixes various fork() child process deadlocks, but
one possible deadlock remains (intentionally) unaddressed: prof
backtracing can acquire runtime library mutexes, so deadlock is still
possible if heap profiling is enabled during fork().  This deadlock is
known to be a real issue in at least the case of libgcc-based
backtracing.

Reported by tfengjun.
2012-10-09 15:21:46 -07:00
Jason Evans
7de92767c2 Fix mlockall()/madvise() interaction.
mlockall(2) can cause purging via madvise(2) to fail.  Fix purging code
to check whether madvise() succeeded, and base zeroed page metadata on
the result.

Reported by Olivier Lecomte.
2012-10-08 18:04:49 -07:00
Jason Evans
f4c3f8545b Fix error return value in thread_tcache_enabled_ctl().
Reported by Corey Richardson.
2012-10-08 15:48:04 -07:00
Corey Richardson
1d553f72cb If sysconf() fails, the number of CPUs is reported as UINT_MAX, not 1 as it should be 2012-10-08 15:45:38 -07:00
Corey Richardson
35579afb55 Remove unused variable and branch (reported by clang-analyzer) 2012-10-08 15:45:38 -07:00
Jason Evans
5c710cee78 Remove const from __*_hook variable declarations.
Remove const from __*_hook variable declarations, so that glibc can
modify them during process forking.
2012-05-23 16:09:22 -07:00
Jason Evans
f1966e1dc7 Update a comment. 2012-05-16 00:35:08 -07:00
Jason Evans
174b70efb4 Disable tcache by default if running inside Valgrind.
Disable tcache by default if running inside Valgrind, in order to avoid
making unallocated objects appear reachable to Valgrind.
2012-05-15 23:31:53 -07:00
Jason Evans
781fe75e0a Auto-detect whether running inside Valgrind.
Auto-detect whether running inside Valgrind, thus removing the need to
manually specify MALLOC_CONF=valgrind:true.
2012-05-15 14:48:14 -07:00
Jason Evans
58ad1e4956 Return early in _malloc_{pre,post}fork() if uninitialized.
Avoid mutex operations in _malloc_{pre,post}fork() unless jemalloc has
been initialized.

Reported by David Xu.
2012-05-11 17:40:16 -07:00
Jason Evans
d8ceef6c55 Fix large calloc() zeroing bugs.
Refactor code such that arena_mapbits_{large,small}_set() always
preserves the unzeroed flag, and manually manipulate the unzeroed flag
in the one case where it actually gets reset (in arena_chunk_purge()).
This fixes unzeroed preservation bugs in arena_run_split() and
arena_ralloc_large_grow().  These bugs caused large calloc() to return
non-zeroed memory under some circumstances.
2012-05-10 21:49:43 -07:00
Jason Evans
30fe12b866 Add arena chunk map assertions. 2012-05-10 21:49:43 -07:00
Jason Evans
5b0c99649f Refactor arena_run_alloc().
Refactor duplicated arena_run_alloc() code into
arena_run_alloc_helper().
2012-05-10 21:49:43 -07:00
Jason Evans
2e671ffbad Add the --enable-mremap option.
Add the --enable-mremap option, and disable the use of mremap(2) by
default, for the same reason that freeing chunks via munmap(2) is
disabled by default on Linux: semi-permanent VM map fragmentation.
2012-05-09 16:12:00 -07:00
Jason Evans
374d26a43b Fix chunk_recycle() to stop leaking trailing chunks.
Fix chunk_recycle() to correctly compute trailsize and re-insert
trailing chunks.  This fixes a major virtual memory leak.

Simplify chunk_record() to avoid dropping/re-acquiring chunks_mtx.
2012-05-09 14:48:35 -07:00
Jason Evans
de6fbdb72c Fix chunk_alloc_mmap() bugs.
Simplify chunk_alloc_mmap() to no longer attempt map extension.  The
extra complexity isn't warranted, because although in the success case
it saves one system call as compared to immediately falling back to
chunk_alloc_mmap_slow(), it also makes the failure case even more
expensive.  This simplification removes two bugs:

- For Windows platforms, pages_unmap() wasn't being called for unaligned
  mappings prior to falling back to chunk_alloc_mmap_slow().  This
  caused permanent virtual memory leaks.
- For non-Windows platforms, alignment greater than chunksize caused
  pages_map() to be called with size 0 when attempting map extension.
  This always resulted in an mmap() error, and subsequent fallback to
  chunk_alloc_mmap_slow().
2012-05-09 13:05:04 -07:00
Jason Evans
34a8cf6c40 Fix a base allocator deadlock.
Fix a base allocator deadlock due to chunk_recycle() calling back into
the base allocator.
2012-05-02 20:41:42 -07:00
Mike Hommey
c584fc75bb Don't use sizeof() on a VARIABLE_ARRAY
In the alloca() case the macro expands to a pointer, so sizeof() yields
the pointer size rather than the intended array size.
2012-05-02 16:33:19 -07:00
Mike Hommey
3597e91482 Allow je_malloc_message to be overridden when linking statically
If an application wants to override je_malloc_message, it is better to define
the symbol locally than to change its value in main(), which might be too late
for various reasons.

Due to je_malloc_message being initialized in util.c, statically linking
jemalloc with an application defining je_malloc_message fails due to
"multiple definition of" the symbol.

Defining it without a value (like je_malloc_conf) makes it more easily
overridable.
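
A hedged sketch of an application-local override (the public symbol name
depends on how jemalloc was configured; je_malloc_message is used here to
match the commit text, and the handler is illustrative):

  #include <stdio.h>

  static void
  my_malloc_message(void *cbopaque, const char *s)
  {
      (void)cbopaque;
      fputs(s, stderr);   /* route allocator diagnostics to stderr */
  }

  /* With jemalloc leaving the symbol uninitialized, this definition in the
   * application takes effect when linking statically. */
  void (*je_malloc_message)(void *, const char *) = my_malloc_message;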
2012-05-02 16:25:41 -07:00
Jason Evans
80737c3323 Further optimize and harden arena_salloc().
Further optimize arena_salloc() to only look at the binind chunk map
bits in the common case.

Add more sanity checks to arena_salloc() that detect chunk map
inconsistencies for large allocations (whether due to allocator bugs or
application bugs).
2012-05-02 16:11:03 -07:00
Jason Evans
889ec59bd3 Make malloc_write() non-inline.
Make malloc_write() non-inline, in order to resolve its dependency on
je_malloc_write().
2012-05-02 02:08:03 -07:00
Jason Evans
203484e2ea Optimize malloc() and free() fast paths.
Embed the bin index for small page runs into the chunk page map, in
order to omit the bracketed loads in the following dependent load sequence:
  ptr-->mapelm-->[run-->bin-->]bin_info

Move various non-critical code out of the inlined function chain into
helper functions (tcache_event_hard(), arena_dalloc_small(), and
locking).
2012-05-02 00:30:36 -07:00
Mike Hommey
fd97b1dfc7 Add support for MSVC
Tested with MSVC 8, both 32- and 64-bit.
2012-05-01 11:32:11 -07:00
Mike Hommey
da99e31105 Replace JEMALLOC_ATTR with various different macros when it makes sense
These newly added macros will be used to implement the equivalent under
MSVC. Also, move the definitions to headers, where they make more sense,
and where some (e.g. malloc) are even more useful.
2012-04-30 17:57:31 -07:00
Mike Hommey
a14bce85e8 Use Get/SetLastError on Win32
Using errno on win32 doesn't quite work, because the value set in a shared
library can't be read from e.g. an executable calling the function setting
errno.

At the same time, since buferror() always reads errno/GetLastError()
itself, stop passing the error value to it.
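
A hedged sketch of the pattern; jemalloc's real buferror() differs in
detail, and the helper name here is illustrative:

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #ifdef _WIN32
  #  include <windows.h>
  #endif

  static int
  buferror_sketch(char *buf, size_t buflen)
  {
  #ifdef _WIN32
      /* Read the error inside the library via GetLastError(); an errno
       * value set in a DLL may not be visible to the calling module. */
      FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, NULL, GetLastError(), 0,
          buf, (DWORD)buflen, NULL);
  #else
      snprintf(buf, buflen, "%s", strerror(errno));
  #endif
      return 0;
  }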
2012-04-30 16:50:55 -07:00
Mike Hommey
af04b744bd Remove the VOID macro
Windows headers define a VOID macro.
2012-04-30 16:42:30 -07:00
Mike Hommey
8b49971d0c Avoid variable length arrays and remove declarations within code
MSVC doesn't support C99, and building as C++ to be able to use them is
dangerous, as C++ and C99 are incompatible.

Introduce a VARIABLE_ARRAY macro that either uses VLA when supported,
or alloca() otherwise. Note that using alloca() inside loops doesn't
quite work like VLAs, thus the use of VARIABLE_ARRAY there is discouraged.
It might be worth investigating ways to check at runtime in debug builds
whether VARIABLE_ARRAY is used in such a context, and bail out if that
happens.
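
A hedged sketch of such a macro (not necessarily jemalloc's exact
definition; the HAVE_VLA feature-test macro is hypothetical):

  #ifdef HAVE_VLA   /* hypothetical: compiler supports C99 VLAs */
  #  define VARIABLE_ARRAY(type, name, count) type name[(count)]
  #else
  #  include <alloca.h>
  #  define VARIABLE_ARRAY(type, name, count) \
        type *name = alloca(sizeof(type) * (count))
  #endif

  static int
  sum_n(int n)
  {
      VARIABLE_ARRAY(int, vals, n);   /* stack allocation either way */
      int i, total = 0;

      for (i = 0; i < n; i++) {
          vals[i] = i;
          total += vals[i];
      }
      return total;
  }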
2012-04-29 00:25:34 -07:00
Jason Evans
f278994029 Fix more prof_tdata resurrection corner cases. 2012-04-28 23:27:13 -07:00
Jason Evans
0050a0f7e6 Handle prof_tdata resurrection.
Handle prof_tdata resurrection during thread shutdown, similarly to how
tcache and quarantine handle resurrection.
2012-04-28 18:14:24 -07:00
Jason Evans
95ff6aadca Don't set prof_tdata during thread cleanup.
Don't set prof_tdata during thread cleanup, because doing so will cause
the cleanup function to be called again, the second time with a NULL
argument.
2012-04-28 14:15:28 -07:00
Jason Evans
3fb50b0407 Fix a PROF_ALLOC_PREP() error path.
Fix a PROF_ALLOC_PREP() error path to initialize the return value to
NULL.
2012-04-25 13:13:44 -07:00
Jason Evans
6b9ed67b4b Fix the "epoch" mallctl.
Fix the "epoch" mallctl to update cached stats even if the passed in
epoch is 0.
2012-04-25 13:12:46 -07:00
Jason Evans
f54166e7ef Add missing Valgrind annotations. 2012-04-23 22:41:36 -07:00
Jason Evans
7e060397a3 Fix quarantine_grow() bugs. 2012-04-23 22:07:30 -07:00
Jason Evans
9cd351d147 Add usize sanity checking to quarantine. 2012-04-23 21:43:18 -07:00
Jason Evans
577dd84660 Handle quarantine resurrection during thread exit.
Handle quarantine resurrection during thread exit in much the same way
as tcache resurrection is handled.
2012-04-23 21:14:26 -07:00
Jason Evans
87667a86a0 Fix two CHILD() macro calls in the ctl tree. 2012-04-23 19:54:15 -07:00
Jason Evans
65f343a632 Fix ctl regression.
Fix ctl to correctly compute the number of children at each level of the
ctl tree.
2012-04-23 19:31:45 -07:00
Jason Evans
8694e2e7b9 Silence compiler warnings. 2012-04-23 13:05:32 -07:00
Mike Hommey
461ad5c87a Avoid using a union for ctl_node_s
MSVC doesn't support C99, and as such doesn't support designated
initialization of structs and unions. As there is never a mix of
indexed and named nodes, it is pretty straightforward to use a
different type for each.
2012-04-23 11:43:44 -07:00
Jason Evans
52386b2dc6 Fix heap profiling bugs.
Fix a potential deadlock that could occur during interval- and
growth-triggered heap profile dumps.

Fix an off-by-one heap profile statistics bug that could be observed in
interval- and growth-triggered heap profiles.

Fix heap profile dump filename sequence numbers (regression during
conversion to malloc_snprintf()).
2012-04-22 16:00:11 -07:00
Mike Hommey
08e2221e99 Remove leftovers from the vsnprintf check in malloc_vsnprintf
Commit 4eeb52f removed vsnprintf validation, but left a now unused va_copy.
It so happens that MSVC doesn't support va_copy.
2012-04-21 21:29:12 -07:00
Mike Hommey
a19e87fbad Add support for Mingw 2012-04-21 21:27:46 -07:00
Jason Evans
a8f8d7540d Remove mmap_unaligned.
Remove mmap_unaligned, which was used to heuristically decide whether to
optimistically call mmap() in such a way that could reduce the total
number of system calls.  If I remember correctly, the intention of
mmap_unaligned was to avoid always executing the slow path in the
presence of ASLR.  However, that reasoning seems to have been based on a
flawed understanding of how ASLR actually works.  Although ASLR
apparently causes mmap() to ignore address requests, it does not cause
total placement randomness, so there is a reasonable expectation that
iterative mmap() calls will start returning chunk-aligned mappings once
the first chunk has been properly aligned.
2012-04-21 19:17:21 -07:00
Jason Evans
7ad54c1c30 Fix chunk allocation/deallocation bugs.
Fix chunk_alloc_dss() to zero memory when requested.

Fix chunk_dealloc() to avoid chunk_dealloc_mmap() for dss-allocated
memory.

Fix huge_palloc() to always junk fill when requested.

Improve chunk_recycle() to report that memory is zeroed as a side effect
of pages_purge().
2012-04-21 16:04:51 -07:00
Jason Evans
8f0e0eb1c0 Fix a memory corruption bug in chunk_alloc_dss().
Fix a memory corruption bug in chunk_alloc_dss() that was due to
claiming newly allocated memory is zeroed.

Reverse order of preference between mmap() and sbrk() to prefer mmap().

Clean up management of 'zero' parameter in chunk_alloc*().
2012-04-21 13:33:48 -07:00
Jason Evans
606f1fdc3c Put CONF_HANDLE_*() keys in quotes.
Put CONF_HANDLE_*() keys in quotes, so that they aren't mangled when
--with-private-namespace is used.
2012-04-20 21:39:14 -07:00
Jason Evans
f7088e6c99 Make arena_salloc() an inline function. 2012-04-19 18:28:03 -07:00
Mike Hommey
13067ec835 Remove extra argument for malloc_tsd_cleanup_register
Keeping an extra argument around just to store a function pointer to a
function we already have is not very useful.
2012-04-18 19:25:01 -07:00
Jason Evans
86e58583bb Make special FreeBSD function overrides visible.
Make special FreeBSD libc/libthr function overrides for
_malloc_prefork(), _malloc_postfork(), and _malloc_thread_cleanup()
visible.
2012-04-18 19:01:00 -07:00
Mike Hommey
1ad56385ad Fix malloc_vsnprintf handling of %o, %u and %x
These flags take unsigned values, but they were fed with signed values
taken with va_arg, and that led to sign extension in cases where the
corresponding value has the most significant bit set.
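
A minimal illustration of the bug class (hypothetical helper, not the
jemalloc code):

  #include <stdarg.h>
  #include <stdint.h>
  #include <stdio.h>

  static void
  print_unsigned(const char *fmt, ...)
  {
      va_list ap;
      uintmax_t val;

      va_start(ap, fmt);
      /* Consuming the argument as a signed int (the old behavior) would
       * sign-extend values with the most significant bit set; consuming it
       * with an unsigned type does not. */
      val = (uintmax_t)va_arg(ap, unsigned int);
      va_end(ap);
      printf(fmt, val);
  }

Called as print_unsigned("%ju\n", 0x80000000U), the signed variant would
print a huge sign-extended value on 64-bit targets instead of 2147483648.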
2012-04-18 18:59:27 -07:00
Mike Hommey
666c5bf7a8 Add a pages_purge function to wrap madvise(JEMALLOC_MADV_PURGE) calls
This will be used to implement the feature on mingw, which doesn't have
madvise.
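
A hedged sketch of such a wrapper on POSIX systems (jemalloc's real
pages_purge() also covers the mingw case that motivated this commit):

  #include <stddef.h>
  #include <sys/mman.h>

  static void
  pages_purge(void *addr, size_t length)
  {
  #ifdef MADV_FREE
      madvise(addr, length, MADV_FREE);       /* lazy reclamation */
  #else
      madvise(addr, length, MADV_DONTNEED);   /* immediate reclamation */
  #endif
  }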
2012-04-18 18:57:48 -07:00
Jason Evans
78f7352259 Clean up a few config-related conditionals/asserts.
Clean up a few config-related conditionals to avoid unnecessary
dependencies on prof symbols.  Use cassert() rather than assert()
everywhere that it's appropriate.
2012-04-18 13:38:40 -07:00
Jason Evans
0b25fe79aa Update prof defaults to match common usage.
Change the "opt.lg_prof_sample" default from 0 to 19 (1 B to 512 KiB).

Change the "opt.prof_accum" default from true to false.

Add the "opt.prof_final" mallctl, so that "opt.prof_prefix" need not be
abused to disable final profile dumping.
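
A hedged example of selecting these options explicitly via jemalloc's
malloc_conf hook (option names as above; a build with profiling enabled is
assumed):

  /* Compile-time defaults supplied by the application. */
  const char *malloc_conf = "prof:true,lg_prof_sample:19,prof_final:true";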
2012-04-17 16:39:33 -07:00
Jason Evans
59ae2766af Add the --disable-munmap option.
Add the --disable-munmap option, remove the configure test that
attempted to detect the VM allocation quirk known to exist on Linux
x86[_64], and make --disable-munmap implicit on Linux.
2012-04-16 18:08:58 -07:00
Jason Evans
7ca0fdfb85 Disable munmap() if it causes VM map holes.
Add a configure test to determine whether common mmap()/munmap()
patterns cause VM map holes, and only use munmap() to discard unused
chunks if the problem does not exist.

Unify the chunk caching for mmap and dss.

Fix options processing to limit lg_chunk to be large enough that
redzones will always fit.
2012-04-12 20:20:58 -07:00
Jason Evans
d6abcbb14b Always disable redzone by default.
Always disable redzone by default, even when --enable-debug is
specified.  The memory overhead for redzones can be substantial, which
makes this feature something that should only be opted into.
2012-04-12 17:09:54 -07:00
Mike Hommey
b8325f9cb0 Call base_boot before chunk_boot0
Chunk_boot0 calls rtree_new, which calls base_alloc, which locks the
base_mtx mutex. That mutex is initialized in base_boot.
2012-04-12 11:42:20 -07:00
Mike Hommey
83c324acd8 Use a stub replacement and disable dss when sbrk is not supported 2012-04-12 08:43:08 -07:00
Jason Evans
5ff709c264 Normalize aligned allocation algorithms.
Normalize arena_palloc(), chunk_alloc_mmap_slow(), and
chunk_recycle_dss() to use the same algorithm for trimming
over-allocation.

Add the ALIGNMENT_ADDR2BASE(), ALIGNMENT_ADDR2OFFSET(), and
ALIGNMENT_CEILING() macros, and use them where appropriate.

Remove the run_size_p parameter from sa2u().

Fix a potential deadlock in chunk_recycle_dss() that was introduced by
eae269036c (Add alignment support to
chunk_alloc()).
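
A hedged sketch of what the alignment macros mentioned above typically look
like (not necessarily jemalloc's exact definitions; alignment must be a
power of two):

  #include <stddef.h>
  #include <stdint.h>

  /* Round an address down to its alignment boundary. */
  #define ALIGNMENT_ADDR2BASE(a, alignment) \
      ((void *)((uintptr_t)(a) & ~((uintptr_t)(alignment) - 1)))

  /* Offset of an address within its alignment-sized block. */
  #define ALIGNMENT_ADDR2OFFSET(a, alignment) \
      ((size_t)((uintptr_t)(a) & ((uintptr_t)(alignment) - 1)))

  /* Round a size up to a multiple of the alignment. */
  #define ALIGNMENT_CEILING(s, alignment) \
      (((s) + ((alignment) - 1)) & ~((size_t)(alignment) - 1))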
2012-04-11 18:13:45 -07:00
Jason Evans
122449b073 Implement Valgrind support, redzones, and quarantine.
Implement Valgrind support, as well as the redzone and quarantine
features, which help Valgrind detect memory errors.  Redzones are only
implemented for small objects because the changes necessary to support
redzones around large and huge objects are complicated by in-place
reallocation, to the point that it isn't clear that the maintenance
burden is worth the incremental improvement to Valgrind support.

Merge arena_salloc() and arena_salloc_demote().

Refactor i[v]salloc() to expose the 'demote' option.
2012-04-11 11:46:18 -07:00
Jason Evans
a1ee7838e1 Rename labels.
Rename labels from FOO to label_foo in order to avoid system macro
definitions, in particular OUT and ERROR on mingw.

Reported by Mike Hommey.
2012-04-10 15:07:44 -07:00
Mike Hommey
eae269036c Add alignment support to chunk_alloc(). 2012-04-10 14:51:39 -07:00
Mike Hommey
c5851eaf6e Remove MAP_NORESERVE support
It was only used by the swap feature, and that is gone.
2012-04-10 12:05:27 -07:00
Jason Evans
3701367e4c Always initialize tcache data structures.
Always initialize tcache data structures if the tcache configuration
option is enabled, regardless of opt_tcache.  This fixes
"thread.tcache.enabled" mallctl manipulation in the case when opt_tcache
is false.
2012-04-06 12:41:55 -07:00
Jason Evans
fad100bc35 Remove arena_malloc_prechosen().
Remove arena_malloc_prechosen(), now that arena_malloc() can be invoked
in a way that is semantically equivalent.
2012-04-06 12:24:46 -07:00
Jason Evans
b147611b52 Add utrace(2)-based tracing (--enable-utrace). 2012-04-05 13:36:17 -07:00
Jason Evans
02b231205e Fix threaded initialization and enable it on Linux.
Reported by Mike Hommey.
2012-04-05 11:06:23 -07:00
Jason Evans
f3ca7c8386 Add missing "opt.lg_tcache_max" mallctl implementation. 2012-04-04 16:16:09 -07:00
Jason Evans
01b3fe55ff Add a0malloc(), a0calloc(), and a0free().
Add a0malloc(), a0calloc(), and a0free(), which are used by FreeBSD's
libc to allocate/deallocate TLS in static binaries.
2012-04-03 19:25:48 -07:00
Jason Evans
633aaff967 Postpone mutex initialization on FreeBSD.
Postpone mutex initialization on FreeBSD until after base allocation is
safe.
2012-04-03 19:25:30 -07:00
Jason Evans
9d4d76874d Finish renaming "arenas.pagesize" to "arenas.page". 2012-04-02 07:15:42 -07:00
Jason Evans
ae4c7b4b40 Clean up *PAGE* macros.
s/PAGE_SHIFT/LG_PAGE/g and s/PAGE_SIZE/PAGE/g.

Remove remnants of the dynamic-page-shift code.

Rename the "arenas.pagesize" mallctl to "arenas.page".

Remove the "arenas.chunksize" mallctl, which is redundant with
"opt.lg_chunk".
2012-04-02 07:04:34 -07:00
Jason Evans
f004737267 Revert "Avoid NULL check in free() and malloc_usable_size()."
This reverts commit 96d4120ac0.

ivsalloc() depends on chunks_rtree being initialized.  This can be
worked around via a NULL pointer check.  However,
thread_allocated_tsd_get() also depends on initialization having
occurred, and there is no way to guard its call in free() that is
cheaper than checking whether ptr is NULL.
2012-04-02 15:18:24 -07:00
Jason Evans
96d4120ac0 Avoid NULL check in free() and malloc_usable_size().
Generalize isalloc() to handle NULL pointers in such a way that the NULL
checking overhead is only paid when introspecting huge allocations (or
NULL).  This allows free() and malloc_usable_size() to no longer check
for NULL.

Submitted by Igor Bukanov and Mike Hommey.
2012-04-02 14:50:03 -07:00
Mike Hommey
80b25932ca Move the last bit of zone initialization into zone.c, and lazy-initialize it 2012-04-02 14:15:20 -07:00
Jason Evans
4eeb52f080 Remove vsnprintf() and strtoumax() validation.
Remove code that validates malloc_vsnprintf() and malloc_strtoumax()
against their namesakes.  The validation code has adequately served its
usefulness at this point, and it isn't worth dealing with the different
formatting for %p with glibc versus other implementations for NULL
pointers ("(nil)" vs. "0x0").

Reported by Mike Hommey.
2012-04-02 02:30:24 -07:00
Mike Hommey
3c2ba0dcbc Avoid crashes when system libraries use the purgeable zone allocator 2012-03-30 11:03:20 -07:00