Commit Graph

939 Commits

Author SHA1 Message Date
Abhishek Kulkarni
b617df81bb Add missing symbols to private_symbols.txt.
This resolves #185.
2015-01-21 12:44:35 -08:00
Jason Evans
44b57b8e8b Fix OOM handling in memalign() and valloc().
Fix memalign() and valloc() to heed imemalign()'s return value.

Reported by Kurt Wampler.
2015-01-16 18:04:17 -08:00
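
A minimal caller-side sketch of what this fix guarantees (the allocation size here is arbitrary, for illustration): memalign() now reports OOM by returning NULL rather than an indeterminate pointer, so an ordinary null check suffices.

    #include <malloc.h>    /* memalign() on glibc */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* After the fix, imemalign() failure propagates out as NULL. */
        void *p = memalign(64, (size_t)64 * 1024 * 1024);
        if (p == NULL) {
            fputs("memalign failed\n", stderr);
            return 1;
        }
        free(p);
        return 0;
    }
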
Jason Evans
24057f3da8 Fix an infinite recursion bug related to a0/tsd bootstrapping.
This resolves #184.
2015-01-14 16:27:31 -08:00
Guilherme Goncalves
51f86346c0 Add an isblank definition for MSVC < 2013 2015-01-09 14:33:46 -08:00
Mike Hommey
b7b44dfad0 Make mixed declarations an error
It often happens that code changes introduce mixed declarations, which then
break the build with Visual Studio. Since the code style avoids mixed
declarations anyway, we might as well enforce that with -Werror.
2014-12-18 15:12:53 +09:00
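
For context, a minimal illustration (not from the commit) of the pattern this makes a hard error, with something like GCC's -Werror=declaration-after-statement; MSVC before 2013 rejects it unconditionally:

    #include <stdio.h>

    /* Mixed declaration: rejected by older MSVC, and now an error. */
    static int mixed(void) {
        puts("work");
        int x = 42;        /* declaration after a statement */
        return x;
    }

    /* C89 style: all declarations at the top of the block. */
    static int unmixed(void) {
        int x;
        puts("work");
        x = 42;
        return x;
    }

    int main(void) {
        return mixed() == unmixed() ? 0 : 1;
    }
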
Guilherme Goncalves
9c6a8d3b0c Move variable declaration to the top of its block for MSVC compatibility. 2014-12-17 14:46:35 -02:00
Bert Maher
b4acf7300a [pprof] Produce global profile unless thread-local profile requested
Currently pprof will print output for all threads if a single thread is not
specified, but this doesn't play well with many output formats (e.g., any of
the dot-based formats).  Instead, default to printing just the overall profile
when no specific thread is requested.

This resolves #157.
2014-12-14 17:12:20 -08:00
Guilherme Goncalves
2c5cb613df Introduce two new modes of junk filling: "alloc" and "free".
In addition to true/false, opt.junk can now be either "alloc" or "free",
giving applications the possibility of junking memory only on allocation
or deallocation.

This resolves #172.
2014-12-14 17:07:26 -08:00
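
A sketch of opting into one of the new modes from application code, assuming an unprefixed build (prefixed builds use je_malloc_conf, jemalloc also honors the MALLOC_CONF environment variable, and junk filling requires a build with fill support):

    #include <stdlib.h>

    /* Application-provided configuration string that jemalloc reads at
     * startup; equivalent to setting MALLOC_CONF=junk:free in the
     * environment. */
    const char *malloc_conf = "junk:free";

    int main(void) {
        void *p = malloc(64);
        /* With junk:free, the region is filled with a junk pattern on
         * deallocation only, not on allocation. */
        free(p);
        return 0;
    }
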
Daniel Micay
b74041fb6e Ignore MALLOC_CONF in set{uid,gid,cap} binaries.
This eliminates the malloc tunables as tools for an attacker.

Closes #173
2014-12-14 15:36:15 -08:00
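
The standard technique here, sketched (the helper name is illustrative, not jemalloc's actual code): on glibc, secure_getenv(3) returns NULL when the process is set-user-ID, set-group-ID, or has elevated capabilities, so reading the tunable through it neutralizes the environment for such binaries.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative helper, not jemalloc's implementation. */
    static const char *get_malloc_conf(void) {
    #ifdef __GLIBC__
        /* Returns NULL in setuid/setgid/setcap processes. */
        return secure_getenv("MALLOC_CONF");
    #else
        /* Elsewhere, a check such as BSD's issetugid() would be needed
         * before trusting the environment. */
        return getenv("MALLOC_CONF");
    #endif
    }

    int main(void) {
        const char *conf = get_malloc_conf();
        printf("MALLOC_CONF = %s\n",
            conf != NULL ? conf : "(ignored or unset)");
        return 0;
    }
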
Jason Evans
e12eaf93dc Style and spelling fixes. 2014-12-08 16:34:04 -08:00
Chih-hung Hsieh
59cd80e6c6 Add a C11 atomics-based implementation of atomic.h API. 2014-12-06 21:17:49 -08:00
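
The shape such a backend takes, as a minimal sketch using <stdatomic.h> (the names are illustrative; jemalloc's real wrappers live in atomic.h and this backend applies only when the compiler provides working C11 atomics):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Add-then-return-new-value wrapper in the style of jemalloc's
     * atomic_add_uint64(). fetch_add returns the old value, so add x
     * back to yield the new one. */
    static inline uint64_t
    atomic_add_u64(_Atomic uint64_t *p, uint64_t x) {
        return atomic_fetch_add_explicit(p, x, memory_order_seq_cst) + x;
    }

    static inline uint64_t
    atomic_sub_u64(_Atomic uint64_t *p, uint64_t x) {
        return atomic_fetch_sub_explicit(p, x, memory_order_seq_cst) - x;
    }
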
Jason Evans
a18c2b1f15 Style fixes. 2014-12-05 17:49:47 -08:00
Jason Evans
1036ddbf11 Fix OOM cleanup in huge_palloc().
Fix OOM cleanup in huge_palloc() to call idalloct() rather than
base_node_dalloc().  This bug is a result of incomplete refactoring, and
has no impact other than leaking memory during OOM.
2014-12-04 16:42:42 -08:00
Yuriy Kaminskiy
f79e01f75b Fix test_stats_arenas_bins for 32-bit builds. 2014-12-02 16:27:15 -08:00
Daniel Micay
879e76a9e5 teach the dss chunk allocator to handle new_addr
This provides in-place expansion of huge allocations when the end of the
allocation is at the end of the sbrk heap. There's already the ability
to extend in-place via recycled chunks but this handles the initial
growth of the heap via repeated vector / string reallocations.

A possible future extension could allow realloc to go from the following:

    | huge allocation | recycled chunks |
                                        ^ dss_end

To a larger allocation built from recycled *and* new chunks:

    |                      huge allocation                      |
                                                                ^ dss_end

Doing that would involve teaching the chunk recycling code to request
new chunks to satisfy the request. The chunk_dss code wouldn't require
any further changes.

    #include <stdlib.h>

    int main(void) {
        size_t chunk = 4 * 1024 * 1024;
        void *ptr = NULL;
        /* Repeatedly double one allocation; with dss as the primary
         * chunk preference, each step can now expand in place at the
         * end of the sbrk heap. */
        for (size_t size = chunk; size < chunk * 128; size *= 2) {
            ptr = realloc(ptr, size);
            if (!ptr) return 1;
        }
        return 0;
    }

Before:

dss:secondary: 0.083s
dss:primary: 0.083s

After:

dss:secondary: 0.083s
dss:primary: 0.003s

The dss heap grows in the upwards direction, so the oldest chunks are at
the low addresses and they are used first. Linux prefers to grow the
mmap heap downwards, so the trick will not work in the *current* mmap
chunk allocator as a huge allocation will only be at the top of the heap
in a contrived case.
2014-11-28 16:11:19 -08:00
Guilherme Goncalves
a2136025c4 Remove extra definition of je_tsd_boot on win32. 2014-11-18 19:08:18 -02:00
Jason Evans
d49cb68b9e Fix more pointer arithmetic undefined behavior.
Reported by Guilherme Gonçalves.

This resolves #166.
2014-11-17 10:31:59 -08:00
Jason Evans
2012d5a560 Fix pointer arithmetic undefined behavior.
Reported by Denis Denisov.
2014-11-17 09:54:49 -08:00
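
Both of these fixes target the same class of issue; an illustrative sketch (not the exact code touched) of the pattern and the conventional repair: C defines pointer arithmetic only within, or one past the end of, a single object, so address computations that may step outside an object belong in uintptr_t.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative: align an address down to a power-of-two boundary.
     * Doing the masking on the pointer itself would be undefined
     * behavior; round-tripping through uintptr_t is the usual fix. */
    static inline void *
    align_down(void *ptr, size_t alignment) {
        uintptr_t addr = (uintptr_t)ptr;
        return (void *)(addr & ~((uintptr_t)alignment - 1));
    }
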
Jason Evans
9cf2be0a81 Make quarantine_init() static. 2014-11-07 14:50:38 -08:00
Jason Evans
c002a5c800 Fix two quarantine regressions.
Fix quarantine to actually update tsd when expanding, and to avoid
double initialization (leaking the first quarantine) due to recursive
initialization.

This resolves #161.
2014-11-04 18:03:11 -08:00
Jason Evans
2b2f6dc1e4 Disable arena_dirty_count() validation. 2014-11-01 02:29:10 -07:00
Jason Evans
82cb603ed7 Don't dereference NULL tdata in prof_{enter,leave}().
It is possible for the thread's tdata to be NULL late during thread
destruction, so take care not to dereference a NULL pointer in such
cases.
2014-11-01 00:20:28 -07:00
Jason Evans
d7a9bab92d Fix arena_sdalloc() to use promoted size (second attempt).
Unlike the preceding attempted fix, this version avoids the potential
for converting an invalid bin index to a size class.
2014-10-31 22:26:24 -07:00
Jason Evans
6da2e9d4f6 Fix arena_sdalloc() to use promoted size. 2014-10-31 17:08:13 -07:00
Daniel Micay
dc65213111 rm unused arena wrangling from xallocx
It has no use for the arena_t since, unlike rallocx, it never makes a
new memory allocation. It's just an unused parameter in ixalloc_helper.
2014-10-30 23:19:34 -07:00
Jason Evans
cfc5706f69 Miscellaneous cleanups. 2014-10-30 23:18:45 -07:00
Daniel Micay
d33f834591 avoid redundant chunk header reads
* use sized deallocation in iralloct_realign
* iralloc and ixalloc always need the old size, so pass it in from the
  caller where it's often already calculated
2014-10-30 17:06:38 -07:00
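
The caller-facing form of sized deallocation is jemalloc's sdallocx() entry point; a minimal sketch, assuming an unprefixed build:

    #include <jemalloc/jemalloc.h>

    int main(void) {
        size_t size = 128;
        void *p = mallocx(size, 0);
        if (p == NULL)
            return 1;
        /* The caller already knows the size, so the allocator can skip
         * the chunk header read that free() would need to discover it. */
        sdallocx(p, size, 0);
        return 0;
    }
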
Daniel Micay
809b0ac391 mark huge allocations as unlikely
This cleans up the fast path a bit more by moving more code off of it.
2014-10-30 17:06:38 -07:00
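
A simplified rendition of the hint mechanism (jemalloc's actual likely()/unlikely() macros live in its internal headers; the threshold and both paths below are stand-ins):

    #include <stdlib.h>

    #ifdef __GNUC__
    #  define unlikely(x) __builtin_expect(!!(x), 0)
    #else
    #  define unlikely(x) (x)
    #endif

    #define HUGE_THRESHOLD ((size_t)2 << 20)    /* hypothetical */

    static void *
    alloc(size_t size) {
        if (unlikely(size >= HUGE_THRESHOLD)) {
            /* Rarely taken; the hint lets the compiler move this
             * branch off the fast path. */
            return malloc(size);    /* stand-in for the huge path */
        }
        return malloc(size);        /* stand-in for the small path */
    }

    int main(void) {
        void *p = alloc(64);
        free(p);
        return 0;
    }
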
Jason Evans
c93ed81cd0 Fix prof_{enter,leave}() calls to pass tdata_self. 2014-10-30 16:50:33 -07:00
Jason Evans
af1f592763 Use JEMALLOC_INLINE_C everywhere it's appropriate. 2014-10-30 16:38:08 -07:00
Jason Evans
1f725eb7b5 Merge pull request #154 from guilherme-pg/implicit-int
Fix variable declaration with no type in the configure script.
2014-10-20 16:55:33 -07:00
Guilherme Goncalves
79725aa6f6 Fix variable declaration with no type in the configure script. 2014-10-20 14:08:37 -02:00
Jason Evans
8f47e3d82b Merge pull request #151 from thestinger/ralloc
use sized deallocation internally for ralloc
2014-10-16 13:12:05 -07:00
Daniel Micay
a9ea10d27c use sized deallocation internally for ralloc
The size of the source allocation is known at this point, so reading the
chunk header can be avoided for the small size class fast path. This is
not very useful right now, but it provides a significant performance
boost with an alternate ralloc entry point taking the old size.
2014-10-16 15:39:59 -04:00
Jason Evans
c83bccd273 Initialize chunks_mtx for all configurations.
This resolves #150.
2014-10-16 12:33:18 -07:00
Jason Evans
9673983443 Purge/zero sub-chunk huge allocations as necessary.
Purge trailing pages during shrinking huge reallocation when resulting
size is not a multiple of the chunk size.  Similarly, zero pages if
necessary during growing huge reallocation when the resulting size is
not a multiple of the chunk size.
2014-10-15 18:02:02 -07:00
Jason Evans
bf8d6a1092 Add small run utilization to stats output.
Add the 'util' column, which reports the proportion of available regions
that are currently in use for each small size class.  Small run
utilization is the complement of external fragmentation.  For example,
utilization of 0.75 indicates that 25% of small run memory is consumed
by external fragmentation, or, in other (more obtuse) words, a 33%
external fragmentation overhead (0.25/0.75 ≈ 33%).

This resolves #27.
2014-10-15 16:18:42 -07:00
Jason Evans
acbcbad1e1 Thwart compiler optimizations. 2014-10-15 14:49:14 -07:00
Jason Evans
d1f3ab4008 Fix line wrapping. 2014-10-14 22:31:49 -07:00
Jason Evans
9b41ac909f Fix huge allocation statistics. 2014-10-14 22:20:00 -07:00
Jason Evans
0cdabd2d48 Update size class documentation. 2014-10-14 22:19:21 -07:00
Jason Evans
3c4d92e82a Add per size class huge allocation statistics.
Add per size class huge allocation statistics, and normalize various
stats:
- Change the arenas.nlruns type from size_t to unsigned.
- Add the arenas.nhchunks and arenas.hchunks.<i>.size mallctls.
- Replace the stats.arenas.<i>.bins.<j>.allocated mallctl with
  stats.arenas.<i>.bins.<j>.curregs .
- Add the stats.arenas.<i>.hchunks.<j>.nmalloc,
  stats.arenas.<i>.hchunks.<j>.ndalloc,
  stats.arenas.<i>.hchunks.<j>.nrequests, and
  stats.arenas.<i>.hchunks.<j>.curhchunks mallctls.
2014-10-12 23:02:10 -07:00
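
Reading one of the renamed counters through mallctl(), as a sketch (assumes an unprefixed build with statistics enabled; the epoch write refreshes jemalloc's cached stats before reading):

    #include <stdio.h>
    #include <stdint.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        uint64_t epoch = 1;
        size_t len = sizeof(epoch);
        mallctl("epoch", &epoch, &len, &epoch, len);

        /* Current regions in arena 0's smallest bin, per the new
         * stats.arenas.<i>.bins.<j>.curregs mallctl. */
        size_t curregs;
        len = sizeof(curregs);
        if (mallctl("stats.arenas.0.bins.0.curregs", &curregs, &len,
            NULL, 0) == 0) {
            printf("curregs: %zu\n", curregs);
        }
        return 0;
    }
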
Jason Evans
44c97b712e Fix a prof_tctx_t/prof_tdata_t cleanup race.
Fix a prof_tctx_t/prof_tdata_t cleanup race by storing a copy of thr_uid
in prof_tctx_t, so that the associated tdata need not be present during
tctx teardown.
2014-10-12 13:03:20 -07:00
Jason Evans
381c23dd9d Remove arena_dalloc_bin_run() clean page preservation.
Remove code in arena_dalloc_bin_run() that preserved the "clean" state
of trailing clean pages by splitting them into a separate run during
deallocation.  This was a useful mechanism for reducing dirty page
churn when bin runs comprised many pages, but bin runs are now quite
small.

Remove the nextind field from arena_run_t now that it is no longer
needed, and change arena_run_t's bin field (arena_bin_t *) to binind
(index_t).  These two changes remove 8 bytes of chunk header overhead
per page (8/4096 with 4 KiB pages), which saves 1/512 of all arena
chunk memory.
2014-10-10 23:01:03 -07:00
Jason Evans
81e547566e Add --with-lg-tiny-min, generalize --with-lg-quantum. 2014-10-10 22:35:07 -07:00
Jason Evans
2eb941a3d3 Add AC_CACHE_CHECK() for pause instruction.
This supports cross compilation.
2014-10-10 20:40:43 -07:00
Jason Evans
9b75677e53 Don't fetch tsd in a0{d,}alloc().
Don't fetch tsd in a0{d,}alloc(), because doing so can cause infinite
recursion on systems that require an allocated tsd wrapper.
2014-10-10 18:19:20 -07:00
Jason Evans
fc0b3b7383 Add configure options.
Add:
  --with-lg-page
  --with-lg-page-sizes
  --with-lg-size-class-group
  --with-lg-quantum

Get rid of STATIC_PAGE_SHIFT, in favor of directly setting LG_PAGE.

Fix various edge conditions exposed by the configure options.
2014-10-09 22:44:37 -07:00
Jason Evans
b123ddc760 Don't configure HAVE_SSE2.
Don't configure HAVE_SSE2 (on behalf of SFMT), because its dependencies
are notoriously unportable in practice.

This resolves #119.
2014-10-08 18:18:03 -07:00
Jason Evans
57efa7bb0e Avoid atexit(3) when possible, disable prof_final by default.
atexit(3) can deadlock internally during its own initialization if
jemalloc calls atexit() during jemalloc initialization.  Mitigate the
impact by restructuring prof initialization to avoid calling atexit()
unless the registered function will actually dump a final heap profile.

Additionally, disable prof_final by default so that this land mine is
opt-in rather than opt-out.

This resolves #144.
2014-10-08 18:08:00 -07:00