6f29a83924
rtree-based extent lookups remain more expensive than chunk-based run lookups, but with this optimization the fast path slowdown is ~3 CPU cycles per metadata lookup (on Intel Core i7-4980HQ), versus ~11 cycles prior. The path caching speedup tends to degrade gracefully unless allocated memory is spread far apart (as is the case when using a mixture of sbrk() and mmap()).
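
To illustrate the idea behind the path caching: the rtree maps page-aligned addresses to extent metadata via a multi-level radix walk, and a small cache remembers the last leaf reached so that a subsequent lookup for a nearby address can skip the walk. The following is a simplified, self-contained sketch of that technique, not jemalloc's actual data structures or API; the names (rtree_t, leaf_t, rtree_lookup, ...), the two-level shape, and the one-entry cache are assumptions made for illustration.

    /*
     * Sketch: radix-tree lookup with a one-entry leaf-path cache.
     * Illustrative only; names and layout are hypothetical.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define LG_PAGE      12                 /* 4 KiB pages */
    #define LG_LEAF_BITS 8                  /* 256 page slots per leaf */
    #define LG_ROOT_BITS 8                  /* 256 leaf pointers at the root */

    typedef struct leaf_s {
        void *slots[1 << LG_LEAF_BITS];     /* per-page metadata pointers */
    } leaf_t;

    typedef struct rtree_s {
        leaf_t *root[1 << LG_ROOT_BITS];
        /* One-entry path cache: the last leaf reached, and its key. */
        uintptr_t cache_key;
        leaf_t *cache_leaf;
    } rtree_t;

    /* Address bits that select a leaf (each leaf covers 1 MiB here). */
    static uintptr_t
    leaf_key(uintptr_t addr)
    {
        return (addr >> (LG_PAGE + LG_LEAF_BITS));
    }

    /* Slow path: walk from the root and refresh the path cache. */
    static leaf_t *
    rtree_leaf_lookup_slow(rtree_t *rtree, uintptr_t addr)
    {
        size_t ri = leaf_key(addr) & ((1U << LG_ROOT_BITS) - 1);
        leaf_t *leaf = rtree->root[ri];

        if (leaf != NULL) {
            rtree->cache_key = leaf_key(addr);
            rtree->cache_leaf = leaf;
        }
        return (leaf);
    }

    /* Fast path: if addr falls in the cached leaf's range, skip the walk. */
    static void *
    rtree_lookup(rtree_t *rtree, uintptr_t addr)
    {
        leaf_t *leaf;

        if (rtree->cache_leaf != NULL && rtree->cache_key == leaf_key(addr))
            leaf = rtree->cache_leaf;       /* cache hit: a few cycles */
        else
            leaf = rtree_leaf_lookup_slow(rtree, addr);
        if (leaf == NULL)
            return (NULL);
        return (leaf->slots[(addr >> LG_PAGE) & ((1U << LG_LEAF_BITS) - 1)]);
    }

    static int
    rtree_insert(rtree_t *rtree, uintptr_t addr, void *meta)
    {
        size_t ri = leaf_key(addr) & ((1U << LG_ROOT_BITS) - 1);

        if (rtree->root[ri] == NULL) {
            rtree->root[ri] = calloc(1, sizeof(leaf_t));
            if (rtree->root[ri] == NULL)
                return (1);
        }
        rtree->root[ri]->slots[(addr >> LG_PAGE) &
            ((1U << LG_LEAF_BITS) - 1)] = meta;
        return (0);
    }

    int
    main(void)
    {
        static rtree_t rtree;               /* zero-initialized */
        int meta = 42;                      /* stand-in for extent metadata */
        uintptr_t base = (uintptr_t)0x40000000;

        rtree_insert(&rtree, base, &meta);
        rtree_insert(&rtree, base + 4096, &meta);

        /* Second lookup hits the path cache (same leaf as the first). */
        printf("%p %p\n", rtree_lookup(&rtree, base),
            rtree_lookup(&rtree, base + 4096));
        return (0);
    }

In this sketch, consecutive lookups that land in the same leaf reuse the cached pointer, while lookups that jump between widely separated ranges (e.g. a mix of sbrk()- and mmap()-backed memory) rarely share a leaf and fall back to the full walk, which is the degradation mode described above.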