Add rtree lookup path caching.

rtree-based extent lookups remain more expensive than chunk-based run
lookups, but with this optimization the fast path slowdown is ~3 CPU
cycles per metadata lookup (on Intel Core i7-4980HQ), versus ~11 cycles
prior.  The path caching speedup tends to degrade gracefully unless
allocated memory is spread far apart (as is the case when using a
mixture of sbrk() and mmap()).
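
Below is a hypothetical, self-contained sketch of the lookup path caching idea: remember the leaf node the previous walk ended in, and reuse it when the next key falls in the same leaf's range. All names, sizes, and the two-level layout here are illustrative only and do not reflect jemalloc's internal rtree implementation.

/*
 * Sketch of lookup path caching for a two-level radix tree.
 * Hypothetical types/names; not jemalloc's actual rtree API.
 */
#include <stdint.h>
#include <stdlib.h>

#define LEAF_BITS	12	/* low key bits resolved within a leaf */
#define TOP_BITS	16	/* sketch assumes keys < 2^(TOP_BITS+LEAF_BITS) */
#define LEAF_SLOTS	((uintptr_t)1 << LEAF_BITS)

typedef struct leaf_s {
	void	*slots[LEAF_SLOTS];
} leaf_t;

typedef struct rtree_s {
	leaf_t	*root[(size_t)1 << TOP_BITS];	/* top level, indexed by high key bits */
} rtree_t;

/* Per-thread lookup context: remembers the last leaf a walk ended in. */
typedef struct rtree_ctx_s {
	uintptr_t	 cached_base;	/* key with the leaf bits cleared */
	leaf_t		*cached_leaf;
} rtree_ctx_t;

static void *
rtree_lookup(rtree_t *rtree, rtree_ctx_t *ctx, uintptr_t key)
{
	uintptr_t base = key & ~(LEAF_SLOTS - 1);

	/* Fast path: the previous lookup ended in the same leaf. */
	if (ctx->cached_leaf != NULL && ctx->cached_base == base)
		return (ctx->cached_leaf->slots[key & (LEAF_SLOTS - 1)]);

	/* Slow path: walk from the root, then refresh the path cache. */
	leaf_t *leaf = rtree->root[key >> LEAF_BITS];
	if (leaf == NULL)
		return (NULL);
	ctx->cached_base = base;
	ctx->cached_leaf = leaf;
	return (leaf->slots[key & (LEAF_SLOTS - 1)]);
}

With this arrangement the fast path is one compare plus an indexed load; the cache only pays off when consecutive lookups land in the same leaf, which is why the speedup degrades when allocated memory is spread far apart.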
Author:  Jason Evans
Date:    2016-06-02 18:43:10 -07:00
Parent:  7be2ebc23f
Commit:  6f29a83924
7 changed files with 267 additions and 94 deletions

@@ -399,6 +399,7 @@ rtree_child_read
 rtree_child_read_hard
 rtree_child_tryread
 rtree_clear
+rtree_ctx_start_level
 rtree_delete
 rtree_elm_acquire
 rtree_elm_lookup
@@ -502,6 +503,9 @@ tsd_nominal
 tsd_prof_tdata_get
 tsd_prof_tdata_set
 tsd_prof_tdatap_get
+tsd_rtree_ctx_get
+tsd_rtree_ctx_set
+tsd_rtree_ctxp_get
 tsd_rtree_elm_witnesses_get
 tsd_rtree_elm_witnesses_set
 tsd_rtree_elm_witnessesp_get
@@ -529,6 +533,7 @@ tsd_witnesses_set
 tsd_witnessesp_get
 tsdn_fetch
 tsdn_null
+tsdn_rtree_ctx
 tsdn_tsd
 witness_assert_lockless
 witness_assert_not_owner
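
The new tsd_rtree_ctx_* and tsdn_rtree_ctx symbols indicate that the lookup context lives in thread-specific data, so each thread keeps its own path cache and the fast path needs no synchronization. A rough sketch of that arrangement, reusing the hypothetical rtree_ctx_t and rtree_lookup() from the sketch above (jemalloc's own TSD machinery is more involved than C11 thread_local, and extent_metadata_lookup() is an invented caller name):

#include <threads.h>	/* C11 thread_local macro */

/* One path cache per thread; no locking on the fast path. */
static thread_local rtree_ctx_t tls_rtree_ctx;

static void *
extent_metadata_lookup(rtree_t *rtree, uintptr_t key)
{
	return (rtree_lookup(rtree, &tls_rtree_ctx, key));
}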