We have observed new workload patterns (namely ML training) that cycle through oversized allocations frequently, because (1) the dataset may be sparse, which makes each pass faster, and (2) the work is GPU-accelerated. As a result, the eager purging from the oversize arena becomes a bottleneck. As a simple remedy, allow normal purging of the oversized extents when background threads are enabled.
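The new behavior only takes effect when background threads are enabled. Below is a minimal sketch of how a consumer might turn them on at runtime via the existing "background_thread" mallctl; it assumes jemalloc is built with background-thread support and exposes the unprefixed mallctl symbol (depending on build configuration it may be je_mallctl). Equivalently, background_thread:true can be set in MALLOC_CONF.

    /* Sketch: enable jemalloc background threads so that purging (including,
     * with this change, purging of oversized extents) can be deferred to the
     * background threads instead of done eagerly on the application thread.
     * Error handling abbreviated. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <jemalloc/jemalloc.h>

    int main(void) {
        bool enable = true;
        if (mallctl("background_thread", NULL, NULL, &enable,
            sizeof(enable)) != 0) {
            fprintf(stderr, "failed to enable background threads\n");
            return 1;
        }
        /* ... workload that cycles through oversized allocations ... */
        return 0;
    }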