This is the fall-out from analyzing the code paths around Compile(ICompiledDelegateCache) and friends, which are hit quite commonly in high-density Reaqtor hosting scenarios. For a micro-benchmark compiling the following sample expression tree
```csharp
Expression<Func<string, int>> expr = s => s.ToUpper().Substring(1, 2).ToLower().Length - 1;
```
with caching enabled, we're seeing roughly 20% savings in memory allocations. As a result, the comparison to the regular Compile() in the BCL now looks better both in execution time (which has always been up to 10x faster for moderately complex expressions) and in memory allocations. Allocations are the harder metric to beat because we have to do a bunch of cache management, while Compile() is pretty much "just" populating a buffer with IL instructions (if we only account for the managed side of the house).
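To make the caching idea concrete, here's a minimal sketch of how a compiled delegate cache avoids repeated IL generation for equivalent lambdas. The SimpleDelegateCache type and its GetOrCompile method are illustrative stand-ins, not the library's ICompiledDelegateCache API, and keying on the lambda's ToString() is a simplification; a production cache would need structural equality over the expression trees.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;

// Illustrative stand-in for a compiled delegate cache; the library's
// ICompiledDelegateCache may have a different shape.
public sealed class SimpleDelegateCache
{
    // Keyed on the lambda's textual form purely for illustration;
    // a production cache would use structural equality on the trees.
    private readonly ConcurrentDictionary<string, Delegate> _cache = new();

    public TDelegate GetOrCompile<TDelegate>(Expression<TDelegate> expression)
        where TDelegate : Delegate
    {
        return (TDelegate)_cache.GetOrAdd(expression.ToString(), _ => expression.Compile());
    }
}

public static class Demo
{
    public static void Main()
    {
        var cache = new SimpleDelegateCache();

        // The sample expression tree from the micro-benchmark above.
        Expression<Func<string, int>> expr =
            s => s.ToUpper().Substring(1, 2).ToLower().Length - 1;

        // The first call compiles; the second finds the cached delegate,
        // so there is no second round of IL generation and its allocations.
        var f1 = cache.GetOrCompile(expr);
        var f2 = cache.GetOrCompile(expr);

        Console.WriteLine(f1("reaqtor"));            // 1
        Console.WriteLine(ReferenceEquals(f1, f2));  // True
    }
}
```

Even in this toy form, the trade-off described above shows up: the cache spends work and memory on lookups and bookkeeping, which only pays off when the same (or an equivalent) tree is compiled more than once, as is typical in high-density hosting scenarios.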