dotnet / runtime

.NET is a cross-platform runtime for cloud, mobile, desktop, and IoT apps.
https://docs.microsoft.com/dotnet/core/
MIT License

CLR/JIT should optimize "alloc temporary small object" to "alloc on stack" automatically #4584

Closed ygc369 closed 2 weeks ago

ygc369 commented 8 years ago

If a small object's reference is never stored in the heap or in static variables (only in local variables), it is temporary, and the CLR or JIT should allocate it on the stack and free it promptly. GC pressure would then be lower. This feature is just like the escape analysis technique in the JVM. I suggested this in the Roslyn forum (https://github.com/dotnet/roslyn/issues/2104), but they said I should suggest it here.
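For illustration, here is a minimal sketch of the kind of allocation this would eliminate (the class and method names are hypothetical, not from any proposal):

```csharp
using System;

class Point { public int X, Y; }

class Program
{
    static int DistSquared(int x, int y)
    {
        // The reference to p lives only in this local; it is never stored
        // to a field, static, or array element, so it does not escape.
        // An escape-analysis-aware jit could allocate p on the stack and
        // skip the GC heap allocation entirely.
        var p = new Point { X = x, Y = y };
        return p.X * p.X + p.Y * p.Y;
    }

    static void Main() => Console.WriteLine(DistSquared(3, 4)); // prints 25
}
```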

category:cq theme:stack-allocation skill-level:expert cost:large

AndyAyersMS commented 6 years ago

Egor restored the branch and I updated it to build versus master here: master...AndyAyersMS:StackAllocation. Also started looking into what it would take to stack allocate a simple delegate that is directly invoked. Still work to do here as the error path also causes the delegate to escape.

using System;

class X
{
    public static int Main()
    {
        int a = 100;
        Func<int> f = () => { return a; };
        return f();
    }
}

Note, somewhat paradoxically, that a lambda with no local captures compiles into a more complex sequence, because the C# compiler caches both the closure instance and the delegate in statics. This is a good idea for reducing allocations, but it means the jit will have a much harder time reasoning about the code.

// Very simple delegate. Static caching makes flow analysis harder
using System;

class X
{
    public static int Main()
    {
        Func<int> f = () => { return 100; };
        return f();
    }
}
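Roughly what the compiler emits for the non-capturing case looks like the following. This is a hand-written approximation; the real generated types and fields have unspeakable names like `<>c` and `<>9__0_0`.

```csharp
using System;

class X
{
    // Approximates the compiler-generated "<>c" singleton closure class.
    sealed class Closure
    {
        public static readonly Closure Instance = new Closure();
        public static Func<int> Cached; // the delegate, cached in a static
        public int Lambda() => 100;
    }

    public static int Main()
    {
        // To prove non-escape here, the jit must now reason about statics
        // and the class constructor, not just about local variables.
        Func<int> f = Closure.Cached ?? (Closure.Cached = Closure.Instance.Lambda);
        return f();
    }
}
```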

As far as when or if this ability might be added to the jit, I can't say. I would like to sketch out more of the work involved to try and better understand the key problems that need to be solved.

Feel free to pick up this fork and play around with it. I'm not sure how correct it is when it actually does decide to allocate something on the stack.

benaadams commented 6 years ago

Note, somewhat paradoxically, that a lambda with no local captures compiles into a more complex sequence, because the C# compiler caches both the closure instance and the delegate in statics.

Would that mean a method group conversion (which C# still doesn't cache: https://github.com/dotnet/roslyn/issues/5835) actually has an advantage for stack allocation?

using System;

class X
{
    public static int Function() => 100;

    public static int Invoke(Func<int> f) => f();

    public static int Main()
    {
        var result = Invoke(Function); // always allocates a new delegate
        result += Invoke(() => Function()); // cached: allocates only on the first call
        return result;
    }
}
AndyAyersMS commented 6 years ago

I suppose so, though it is a funny kind of advantage.

There is a natural tension in places between doing optimizations very early, when generating IL, and waiting to do them later in the JIT. Often, doing the former makes the latter more difficult. We see this here for the non-capturing case in various ways -- the use of statics, cctors, and even the use of dup to pend a value on the stack across control flow.

The .Net ecosystem as a whole is better off with caching closures and delegates where it is viable -- there are many different execution engines with varying capabilities, and caching will greatly reduce allocation costs for all of them. But it may mean we need some extra information passed to the jit so it can safely recognize these patterns, if we care to optimize them further.

benaadams commented 6 years ago

Been having a look at where method group conversions occur in coreclr and they are mostly then stored in a heap variable (e.g. param to delegate .ctor, that then gets passed somewhere else) - so you are right caching would probably be the more effective approach as they wouldn't be viable as stack only 😢

AndyAyersMS commented 6 years ago

Brief update:

G_M49993_IG01:
   57                   push     rdi
   56                   push     rsi
   4883EC68             sub      rsp, 104
   488D7C2420           lea      rdi, [rsp+20H]
   B912000000           mov      ecx, 18
   33C0                 xor      rax, rax
   F3AB                 rep stosd

G_M49993_IG02:
   48B90054C964F97F0000 mov      rcx, 0x7FF964C95400
   E84DFD815F           call     CORINFO_HELP_NEWSFAST  // closure still escapes
   C7400864000000       mov      dword ptr [rax+8], 100

;; setup the delegate on the stack

   33D2                 xor      rdx, rdx
   488D4C2420           lea      rcx, bword ptr [rsp+20H]  // delegate on stack
   488911               mov      qword ptr [rcx], rdx
   48BA403CA3C1F97F0000 mov      rdx, 0x7FF9C1A33C40   // delegate method table
   4889542428           mov      qword ptr [rsp+28H], rdx
   488D542420           lea      rdx, bword ptr [rsp+20H]
   488D7208             lea      rsi, [rdx+8]
   488D4E08             lea      rcx, bword ptr [rsi+8]
   488BD0               mov      rdx, rax
   E8E8236C5F           call     CORINFO_HELP_CHECKED_ASSIGN_REF
   48B93817DB64F97F0000 mov      rcx, 0x7FF964DB1738  // function pointer
   48894E18             mov      qword ptr [rsi+24], rcx

;; delegate now pointed to by RSI .. fetch this and call via the function pointer

   488D4E08             lea      rcx, bword ptr [rsi+8]
   488B09               mov      rcx, gword ptr [rcx]
   FF5618               call     qword ptr [rsi+24]System.Func`1[Int32][System.Int32]:Invoke():int:this
   90                   nop

G_M49993_IG03:
   4883C468             add      rsp, 104
   5E                   pop      rsi
   5F                   pop      rdi
   C3                   ret


Seems like a next step is to try running the stack conversion earlier so that we can promote the delegate struct (this will require some tricks, as it is too big to promote right now) and propagate the field values. Then turn the call from indirect to direct. Then it would be really nice to inline the call... but that will require a bunch more work.

If/when we gain the ability to inline, we'll likely discover that some locals previously thought to escape no longer do so. So one thought is to push stack allocation opts early and do them at the same time as inlining. But then we'd lose out on struct promotion and propagation. The other tack would be to revise inlining so we could incrementally inline later on, but we would need to do that before morph. So it feels like there are a bunch of phase-ordering problems to sort through here.
AndyAyersMS commented 6 years ago

Did some prototyping on top of the changes above to get the delegate to be promotable. Lots of things to sort through on the way:

Current jit dump: (same code as above but with promoted fields):

; Assembly listing for method X:Main():int
; Emitting BLENDED_CODE for X64 CPU with AVX
; optimized code
; rsp based frame
; partially interruptible
; Final local variable assignments
;
;  V00 tmp0         [V00,T01] (  3,  6   )     ref  ->  rax         class-hnd exact non-null
;  V01 tmp1         [V01,T00] (  4,  8   )     ref  ->  rsi         class-hnd exact non-null
;* V02 tmp2         [V02    ] (  0,  0   )    long  ->  zero-ref
;  V03 tmp3         [V03    ] (  3,  3   )  struct (72) [rsp+0x20]   do-not-enreg[XSFB] must-init addr-exposed
;  V04 tmp4         [V04    ] (  3,  3   )     ref  ->  [rsp+0x28]   do-not-enreg[X] addr-exposed V03.??(offs=0x08) P-DEP
;  V05 tmp5         [V05    ] (  3,  3   )     ref  ->  [rsp+0x30]   do-not-enreg[X] addr-exposed V03.??(offs=0x10) P-DEP
;  V06 tmp6         [V06    ] (  3,  3   )    long  ->  [rsp+0x38]   do-not-enreg[X] addr-exposed V03.??(offs=0x18) P-DEP
;  V07 tmp7         [V07    ] (  3,  3   )    long  ->  [rsp+0x40]   do-not-enreg[X] addr-exposed V03.??(offs=0x20) P-DEP
;  V08 tmp8         [V08    ] (  3,  3   )     ref  ->  [rsp+0x48]   do-not-enreg[X] addr-exposed V03.??(offs=0x28) P-DEP
;  V09 tmp9         [V09    ] (  3,  3   )    long  ->  [rsp+0x50]   do-not-enreg[X] addr-exposed V03.??(offs=0x30) P-DEP
;  V10 OutArgs      [V10    ] (  1,  1   )  lclBlk (32) [rsp+0x00]
;  V11 rat0         [V11,T02] (  2,  4   )     ref  ->  rsi
;
; Lcl frame size = 104

G_M49993_IG01:
       57                   push     rdi
       56                   push     rsi
       4883EC68             sub      rsp, 104
       488D7C2420           lea      rdi, [rsp+20H]
       B912000000           mov      ecx, 18
       33C0                 xor      rax, rax
       F3AB                 rep stosd

G_M49993_IG02:
       48B900549E60F97F0000 mov      rcx, 0x7FF9609E5400
       E84DFD825F           call     CORINFO_HELP_NEWSFAST
       C7400864000000       mov      dword ptr [rax+8], 100
       33D2                 xor      rdx, rdx
       488D4C2420           lea      rcx, bword ptr [rsp+20H]
       488911               mov      qword ptr [rcx], rdx
       48BA403CAABCF97F0000 mov      rdx, 0x7FF9BCAA3C40
       4889542428           mov      qword ptr [rsp+28H], rdx
       488D542420           lea      rdx, bword ptr [rsp+20H]
       488D7208             lea      rsi, [rdx+8]
       488D4E08             lea      rcx, bword ptr [rsi+8]
       488BD0               mov      rdx, rax
       E8E8236D5F           call     CORINFO_HELP_CHECKED_ASSIGN_REF
       48B93817B060F97F0000 mov      rcx, 0x7FF960B01738
       48894E18             mov      qword ptr [rsi+24], rcx
       488D4E08             lea      rcx, bword ptr [rsi+8]
       488B09               mov      rcx, gword ptr [rcx]
       FF5618               call     qword ptr [rsi+24]System.Func`1[Int32][System.Int32]:Invoke():int:this
       90                   nop

G_M49993_IG03:
       4883C468             add      rsp, 104
       5E                   pop      rsi
       5F                   pop      rdi
       C3                   ret

; Total bytes of code 120, prolog size 20 for method X:Main():int

So... where to go from here.

First we likely need to inline the call to Invoke earlier. It currently happens in lower, though I do not see any reason why it can't be done sooner. That in turn might let us remove address exposure of V03, as V01 (the pointer to the delegate) will now be unused -- though we would only see that if we forward substituted the initial value for V01 (which is &V03 + 8) to its use(s). We might get tripped up by non-field stores to V03 (e.g. the vtable is not backed by a field, nor is the preheader). But if we can get that far, that should allow independent promotion of the fields and hopefully let us remove the unused delegate fields (of which there are many) and propagate the closure pointer down to the call site. Then we might be set to also propagate the method pointer there and turn the call from indirect to direct.

And then, if we can somehow pull off late inlining, we can see the closure is not exposed either...

Note all this talk of "propagation" is premature as this early in the jit we have no way currently of doing anything like this.

AndyAyersMS commented 6 years ago

After a good deal more wrangling things are starting to look somewhat respectable.

The resulting method is now:

; Assembly listing for method X:Main():int
; Emitting BLENDED_CODE for X64 CPU with AVX
; optimized code
; rsp based frame
; partially interruptible
; Final local variable assignments
;
;  V00 tmp0         [V00,T00] (  3,  6   )     ref  ->  rax         class-hnd exact non-null
;* V01 tmp1         [V01    ] (  0,  0   )     ref  ->  zero-ref    class-hnd exact non-null
;* V02 tmp2         [V02    ] (  0,  0   )    long  ->  zero-ref
;* V03 tmp3         [V03    ] (  0,  0   )  struct (56) zero-ref
;* V04 tmp4         [V04    ] (  0,  0   )    long  ->  zero-ref    V03.??(offs=0x00) P-INDEP
;  V05 tmp5         [V05,T01] (  3,  3   )     ref  ->  rcx         V03.??(offs=0x08) P-INDEP
;* V06 tmp6         [V06    ] (  0,  0   )     ref  ->  zero-ref    V03.??(offs=0x10) P-INDEP
;* V07 tmp7         [V07,T02] (  0,  0   )    long  ->  zero-ref    V03.??(offs=0x18) P-INDEP
;* V08 tmp8         [V08    ] (  0,  0   )    long  ->  zero-ref    V03.??(offs=0x20) P-INDEP
;* V09 tmp9         [V09    ] (  0,  0   )     ref  ->  zero-ref    V03.??(offs=0x28) P-INDEP
;* V10 tmp10        [V10    ] (  0,  0   )    long  ->  zero-ref    V03.??(offs=0x30) P-INDEP
;  V11 OutArgs      [V11    ] (  1,  1   )  lclBlk (32) [rsp+0x00]
;
; Lcl frame size = 40

G_M49993_IG01:
       4883EC28             sub      rsp, 40

G_M49993_IG02:
       48B90054DD60F97F0000 mov      rcx, 0x7FF960DD5400
       E85DFD805F           call     CORINFO_HELP_NEWSFAST
       C7400864000000       mov      dword ptr [rax+8], 100
       488BC8               mov      rcx, rax
       48B83817EF60F97F0000 mov      rax, 0x7FF960EF1738
       FFD0                 call     rax
       90                   nop

G_M49993_IG03:
       4883C428             add      rsp, 40
       C3                   ret

; Total bytes of code 47, prolog size 4 for method X:Main():int

So we allocate and initialize the closure, then invoke a method on it (still indirectly).

Plausible next steps:

Am going to continue on in prototype mode for a while here and then circle back and try to describe what steps we'd actually take if we want to pursue this sort of thing for real.

Prototype bits are here master...AndyAyersMS:NonNullPlusStackAlloc in case anyone's curious what this looks like. Not guaranteed to work on any example other than the above.

AndyAyersMS commented 6 years ago

Fixed some issues enabling this by default for crossgen of corelib. Note I don't see any stack allocation conversions in corelib (Egor's notes indicated he was seeing some). Not sure why just yet; either I've introduced a bug or the patterns we look for are no longer there.

We will not see conversions when crossgenning in R2R mode because the newobj in R2R is more abstract and if we converted we'd introduce a hard dependence on ref class size and layout and make R2R less resilient.

So perhaps (depending on what is going on in corelib) this is best thought of as a jit-only optimization.

AndyAyersMS commented 6 years ago

Also (note to self) jit-diffs must be run in two steps in two repos as this change modifies the jit interface but does not yet modify the GUID -- so running a stock baseline jit will trigger a guard CFG fault on Windows. Had me puzzled there for a while.

Running jit-diffs (once I got past that puzzle) shows the main impact is from removing null checks from delegate invokes. Not sure this is correct so will need to look at that closely.

AndyAyersMS commented 6 years ago

Scratch that, it's not a nullcheck, just folding an address mode. Current jit late delegate expansion causes it to miss a folding opportunity.

Somewhere along the line we lose the magic disassembly display. Current lowering for Invoke is a bit odd -- we still claim it is a direct call but have an indirect gtControlExpr. So likely it can use the method handle from the direct call to get the text.

;;; before
       lea      rcx, bword ptr [rax+8]
       mov      rcx, gword ptr [rcx]
       mov      rdx, rdi
       call     gword ptr [rax+24]System.Func`2[__Canon,__Canon][System.__Canon,System.__Canon]:Invoke(ref):ref:this

;;; after
       mov      rcx, gword ptr [rax+8]
       mov      rdx, rdi
       call     gword ptr [rax+24]
AndyAyersMS commented 6 years ago

Magic disassembly relies on the method handle, which lives in a union with the call target address. So we'd need a separate (perhaps debug only) field to tunnel this through. Not worth the trouble right now.

One other note -- you might have wondered why we don't tail call the delegate in any of the examples above. The VM won't let the jit make tail calls from Main so that the debugging experience doesn't get too weird. But if we put the code in a noinline helper method invoked from Main, it should tail call.
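A sketch of that arrangement (hand-written; the attribute keeps the helper from being inlined back into Main):

```csharp
using System;
using System.Runtime.CompilerServices;

class X
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int InvokeDelegate()
    {
        Func<int> f = () => 100;
        // f() is in tail position in a non-Main method, so here the jit
        // is free to emit a tail call to the delegate's Invoke.
        return f();
    }

    public static int Main() => InvokeDelegate();
}
```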

Mike-E-angelo commented 6 years ago

Hello all... this has been on my docket for a while and I am just now finding the time to attend to it. To start, I'd like to send a little thanks to @AndyAyersMS for pointing me to this issue from the comments of this recent great performance article in regards to .NET Core 2.1.

My primary motivator here is simply awareness. As also discussed in the comments of the above article, I have posted a question in regards to .NET Core's performance in this area on StackOverflow that provides my research in this matter, and also highlights how there could be further improvements in the performance for this particular scenario.

Essentially, the typical pattern I am finding in my code reflects very much what the above StackOverflow question presents. I code much in the way of delegates these days, and to me that delegate represents a method call site for lack of a better expression. It would seem that if a method body simply consists of a call to another delegate, that this would be inlined/optimized in some fashion so that the delegate gets called directly rather than having to call the owning/owner's call site first (if that makes sense).

Also, it is worth mentioning I am seeing this pattern in other ways. In fact, I just now finished this great post by @ploeh who provides another similar example of this scenario here. If you scroll up a bit from the latter link, you can see again that the body of the NaturalNumber implementation method is simply a call to another delegate.

So, as a further example of the case we're describing here, assume we're calling the NaturalNumber.Nine.Match method. There would be -- by my estimation, which of course could be totally wrong! -- at least 20 instructions in total: 10 to call the "outer" containing Successor.Match (or Zero.Match) method first, and then 10 to invoke the actual "inner" delegate.

The optimization I am pawing at here would remove the 10 "outer" instructions that call the external (Successor.Match or Zero.Match) method and inline the "inner" delegate, leaving 10 instructions total -- essentially improving performance by 50% ... in theory 😆.

Again, this is all based on my limited knowledge in this area, so I could be totally wrong in my understanding and subsequent calculations. I would love feedback to ensure my understanding is correct, and of course any corrections to bring it better in tune with how everything here actually operates.

To summarize, my intention here is simply to provide a little more context and awareness from my perspective, and to raise my hand in a way to say that there might be a way to further optimize this pattern/scenario. I have been seeing it a lot in my personal coding adventures which led to the StackOverflow question as well as with other examples/prescriptions as noted with the blog post above.

Thank you for any and all consideration/conversation towards this!

svick commented 6 years ago

@Mike-EEE @AndyAyersMS Since this issue is about stack allocation, shouldn't discussion about delegate inlining continue somewhere else (maybe at https://github.com/dotnet/coreclr/issues/6737)? Though I realize the two problems are connected.

Mike-E-angelo commented 6 years ago

Sure @svick that sounds good. Not much more to discuss from my side -- as TBH this is way outside of my comfort zone ATM -- but more of just sharing some data points and additional context for added perspective. 👍

It's probably a safe bet, though, that everyone here is already maxed out on all the data they have, I'm guessing. 😆

AndyAyersMS commented 6 years ago

In the notes above I outline a bunch of engineering challenges to sort out before we even get in the neighborhood of being able to inline delegates. Many of those are interesting and valuable jit improvements in their own right (like stack allocation, which is the subject of this issue). So we may spend quite a while working through the prerequisites.

An important thing to emphasize is that even if we do all that, and push forward to actually inline delegates, the scope of the optimization is still going to be limited -- the jit must see delegate creation and invocation end up in the same method before it can optimize. We can enhance that scope via inlining, but there are limits there too.

So the likely outcome is that with all that work, only some subset of delegate invocations will get inlined (sort of like where we are now with devirtualization). We can tell already that a number of popular compositional patterns -- say like the way LINQ uses delegates, and the examples you point out -- are not really amenable to optimization this way.
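For example, in a typical LINQ pattern the delegate is created in one method but invoked deep inside Enumerable's iterators, which the jit compiles separately (illustrative sketch, not from the thread):

```csharp
using System;
using System.Linq;

class Demo
{
    static int SumOfSquares(int[] xs)
    {
        // The x => x * x delegate is created here, but it is stored into
        // Enumerable.Select's iterator object and invoked from MoveNext
        // in a different method. The jit compiling SumOfSquares never
        // sees the invocation, so it can neither inline the lambda nor
        // prove the delegate does not escape.
        return xs.Select(x => x * x).Sum();
    }

    static void Main() => Console.WriteLine(SumOfSquares(new[] { 1, 2, 3 })); // prints 14
}
```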

But all hope is not lost. The way to get around these limitations is to start making speculative bets. And that ties back into tiered jitting and taking advantage of feedback from the running application.

So going forward we have two main avenues to explore: improving the ability of the jit to optimize when it can see all the pieces of the puzzle, and relying on feedback to make intelligent guesses when it can't.

JeroMiya commented 4 years ago

Are we talking about true stack allocation or are we talking about scalar replacement, as the JVM does?

Also, would it be possible to implement a separate thread-local heap allocator that could be used in place of the global GC for object trees determined to be thread-local and non-escaping in a particular scope? The JIT or AOT could then deterministically free those object trees when they go out of scope without adding to GC pressure, and maybe even be able to support additional scenarios that scalar replacement or stack allocation couldn't (for example being able to pass such an object to a function). Assuming it's possible to do the necessary escape analysis through a method call.

Suchiman commented 4 years ago

Are we talking about true stack allocation

yes

or are we talking about scalar replacement

The JIT currently only does this for structs; it's called struct promotion.
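For instance, a small struct whose address never escapes can be "promoted" -- its fields become independent locals and the struct itself needs no storage of its own (illustrative sketch):

```csharp
using System;

struct Pair
{
    public int A;
    public int B;
}

class Demo
{
    static int Sum()
    {
        // p's address is never taken, so the jit can promote it into two
        // independent int locals (likely enregistered); the struct needs
        // no stack slot at all.
        var p = new Pair { A = 1, B = 2 };
        return p.A + p.B;
    }

    static void Main() => Console.WriteLine(Sum()); // prints 3
}
```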

Assuming it's possible to do the necessary escape analysis through a method call.

That is the primary problem: https://github.com/dotnet/runtime/issues/1661#issuecomment-573781871

JeroMiya commented 4 years ago

Interesting. What about compiler hinting? Could the C#/F#/VB compilers optionally do more extensive analysis than the JIT is capable of doing, then pass the results of the analysis to the JIT and/or AOT engines?

I hesitate to suggest it, because I'm hoping for a solution that wouldn't require any changes to existing code to take advantage of, but what about a language-level hint that the compiler could enforce? A while ago I toyed with the idea of a "local" keyword for this purpose (disclaimer: not a fully baked idea/proposal, just a random thought):

static MyClass staticObj;

class MyClass {
    public int Field { get; set; }
    [NoEscape]
    public void Mutate() {
        // staticObj = this; // <-- NOT allowed, because of [NoEscape]
        Field = Field * 2;
    }
}

local MyClass MethodWithLocalArgAndReturn(local MyClass param)
{
    // staticObj = param; // <-- NOT allowed
    local var localObj = param; // allowed to assign local to local
    localObj.Mutate(); // allowed, Mutate is marked with [NoEscape]
    return localObj; // allowed because return value is declared as local
}

void Method() {
    local var localObj = new MyClass();
    MethodWithLocalArgAndReturn(localObj); // Allowed because argument is local
    // localObj goes out of scope, AOT/JIT can be sure no references exist after that point
    // can optimize accordingly
}

This is not my preferred solution - I'd rather have the compiler and JIT/AOT do the work for us and optimize accordingly.

Also, worth considering that the types of applications that would need this optimization would be good candidates for AOT or slower-to-compile/faster-to-run style optimizations.

ygc369 commented 4 years ago

I think the C# compiler should do simple escape analysis; the generated IL would then already be optimized, so the JIT need not do this expensive work. Besides, the AOT compiler should do full escape analysis, because it can see all the code it compiles.

ygc369 commented 4 years ago

@AndyAyersMS @jkotas I heard that Microsoft has stopped the escape analysis project. Is there an opportunity to restart it? If it is so difficult, I wonder how the JVM implements it.

AndyAyersMS commented 4 years ago

There are some important differences between the JVM and the CLR, and these differences influence the relative priorities of optimizations.

I think it's likely we'll resume working on escape analysis someday, but there are some enabling technologies we need to work on first.

ygc369 commented 4 years ago

@AndyAyersMS Thanks for the comment. But what are these enabling technologies? Could you give an example? Besides, since fully automatic escape analysis is so difficult, I suggest providing some syntax to allow manually allocating objects on the stack. For example,

class Foo
{
    private int[] arr;
    Foo(int n) { arr = new int[n]; }
}
var foo = stackalloc Foo(10); // reuse the syntax "stackalloc" to allocate the object "foo" from stack.
......

Note that the internal array in "foo" should also be stack-allocated (although the keyword "new" is used in the constructor), as long as the array does not exceed the remaining stack space; otherwise it would still be allocated from the heap. The CLR should decide where to allocate the internal objects inside a stackalloc'ed outer object according to the stack size. If the stackalloc'ed object is later referenced from the heap, it should be moved (promoted) to the heap automatically. But this kind of promotion should be reported to the programmer in debug mode, to help find objects that are not suitable for stack allocation.

JeroMiya commented 4 years ago

Hmm, not sure if explicit stack allocation of objects is the answer. They would be so limited I wonder whether they'd even be useful for scenarios you'd want to use them for.

ygc369 commented 3 years ago

@AndyAyersMS I think tiered JIT might be a solution for escape analysis. When the code is compiled by the JIT for the first time, the JIT could use an aggressive stack allocation strategy: when it can't quickly decide whether an object should be stack-allocated, just assume it is safe to allocate it on the stack. Then, if the stack-allocated object is referenced from the heap, copy (move) it to the heap. When the tiered JIT re-jits, the code should be regenerated to directly heap-allocate those objects proven to escape.

AndyAyersMS commented 3 years ago

@ygc369 current thinking is that perhaps we could run escape analysis during an AOT phase where we have more time and larger scope (that is we can do an interprocedural analysis) -- then have the jit rely on this in some manner.

The other thought is to look into partial escape analysis where we "just in time" move objects to the heap if control reaches an escape point; this likely would need some form of profile-based feedback to be done profitably.

JeroMiya commented 3 years ago

@ygc369 @AndyAyersMS Couple of questions:

Can the compiler inject hints into the CIL to reduce the performance impact of escape analysis on the JIT engine?

Secondly, however and whenever the escape analysis runs, does the optimization always have to be in the form of stack-allocated objects or scalar replacement? Would it be possible for the runtime/JIT/AOT to allocate and automatically free non-escaping heap objects and their child references when they go out of scope without triggering a GC on either alloc or free? Kind of like a "scope and thread local heap". While this wouldn't be as efficient as a stack allocation or scalar replacement would it be more flexible and comprehensive in terms of where the optimization could be used? Would there be a performance impact in doing this (e.g. inserting an implicit exception handler to that scope to ensure the object is freed)?

ldematte commented 3 years ago

@AndyAyersMS is there anything already implemented on escape analysis, in the JIT or somewhere else? I'd like to learn more about it. Can you point me at code/docs/discussions?

sgf commented 1 year ago

It's 2022; in 3 more years this issue will be 10 years old. Will the .NET team do something like a ZGC for .NET?

We all know that the C++ committee delays and slows down C++ development, but the .NET Foundation is relatively young and shouldn't be like that. And GC is just an implementation, not a standard; this shouldn't be delayed so long.

JeroMiya commented 1 year ago

It's 2022; in 3 more years this issue will be 10 years old. Will the .NET team do something like a ZGC for .NET?

We all know that the C++ committee delays and slows down C++ development, but the .NET Foundation is relatively young and shouldn't be like that. And GC is just an implementation, not a standard; this shouldn't be delayed so long.

I think it's more along the lines of determining whether this is the right solution to the problem. The JVM's optimizations around allocations have a significant impact, but they are limited in ways that a developer won't necessarily have ready insight into. I'm not an expert on GC by any means, but my limited understanding leads me to believe some form of deterministic deallocation mechanism or a more aggressive object pool for temporary variables would be more feasible. Not least because it would allow you to continue passing "temporary" instances to methods that take them as arguments, including methods on the instance itself. After all, an optimization is no good if all the code you normally write falls outside the prerequisites.

PathogenDavid commented 1 year ago

@sgf I think you might be underestimating how complex this issue is. The GC isn't really the main issue here; it's the escape analysis required to ensure the optimization is safe to perform.

Either way, @AndyAyersMS has indicated in https://github.com/dotnet/runtime/issues/11192#issuecomment-1185855115 that this is potentially on deck to receive more attention after .NET 7.

sgf commented 1 year ago

@sgf I think you might be underestimating how complex this issue is. The GC isn't really the main issue here; it's the escape analysis required to ensure the optimization is safe to perform.

Either way, @AndyAyersMS has indicated in #11192 (comment) that this is potentially on deck to receive more attention after .NET 7.

If the .NET team does not have the technical ability to solve the problem, then learn from the Go team and hire outside experts to solve it. Times have changed, and .NET's current GC can only handle toy-level applications.

I don't know if this is a typical problem for a super-company like Microsoft, but .NET is moving too slowly and has missed too many opportunities. Although things are getting better with current .NET, it seems to be too late.

The slowness of .NET has even affected the development of Unity3D. This is one of the few areas where .NET is still popular. I don't want .NET to miss out again and again like Windows Phone.

My words don't sound good; maybe some people don't like them and are not happy. But that's the truth, isn't it?

sgf commented 1 year ago

In order to maintain the illusion of .NET prosperity, whenever I see .NET projects, I star almost all of them. Although this is trivial, .NET open source projects are also part of the .NET ecosystem, if only to give their maintainers an incentive to keep them going.

Suchiman commented 1 year ago

@sgf

If the .NET team does not have the technical ability to solve the problem, then learn from the Go team and hire outside experts to solve it. Times have changed, and .NET's current GC can only handle toy-level applications.

Riiiight, scroll to the Garbage Collection headline

JeroMiya commented 1 year ago

If the .NET team does not have the technical ability to solve the problem, then learn from the Go team and hire outside experts to solve it. Times have changed, and .NET's current GC can only handle toy-level applications.

I don't know if this is a typical problem for a super-company like Microsoft, but .NET is moving too slowly and has missed too many opportunities. Although things are getting better with current .NET, it seems to be too late.

The slowness of .NET has even affected the development of Unity3D. This is one of the few areas where .NET is still popular. I don't want .NET to miss out again and again like Windows Phone.

My words don't sound good; maybe some people don't like them and are not happy. But that's the truth, isn't it?

There are many trillions of lines of code written on the .NET platform, in some cases within codebases that are already more than a decade old. The fact that the .NET ecosystem evolves as fast as it does is a testament to the "technical ability" of all the contributors to the .NET ecosystem. I've worked on applications that started out as Windows services running on premises and ended up running in Kubernetes on a cloud platform. It speaks well of everyone involved that fundamental changes to core pieces, like the GC, are made after very careful consideration.

Also, when you get a chance, I kindly suggest reading the Code of Conduct.

ygc369 commented 1 year ago

Hope this can be implemented in .NET 8. @AndyAyersMS

Zintom commented 1 year ago

I understand that allocating objects on the stack may be vastly more complex than it appears on the surface.

Perhaps a more doable idea is to enable a new type of heap allocation, using a new heap which is not managed by the garbage collector and requires manual allocation and freeing of memory.

We can be smarter than the GC at allocating/freeing memory; this new functionality would let us put that knowledge into action.

I'd love to hear people's thoughts on this, any obvious caveats that I've missed?
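For what it's worth, .NET 6+ already ships a manual allocator over the native heap, `System.Runtime.InteropServices.NativeMemory`, which gives part of this idea today. Below is a minimal sketch (the `ManualHeapDemo`/`SquareAt` names are invented for illustration); note that this memory is invisible to the GC and therefore cannot safely hold managed object references:

```csharp
using System;
using System.Runtime.InteropServices;

static unsafe class ManualHeapDemo
{
    // Fill a native buffer with squares and return one element,
    // demonstrating manual alloc/free entirely outside the GC heap.
    public static int SquareAt(int index, int count)
    {
        // The GC never scans, moves, or frees this block.
        int* buffer = (int*)NativeMemory.Alloc((nuint)count * (nuint)sizeof(int));
        try
        {
            for (int i = 0; i < count; i++) buffer[i] = i * i;
            return buffer[index];
        }
        finally
        {
            // Manual freeing: forgetting this leaks; freeing twice corrupts the native heap.
            NativeMemory.Free(buffer);
        }
    }
}
```

The hazards are the classic manual-management ones (leaks on forgotten frees, corruption on double frees), which is part of why the GC exists in the first place.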

danmoseley commented 1 year ago

@Zintom why do you think a heap allocator would be cheaper? How would the GC detect roots from your heap?

En3Tho commented 1 year ago

Arena-based allocators ("allocate a temp heap, do whatever you want within it, then drop the heap entirely") have the same problem as object stack allocation: escape analysis must prove that no object refs "escape" into the real GC heaps.

So I guess you would need

1) escape analysis in Roslyn (prove refs won't escape and fail compilation otherwise, or else such an allocator is pointless),

2) the same analysis in the JIT (the JIT can't rely on the IL compiler's opinion alone; it needs to be sure too), and

3) a means to allocate "real heap" objects to save the result of this "off-heap" computation, with proof that the data is strictly copied there and not referenced.
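The distinction described above can be sketched in a few lines (the `Holder`, `EscapeDemo`, and `s_cache` names are invented for illustration): only the first method is a candidate for arena or stack allocation, because in the second the reference outlives the call.

```csharp
public class Holder { public int Value; }

public static class EscapeDemo
{
    static Holder s_cache; // a static field: storing here publishes the reference

    // Safe candidate: the Holder never leaves this method,
    // so it could live in an arena or on the stack.
    public static int NoEscape(int x)
    {
        var h = new Holder { Value = x };
        return h.Value * 2;
    }

    // NOT safe: the reference escapes to a static and outlives the call,
    // so it must go on the real GC heap.
    public static void Escapes(int x)
    {
        s_cache = new Holder { Value = x };
    }
}
```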

Zintom commented 1 year ago

How would the GC detect roots from your heap?

My idea is that any object referenced by this "unmanaged heap" would be marked as "possibly live, linked to an object in the unmanaged heap", and the GC would thus skip collecting it.

why do you think a heap allocator would be cheaper?

Considering most of the runtime is built around the idea of "the fewer allocations the better" in order to avoid GC pressure, I believe this could have massive potential for the BCL and the wider ecosystem in general. I think most intermediate and advanced developers know how to manage their allocs/frees. Maybe it wouldn't be used everywhere, but in critical sections of code or in libraries we could see major improvements; the average Joe wouldn't have to know that an unmanaged heap even existed, but would benefit because his allocations wouldn't have such an impact on his program's performance.

JeroMiya commented 1 year ago

If at all possible, I would prefer GC optimizations that didn't require explicit action by the developer. Ideally they should be transparent to the developer, like strategies where the JIT or AOT engine detects non-escaping allocations and implements optimizations, such as scalar replacement, without the developer needing to explicitly mark a variable as non-escaping or allocating it on a special heap.

There's a lot of existing code out there, including in the BCL, where such optimizations would be beneficial. It's pragmatically a non-starter to require that code to be updated or rewritten to explicitly make use of those optimizations.

My question for the experts would be: would it even be an optimization if, hypothetically, the runtime could simply deterministically deallocate some non-escaping heap allocations when they go out of scope? I mean, putting scalar replacement aside, suppose the JIT could deterministically deallocate a temp variable when it goes out of scope. That's not free from a performance perspective, yeah? I imagine it would negatively affect the throughput of the method.

I think the question suggests we should step back and analyze what it is we're actually optimizing. GC pressure is "bad" for a specific type of performance. For example, for real-time graphics applications or animated UI frameworks, a GC pause can cause visual stuttering. For high throughput applications like microservices and other backends, high GC pressure might cause throughput issues if the GC can't scale with the demands being placed on it. How can we solve these specific problems? Is deterministic deallocation for temporary objects really the best solution? I'd love to hear some insights from the experts.

Worth noting that both real-time graphics/UI applications and high-throughput microservices tend to do relatively short bursts of work on a particular thread or thread pool and then give control of the thread back to the pool, or else the thread pauses until it's signaled again. Just a random thought: perhaps a more efficient way to deal with temporary allocations is some kind of thread-local "temp variable GC generation". That is to say, rather than allocating all temporary variables in gen 0, when the JIT can prove a variable is non-escaping it can safely allocate that variable in a separate "gen T" (for Temporary). This pool ONLY contains objects that are non-escaping for a specific thread. Then, whenever a thread gives control back to a thread pool or goes to sleep, the GC can free everything in that thread's gen T pool without a full scan.

hez2010 commented 1 year ago

Allocating small objects on the stack not only reduces GC pressure, but also lets small objects participate in the struct promotion optimization, so they can be kept entirely in registers (enregistered) without being allocated at all, which is a significant performance improvement for .NET. A lot of legacy/benchmark code does not use structs today, which is also a great motivation to implement this optimization: such code could benefit without any source changes.

Just see how this can affect the binary tree benchmark:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class BinaryTrees
{
    struct TreeNodeS
    {
        class Next { public TreeNodeS left, right; }
        readonly Next next;

        TreeNodeS(TreeNodeS left, TreeNodeS right) =>
            next = new Next { left = left, right = right };

        internal static TreeNodeS Create(int d)
        {
            return d == 1 ? new TreeNodeS(new TreeNodeS(), new TreeNodeS())
                          : new TreeNodeS(Create(d - 1), Create(d - 1));
        }

        internal int Check()
        {
            int c = 1;
            var current = next;
            while (current != null)
            {
                c += current.right.Check() + 1;
                current = current.left.next;
            }
            return c;
        }
    }

    class TreeNode
    {
        class Next { public TreeNode left, right; }
        readonly Next next;
        TreeNode(TreeNode left, TreeNode right) =>
            next = new Next { left = left, right = right };
        public TreeNode() { }
        internal static TreeNode Create(int d)
        {
            return d == 1 ? new TreeNode(new TreeNode(), new TreeNode())
                          : new TreeNode(Create(d - 1), Create(d - 1));
        }

        internal int Check()
        {
            int c = 1;
            var current = next;
            while (current != null)
            {
                c += current.right.Check() + 1;
                current = current.left.next;
            }
            return c;
        }

        internal void Hold() { }
    }

    internal class Program
    {
        static void Main(string[] args)
        {
            BenchmarkRunner.Run<Benchmarks>();
        }
    }

    [MemoryDiagnoser]
    public class Benchmarks
    {
        const int MinDepth = 4;

        [Arguments(18), Benchmark]
        public int Class(int maxDepth)
        {
            var longLivedTree = TreeNode.Create(maxDepth);
            var nResults = (maxDepth - MinDepth) / 2 + 1;
            for (int i = 0; i < nResults; i++)
            {
                var depth = i * 2 + MinDepth;
                var n = (1 << maxDepth - depth + MinDepth);

                var check = 0;
                for (int j = 0; j < n; j++)
                {
                    check += TreeNode.Create(depth).Check();
                }
            }

            return longLivedTree.Check();
        }

        [Arguments(18), Benchmark]
        public int Struct(int maxDepth)
        {
            var longLivedTree = TreeNodeS.Create(maxDepth);
            var nResults = (maxDepth - MinDepth) / 2 + 1;
            for (int i = 0; i < nResults; i++)
            {
                var depth = i * 2 + MinDepth;
                var n = (1 << maxDepth - depth + MinDepth);

                var check = 0;
                for (int j = 0; j < n; j++)
                {
                    check += TreeNodeS.Create(depth).Check();
                }

            }

            return longLivedTree.Check();
        }
    }
}
| Method | maxDepth | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
|--------|----------|-----------:|---------:|----------:|------------:|------------:|-----------:|-----------:|
| Class  | 18 | 2,604.3 ms | 49.37 ms | 107.33 ms | 907000.0000 | 168000.0000 | 15000.0000 | 2561.34 MB |
| Struct | 18 | 531.8 ms | 11.37 ms | 29.96 ms | 390000.0000 | 40000.0000 | 1000.0000 | 1021.33 MB |

Simply changing class to struct yields a nearly 5x performance improvement and cuts allocations by 60%, for free.
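Conceptually, the optimization requested in this issue would let the JIT perform that class-to-struct transformation automatically whenever escape analysis proves the object never leaves the method. A hand-written illustration of the idea (not actual JIT output; the `Point`/`Sketch`/`DistSq` names are invented):

```csharp
public class Point { public int X, Y; }

public static class Sketch
{
    // What the source says: a heap allocation on every call.
    public static int DistSq(int x, int y)
    {
        var p = new Point { X = x, Y = y }; // candidate: p never escapes
        return p.X * p.X + p.Y * p.Y;
    }

    // What the JIT could effectively emit after stack allocation plus
    // promotion: the fields live in locals/registers and no allocation
    // happens at all.
    public static int DistSqOptimized(int x, int y)
    {
        int px = x, py = y; // scalar-replaced fields of the would-be Point
        return px * px + py * py;
    }
}
```

The two methods are observably equivalent; the second simply never touches the GC heap.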

sgf commented 1 year ago

Simply changing class to struct yields a nearly 5x performance improvement and cuts allocations by 60%, for free.

struct is very good, but it still has some limits, for example:

you can't use a custom struct as a fixed buffer, and you can't use bit-fields.
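For context on the fixed-buffer limit: classic `fixed` buffers accept only primitive element types, while .NET 8's `[InlineArray]` attribute lifts that restriction to arbitrary structs. A minimal sketch (`Pair` and `PairBuffer` are invented names):

```csharp
using System.Runtime.CompilerServices;

public struct Pair { public int A, B; }

// Classic fixed buffers reject custom element types:
// unsafe struct Old { public fixed Pair Data[8]; } // compile error: only primitives allowed

// .NET 8 inline arrays allow buffers of arbitrary structs:
[InlineArray(8)]
public struct PairBuffer
{
    private Pair _element0; // the runtime lays out 8 consecutive Pair values
}
```

With C# 12 the buffer is directly indexable, e.g. `var buf = new PairBuffer(); buf[0] = new Pair { A = 1, B = 2 };`. Bit-fields, on the other hand, still have no direct equivalent beyond manual masking and shifting.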

CodingMadness commented 9 months ago

Any news on this topic here so far?

ygc369 commented 1 month ago

Any progress?

AndyAyersMS commented 3 days ago

Any progress?

Yes, it's enabled now.

ygc369 commented 2 days ago

Any progress?

Yes, it's enabled now.

It is enabled by default in .NET 9, correct?

AndyAyersMS commented 1 day ago

Any progress?

Yes, it's enabled now.

It is enabled by default in .NET 9, correct?

Yes.