Should be better in 2.2 with tiered compilation? https://github.com/dotnet/coreclr/issues/18973
@brianrob @billwert anything you can help with?
This looks like something to be investigated. I want to say that we did look into this, but I don't recall the results. Let me poke around a bit and see what I can find out.
@benaadams thanks for the suggestion. Turning on tiered compilation didn't make a noticeable improvement for me, although I haven't been able to run extensive tests.
I've also realized the issue is especially bad for me because I am using a VPC. I'm getting cold starts anywhere from 6-20 seconds when making multiple concurrent requests 😔.
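For anyone else who wants to repeat the tiered-compilation experiment above: on 2.1/2.2 it is opt-in rather than on by default, and the usual switch is an MSBuild property. A minimal csproj sketch (the target framework here is just an example):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <!-- Opt in to tiered compilation: startup code is jitted quickly at tier 0,
         and hot methods are re-jitted with full optimizations later.
         Off by default on 2.1/2.2, on by default from 3.0. -->
    <TieredCompilation>true</TieredCompilation>
  </PropertyGroup>
</Project>
```

The same switch can be flipped without rebuilding by setting the `COMPlus_TieredCompilation=1` environment variable on the Lambda, which makes it easier to A/B cold starts.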
@brianrob were you able to find anything? Do you plan to take any further action?
I was able to gather some information, but need to go through it and figure out what next steps look like. Also need to triage this against other performance areas/goals. It's good to see that there is interest here though, and that will help.
Let's leave this issue open and use it to track further progress here.
.NET Core cold start time is forcing us to use Python or Go in all our microservices.
This has been our company's #1 wish for .net for over a year now.
Any new development?
@DanielLaberge given the number of votes here and replies, I would not call this a number 1 pain point of .NET Core at this moment. That said, we do care about the scenario. @brianrob were you able to make any progress?
Unfortunately not yet. I've not forgotten about this one, but have not had time for this.
+1.
Tried to use .NET Core 2.1 on AWS lambda as a backend for the Slack Event API, but Slack will fail (or re-try delivery) on any request taking 3s or above. This rules out .NET Core for my scenario, unfortunately.
Any more details on what type of workload and what type of environment?
Looking at the ASP.NET Benchmarks https://aka.ms/aspnet/benchmarks (page 4):
On .NET Core 3.0, Plaintext Middleware has a cold start to first request of 350ms.
MVC DB EF Multiquery has a cold start to first request of 1.5 seconds.
Me? My setup is:
If I mock all the HTTP requests, I still get a init duration of 3000ms, but with sub 100ms executions following that.
I recently tried to optimize (or at least analyze) the cold start of .NET Core / ASP.NET Core lambda but I had to come to the sad conclusion that we have to expect at least 3 seconds of cold start for an ASP.NET Core lambda with 256 MB of RAM. As said by others, tiered compilation did not help significantly. Look here for the results. Hopefully, someone here can prove me wrong.
This is quite important for us. It is one of the main arguments used by some of my colleagues who are pushing back on a transition to a serverless REST API.
I would love to see this as well. My use case is to use a .NET Lambda as a backend for Alexa skills, which (like the Slack example) has a time limit from the Alexa service (8 seconds) for a request, plus a noticeable impact on users if the response is long.
I recently found a method from Zac Charles called LambdaNative that reports a 10x startup speedup by using the relatively recently announced AWS Custom Runtime and a CoreRT build that AOT compiles the code instead of using JIT. I haven't tried it yet, but sharing just in case it helps someone on the thread.
@seanfisher According to some people who tried Lambda Native, it's not usable for anything more than a simple demo since CoreRT is very limited:
Alas, I got it to compile exactly as Zac described. But after that, you’re extremely limited with what CoreRT is compatible with. If you follow the CoreRT web API sample, towards the end you’ll see that you can’t inherit your controller from Microsoft.AspNetCore.Mvc.Controller and pretty much have to compose your responses from scratch. That was a deal breaker for us… ultimately we still can’t get cold starts down to an acceptable level, and will probably be going back to EC2 to host our user-facing API endpoints for now.
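For anyone weighing the LambdaNative route mentioned above: it leans on CoreRT's experimental ahead-of-time compiler, which at the time was pulled in through a package reference roughly like the sketch below (package id and floating version are taken from the CoreRT samples of that era and are illustrative; the extra MyGet feed it needs in nuget.config is omitted here):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Experimental CoreRT AOT compiler; requires the dotnet-core MyGet feed in nuget.config -->
    <PackageReference Include="Microsoft.DotNet.ILCompiler" Version="1.0.0-alpha-*" />
  </ItemGroup>
</Project>
```

Publishing with `dotnet publish -r linux-x64 -c Release` then produces a native executable, which is what removes the JIT from the cold start, with the compatibility trade-offs described in the quote above.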
This is a major issue for us. We're exploring Azure and Google Cloud functions but would rather stay in AWS.
With .NET Core 3 coming, will AWS Lambdas support ReadyToRun images, which are supposed to improve startup performance?
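For context on what ReadyToRun involves on the build side: in 3.0 it is a publish-time option driven by MSBuild properties. A minimal sketch, assuming a plain linux-x64 publish, which is what Lambda's environment would need:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <!-- Precompile IL to native code at publish time so less JIT work is left for startup -->
    <PublishReadyToRun>true</PublishReadyToRun>
    <!-- ReadyToRun images are platform-specific, so a runtime identifier is required -->
    <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
  </PropertyGroup>
</Project>
```

Publishing with `dotnet publish -c Release` then emits R2R assemblies; the JIT is still there as a fallback for anything not precompiled, so this trims cold start rather than eliminating it.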
@mabead I imagine you should be able to run those using Lambda Custom Runtimes.
@nvcken this was already mentioned by seanfisher above. Anyway, it's using CoreRT, which is not production ready and will not be, since MS doesn't see any point in continuing its development.
Has anyone tried .NET Core 3.0 Preview R2R on AWS Lambdas yet?
It seems possible already: https://aws.amazon.com/blogs/developer/announcing-amazon-lambda-runtimesupport/
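For anyone following the RuntimeSupport link above: the package lets an ordinary console app host the Lambda runtime loop itself, so you can bring a .NET Core version Lambda doesn't natively offer yet. A minimal project sketch (package versions and target framework are illustrative, not exact):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <!-- Lambda's custom ("provided") runtime looks for an executable named bootstrap
         in the deployment package, so name the published binary accordingly -->
    <AssemblyName>bootstrap</AssemblyName>
  </PropertyGroup>
  <ItemGroup>
    <!-- Versions are illustrative -->
    <PackageReference Include="Amazon.Lambda.RuntimeSupport" Version="1.0.0" />
    <PackageReference Include="Amazon.Lambda.Serialization.Json" Version="1.5.0" />
  </ItemGroup>
</Project>
```

The handler is then wired up in Main via the package's LambdaBootstrap type, as described in the AWS blog post linked above, and the app is published self-contained for linux-x64.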
I will wait until the official release of .NET Core 3.0 this month, and then create a demo on GitHub.
@rclarke2050 were you able to test this with 3.0? Any interesting findings? @brianrob @billwert anything you would like to add to this?
not just yet, but will try to look at it this week.
Closing due to lack of response; please open up a new ticket re: 3.0 with your measurements if you find it has unacceptable performance. We're hoping tiered JIT resolved this.
Read Zac's new article on benchmarking .NET Core 3.0 R2R: https://medium.com/@zaccharles/net-core-3-0-aws-lambda-benchmarks-and-recommendations-8fee4dc131b0
Should be better in 2.2 with tiered compilation? dotnet/coreclr#18973
@benaadams answering your 2018 message: with 3.0... no, it's actually worse in the benchmark :( ReadyToRun helped a little...
with 3.0... no, it's actually worse in the benchmark
So...
please open up a new ticket re: 3.0 with your measurements if you find it has unacceptable performance. We're hoping tiered JIT resolved this.
Can you open a new issue so it can be tracked?
Yes please, new issue with details, traces or ideally a repro.
Reopening the conversation about serverless cold start times from #1060 since this is still an issue with 2.1.
@jansabbe's test shows little improvement from 2.0 to 2.1 for cold starts:
Is there any hope for improvement on this in upcoming versions?