Open zaptree opened 2 years ago
How are people managing queue workers and alike with apprunner?
@MPJHorner it looks like App Runner currently isn't an option for that. We moved everything over to ECS because of it.
Same. I was excited to try out AppRunner, but for the (fairly simple) Ruby on Rails apps that I'm working on, I need 1 container for the app, and 1 for a background-job runner.
I'll now be looking at using ECS (Fargate) with CDK.
Definitely would like to see this implemented. I'm evaluating App Runner now and this could be a deal breaker. Otherwise a very promising service.
Same as the above comments. App Runner is turning out to be a very simple (Heroku-ish) service that's easy to get up and running with. I have a similar requirement with Ruby on Rails, where I need to be able to run workers for Sidekiq background jobs.
Seems like this is currently a deal breaker, since moving to Copilot or ECS is considerably more complex by comparison.
The App Runner service would be more complete with background-job support! Hoping to see it in the near future.
Similar to above, but I have a growing app (really three deployable apps in a monorepo) currently running on Lambda with a mixture of API Gateway + EventBridge invocations. I'd like to consider moving to something like App Runner to improve portability and reduce latency, but I can't do so without background workers.
In addition to offering the ability to simply keep the CPU running for worker-initiated subscriptions, offering Lambda-like subscriptions-by-configuration to AWS services like SQS or EventBridge would also be wonderful.
I think that fits the market category App Runner is situated in - closer to a fully managed service like Lambda than it is to infrastructure as a service. I also think that the likely increased latency from such an integration style is typically acceptable for background workers, while we'd still have the benefits of relatively low latency for synchronous HTTP requests.
As a workaround for this limitation, our team was considering creating a Lambda that receives SQS jobs, converts them into HTTP requests, and sends them to an additional App Runner instance dedicated to worker jobs (our app has a /worker endpoint that can receive jobs). Any downside with this approach?
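The Lambda-to-HTTP bridge described above could be sketched roughly as follows; the worker URL and the JSON payload shape are assumptions for illustration, not details from the thread:

```python
import urllib.request

WORKER_URL = "https://example.awsapprunner.com/worker"  # placeholder service URL

def build_request(record):
    """Turn one SQS record into an HTTP POST aimed at the /worker endpoint."""
    return urllib.request.Request(
        WORKER_URL,
        data=record["body"].encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def handler(event, context):
    # Lambda entry point for an SQS trigger: forward each message to the
    # worker service; urlopen raises on a non-2xx status, so failed jobs
    # go back to the queue and are retried by SQS.
    for record in event["Records"]:
        urllib.request.urlopen(build_request(record))
```

One design consequence of this pattern is that each forwarded message holds an open HTTP connection for the duration of the job, which is exactly where the request-timeout concern raised in the next reply comes in.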
@cgat one downside with this approach is that you might run into the 120-second request limit (see https://docs.aws.amazon.com/apprunner/latest/dg/develop.html#:~:text=App%20Runner%20terminates%20the%20TLS,limit%20on%20the%20HTTP%20requests.)
@MatthiasWinzeler that's a good call out. FWIW, we did end up switching over to ECS instead (with a similar strategy)
One follow-up question on this: what request timeout limit are you looking for, for this use case?
I am excitedly waiting for worker support.
https://www.youtube.com/watch?v=Hw6-WiDWRzE&t=2100s
App Runner features... upcoming:
- Buildpack support
- Source code repositories (GitLab, Bitbucket, CodeCommit)
- Bring-your-own-cert for custom domains
- FedRAMP/HIPAA
- WebSocket, HTTP/2, gRPC
- Worker workload support (background processing, queues, notifications)
- Regional expansion
- Service versions with the ability to roll back
- Edit failed service configuration
- Auto-scaling configuration as a top-level resource
- Application instance auto-refresh for secrets/configuration rotation
- Improved logging for troubleshooting
Are these already implemented?
App Runner adding WebSocket support will be a game changer for me. I wonder when this will be available.
Question about the App Runner scale-down event: suppose someone has a fire-and-forget function built into their application that runs beyond the lifetime of the request.
Not a great solution, but what I've done to get around this is to poll an endpoint on the same server that keeps the request open for a minute while the background task is running. Instance remains active and CPU is not throttled down. This likely only works if there's no scaling out.
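The keep-alive workaround described above (polling an endpoint that holds the request open while the background task runs) could look roughly like this; `task_done` is a hypothetical event that the real background job would set on completion:

```python
import threading
from flask import Flask

app = Flask(__name__)
task_done = threading.Event()  # the background job sets this when it finishes

@app.route('/keepalive')
def keepalive():
    # Hold the connection open (well under App Runner's 120 s request limit)
    # so the instance stays active and its CPU is not throttled down.
    finished = task_done.wait(timeout=60)
    return ('done', 200) if finished else ('still running', 202)
```

The client would poll /keepalive repeatedly until it gets a 200. As noted above, this only works while there is no scaling out, since a poll may otherwise land on a different instance than the one running the task.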
Similar issue. We need to run a long-running job on a background thread:
```python
import threading
from flask import Flask

app = Flask(__name__)

@app.route('/start_long_running_job')
def start_long_running_job():
    # Kick off the job on a background thread so the request returns at once
    threading.Thread(target=long_running_job).start()
    return 'Started task', 200
```
Given that the long-running job takes over an hour, it's not feasible to wait for the task to finish before returning.
As a temporary workaround we've gone for the polling approach, but we're probably going to have to migrate off App Runner unless the CPU throttling can be disabled.
The experience setting up and running App Runner has otherwise been great, and the simplicity was something we really wanted compared to other AWS services (e.g. Fargate), so we're really hoping to see this feature.
Very keen for worker services in App Runner. We recently migrated to App Runner purely for plug-and-play deployment of our existing EKS system (which was too complicated for our small team and use case), but have since faced throttling-related issues with our many fire-and-forget endpoints (a .NET C# API using MassTransit and Marten).
Community Note
Tell us about your request App Runner is great for quickly getting your apps running without any hassle, but it seems to support only applications that serve HTTP traffic.
Describe alternatives you've considered At first I assumed I simply wouldn't get auto scaling for worker processes, so I would set my minimum/maximum to the same number (e.g. 4/4) and manually monitor when to scale. Unfortunately this workaround does not seem to work, because the documentation mentions:
The key point here is that the actual instances will be throttled, so my workaround would not work, as I confirmed by opening this question on AWS re:Post:
https://repost.aws/questions/QUEwbKE9jbTCyLn7OnqGhOrA/app-runner-scaling-for-background-service
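For reference, the fixed-size 4/4 workaround described above can be expressed as an App Runner auto-scaling configuration via the AWS CLI (the configuration name here is a placeholder); note that, per the thread, idle instances are still CPU-throttled, so this alone does not make workers viable:

```shell
# Pin the service to exactly 4 instances by setting min == max
aws apprunner create-auto-scaling-configuration \
  --auto-scaling-configuration-name worker-fixed-4 \
  --min-size 4 \
  --max-size 4
```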
Ideally, the solution would be to allow autoscaling based on memory/CPU usage for background workers.
I assume this might not be the easiest change to add, so an alternative that could be easier would be a checkbox ☑ "Prevent CPU Throttling" in the auto scaling section.
To make it more intuitive for users who aren't aware of throttling, there could be a checkbox ☑ "Is Worker Process" that, when checked, automatically checks the ☑ "Prevent CPU Throttling" checkbox, replaces the minimum/maximum settings with a single input for the desired number of instances, and shows a warning like "auto scaling is currently not supported for worker processes".