Closed: ahungry closed this issue 1 year ago

If I want to build an endpoint that aggregates data from 3 different data sources (let's say, HTTP APIs) that do not depend on each other, would the concurrency model used with Mummy allow me to make these requests at the same time, or would I need to perform them sequentially?

In a language like Node.js or Python, I would use the equivalent of Promise.all() or asyncio.gather(), but both of these are async/await style calls in their respective languages.
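For context, the closest standard-library analogue in Nim that I'm aware of looks something like this (a minimal sketch using std/asyncdispatch and std/httpclient, separate from Mummy's threaded model; the URLs are placeholders):

```nim
import std/[asyncdispatch, httpclient]

proc fetch(url: string): Future[string] {.async.} =
  # Each in-flight request gets its own client instance.
  let client = newAsyncHttpClient()
  try:
    result = await client.getContent(url)
  finally:
    client.close()

# Start all three requests before awaiting any of them, the rough
# equivalent of Promise.all() / asyncio.gather(). `all` completes
# once every future has completed.
let bodies = waitFor all(
  fetch("https://example.com/a"),
  fetch("https://example.com/b"),
  fetch("https://example.com/c"))

echo bodies.len
```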
Right now I suggest just doing them sequentially. This is not because it is impossible to make them run concurrently, but simply because I have not built any libs/tools to enable that yet.
The most likely solution I see right now is to add a proc to https://github.com/guzba/curly that makes use of libcurl multi requests. Then many requests could be made at once without any additional thread usage.
There are some implementation details to sort out: if I use libcurl handles from the pool, I'd need to take as many as needed (or some maximum) and would not be able to return them until all of the requests finished, which could cause issues if any request in the multi batch takes a long time to complete. I'll need to think about how I want this to work at some point.
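To make the idea concrete, the kind of API I'm imagining would look roughly like this (purely a hypothetical sketch; RequestBatch and makeRequests do not exist in Curly today, and the names and shapes here are assumptions):

```nim
import curly  # hypothetical future version with multi support

let curl = newCurly()

# Queue up several independent requests (hypothetical batch type).
var batch: RequestBatch
batch.get("https://example.com/a")
batch.get("https://example.com/b")
batch.get("https://example.com/c")

# Drive all of them through a single libcurl multi handle at once,
# getting back each response paired with an error string
# (empty on success).
for (response, error) in curl.makeRequests(batch):
  if error == "":
    echo response.code
  else:
    echo "request failed: ", error
```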
If these are fast requests and there are only a couple, just doing them sequentially and moving on is by far the best idea.
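For the sequential case, an endpoint handler could look something like this (a sketch combining Mummy and Curly; the upstream URLs and the naive JSON concatenation are just for illustration):

```nim
import mummy, mummy/routers, curly

# Curly's pooled client is safe to share across Mummy's worker threads.
let curl = newCurly()

proc aggregateHandler(request: Request) =
  # Three independent upstream calls, made one after another.
  # Mummy handlers run on worker threads, so blocking here only
  # occupies this one worker.
  let a = curl.get("https://example.com/service-a")
  let b = curl.get("https://example.com/service-b")
  let c = curl.get("https://example.com/service-c")

  var headers: HttpHeaders
  headers["Content-Type"] = "application/json"
  request.respond(200, headers,
    "[" & a.body & "," & b.body & "," & c.body & "]")

var router: Router
router.get("/aggregate", aggregateHandler)

let server = newServer(router)
server.serve(Port(8080))
```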
Thanks @guzba (for the response here, and for what looks like a lot of Nim packages you've provided to the world)!
The example was a bit contrived; it could be 3, 10, 20, or even 100 aggregated API calls, and the difference between sequential and concurrent execution is vast (at, say, 50 ms per call, 20 sequential calls take about a second, while made concurrently they finish in roughly the time of the slowest call).
It sounds like there is no real way to accomplish this at the moment (at least while following Mummy idioms, as it would seem odd to pull async machinery into an explicitly non-async web server), but I look forward to the solution you proposed!
If you are doing that many separate HTTP RPC calls in a single API endpoint handler, I'd suggest that is a bit unusual and, unfortunately, not a use case that aligns with what Mummy is really focused on.
I know how to improve support for making many HTTP requests; however, nothing will prevent such an endpoint from being trivially vulnerable to abuse, which would be a concern for me.
Closing this since there's no action to take in Mummy specifically; I'm creating an issue on Curly.
The actual use case (which I was hoping to build a proof of concept for at my work, as an alternative to Python + FastAPI) would be in a heavy microservice architecture, where a middleware/backend-for-frontend serves data tailored to a page or endpoint, and that data happens to be composed of responses from many different microservices.
Rate throttling and abuse potential are really concerns unrelated to the web server/framework itself, I think, as those things tend to be locked down via other mechanisms (authentication, authorization, WAF-level rules).
Thanks for opening a follow-up on your HTTP client lib!
Ah yeah, if you're going heavy on microservices then Mummy is probably not a good fit. It's not an approach to server design I use, so I haven't put in the work to facilitate it better. There are other options out there though, so hopefully you can find something that feels cozy.