Open olsavmic opened 3 years ago
@olsavmic I think it's a more general issue than DB transactions, since it will affect any resources with reference counting. Also, it's a potential DDoS vector: if you can find a request that fails fast but provokes a lot of memory allocation referenced by long-running resolvers, you can run out of memory on the server.
But we need to be extra careful with `execute` code to not affect performance or correctness. So let's postpone this fix until we finish the TS conversion and release it in the upcoming 16.0.0.
Good point, thank you!
Hi @IvanGoncharov, I see this issue in the 16.0.0-alpha.1 milestone, which has already been released. Is it still planned for v16, or did you run into some issues with this change?
Thanks a lot! :)
This feature is useful when the tasks are not dependent on each other.
@hamzahamidi Can you elaborate on this? I can't think of such a situation, as I'm not proposing to run these resolvers in sequence, but rather just to wait for all resolvers to finish before sending a response, which allows for proper resource cleanup and prevents the possible DDoS @IvanGoncharov mentioned.
@IvanGoncharov can we consider this for v16?
Released in graphql-executor 0.0.7, see the links above.
We ran into this issue and it was causing problems with our dependency injection, as we didn't expect graphql to continue processing after returning an error result.
This is partially handled by https://github.com/graphql/graphql-js/pull/4267
The execution tree will end, although the response will be returned first.
Hi, I recently came across this issue while trying to create a DB transaction per request.
The reason for using a DB transaction per request (which means 1 connection per client) is to keep the returned data consistent (isolation level `READ COMMITTED`), as with increased traffic some inconsistencies started to occur on certain queries. The second reason is having more control over the DB connections to support load balancing with a dynamic number of read replicas (as the only available solution unfortunately does not support connection pooling). However, I suppose this can be solved with some effort.
The problem is that when one of the resolvers fails with an error (which is not the common case, but happens sometimes), the other resolvers keep running. However, I need to release the DB connection before I send the response, which sometimes happens before the rest of the resolvers finish (most of them are data loaders, which makes the problem more obvious, as the default behaviour is to wait for the next tick until the batch operation runs).
The problem could be resolved by using `Promise.allSettled` instead of `Promise.all`. I can't come up with any problem this change could cause except a longer time until the response in case of an error. Can you please correct me if I'm mistaken, or consider this change? Or maybe transactional processing of resolvers is a bad idea in general (although I don't see any reason against it except the performance gain from using multiple connections per request)?
Thanks a lot!