thenonameguy opened this issue 4 years ago
In high-concurrency cases this prevents deadlock situations, so I think it's good to keep as the default, but I'm OK with adding a configuration option to toggle it off. Do you want to work on making this? We could call it `::pp/disable-pending-check-safeguard? true`.
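Something like this, roughly. To be clear, the key below doesn't exist yet; the rest is just the usual Pathom 2 parallel-parser template with a placeholder resolver registry:

```clojure
(ns my-app.parser
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]
            [com.wsscode.pathom.parser :as pp]))

;; Placeholder registry; your connect resolvers go here.
(def my-resolvers [])

(def parser
  (p/parallel-parser
    {::p/env     {::p/reader [p/map-reader
                              pc/parallel-reader
                              pc/open-ident-reader]
                  ;; Proposed opt-out, not implemented yet:
                  ::pp/disable-pending-check-safeguard? true}
     ::p/plugins [(pc/connect-plugin {::pc/register my-resolvers})
                  p/error-handler-plugin]}))
```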
About the re-architecture: this problem is likely to go away with the new implementation. That mechanism was needed because of the way the old parser worked, but the new one can consider all siblings, so the "waiting" step may go away with it. I'm still working this part out, but it's good to bring it up to keep in mind.
I'm okay with `reader3` / a new parallel parser / something similar solving these problems in the future; I just wanted to put this on the radar for you and others. I would rather not add an option that can make the parser lock up depending on the query. This should be solved generally. The other concurrency-related timeout safeguards have the same code smell as this one, and I hope those get addressed too.
In the meantime, is the async or serial parser not a good option in case you have just small queries? Not sure if this is just one specific case of your scenario, but it's a good option to consider.
Our application is async end to end: async web server, async Ring handler, async remotes. Writing an `async/thread` block that blocks on the parser and puts the result in a channel is easy, but it goes against the point of all the work we did to make our application scale and use resources better.
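For clarity, the bridge I mean is roughly this sketch, where `parser`, `env` and `query` stand in for our real parser and inputs:

```clojure
(ns my-app.bridge
  (:require [clojure.core.async :as async]))

;; Run the blocking (serial) parser on a core.async thread and hand the result
;; back on a channel. Easy to write, but it just hides the blocking call.
(defn parse-on-thread
  "Returns a channel that will receive the parser's result."
  [parser env query]
  (async/thread
    (parser env query)))
```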
I mean using the `async-parser` instead of the `parallel-parser`. For simple queries the `async-parser` has much less overhead, so unless you are dealing with big queries that can be parallelized, the `async-parser` may be faster than the `parallel-parser`. I wonder if that would be good for your scenario.
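For reference, a minimal `async-parser` setup looks roughly like this (placeholder resolver registry, using the standard `async-reader2` chain):

```clojure
(ns my-app.async-parser
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]))

;; Placeholder registry; your connect resolvers go here.
(def my-resolvers [])

(def parser
  (p/async-parser
    {::p/env     {::p/reader [p/map-reader
                              pc/async-reader2
                              pc/open-ident-reader]}
     ::p/plugins [(pc/connect-plugin {::pc/register my-resolvers})
                  p/error-handler-plugin]}))
```

Calling the parser still returns a core.async channel, e.g. `(async/<!! (parser {} [:some/attribute]))` with a placeholder attribute. Since it processes attributes sequentially, the parallel parser's pending-check machinery shouldn't come into play.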
Thanks! `async-parser` with `reader3` has done the trick for now. It shaved 1ms off the p99 as expected, and it also has much nicer GC properties since it creates fewer intermediate objects/watchers/etc.
`reader3` is still very alpha; if you are doing production releases I suggest you stick with `async-reader2` for now.
In my use case I'm only calling a single async resolver that returns in 2ms, so this 1ms sleep adds 50% of processing overhead that could possibly be avoided. With many resolvers this also adds up, while also adding a lot of core.async go-block green-thread context switches.
The line in question: https://github.com/wilkerlucio/pathom/blob/c30ccb4d875ec2adbd157905d84e53e9d72f5129/src/com/wsscode/pathom/parser.cljc#L288
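To make the scenario concrete, it is roughly a single resolver like this (made-up name; the real one calls a remote service):

```clojure
(ns my-app.resolvers
  (:require [clojure.core.async :as async]
            [com.wsscode.pathom.connect :as pc]))

;; Stand-in for the real resolver: a single async resolver that answers in
;; ~2ms, so the parallel parser's 1ms pending-check sleep adds roughly 50%
;; on top of it.
(pc/defresolver user-by-id [_env {:user/keys [id]}]
  {::pc/input  #{:user/id}
   ::pc/output [:user/name]}
  (async/go
    (async/<! (async/timeout 2)) ; stands in for the ~2ms remote call
    {:user/name (str "user-" id)}))
```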