aaronlehmann closed this 2 years ago
I think concurrency limits make sense, though not all async operations relate to the gRPC client (only gateway client calls like ImageResolveConfig). I guess keeping it at the codegen.Register
level keeps things simple, but perhaps this could also just be fixed for the image resolver.
I'm fine either way, but I thought it might make sense to do this at the codegen
level because we probably don't want unbounded concurrency even for operations that aren't causing this immediate problem. Also, I don't know if there are things besides the image resolver that might invoke solves. But I think just limiting concurrency of the image resolver will address my current problem, so I'm happy to do that if you prefer.
OK, sounds good. Happy to merge this in, but maybe increase the default limit a bit.
Thanks. Does 20 sound good?
The original approach turned out to cause deadlocks where all calls to SetAsync
would be blocked waiting for either the semaphore or for a lazyValue
to be populated (by a goroutine waiting for the semaphore). I've moved the concurrency limiting to solver.Build
instead. PTAL.
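For illustration, here is a minimal, self-contained Go sketch of the deadlock pattern described above. The `lazyValue` and semaphore here are simplified stand-ins, not the actual codegen code: one caller holds a semaphore slot while waiting on a lazy value, and the goroutine that would populate that value is itself stuck waiting for a slot.

```go
// Illustrative sketch only: a bounded semaphore plus a lazily populated
// value can deadlock when the value's producer is queued behind the
// same semaphore that its consumers are holding.
package main

import (
	"fmt"
	"sync"
	"time"
)

// lazyValue is a stand-in for the lazyValue mentioned above: populated
// exactly once, with other goroutines waiting for it.
type lazyValue struct {
	once  sync.Once
	done  chan struct{}
	value string
}

func newLazyValue() *lazyValue { return &lazyValue{done: make(chan struct{})} }

func (l *lazyValue) set(v string) {
	l.once.Do(func() {
		l.value = v
		close(l.done)
	})
}

func (l *lazyValue) wait() string {
	<-l.done
	return l.value
}

func main() {
	// A single-slot semaphore makes the deadlock easy to see; the same
	// thing can happen with any finite limit under enough load.
	sem := make(chan struct{}, 1)
	lv := newLazyValue()

	var wg sync.WaitGroup

	// Caller A: acquires a slot, then blocks waiting for lv.
	wg.Add(1)
	go func() {
		defer wg.Done()
		sem <- struct{}{}        // acquire
		defer func() { <-sem }() // release (never reached)
		v := lv.wait()           // blocks forever: the producer is stuck below
		fmt.Println("got:", v)
	}()

	// Caller B: would populate lv, but first needs a slot that A will
	// never release. Neither goroutine can make progress.
	wg.Add(1)
	go func() {
		defer wg.Done()
		time.Sleep(10 * time.Millisecond) // let A grab the slot first
		sem <- struct{}{}                 // blocks forever
		defer func() { <-sem }()
		lv.set("resolved image config")
	}()

	wg.Wait() // the Go runtime reports: all goroutines are asleep - deadlock!
}
```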
I think that with some complex build graphs, the new asynchronous resolution can produce very aggressive concurrency levels. While I haven't confirmed this directly, I suspect it can cause the gRPC client to hit the HTTP/2 maximum concurrent stream limit, which causes some RPCs to be queued and time out (presumably in the BuildKit client code) rather than complete successfully.
To avoid this problem, set a concurrency limit at the CodeGen level.
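A rough sketch of the kind of limit being proposed, assuming a simple weighted semaphore wrapped around the async gateway calls. The names here (`ConcurrencyLimiter`, `Do`, the image-config placeholder) are hypothetical and not the project's actual API:

```go
// Sketch of a CodeGen-level concurrency limit: cap how many async
// operations (e.g. image config resolutions) run at once.
package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/semaphore"
)

// ConcurrencyLimiter caps the number of concurrently running operations.
type ConcurrencyLimiter struct {
	sem *semaphore.Weighted
}

func NewConcurrencyLimiter(limit int64) *ConcurrencyLimiter {
	return &ConcurrencyLimiter{sem: semaphore.NewWeighted(limit)}
}

// Do runs fn once a slot is available and releases the slot when fn returns.
func (c *ConcurrencyLimiter) Do(ctx context.Context, fn func(context.Context) error) error {
	if err := c.sem.Acquire(ctx, 1); err != nil {
		return err
	}
	defer c.sem.Release(1)
	return fn(ctx)
}

func main() {
	// Default of 20 concurrent operations, as discussed above.
	limiter := NewConcurrencyLimiter(20)

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		i := i
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = limiter.Do(context.Background(), func(ctx context.Context) error {
				// Placeholder for an async gateway call such as resolving
				// an image config; at most 20 of these run at a time.
				fmt.Println("resolving image config for target", i)
				return nil
			})
		}()
	}
	wg.Wait()
}
```

Note that the slot is acquired around the unit of work itself rather than around code that waits on shared lazy values, which is what avoids the deadlock described earlier in the thread.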