akshaymankar closed this issue 3 years ago
Hmm, the more I stare at it, the more I am convinced that the handler for a bidirectional streaming client shouldn't be of type:

```haskell
CompressMode -> IO (ConduitT v (GRpcReply r) IO ())
```

Instead, it should be two separate conduits: one for the client stream and another for the server stream. Having two separate conduits allows writing into the client stream and reading from the server stream concurrently.
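As a minimal sketch of why the two-conduit shape helps, suppose the handler hands back a sink for requests and a source of responses (the types and names here are illustrative, not the actual mu-grpc-client API); the two directions can then be driven from separate threads:

```haskell
import Conduit (ConduitT, runConduit, yieldMany, mapM_C, (.|))
import Control.Concurrent.Async (concurrently_)
import Data.Void (Void)

-- Hypothetical: feed the client stream and drain the server stream at the
-- same time, which a single combined conduit cannot do.
talk :: (ConduitT Int Void IO (), ConduitT () String IO ()) -> IO ()
talk (clientSink, serverSource) =
  concurrently_
    (runConduit (yieldMany [1 .. 5] .| clientSink)) -- write client stream
    (runConduit (serverSource .| mapM_C putStrLn))  -- read server stream
```

With the single-conduit type, a client that needs to see a response before producing its next request can deadlock; with two conduits the interleaving is up to the caller.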
Another thing missing from the current implementation is a `GRpcReply ()` which should indicate whether the gRPC server eventually responded with `GRpcOk` or not. Having this second `GRpcReply` raises the question of why we have the first one in the conduit and what other information it conveys. It turns out it only conveys whether the `IncomingEvent` from the server was parsed successfully or not (it contains a `SomeException`, so there could be more failures). Also, the documentation for `IncomingEvent` says that the loop stops when it receives something invalid, so perhaps it is also fine to raise that exception in `IO` or simply return it as the return value of the conduit. So, I think the server stream should just be a stream of `r` and not be wrapped in `GRpcReply`.
In all, I think the type of the handler should be one of these, with a `GRpcReply ()` wrapped in a `TMVar` to signal whenever the server is done:

```haskell
-- Option 1
CompressMode -> IO (ConduitT v Void IO (), ConduitT () r IO (), TMVar (GRpcReply ()))
-- Option 2
CompressMode -> IO (ConduitT v Void IO (), ConduitT () r IO (Maybe SomeException), TMVar (GRpcReply ()))
```
It may also make sense to return an `Async (GRpcReply ())` instead of a `TMVar`, as it would give users the ability to kill the thread in case it gets stuck, but maybe `rawGeneralStream` takes care of that already and I am overthinking this.

I have implemented option 1 to test things; it is trivial to move to option 2. Please let me know if this kind of breaking change is OK.
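To make the `Async` idea concrete, here is a hedged sketch (nothing here is mu-grpc-client API; `waitOrKill` is a made-up helper) of what a caller could do with an `Async (GRpcReply ())` that a `TMVar` does not allow:

```haskell
import Control.Concurrent.Async (Async, cancel, wait)
import System.Timeout (timeout)

-- Hypothetical: wait a bounded time for the final reply; if the call is
-- stuck, cancel the underlying thread instead of blocking forever.
waitOrKill :: Async reply -> IO (Maybe reply)
waitOrKill call = do
  result <- timeout 5000000 (wait call) -- give the server 5 seconds
  case result of
    Nothing -> cancel call >> pure Nothing -- kill the stuck thread
    Just r  -> pure (Just r)
```

A `TMVar` only lets the caller block or poll; the `Async` additionally carries the thread handle needed for cancellation.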
It makes a lot of sense; in fact, for the server we also ended up exposing two conduits. I am fine with the breaking change, since it seems obvious that this functionality was not much in use. The only thing I don't fully understand is why the `TMVar` should be exposed, and whether we could get away with simply detecting when the corresponding conduit is closed.
I see that when `rawGeneralStream` is called, there can still be errors in the `ClientIO` monad, and the result is wrapped in `Either TooMuchConcurrency`. There needs to be some way of communicating these errors to the users of the client. I am actually not sure what the best way to do that would be; `TMVar` seems a bit odd to me too. One option could be to put it in the result of the response conduit, so something like:

```haskell
CompressMode -> IO (ConduitT v Void IO (), ConduitT () r IO (GRpcReply ()))
```
In the end I decided that the `TMVar` was an ugly solution and added the `GRpcReply ()` to the result value of the second conduit.
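With that final shape, the server-side status falls out of draining the response conduit. A minimal sketch, assuming only the type above (the `drainResponses` helper is illustrative, not part of the library):

```haskell
import Conduit (ConduitT, runConduit, mapM_C, fuseUpstream)

-- Hypothetical: print every streamed response, then return the conduit's
-- result value (the final GRpcReply () in the design above).
drainResponses :: Show r => ConduitT () r IO reply -> IO reply
drainResponses serverSource =
  runConduit (serverSource `fuseUpstream` mapM_C print)
```

`fuseUpstream` keeps the upstream conduit's result instead of the downstream's, which is exactly what is needed when the status lives in the source's return value.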
Testing a simple service defined like this:
The bidirectional function in the server is implemented like this:
The client code looks like this:
Running these prints this and gets stuck:
If I change the server so it responds before even consuming a request, things move forward. The server looks like this:
Now things work, but they still get stuck at this:
So, there is another bug in how the closing of the server stream is handled in the client; maybe it is related. Yet another interesting thing to note is that the server responses are always printed after the client finishes streaming (even the response which doesn't depend on the request). This might also cause problems when a client starts depending on server responses, but I haven't tested that yet.