Open · mrx8 opened 1 year ago
Can you please share your test code?
My code is at:
https://github.com/mrx8/grpc-node-issue-2385
Please run npm install, then start the server with node server.js, and then run node call-stream-hello.js.
What the code does:
The server produces a lot of messages. The client consumes only the first 10 messages and then terminates prematurely.
My expectations:
The server should end the stream. It receives a "cancelled" event, but the Readable stream on the server side keeps pushing messages until the buffer is full and then hangs...
I expect that the following line is executed: https://github.com/mrx8/grpc-node-issue-2385/blob/main/server.js#L35
That line is executed if all goes well (i.e. the client consumes all messages without an error).
I ran your test code. I can run the client script repeatedly and each one will get a successful response. Since the server is still responsive to requests, I would not say that it "hangs".
The "cancelled" event is there to inform the server application that it should stop processing that request. I suggest stopping the stream that is producing responses when you see that event.
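A minimal sketch of that suggestion, assuming a plain @grpc/grpc-js server handler (streamHello and generateMessages are placeholder names, not from the linked repo):

```js
const { Readable } = require('node:stream');

// Placeholder handler for a server-streaming method.
function streamHello(call) {
  const source = Readable.from(generateMessages());

  // Stop producing as soon as the client cancels, instead of pushing into a
  // buffer nobody will ever read.
  call.on('cancelled', () => source.destroy());

  source.on('data', (msg) => {
    // Respect backpressure from the gRPC call.
    if (!call.write(msg)) source.pause();
  });
  call.on('drain', () => source.resume());
  source.on('end', () => call.end());
}
```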
By "hang" I meant that the stream hangs: it will never finish successfully. However, I noticed that at least there is apparently no memory leak (I tested it for an hour in an endless loop).
If this is clearly how it is intended to work, then I will have to wrap every readable stream with a cancel mechanism before I use it with grpc-js, and even then I cannot get my generators to execute code after their for-await loop.
It would be far easier for developers if these client/server/bidi streams were ordinary streams with their inherent semantics, so that we could react to the usual stream events. But anyhow, if it works as designed, then I have to find a solution myself.
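One way to still run the "after the loop" code on cancellation is to put it in a finally block inside the generator. A minimal sketch, assuming the response stream is built with Readable.from() and a Node version that calls the generator's return() when the stream is destroyed:

```js
// Placeholder generator; "source" stands for whatever async iterable feeds it.
async function* generateMessages(source) {
  try {
    for await (const item of source) {
      yield { text: item };
    }
  } finally {
    // Runs whether the loop finishes normally or the stream is destroyed
    // because of a cancelled call; plain code placed after the loop would not.
    console.log('generator done');
  }
}
```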
I noticed that when breaking out of (or returning from) a for await loop over a gRPC stream on the client side, the server does not get notified. I can call stream.cancel() myself, OK, but I have customers over whom I have no control. If they forget to call cancel() in the right places, this would hamper the server. I personally thought it was implemented as a stream over a stream, the latter with "autoDestroy: false". This handling of premature-end scenarios feels a bit alien to me :)
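For illustration, a client-side sketch of the explicit-cancel workaround, assuming @grpc/grpc-js and a placeholder server-streaming method streamHello:

```js
async function readFirstTen(client, request) {
  const call = client.streamHello(request);
  // cancel() makes the call emit a CANCELLED error; swallow it so the process
  // does not crash on an unhandled 'error' event after we bail out.
  call.on('error', () => {});

  let count = 0;
  let completed = false;
  try {
    for await (const msg of call) {
      console.log('received', msg);
      if (++count >= 10) break;      // we only want the first 10 messages
    }
    completed = true;                // the server ended the stream itself
  } finally {
    if (!completed) call.cancel();   // notify the server that we are done early
  }
}
```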
The intention in the gRPC API is that cancellation is for exceptional situations. In a server streaming call, the initial request should indicate what messages the client wants the server to send. If you need the client to be able to decide what messages it wants after it has started processing the stream, the right way to do that is with a bidirectional stream.
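A rough sketch of that bidirectional alternative (method names and message shapes are made up here): the client asks for batches as it goes and closes its side when it has enough, so the server can end the call cleanly instead of being cancelled.

```js
function consumeWithBidi(client, batchesWanted = 3) {
  const call = client.streamHelloBidi();
  call.on('error', (e) => console.error('stream error:', e.message));

  let received = 0;
  let batches = 1;
  call.write({ want: 10 });            // initial request: send me 10 messages

  call.on('data', (msg) => {
    received++;
    if (received % 10 === 0) {         // a batch is complete
      if (batches < batchesWanted) {
        batches++;
        call.write({ want: 10 });      // decide mid-stream to ask for more
      } else {
        call.end();                    // enough; close our side so the server can finish
      }
    }
  });
  call.on('end', () => console.log(`done after ${received} messages`));
}
```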
With that being said, I may be able to improve the client-side behavior here. If you can share the code that demonstrates "when breaking/return a for await-loop over a grpc-stream on client-side, the server does not get notified", then I can take a look.
Problem description
I have a server-streaming service which generates a readable stream of about 100,000 messages and sets it as ctx.res. The client consumes only the first message and then calls process.exit(). This triggers a cancel event on ctx.call on the server side, but the stream in ctx.res hangs forever.
Reproduction steps
Create a simple message with one field. Create an async generator which yields a never-ending amount of these messages. Make a stream from it with stream.Readable.from() and set that as ctx.res. On the client side, make the call, consume the resulting stream with for await, and call process.exit() after you have successfully received the first message.
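A condensed sketch of those steps (stub and handler names are placeholders; the server side uses the "set ctx.res to a Readable" pattern described above):

```js
const { Readable } = require('node:stream');

// --- server handler ---
async function* generate() {
  for (let i = 0; ; i++) {
    yield { text: `message ${i}` };    // effectively endless
  }
}
async function streamHello(ctx) {
  ctx.res = Readable.from(generate());
}

// --- client ---
async function main(client) {
  const call = client.streamHello({});
  for await (const msg of call) {
    console.log('got first message:', msg);
    process.exit(0);                   // exit right after the first message
  }
}
```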
Environment
Additional context
Log on server side:
Log on client side: