Closed CooperWolfe closed 2 years ago
When my application launches, I immediately see the following logs, with no RPC calls made:
2022-05-31T12:39:50-0400 debug logging : connectivity_state=idle grpc_connection_id=0ABFF424-2158-4657-8D7A-E0B0F4BC34D7/0 shutdown.mode=forceful shutting down connection
2022-05-31T12:39:50-0400 debug logging : grpc_connection_id=0ABFF424-2158-4657-8D7A-E0B0F4BC34D7/0 new_state=shutdown old_state=idle connectivity state change
It seems to me that the connection is already in the process of closing (the first log is "shutting down connection") when you attempt to call RPC A. Because it shuts down and is replaced by a new connection, we see a different connection ID.
The "vending multiplexer future" log does not mean that the RPC has started; it means that the connection is attempting to vend a stream for the RPC. This will always fail because the connection is shutting down.
Does the second log not indicate the completion of that state change? I ask because if I never send a request those are the only two logs I see. I can then send RPC A minutes later to no avail.
Also, I can successfully send B and C and then unsuccessfully send A. Once again in this scenario, A logs "vending multiplexer future" with the `grpc_connection_id` that shut down, while B and C use the working one.
Yes, you're right, the second log indicates that. The third log also tells us that the connection is shut down (`connectivity_state=shutdown`) at that point. Shutdown is a terminal state, so B and C must be running on a connection provided by an entirely different object.
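The semantics the reply relies on can be sketched as a tiny state machine. This is a hypothetical pure-Swift illustration, not the grpc-swift types: once a connection reaches `shutdown`, no further transitions are possible, so later RPCs need a connection from a different object (and hence a different `grpc_connection_id`).

```swift
// Hypothetical sketch of gRPC connectivity states; `shutdown` is
// terminal, so a connection that reaches it can never serve another RPC.
enum ConnectivityState {
    case idle, connecting, ready, transientFailure, shutdown

    func canTransition(to next: ConnectivityState) -> Bool {
        // Every state may change except shutdown, which is terminal.
        self != .shutdown
    }
}

print(ConnectivityState.idle.canTransition(to: .shutdown))   // true
print(ConnectivityState.shutdown.canTransition(to: .ready))  // false
```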
Is the connection being explicitly shut down by your application?
It apparently was. It turns out I was registering two instances of the class that maintained the client connection in a dependency container. The first one, which was used during the construction of client A, shut down the client connection during its `deinit`, which was triggered by the registration of the second one.
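The root cause above can be reproduced in miniature. The names here are hypothetical (not the grpc-swift or dependency-container API): a wrapper closes its connection in `deinit`, and overwriting its registration in a naive container releases the first wrapper, which tears down the connection both registrations were sharing.

```swift
// Hypothetical stand-in for the real client connection.
final class FakeConnection {
    private(set) var isShutdown = false
    func close() { isShutdown = true }
}

// Wrapper that shuts its connection down when deallocated,
// mirroring the `deinit` behaviour described above.
final class ConnectionHolder {
    let connection: FakeConnection
    init(connection: FakeConnection) { self.connection = connection }
    deinit { connection.close() }
}

// Naive dependency container: re-registering a key drops the old instance.
final class Container {
    private var registrations: [String: AnyObject] = [:]
    func register(_ key: String, _ value: AnyObject) {
        registrations[key] = value
    }
}

let shared = FakeConnection()
let container = Container()

container.register("holder", ConnectionHolder(connection: shared))
// The second registration replaces the first; ARC deallocates the first
// holder, and its deinit closes the connection both were sharing.
container.register("holder", ConnectionHolder(connection: shared))

print(shared.isShutdown)  // true
```

Any RPC routed through the first holder's connection after this point behaves exactly like RPC A in the logs above: the connection is already in the terminal shutdown state.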
Thank you very much for your help @glbrntt
Describe the bug
One RPC, sharing the same `ClientConnection` as others, shows a different `grpc_connection_id` in its logs than the others do.

To reproduce
When my application launches, I immediately see the following logs, with no RPC calls made:
I then make an RPC call (RPC call A), which is followed by this log:
At this point, the server still shows no signs of having been contacted.
It's worth mentioning that this RPC call (RPC call A) lives in a service alongside another service (with RPC call B) on the same server. I also have another service (with RPC call C) living on another server. All these services are contacted through a gateway and I am able to make all three RPC calls as expected via BloomRPC.
RPC calls B and C are both working as expected in Swift, with logs similar to the following:
What is particularly interesting is that BOTH calls have the same `grpc_connection_id` each time I make them. However, every time RPC call A is executed, it uses the `grpc_connection_id` that was mysteriously forcefully shut down at the start of the application. All three calls are `BidirectionalStreamingCall`s. All three clients share the same `ClientConnection` instance and `CallOptions`. All three calls are made on the same `DispatchQueue`.

Expected behaviour
I would expect RPC call A to have the same `grpc_connection_id` as B and C. I would also expect no forceful shutdown at the start of the application; mostly I just want to get the call working.

Additional information
N/A