Merovius opened this issue 8 years ago
This was sadly deprioritized before I could get it finished. There's a bit-rotting prototype of all the client-side API changes and a shim in the existing transport here. Aside from one minor thing I'd like to change in the design (namely the use of `Attributes` instead of `[]interface{}` for passing opaque parameters around), if someone wanted to pick this up, we would be willing to do reviews. The biggest remaining change from the prototype is that I don't want the transport to have the shim layer, but instead to directly expose the intended API.
EDIT: Also, the prototype does not have the server-side implementation done, which could be quite complex.
Thanks for this discussion. I would actually like to give this a strong +1, though our use case might be different.
We leverage gRPC a lot in the Thanos project and it helps enormously. The thing is that Thanos, similar to Google Monarch (if you are familiar with it), uses a hierarchical node API strategy with gRPC in between.
Let's take one gRPC service as an example:
```proto
// Store represents the API against an instance that stores XOR-encoded values
// with label set metadata (e.g. Prometheus metrics).
service Store {
  rpc Info(InfoRequest) returns (InfoResponse);
  rpc Series(SeriesRequest) returns (stream SeriesResponse);
  rpc LabelNames(LabelNamesRequest) returns (LabelNamesResponse);
  rpc LabelValues(LabelValuesRequest) returns (LabelValuesResponse);
}
```
We have many implementations of this service, but one of them is a simple "fanout" (called proxy) that fans requests out, merges the responses, and proxies them back to the caller.
Long story short, we have many cases where one microservice wants to either:
I am having a hard time understanding why there is no existing logic for this in the currently generated gRPC code. An in-process transport is one thing, but I often don't care about interceptors, and sometimes not even about headers, trailers, and metadata, when I invoke the server method in the same process. So generated ServerAsClient converter code would be easy to create, no?
What we use right now is essentially a hand-written converter of this kind (see the sketch below).
Can't we just ensure the Go gRPC generator emits such a well-tested, benchmarked converter for each method? (It's trivial for unary RPCs, a bit more complex for streaming.) WDYT? 🤗
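To make the idea concrete, here is a minimal, self-contained sketch of what a hand-written ServerAsClient-style adapter for the unary Info RPC could look like. The types below are simplified stand-ins, not the real generated storepb code; only the shapes mirror what protoc-gen-go-grpc produces.

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
)

// Simplified stand-ins for the generated request/response messages and the
// generated client/server interfaces of the Store service above.
type InfoRequest struct{}
type InfoResponse struct{ Name string }

type StoreServer interface {
	Info(ctx context.Context, req *InfoRequest) (*InfoResponse, error)
}

type StoreClient interface {
	Info(ctx context.Context, req *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error)
}

// storeServerAsClient adapts a StoreServer to the StoreClient interface by
// invoking the server method directly in-process. Note what gets skipped:
// interceptors, metadata, headers, and trailers.
type storeServerAsClient struct {
	srv StoreServer
}

func (c storeServerAsClient) Info(ctx context.Context, req *InfoRequest, _ ...grpc.CallOption) (*InfoResponse, error) {
	return c.srv.Info(ctx, req)
}

// localStore is a trivial server implementation used to show the adapter in use.
type localStore struct{}

func (localStore) Info(_ context.Context, _ *InfoRequest) (*InfoResponse, error) {
	return &InfoResponse{Name: "local-store"}, nil
}

func main() {
	var client StoreClient = storeServerAsClient{srv: localStore{}}
	resp, err := client.Info(context.Background(), &InfoRequest{})
	fmt.Println(resp.Name, err)
}
```

A code generator could emit exactly this kind of adapter per method; streaming RPCs would additionally need a bridge between the generated client-stream and server-stream interfaces, which is the "bit more complex" part mentioned above.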
EDIT: Testing is another use case we rely on a lot as well (thanks @glerchundi for the reminder).
+1 to what @bwplotka is proposing. We're doing exactly that to cover two different use cases:
Thanks for raising 😊
@bwplotka did you give https://github.com/fullstorydev/grpchan/tree/master/inprocgrpc (mentioned earlier in this thread) a try? It's not exactly what you need, but it would simplify things, since you could then just bind channels, or maybe even communicate directly between the far ends, bypassing the proxy part entirely. I did not spend much time reading through your current code, so I may be missing the idea of your architecture, but I've been very successful using grpchan (kudos @jhump!) and thought you might find it as helpful as I did.
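For context, a rough sketch of how inprocgrpc is typically wired up, assuming generated code from a recent protoc-gen-go-grpc (where RegisterXServer takes a grpc.ServiceRegistrar and NewXClient takes a grpc.ClientConnInterface). The storepb import path and the server implementation are hypothetical placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/fullstorydev/grpchan/inprocgrpc"

	// Hypothetical import path; substitute the real generated package
	// for the Store service shown earlier in the thread.
	storepb "example.com/gen/storepb"
)

func main() {
	// inprocgrpc.Channel is an in-process channel: it acts as both a
	// grpc.ServiceRegistrar (server side) and a grpc.ClientConnInterface
	// (client side), passing messages in memory without sockets.
	ch := &inprocgrpc.Channel{}

	// Your real implementation of the Store service goes here.
	var impl storepb.StoreServer

	// Register the server implementation directly on the channel.
	storepb.RegisterStoreServer(ch, impl)

	// A normal generated client, backed by the in-process channel.
	client := storepb.NewStoreClient(ch)

	resp, err := client.Info(context.Background(), &storepb.InfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("info: %+v", resp)
}
```

The attraction, as noted further down in the thread, is that the channel passes (cloned) messages directly instead of marshaling them to bytes, while still carrying metadata and trailers.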
It's been a while since I looked at or thought about this issue, but it popped up in my notifications so I got caught up. This isn't necessarily a solution, but it is interesting and could be learned from. I stumbled across the https://pkg.go.dev/google.golang.org/grpc/test/bufconn package this week and am using it in testing (as intended, based on the package name). It enables your client and service to run in a single process using an in-memory transport. Works great (for testing at least). You can see my use of it here: https://github.com/textileio/textile/blob/asutula/fil-rewards-bookkeeping/api/filrewardsd/service/service_test.go#L411
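For reference, a minimal sketch of the bufconn pattern (independent of the linked test code); the RegisterFooServer / NewFooClient calls stand in for whatever generated service you use and are hypothetical here:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/test/bufconn"
)

func main() {
	// An in-memory listener: the full gRPC stack (HTTP/2 framing, marshaling)
	// still runs, but no real sockets or kernel round-trips are involved.
	lis := bufconn.Listen(1 << 20)

	srv := grpc.NewServer()
	// pb.RegisterFooServer(srv, &fooImpl{}) // hypothetical generated registration
	go func() {
		if err := srv.Serve(lis); err != nil {
			log.Printf("server exited: %v", err)
		}
	}()
	defer srv.Stop()

	// Dial through the in-memory listener instead of the network.
	conn, err := grpc.DialContext(context.Background(), "bufnet",
		grpc.WithContextDialer(func(context.Context, string) (net.Conn, error) {
			return lis.Dial()
		}),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// client := pb.NewFooClient(conn) // hypothetical generated client
	log.Println("bufconn client ready, target:", conn.Target())
}
```

This keeps tests self-contained without opening real ports, though the full marshaling path still runs, which is part of the distinction drawn in the next comment.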
Thanks all for your responses.
`grpchan` looks quite solid: https://github.com/fullstorydev/grpchan/blob/master/inprocgrpc/in_process.go
I think it strikes a good balance between not going down to bytes (it does not marshal, it just passes messages directly) and still providing solid transport of all trailers and metadata 🤗 I love it at first glance; we will take a look. In our case overhead matters and we care about each allocation here, so let's see. (: Thank you!
NOTE: I doubt an in-process network for light e2e-test purposes is a sensible request. For that, you want something like a virtual `net.Conn` that allows gRPC communication using in-process memory. Anything lighter and without marshaling might not be ... an e2e test (: I would question whether that is even needed, TBH. You can use thousands of extra sockets on CI systems, e.g. free GitHub Actions, so I don't see a problem with starting a full gRPC server in a separate goroutine, unless I am missing something 🤔
Say I want, for example
So, I want to be able to implement a FooServer and then connect to it from the same process. Of course, I could just listen on localhost and connect to that or something like that, but then I'd pay the penalty of serializing and deserializing everything and running the bytes through the kernel (which is significant when it's in the path of talking to your database, for example).
Instead, it would be cool if grpc allowed me to get a "local" connection, like `func LocalPair() (*grpc.Client, *grpc.Server)`, which doesn't use a network at all and just directly passes the `proto.Message`s around. I'd be willing to try to implement that myself, but first I wanted to ask if this is a use case you'd be willing to support.