Closed · ravh-ciena closed this issue 2 years ago
gRPC handles serialization and deserialization directly, and you should not need to do so within the application. The tunnel replaces the raw TCP transport by embedding it within another gRPC stream; there should be no need to modify any of the semantics of gNMI. This was the primary motivation for using a generalized tunnel approach. Our examples show that the tunnel is general enough to carry both gRPC and SSH, and it should work for any TCP-based protocol given a willingness to write the glue.
The tunnel.proto was designed so that other languages could be supported as either the tunnel client or the tunnel server. We published a complete reference implementation in Go with several examples to demonstrate end-to-end use. We welcome community contributions for reference implementations in other languages.
Thanks, Carl. I agree with the view and the motivation behind making the 'data' exchanged inside the tunnel general enough to carry any TCP-based protocol. However, that holds only when grpctunnel is used purely as a tunnel for exchanging raw bytes over a new session. It poses a significant limitation for applications built on top of the tunnel server and tunnel client: those applications have no common interface for interpreting the request/response structure and must parse everything as raw bytes. Lacking a common reference definition, such applications would require another .proto, defined on top of grpctunnel, in which the respective request/response structures and formats are agreed.
Specifically, consider using grpctunnel for dial-out use cases where one side of the tunnel is a gNMI client and the other side is a gNMI server: the RegisterOp stream is used for session setup and the Tunnel service for streaming a specific type of request/response as data within the tunnel. This poses the limitation of having no vendor-agnostic definition of the request and response. It leaves the request/response types to an out-of-band understanding that must be established by some means other than tunnel.proto itself. In the tunnel examples, all such information is read from the respective config files, and strings are used as the bytes/data sent and received over the tunnel. A snapshot is attached to illustrate using tunnel.proto for end-to-end dial-out use cases.
To summarize the limitation I see with using grpctunnel for dial-out with only bytes as data:
To illustrate this further, the proposal below to change tunnel.proto may help visualize the limitations and one possible way to overcome them:
```proto
import "github.com/openconfig/gnmi/proto/gnmi/gnmi.proto"; // make tunnel.proto import gnmi.proto

message Type {
  oneof type {
    bytes data = 1;                             // raw bytes exchange
    gnmi.SubscribeRequest gNMISubRequest = 2;   // SubscribeRequest as defined in gnmi.proto
    gnmi.SubscribeResponse gNMISubResponse = 3; // SubscribeResponse as defined in gnmi.proto
    // ... any other message types, such as capability request/response, could
    // also be included to cleanly distinguish the request/response types used
    // in a dial-out collector: both those that already exist in gnmi.proto
    // and any new/future message types exchanged between the two tunnel
    // endpoints, since raw bytes provide no implicit identification of the
    // request/response type.
  }
}

message Data {
  int32 tag = 1;
  Type type = 2;
  bool close = 3;
}
```
If a bidirectional copy of gNMISubscribeRequest and gNMISubscribeResponse were provided and available at both the tunnel client and the tunnel server, then even when four separate binaries are used in the end-to-end scenario as mentioned above (or two binaries, when the tunnel client and tunnel server are embedded within the applications as a package/library for the respective languages), it would eliminate a lot of the glue code that different vendors would otherwise hand-write. Each vendor's glue will not necessarily be compatible with the others, since there is no common interface at the parsing/glue layer to use when converting bytes to application-specific requests/responses. It would also avoid the significant performance degradation and latency that would otherwise occur, since every request/response would need to pass through multiple glue layers and conversions for each message in an end-to-end use case. Finally, since the applications as well as the tunnel client/server can each be written in a different language, the additional latency introduced by separate language-specific glue layers could be minimized by embedding standard request/response types into tunnel.proto itself.
Please let me know your thoughts/comments.
I am experimenting with the above changes and would be happy to push the proto change as well as a reference implementation in Go that uses it.
I think we are talking past one another.
We have no intention of having the tunnel.proto have any reference to the gnmi.proto. This is an intentional design decision. The examples we have published have demonstrated that the tunnel can be used to carry anything over it without bespoke marshaling and unmarshaling of specific proto messages because that interchange is handled directly by the gRPC library. When dealing with the tunnel, we only ever deal in raw bytes and never introspect within that data because we are using the tunnel bytes purely as transport. This modularity offers immense flexibility. It is true that one "could" attempt to introspect that data and unmarshal the raw bytes into a given proto, but one shouldn't. What we have done is to simply embed the TCP stream of the gNMI session as bytes within the tunnel and one shouldn't expect to interact with those bytes any more than one would read the raw TCP buffer of any gRPC stream.
This general design was put forth specifically to avoid needing to write any code tailored to a given RPC. I realize the context of this question is handling the gNMI.Subscribe RPC, but we are looking ahead to the complexity of handling all of the various interfaces to a given device: gNMI (including Get, Set, and Subscribe), gNOI for all the operational RPCs, gRIBI, gNSI, gNPSI, SSH, P4, and others. The dial-out paradigm is applicable to them all. The most decoupled way to support the tunnel, without any modification to existing server code, is to deploy it as a separate binary acting as a local port forwarder, where the raw TCP is embedded directly into the tunnel.proto bytes. This can be done with the existing Go implementation if it is integrated into the build for a given device.
A tighter coupling can be achieved by adding tunnel support directly to a client or server, but for binaries not already written in Go, the existing published example demonstrating this cannot be used directly; a port of the same concept would need to be made.
Thanks, Carl. I was able to get end-to-end dial-out working with a decoupled tunnel client and gNMI server, without needing the earlier-mentioned changes or any other changes to tunnel.proto. This issue can be closed.
Hi Carl,
We are implementing the tunnel client/server; we have client/server session establishment working and have started writing the C++ wrappers for the tunnel client and server. Now I have a few questions about using the gNMI subscribe request/response as tunnel Data in the RPC (rpc Tunnel(stream Data) returns (stream Data)). I am looking at reusing the gNMI SubscribeRequest (https://github.com/openconfig/gnmi/blob/master/proto/gnmi/gnmi.proto#L211) and SubscribeResponse (https://github.com/openconfig/gnmi/blob/master/proto/gnmi/gnmi.proto#L235) as raw 'bytes' streamed between target and collector via the tunnel, hence the questions below.
Questions/Issue 1: In tunnel.proto, the Data exchanged is only bytes, as defined here.
With our approach, this means the gNMI SubscribeRequest and SubscribeResponse messages need to be serialized to raw bytes before the exchange and deserialized after it.
The question is how to serialize and deserialize without losing the reference to the structure defined in gnmi.proto while it is carried as raw bytes in tunnel.proto. Could importing gnmi.proto within tunnel.proto be a solution, giving a single stack that handles both the tunnel and gNMI, with the respective wrappers autogenerated from the .proto files? Or are there other suggestions?
Are there APIs in the gRPC C++ stack (or C stack) that convert a gNMI SubscribeRequest/SubscribeResponse to raw bytes usable as the 'bytes' in message Data for rpc Tunnel(stream Data) returns (stream Data)?
Are there any examples in the unit-test client for tunnel.proto where this is attempted? (I couldn't find one in the repo for this use case.)
Issue 2: On a related note, will there be C++, C, or Python support (basically, any other language wrapper for the tunnel client/server) based on tunnel.proto, besides Go?