cpuguy83 closed this pull request 1 year ago.
I'm interested in comments on the viability of this. Maybe the unbuffered reader should be opt-in... as in support for FD passing would be opt-in rather than implicit for performance reasons.
Updated the test to use an actual out of process ttrpc server.
Merging #75 (a19e82d) into master (8ecd516) will decrease coverage by 4.68%. The diff coverage is 43.84%.
@@ Coverage Diff @@
## master #75 +/- ##
==========================================
- Coverage 69.77% 65.09% -4.69%
==========================================
Files 11 11
Lines 612 719 +107
==========================================
+ Hits 427 468 +41
- Misses 149 212 +63
- Partials 36 39 +3
Impacted Files | Coverage Δ | |
---|---|---|
types.go | 15.78% <16.66%> (+0.40%) | :arrow_up: |
channel.go | 51.21% <17.50%> (-30.60%) | :arrow_down: |
server.go | 69.11% <40.90%> (-4.55%) | :arrow_down: |
client.go | 78.01% <77.50%> (-0.98%) | :arrow_down: |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 25f5476...a19e82d. Read the comment docs.
Example of Sendfd (client):
Example from service (server):
What would this allow you to do? What do you have in mind?
@crosbymichael Apologies, those links I posted started 404ing when I force pushed over them. I have an example here: https://github.com/moby/moby/pull/42019
I've been working on getting dockerd out of the I/O path of containers. In the case of the above mentioned PR everything related to container stdio is moved out of process: log drivers, exec I/O, even attaches. This allows dockerd to restart and keep HTTP clients attached to a container, as an example.
I would also like to be able to do similar things for containerd, where a client is able to pass the actual stdio pipes it is using directly to the executed container, rather than going through the multiple levels of buffering in place today.
I'm also looking at being able to have a shim that can drain its ttrpc connections and pass them to a cooperating process.
All this can, of course, be done without ttrpc, but then we'd need to come up with a separate protocol/service for managing this.
We should be able to fix the buffered read mentioned in the original comment by wrapping the `net.Conn` to replace `read` with `readmsg`, and by setting a fixed max number of fd's to be passed in a single RPC.
Another thing we may be able to do is support another message type, like `FileStream`, that the client can use when the transport does not support FD passing.
Needs rebase
I think this could just be implemented at the service level. You could implement a service specifically for passing fds and only register it if supported by the underlying connection.
Going to close this one for now, would love to see an example of this done as a service though
@dmcgowan I'm not sure I understand what you are saying.
What I think you are saying is this would be better implemented as a service that explicitly handles FD passing via a side-channel.
Given that, I've definitely given this some thought and was initially like "ok yeah we could do this and it would be fine". However, with more time to think on this, I think it would be very beneficial to have it directly in the protocol so we don't need to have a side-channel.
I believe we should be able to work out the issues with buffering here by always reading the OOB data with some fixed-size buffer, so the caller would only ever be able to pass some number N of fd's at once; perhaps the size of this buffer could be configurable when creating the server.
Of course I may have totally misunderstood what you were trying to say here.
You have probably spent more time thinking about it than me. What are some of the downsides of having it as a side-channel service?
I might be looking at this differently: less about the most efficient way to do it and more about compatibility. I see the ttrpc client interface as mainly to be used for generated proto services so that services remain compatible between ttrpc and grpc. The tagline for this project is "GRPC for low-memory environments.", so I think it's fair to look at new features from the perspective of whether they are ttrpc-only, and what diverging from grpc (or more generally, from usage via proto service definitions) would mean for how ttrpc is used.
I think the main thing is you still need to come up with some kind of protocol for exchanging that information.
This adds a new message type for passing file descriptors.
To accomplish this reliably (on unix sockets) I had to drop the usage of the bufio.Reader because we need to ensure exact message boundaries.
Within ttrpc this only supports unix sockets and `net.Conn` implementations that implement `SendFds`/`ReceiveFds` (this interface is totally invented here).

Something to consider: I have not attempted to do fd passing on Windows, which will need other mechanisms entirely (and the conns provided by winio are not sufficient for fd passing). I'm not sure if this new messaging will actually work on a Windows implementation.
Perhaps the message type should be specifically for unix sockets? I'm not sure how this would be enforced at the moment except by checking whether the `net.Conn` is a `*net.UnixConn`.

Closes #74