zy1994-lab opened this issue 4 years ago
Hey! Sorry, I lost track of this. So in general, blob transfer is a hard problem that unary RPCs are not well suited to:
Thanks Tim. Right now I'm splitting a large file into small chunks and sending them in multiple RPC requests. In my understanding, this won't cause too much trouble as long as the size of each RPC is reasonable. Some systems, like Timely Dataflow, automatically break large payloads into small batches during data transfer. Would it be possible to add this feature to tarpc?
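For reference, the manual chunking described above can be sketched with std only. The helper names and chunk size here are my own choices, not anything tarpc provides; each chunk would then be the payload of one unary RPC:

```rust
/// Split a payload into chunks of at most `chunk_size` bytes,
/// each suitable as the body of one RPC request.
fn split_into_chunks(payload: &[u8], chunk_size: usize) -> Vec<Vec<u8>> {
    payload.chunks(chunk_size).map(|c| c.to_vec()).collect()
}

/// Reassemble chunks received in order back into the original payload.
fn reassemble(chunks: &[Vec<u8>]) -> Vec<u8> {
    chunks.iter().flatten().copied().collect()
}

fn main() {
    let payload: Vec<u8> = (0..10_000u32).map(|i| (i % 256) as u8).collect();
    let chunks = split_into_chunks(&payload, 1024);
    // 10_000 bytes at 1024 bytes/chunk: 9 full chunks plus 1 partial.
    assert_eq!(chunks.len(), 10);
    assert!(chunks.iter().all(|c| c.len() <= 1024));
    assert_eq!(reassemble(&chunks), payload);
}
```

In a real system you would also tag each chunk with a transfer id and sequence number so the server can reassemble out-of-order or concurrent uploads.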
BTW, what's the maximum payload/frame size I can send using tarpc? Can I configure this number?
Max payload/frame size is up to the transport to decide. For example, if you're using an in-memory channel that doesn't serialize requests or responses, you probably don't want to enforce a payload size at all. Many serde serializers, such as bincode, support configuring a maximum serialization size.
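Since the cap lives in the transport layer rather than in tarpc itself, one way to picture it is a length-prefixed framing layer that rejects oversized frames on both write and read. A minimal std-only sketch (the 1 MiB cap and the function names are assumptions for illustration, not tarpc's API):

```rust
use std::io::{self, Read, Write};

// An arbitrary cap for this sketch; a real transport would make it configurable.
const MAX_FRAME_SIZE: usize = 1 << 20; // 1 MiB

/// Write a length-prefixed frame, rejecting payloads over the cap.
fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> io::Result<()> {
    if payload.len() > MAX_FRAME_SIZE {
        return Err(io::Error::new(io::ErrorKind::InvalidInput, "frame too large"));
    }
    w.write_all(&(payload.len() as u32).to_be_bytes())?;
    w.write_all(payload)
}

/// Read one frame, enforcing the same cap *before* allocating the buffer.
fn read_frame<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    r.read_exact(&mut len_buf)?;
    let len = u32::from_be_bytes(len_buf) as usize;
    if len > MAX_FRAME_SIZE {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "frame too large"));
    }
    let mut payload = vec![0u8; len];
    r.read_exact(&mut payload)?;
    Ok(payload)
}

fn main() {
    let mut buf = Vec::new();
    write_frame(&mut buf, b"hello").unwrap();
    assert_eq!(read_frame(&mut &buf[..]).unwrap(), b"hello");
    // Oversized payloads are rejected before they hit the wire.
    assert!(write_frame(&mut Vec::new(), &vec![0u8; MAX_FRAME_SIZE + 1]).is_err());
}
```

Checking the length prefix before allocating on the read side is what protects the server from a malicious or buggy peer claiming a huge frame.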
tarpc provides a nice interface to program against, so I want to use it to transfer data among different machines in a distributed cluster. Apart from the 'normal' RPC use case, given as the following:
I would like to also have the following method:
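(The original snippets are missing from this thread; a hypothetical tarpc-style service with both directions might look like the following, where the trait name and `key` parameter are my assumptions:)

```rust
#[tarpc::service]
trait Blob {
    /// Download a (possibly large) value from the server.
    async fn get(key: String) -> Vec<u8>;
    /// Upload a (possibly large) value to the server.
    async fn put(key: String, value: Vec<u8>);
}
```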
In this way, I suppose I can both get data from and send data to the server. The vector in my use case can be very large. My questions are: 1) Will the upload and download streams cause congestion in the same channel, since the data size can be very large in both directions? 2) How would this `put` method compare to, say, using a handwritten TcpStream or something like MPI? I used tarpc before but I don't really understand how it works under the hood. Can anyone kindly help me? Thanks so much!