servo / ipc-channel

A multiprocess drop-in replacement for Rust channels
Apache License 2.0

ipc-channel makes assumptions about OS buffer sizes #101

Open vvuk opened 8 years ago

vvuk commented 8 years ago

ipc-channel seems to make assumptions about underlying OS buffer sizes; the tests want to send()/recv() in the same thread on the same channel, e.g.: https://github.com/servo/ipc-channel/blob/master/src/platform/test.rs#L97 . This is a problem; is this intended in the API, or is it unintentional due to the way the tests are written?

As a result, for the Windows impl I'm going to end up needing to queue writes from another thread for every sender (probably a work queue).
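
A rough sketch of that work-queue idea (types and names hypothetical, not the actual Windows backend): the caller's send() only enqueues, and a per-sender worker thread performs the possibly blocking OS write.

use std::sync::mpsc;
use std::thread;

// Hypothetical sketch: each sender owns a worker thread draining a queue, so a
// large write can block the worker instead of the calling thread.
struct QueuedSender {
    queue: mpsc::Sender<Vec<u8>>,
}

impl QueuedSender {
    fn new(mut write_to_os: impl FnMut(&[u8]) + Send + 'static) -> QueuedSender {
        let (queue, rx) = mpsc::channel::<Vec<u8>>();
        thread::spawn(move || {
            for buf in rx {
                // This call may block on the OS buffer; only the worker stalls.
                write_to_os(&buf);
            }
        });
        QueuedSender { queue }
    }

    fn send(&self, data: Vec<u8>) {
        // Returns immediately; the worker does the (possibly blocking) write.
        let _ = self.queue.send(data);
    }
}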

nox commented 8 years ago

I'm pretty sure you are supposed to be able to send and recv on the same thread.

vvuk commented 8 years ago

The big_data test at https://github.com/servo/ipc-channel/blob/master/src/platform/test.rs#L128 explicitly makes a new thread to do the large send; if it didn't, it would likely deadlock as the others do. On Linux, pipe capacity is 65536 bytes, and OSX apparently can switch to that too if large writes are made. But in both cases the capacity can drop if too much memory is in use by kernel buffers.
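
For illustration, a sketch of the pattern big_data relies on, written against the public ipc API rather than the platform test helpers, with a 1 MiB payload assumed to exceed the kernel buffer: the large send runs on a helper thread so the main thread can drain the buffer by receiving concurrently.

use ipc_channel::ipc;
use std::thread;

fn main() {
    let (tx, rx) = ipc::channel::<Vec<u8>>().unwrap();
    let payload = vec![0u8; 1024 * 1024]; // comfortably above a 64 KiB pipe/socket buffer
    let expected_len = payload.len();
    let sender = thread::spawn(move || {
        tx.send(payload).unwrap(); // may block until the receiver starts draining
    });
    assert_eq!(rx.recv().unwrap().len(), expected_len);
    sender.join().unwrap();
}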

emilio commented 8 years ago

Yeah, I think that same-thread sends and recvs shouldn't be allowed. Ideally this would be enforced, but this is hard.

I wonder if Servo uses that a lot... I hope not, but...

antrik commented 8 years ago

@vvuk I think the idea was that the "medium_data" test is supposed to be somewhat large, but small enough not to cause fragmentation and thus blocking. (Note that the linux backend uses sockets, not pipes -- and in my testing, the buffer size never drops below 104 KiB IIRC, even on serious memory pressure...)

What sizes are we talking about specifically? If it's not way smaller than 64 KiB, we could simply modify the medium_data test to stay below the limit...

As for the API, I introduced platform::OsIpcSender::get_max_fragment_size() for the benefit of the test cases -- not sure how "official" this should be considered...
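
For illustration, the non-blocking shape the medium_data test aims for: a same-thread round trip whose payload stays under an assumed 64 KiB kernel buffer (the limit is an assumption here, not a documented guarantee of the API).

use ipc_channel::ipc;

fn main() {
    // Keep the payload safely below the assumed kernel buffer size, so a
    // same-thread send + recv round trip does not block.
    let (tx, rx) = ipc::channel::<Vec<u8>>().unwrap();
    let payload = vec![0u8; 32 * 1024]; // well under 64 KiB
    tx.send(payload.clone()).unwrap(); // fits in the buffer, so it returns immediately
    assert_eq!(rx.recv().unwrap(), payload);
}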

antrik commented 8 years ago

@vvuk since your current implementation seems to pass the tests just fine, I assume this is no longer an issue?...

vvuk commented 8 years ago

Correct, but it only passes the tests since I chose a big enough buffer size :)

antrik commented 8 years ago

@vvuk and you consider that a problem?...

vvuk commented 8 years ago

It's potentially surprising behaviour for a consumer of the library. They can write code like:

let (tx, rx) = ipc::channel().unwrap();
tx.send(data).unwrap();
let data1 = rx.recv().unwrap();

that will work fine until data.len() crosses over a threshold. But given that there's probably code out there that already depends on this... not sure there's much we can do other than document it and require that, say, the buffer size is at least 64 KiB, and promise only 64 KiB.

antrik commented 8 years ago

Well, there is little we can do about large sends being blocking; so the ambiguity is only in whether the API officially promises small sends below some arbitrary limit to be non-blocking, or simply doesn't promise anything regarding this at all... (In the latter case, that would mean the small/medium tests are "cheating" by using inside knowledge -- but considering the effort required to fix this, for no real gain, I'd rather leave it as is...)

sagudev commented 6 months ago

We got hit by this in the WebGPU stack: https://servo.zulipchat.com/#narrow/stream/263398-general/topic/ipc_channel/near/432692650 (the WebGPU thread sends messages to itself). Maybe for debugging purposes we could add a cfg attr that limits the buffer size to one message, so we can surface this behavior sooner.
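
For illustration only, one way such a debug knob could look on a Unix socket backend; the feature name and function are invented and not part of the crate.

// Hypothetical debug-only hook (feature name invented): shrink the socket's send
// buffer to the smallest size the kernel will grant, so even one unread message
// fills it and a same-thread send blocks (surfacing the bug) right away.
#[cfg(feature = "tiny-os-buffer")]
unsafe fn shrink_send_buffer(fd: libc::c_int) {
    let size: libc::c_int = 1; // the kernel rounds this up to its minimum
    libc::setsockopt(
        fd,
        libc::SOL_SOCKET,
        libc::SO_SNDBUF,
        &size as *const libc::c_int as *const libc::c_void,
        std::mem::size_of::<libc::c_int>() as libc::socklen_t,
    );
}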

sunshowers commented 1 week ago

FWIW I hit this while trying to port ipc-channel to illumos as well -- several of the tests were failing because the socket buffer was filling up.

It sounds like the goal of tests like medium_data is not really to see whether you can send or receive messages on the same thread, but instead to test out fragmentation of various kinds. If so, would it be okay to spin up threads in the tests? I can put up a PR for this some time soon.

Thanks!
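
A sketch of the kind of helper such a PR might add (name and bounds hypothetical): run each potentially large send on its own thread so the test never depends on the platform's buffer size.

use ipc_channel::ipc::IpcSender;
use std::thread::{self, JoinHandle};

// Hypothetical test helper: perform the send on a separate thread so a send()
// that blocks on a full socket buffer cannot deadlock the test.
fn send_from_thread<T>(tx: IpcSender<T>, data: T) -> JoinHandle<()>
where
    T: serde::Serialize + Send + 'static,
{
    thread::spawn(move || tx.send(data).unwrap())
}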