celluloid / celluloid-io

UNMAINTAINED: See celluloid/celluloid#779 - Evented sockets for Celluloid actors
https://celluloid.io
MIT License

Limit the number of tasks active at a time in the same Actor #96

Closed schmurfy closed 10 years ago

schmurfy commented 10 years ago

As far as I can tell, there is currently no way to limit the number of fibers that can be spawned for each thread. Is this something being worked on? I may have a look into it myself, but I think I remember reading about some work already done in that direction.

I think the most straightforward way to add such a limit would be a fiber pool class of some sort that caps the number of fibers created.
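
Something along these lines (a minimal plain-Ruby sketch of the idea; `FiberPool` and its API are hypothetical, not existing Celluloid code):

```ruby
# Hypothetical fiber pool: caps the number of fibers running at once.
# Jobs submitted beyond the limit wait in a queue until a slot frees up.
class FiberPool
  def initialize(limit)
    @limit   = limit
    @active  = 0
    @pending = []
  end

  # Run the block in a new fiber if a slot is free, otherwise queue it.
  def schedule(&job)
    if @active < @limit
      run(job)
    else
      @pending << job
    end
  end

  private

  def run(job)
    @active += 1
    Fiber.new do
      begin
        job.call
      ensure
        @active -= 1
        run(@pending.shift) unless @pending.empty?
      end
    end.resume
  end
end
```

Usage would be something like `pool = FiberPool.new(10); pool.schedule { do_work }`.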

tarcieri commented 10 years ago

It seems like you have a problem with backpressure.

A "fiber pool" wouldn't really help the problem as fibers can be initiated asynchronously. Celluloid could potentially drop these messages on the floor, or raise an error in the sender's context.

To do what you're describing (block the sender, I think?) all messages in Celluloid would need to be synchronous sends.
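
For illustration, a rough sketch using Celluloid's public actor API as I remember it (not a recommendation):

```ruby
require 'celluloid'

class Worker
  include Celluloid

  def heavy(n)
    sleep 0.1   # stand-in for expensive work
    n * 2
  end
end

worker = Worker.new

# Async send: returns immediately; the message just queues in the actor's
# (unbounded) mailbox, so the caller feels no backpressure.
worker.async.heavy(1)

# Sync call: blocks the caller until the actor replies -- the only point
# at which the sender naturally slows down when the actor is busy.
worker.heavy(2)
```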

tarcieri commented 10 years ago

Note there's ample discussion about similar problems in Rust here... food for thought:

https://www.mail-archive.com/rust-dev@mozilla.org/msg07394.html

schmurfy commented 10 years ago

To draw a parallel with EventMachine: what I would currently do is use a pool of X fibers, where X is the maximum number of parallel requests I want to allow. Then, whatever the load on the calling side, I can guarantee I won't hammer the database (or whatever sits behind it). I will have a look at your link, but how would you implement this in Celluloid?
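
Roughly the EventMachine pattern I mean, sketched with em-synchrony's `FiberIterator` (the HTTP endpoint is made up; the real system talks to a database, not HTTP):

```ruby
require 'em-synchrony'
require 'em-synchrony/em-http'   # fiber-aware HTTP client, as an example backend

EM.synchrony do
  ids = (1..100).to_a

  # At most 5 iterations are in flight at once; each runs in its own fiber
  # and yields while waiting on I/O, so the backend never sees more than
  # 5 concurrent requests regardless of how many ids are queued.
  EM::Synchrony::FiberIterator.new(ids, 5).each do |id|
    EM::HttpRequest.new("http://backend.example/items/#{id}").get
  end

  EM.stop
end
```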

tarcieri commented 10 years ago

I updated the title to better reflect what you want. But the actual semantics of how this would work are unspecified. I'd love to know the precise semantics, specifically how you would 1) provide backpressure without 2) just deadlocking your program if the number of tasks active per actor were capped at some fixed limit.

FWIW, I strongly sympathize with these sorts of problems. They're really hard to solve in a way that's actually helpful.
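
To make the deadlock concern concrete, a sketch with hypothetical semantics (the cap and blocking-sender behavior are imaginary; no such option exists in Celluloid today):

```ruby
require 'celluloid'

class Ping
  include Celluloid

  # max_tasks 1   # <-- imaginary cap on active tasks per actor

  def call_pong(pong)
    pong.call_back(Celluloid::Actor.current)  # sync call out to Pong...
  end

  def reply
    :ok
  end
end

class Pong
  include Celluloid

  def call_back(ping)
    ping.reply                                # ...which calls back into Ping
  end
end

# Today this works because Ping spawns a *second* task to handle `reply`
# while the first task waits on Pong. With a hard cap of one task per
# actor and blocking senders, `reply` could never start: deadlock.
Ping.new.call_pong(Pong.new)
```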

schmurfy commented 10 years ago

Your post is an interesting read indeed. Until now, the only way I have handled this in production is to tell clients to go to sleep for a random interval and come back later, by sending them a known error when the number of queued requests rises above a given threshold (we control both the client and the server in this case). In that system we use EventMachine + fibers with one or more fiber pools (depending on the task), and it works fairly well. I agree, though, that any solution has to depend on the specific needs of your application, and it's hard to provide a one-size-fits-all solution...
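
A rough sketch of that scheme (numbers and names are made up; the real system is EventMachine-based):

```ruby
# Server side: refuse new work once the queue is deeper than the threshold,
# and tell the client when to come back.
QUEUE_LIMIT = 1_000

def accept_request(queue, request)
  if queue.size >= QUEUE_LIMIT
    { error: 'busy', retry_after: rand(5..30) }   # the "known error"
  else
    queue << request
    { status: 'queued' }
  end
end

# Client side: on the known error, sleep the suggested random interval
# and try again (`client.submit` is a stand-in for the real transport).
def submit_with_backoff(client, request)
  loop do
    reply = client.submit(request)
    return reply unless reply[:error] == 'busy'
    sleep reply[:retry_after]
  end
end
```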

tarcieri commented 10 years ago

An interesting PR is available here:

https://github.com/celluloid/celluloid/pull/369

Closing this issue, as it's a core Celluloid issue and not a Celluloid::IO-specific one.