In general, we opted for cooperative multitasking since it provides much better overall system performance. Because of that we can easily run at very high CPU utilization (95% and more), fully utilizing the system.
However, Orleans actually already has a partial way to support preemptive multitasking. It can be controlled by setting the number of active threads. By default it is set to the number of cores, but if you set it to a high value, like 100 or 1000, you will effectively get the effect of the OS time-sharing the application threads (and thus grains). That way one grain has much less chance to impact responsiveness. The downside is that overall system throughput (for well-behaved apps) will go down quite a bit, and in fact even responsiveness can degrade as a side effect. But for the worst case of "untrusted"/potentially buggy code, it provides some sort of guarantee on responsiveness.
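To make that knob concrete, this is roughly what it looks like in the classic XML silo configuration; the element and attribute names (Scheduler, MaxActiveThreads) are my recollection of the 1.x schema and should be treated as an assumption to verify against your Orleans version:

```xml
<!-- Assumed 1.x silo configuration (OrleansConfiguration.xml); verify the schema
     for your version. Raising MaxActiveThreads well above the core count
     approximates OS time-sharing across grains, at a cost in throughput. -->
<OrleansConfiguration xmlns="urn:orleans">
  <Defaults>
    <Scheduler MaxActiveThreads="100" />
  </Defaults>
</OrleansConfiguration>
```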
In fact, we explored that option in our paper and you can read about it here: http://research.microsoft.com/pubs/210931/Orleans-MSR-TR-2014-41.pdf, section 5.1.
@gabikliot Well, if you want to run each actor on its own thread then yes, the cost would be too high. But if, like Erlang's VM, the runtime had a mechanism much more lightweight than threads to context-switch between actors, suspend them, and re-execute them later, then the cost would not be that high. I think you have done it partly in terms of actor isolation: because your runtime translates calls to actor references into messages, actors cannot reference each other directly, even on the same machine, and cannot modify each other's state. But if you want to stop one of your actors in the middle of executing its code and resume it later, the CLR has to provide that capability, or you have to use an OS thread per actor, which is terribly expensive.
So you are right that using a thread per actor makes it less performant, and probably even doing it in an Erlang-like fashion would still spend some time on context switches, which can become significant. But it would give the developer a guarantee that even if some actor code is slow, it will not make the whole system slow, so people could reuse other people's code more easily, and bigger teams with less experienced members could work on Orleans projects more easily.
The possibility which I was asking about was implementing preemption without using OS threads and time sharing mechanisms like Erlang.
Also, while reading this I got a question: what if I send a message to an actor and, when the message is received, the executing actor dies in the middle of execution? What will happen to the caller's waiting promise? Will it wait forever, will the method be re-executed automatically by the runtime on another grain instance, or will a timeout occur?
@ashkan-saeedi-mazdeh I think I see what you are getting at, but to get what you want would require at least a pluggable scheduler for Orleans and perhaps other considerations. As it is, I find Orleans so interesting precisely because of cooperative scheduling and the throughput it implies. In other words, perhaps one shouldn't do heavy computing on all the cores in a system geared towards cooperative scheduling (unless stepping out of cooperative scheduling, e.g. to a thread pool or an accelerator in this case).
I think Erlang uses roughly one pinned thread per core, as Orleans does too, and from there goes a different way in that it maps its "userland threads" onto those pinned kernel threads. It's geared towards low latency at the expense of throughput. I'm not sure if it is possible to build such a system on top of .NET (fibers?). While searching for this, I came across what @VesaKarvonen writes here about cooperative and preemptive multitasking. He also has a small note about Erlang-style scheduling and .NET and how F# async-style workflows are in some way similar to how Erlang operates – I may be quoting him a bit out of context, or he may come across differently than he himself would want, so take that with a grain of salt. There are also some benchmarks of these constructs on .NET, which might shed some light on expected performance:
@ashkan-saeedi-mazdeh and @veikkoeeva, a couple of clarifications: 1) what I suggested (controlling the number of threads) does not necessarily mean we must have a 1-to-1 mapping between threads and actors. The scheme allows you to have M threads serving N actors, for example 100 threads for 10K actors.
2) Orleans does already have a pluggable scheduler. Currently there is no way to specify a different scheduler via configuration, so some code changes will indeed be required, but at the component/design level we do have a well-defined and isolated Scheduler component, which implements TPL's TaskScheduler, uses its own thread pool, and has its own runtime behavior. It should be relatively easy to write a new one and plug it in. The only restrictions/requirements on a new scheduler are that it guarantees single-threaded execution per actor and integrates with the TPL TaskScheduler (so async/await and the other TPL goodies work).
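To make that contract concrete, here is a minimal sketch of a TaskScheduler that guarantees single-threaded, in-order execution of whatever is queued to it. This is not Orleans' actual scheduler; the class name and the thread-pool pumping are illustrative assumptions only.

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// One instance of something like this per activation gives the
// "single-threaded execution per actor" guarantee described above.
public sealed class SequentialTaskScheduler : TaskScheduler
{
    private readonly Queue<Task> _tasks = new Queue<Task>();
    private bool _running; // true while a pump is draining the queue

    protected override void QueueTask(Task task)
    {
        bool startPump = false;
        lock (_tasks)
        {
            _tasks.Enqueue(task);
            if (!_running) { _running = true; startPump = true; }
        }
        // Only one pump runs at a time, so tasks queued to this scheduler
        // never execute concurrently.
        if (startPump)
            ThreadPool.QueueUserWorkItem(_ => Drain());
    }

    private void Drain()
    {
        while (true)
        {
            Task next;
            lock (_tasks)
            {
                if (_tasks.Count == 0) { _running = false; return; }
                next = _tasks.Dequeue();
            }
            TryExecuteTask(next);
        }
    }

    // Never inline, to preserve strict ordering.
    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
        => false;

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        lock (_tasks) return _tasks.ToArray();
    }

    public override int MaximumConcurrencyLevel => 1;
}
```

A real replacement scheduler would of course have to integrate with the runtime's dispatch and thread management, but the single-threaded, TPL-compatible contract is roughly the one sketched here.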
3) @ashkan-saeedi-mazdeh "The possibility which I was asking about was implementing preemption without using OS threads and time sharing mechanisms like Erlang." Do you have a concrete idea of how to do that? I would be interesting in hearing ideas on that front.
Regarding the question about the caller: the answer is "a timeout will happen". http://dotnet.github.io/orleans/Runtime-Implementation-Details/Messaging-Delivery-Guarantees
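From the caller's side, as I understand the behavior described on that page, the awaited call eventually fails rather than hanging forever. A hedged sketch (IWorkerGrain and DoWork are hypothetical names):

```csharp
using System;
using System.Threading.Tasks;
using Orleans;

// Hypothetical grain interface, for illustration only.
public interface IWorkerGrain : IGrainWithIntegerKey
{
    Task<int> DoWork();
}

public static class CallerExample
{
    // If the target activation dies or never responds, the promise is broken
    // after the configured response timeout and surfaces as a TimeoutException.
    public static async Task<int> CallWithTimeoutHandling(IWorkerGrain grain)
    {
        try
        {
            return await grain.DoWork();
        }
        catch (TimeoutException)
        {
            // The caller decides what to do: retry, fall back, or surface the error.
            return -1;
        }
    }
}
```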
@gabikliot @veikkoeeva I realize the cooperative scheduler gives better throughput, much lower memory usage, and so on.
@gabikliot The only way I could find to stop execution of a running unit of work is using fibers. My knowledge of the CLR is very limited compared to you guys.
I've implemented a coroutine system like the one the Unity game engine has, on a C#-based implementation of Unity's runtime. I used IEnumerables and yield; it was cooperative and the performance was good (at least compared to Unity).
I could not find any way other than fibers to implement this. It might be a good option to implement a scheduler based on fibers and with preemptive multitasking for scenarios which focus on latency instead of throughput.
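For readers unfamiliar with that pattern, here is a minimal sketch of an IEnumerator/yield cooperative coroutine scheduler (names are illustrative; this is neither Unity's nor Orleans' code). It also shows why cooperative scheduling alone cannot protect against a routine that never yields:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Each MoveNext() runs one "turn" of a routine; a routine only gives up the
// CPU when it reaches a `yield return`.
public sealed class CoroutineScheduler
{
    private readonly Queue<IEnumerator> _routines = new Queue<IEnumerator>();

    public bool HasWork => _routines.Count > 0;

    public void Start(IEnumerator routine) => _routines.Enqueue(routine);

    // Runs one turn of every currently scheduled routine (call per frame/tick).
    public void Tick()
    {
        int count = _routines.Count;
        for (int i = 0; i < count; i++)
        {
            IEnumerator routine = _routines.Dequeue();
            if (routine.MoveNext())          // run until the next yield
                _routines.Enqueue(routine);  // not finished: give it another turn later
        }
    }
}

public static class CoroutineDemo
{
    private static IEnumerator Counter(string name, int turns)
    {
        for (int i = 0; i < turns; i++)
        {
            Console.WriteLine($"{name}: turn {i}");
            yield return null; // cooperative yield point
        }
    }

    public static void Main()
    {
        var scheduler = new CoroutineScheduler();
        scheduler.Start(Counter("A", 3));
        scheduler.Start(Counter("B", 2));
        while (scheduler.HasWork)
            scheduler.Tick();
    }
}
```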
I imagine that if one implements a game server on Orleans, the game doesn't involve heavy per-turn processing (as strategy combat games do), and the game logic is mostly simple, then it might be useful. On the other hand, if the game logic is not CPU intensive, then aside from terrible developer code, what could bring the system down through high CPU usage?
Now that I think more about this, maybe trying it is not that high a priority in my mind anymore. Honestly speaking, I have not found much time to spend on Orleans yet, but I will as soon as possible. The tech seems very interesting to me.
@gabikliot I wasn't sure if the system was pluggable. From a cursory look when I dug into relational storage, it looked plausible. This bit of information might become useful in some scenarios.
@ashkan-saeedi-mazdeh Well, it's hard to tell what is profound and what is limited when it comes to the CLR. :) But in any event, you can do processing outside of Orleans in the silos, so it might not be that big a problem. You may have thought about this, but I'll put a few links here so that we can synchronize minds:
Interesting stuff for sure.
@veikkoeeva, it is pluggable conceptually and at the design level. At the code level, we would need to do a bit of work to actually allow a different scheduler, like putting a couple of things behind interfaces and adding a factory; all of that should be easy.
@ashkan-saeedi-mazdeh we give a warning if a grain is stuck in a CPU operation for longer than a configurable amount of time. This helps find bugs, heavy CPU code paths, or erroneous blocking IO calls.
Also, it is possible to offload heavy CPU compute to a thread pool or another TaskScheduler and then marshal back to the Orleans scheduler. So effectively, you can combine multiple schedulers. This is one of the goodies of the TPL TaskScheduler.
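To illustrate that pattern (IVideoGrain, ConvertVideo, and ConvertFrames are hypothetical names, not an Orleans API): the CPU-heavy part runs on the default thread-pool scheduler via Task.Run, and the continuation after the await resumes on the grain's own scheduler.

```csharp
using System.Threading.Tasks;
using Orleans;

public interface IVideoGrain : IGrainWithGuidKey
{
    Task<int> ConvertVideo(byte[] payload);
}

public class VideoGrain : Grain, IVideoGrain
{
    public async Task<int> ConvertVideo(byte[] payload)
    {
        // Task.Run queues the work to the .NET thread pool, so the Orleans
        // worker threads stay free to serve other grains' turns.
        int result = await Task.Run(() => ConvertFrames(payload));

        // After the await we are back on the grain's single-threaded scheduler,
        // so it is safe to touch grain state again here.
        return result;
    }

    private static int ConvertFrames(byte[] payload)
    {
        // Stand-in for a long CPU-bound computation.
        int checksum = 0;
        foreach (byte b in payload) checksum += b;
        return checksum;
    }
}
```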
@veikkoeeva, what a coincidence! The last paper you cited is by my advisor and I know it very well. It is actually a very interesting explanation of the connection between throughput and latency via queuing theory. A must-read for anyone!
@veikkoeeva When I said "It might be a good option to implement a scheduler based on fibers and with preemptive multitasking for scenarios which focus on latency instead of throughput. " I actually should say availability instead of latency, I was wrong about latency as I've learned from the awesome stuff you put here but what I really wanted was the capability of safe guarding the system from bad user code which can bring the whole thing down. If we have warnings as @gabikliot said then there is no problems, we only need to monitor our systems with stress tests and detect code parts which are issuing warnings and fix them, no need for preemptive multi tasking. The only other place which it can happen is the code path which is long running and not detected by stress tests which I don't think is that much likely and even if it happens since everyone should log an in production system as well we can detect them as well.
Also, availability can be improved by adding a few more nodes, to be safe against this.
As you guys said, we can schedule the tasks which we know are long-running and CPU intensive outside of Orleans using the TPL, so Orleans is actually much more flexible in this regard than other actor implementations known to me. Unlike Erlang, for example, you can write CPU-intensive apps with Orleans without dropping down to C (if they don't have extreme performance requirements and are doable in .NET), using Orleans as a scheduler for them and for the non-CPU-intensive parts. Of course, if most of what you do is take a request from a user and then do image processing for 10 seconds, Orleans might not add that much, but if there are many other tasks which are not that CPU intensive, Orleans would help a lot.
Even in the case where all of the tasks, other than receiving requests and sending results back, are highly CPU intensive and long (say, for a video conversion web service), Orleans might be useful in ways which you guys can imagine and I don't know about.
Sorry that I'm drawing examples from different fields. Basically, I'm thinking about actor systems, and Orleans specifically, to see which scenarios it can address and which it cannot.
As another example, I would imagine a cloud-based AI game server that lets player game characters do online learning using some RL technique, receives player action requests from games and responds with AI behaviours, and in the meantime runs its own online learning as well. I think Orleans would help a lot here if one schedules the heavy-lifting AI work outside of Orleans; there is still a lot left for Orleans to do.
@ashkan-saeedi-mazdeh I think you are asking the right questions. I'm mulling over some of the same. For instance, have grains represent some business entities, then have "gatekeeper grains" to external accelerators (GPU, FPGA, DSP) etc. A bit of a problem here is that this likely leads to a scenario wherein not all silos have a homogeneous accelerator set (or one can just build accelerator farms and send the data there, but you can imagine that creates other problems), so one needs to know where grains using the accelerators should be placed (you can extrapolate from here that a catalogue built when a silo starts would be nice). Ideas like this pop up on occasion on the Orleans Gitter channel. If you feel inclined, chime in, throw out ideas and see what sticks. Ideas might not stick now, but people do read, think, and something might eventually come of it.
@ashkan-saeedi-mazdeh , sounds like we answered all questions here. Can this issue be closed?
@gabikliot Yes, very well indeed, lots of great material. I wanted to close it myself but did not have permission to do so.
Actually, when I looked at the help-wanted items to potentially start doing something, it seemed to me that a cleanup is required.
Cross-referencing https://github.com/dotnet/corefx/issues/37755 so the discussions form a continuum, in the spirit of the web.
Currently Orleans uses cooperative multitasking, which means the grain itself is responsible for releasing the CPU and waiting. In runtimes like the Erlang VM, actors cannot execute more than a certain amount of computation at a time, so the system's responsiveness stays high even if some bad actors/processes want to do a huge amount of work.
I think implementing this in Orleans might require CLR support, but I'm not fully sure about it, so I felt like asking.
Is it possible to implement such a thing in Orleans, where the runtime stops grains, preserving their state, and then resumes them after other grains get a chance to do some work?
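To make "the grain itself is responsible for releasing the CPU" concrete, here is a hedged sketch of the cooperative pattern available today: the grain yields explicitly between chunks of work so other activations get a turn. ICrunchGrain and the chunk size are hypothetical names, not part of any Orleans API.

```csharp
using System.Threading.Tasks;
using Orleans;

public interface ICrunchGrain : IGrainWithIntegerKey
{
    Task<long> Crunch(int[] data);
}

public class CrunchGrain : Grain, ICrunchGrain
{
    private const int ChunkSize = 10000;

    public async Task<long> Crunch(int[] data)
    {
        long sum = 0;
        for (int i = 0; i < data.Length; i++)
        {
            sum += data[i];
            // Every ChunkSize items, yield back to the scheduler so it can run
            // other work instead of this grain monopolizing the thread.
            if (i % ChunkSize == 0)
                await Task.Yield();
        }
        return sum;
    }
}
```

This is still cooperative: a grain that never reaches a yield point cannot be preempted, which is exactly the gap the question above is about.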