At the moment the number of parallel workers in meta-modelling is limited to 10, mainly to avoid overloading the system.
This ticket exists to investigate what the bottlenecks are and how far we can improve the performance of the system so that the number of workers can be increased. (I think our goal should be at least 100 to close this ticket.)
At the moment, when running a relatively easy task with 10 parallel workers (a python runner that evaluates a script in a few seconds; study called "SelStim Main WVG", f24a8250-12b5-11ef-b7b3-0242ac17139b), we get the following load (the study started at 16:04 on May 22), as seen from Grafana on osparc.io:
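To make the effect of the worker cap concrete, here is a minimal sketch (not the actual meta-modelling implementation; `run_task` and the timings are illustrative placeholders) showing how a fixed `max_workers` limit bounds throughput for many short tasks:

```python
import time
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 10  # the current cap discussed in this ticket

def run_task(i):
    # Placeholder for the short-running python-runner job
    # (the real job evaluates a user script in a few seconds).
    time.sleep(0.01)
    return i

start = time.time()
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    results = list(pool.map(run_task, range(100)))
elapsed = time.time() - start

# With 100 tasks and 10 workers, wall time is roughly
# ceil(100 / 10) * task_duration: the cap, not the task,
# dominates total throughput.
print(f"{len(results)} tasks in {elapsed:.2f}s")
```

Raising the cap to 100 would let all 100 such tasks run in a single wave, which is why identifying the real bottleneck (scheduler, storage, network, or node resources) matters before changing the limit.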
Individual tasks for improving performance and robustness