jp8 closed this issue 4 years ago
Thanks for the info. Next step is to find out how to reduce the OMP overhead to get the CPU load much lower. Let's see if that is possible...
Maybe the solution is to identify a higher loop/fork point earlier in the process, and let every thread process its own timer (or maybe use the OpenMP Task directive somehow?). That way the fork/join thread creation/destruction is only processed once per server session.
Would it be possible in the interim to link 2 instances?
See my above comment: https://github.com/corrados/jamulus/issues/339#issuecomment-640183350
or maybe use the OpenMP Task directive somehow?
I am looking for a simple solution. Are you experienced with OpenMP Task directives?
Not really, but I'll keep looking for a workable solution on threads reusability and run some tests.
@corrados I was looking to make some test cases for multithreading. Could you please create a multithread
branch with your changes at https://github.com/corrados/jamulus/commit/db7a7599b6a6d9e89164fb2d5227e9f35862cc5f so all changes/experiments are properly contained when multiple collaborators contribute code or build the branch for testing?
Sure. Here it is: https://github.com/corrados/jamulus/tree/feature_multithread
Please note: try to stick to what I said above: "I am looking for a simple solution."
Have you tried out the current OMP implementation with multiple CPU cores and a lot of connected clients? I know that the OMP overhead is significant, but I would also like to know how well the CPU load is now spread over multiple cores.
I just implemented a scaling of the instrument picture when the Compact skin is chosen, so that the instrument picture does not make the channel wider:
Sure. Here it is: https://github.com/corrados/jamulus/tree/feature_multithread
Thanks @corrados. I was looking at the code these past days and thinking about the CPU/I/O load increase when a high number of clients is connected, and I wonder if anyone has run a profile of the app on that test case to verify where the critical points are?
BTW, should we move to a specific thread on server performance to discuss everyone's findings?
BTW, should we move to a specific thread on server performance to discuss everyone's findings?
Just create a new one if you like.
I plan to do further modifications to the Compact view. If the text is too long, I use a smaller font size. I know that this is hard to read, but it leads to a very slim channel. What do you think about this implementation?
I like it as an extremely compressed UI for specific use cases (like large ensembles), but since the controls' labels are no longer explicit about their actions, it would be good if you could add hover tooltips. Would it make sense to leave the Compact view as it is today for (not so) large groups and name this one Extremely Compact?
it would be good if you could add hover tooltips
These hover tooltips are already implemented.
Would it make sense to leave the Compact view as it is today for (not so) large groups and name this one Extremely Compact?
I would want to avoid adding a new skin for that. If you only have a few musicians connected, then you can use the Normal skin.
To make a cross reference: brynalf successfully served 100 clients in his local area network with his 32 logical processor PC using the latest Git master code: https://github.com/corrados/jamulus/issues/455#issuecomment-683303548
There is a new experimental server mode in development to support large ensembles, see: https://github.com/corrados/jamulus/issues/599.
Support large ensembles (> 100 connected clients) [...] I would like to open a discussion about improving the Jamulus user experience for large ensembles. [...] A third potential solution would be to have the server use multiple threads to generate mixes in parallel.
With the latest changes to the multithreading code it is now possible to support >100 clients. So the initial request of this issue is solved.
One potential solution could be a server mode in which a single mix is generated; then the server would have less work to do and could therefore handle more connected clients.
This has been worked on here: https://github.com/corrados/jamulus/tree/feature_singlemixserver.
Of course we still have outstanding issues in that area but these should be discussed in this Issue: https://github.com/corrados/jamulus/issues/455.
So I'll close this issue now. Please continue the discussion about this topic in the Issue https://github.com/corrados/jamulus/issues/455.
For anyone trying to start a Jamulus server on macOS with more than 10 participants, this is the command you need to run in your terminal:
/Applications/JamulusServer.app/Contents/MacOS/JamulusServer --numchannels 30
I would like to open a discussion about improving the Jamulus user experience for large ensembles. My understanding is that the current Jamulus server will use only a single CPU core, and that it generates a personal mix for each connected client.
One potential solution could be a server mode in which a single mix is generated; then the server would have less work to do and could therefore handle more connected clients. I imagine the client who occupies the first space on the server would be in control of the mix for all participants.
A second potential solution would be the ability for a server (with mixer controls on the server UI) to also act as a client to another server. In this case all the violins could join server A, all the cellos could join server B, and servers A and B could join server Z. The conductor would connect his client to server Z and have a mixer control for each section. In this solution, larger ensembles would simply require more servers. Delay would be mitigated by having multiple servers at the same hosting centre, or even on the same multi-core VM, so the ping time among the servers is close to zero.
A third potential solution would be to have the server use multiple threads to generate mixes in parallel.
I would appreciate hearing what people think of these approaches, as well as any other approaches people can think of.