BOINC / boinc

Open-source software for volunteer computing and grid computing.
https://boinc.berkeley.edu
GNU Lesser General Public License v3.0

Memory Access & IO Priority #1392

Open dioguerra opened 9 years ago

dioguerra commented 9 years ago

Scenario

Usually I have BOINC running at a maximum of 90% CPU core speed, entering idle mode when my other processes exceed 30% load. Sometimes, when I'm working with my VM (occasionally two of them), the computer running BOINC gets really sluggish. I noticed this was mainly because BOINC was running applications that made use of VMs too. At one point I had 4 VMs running at the same time, which is overkill for my 8 GB of RAM and, above all, for the main storage drive (which couldn't handle all the IO).

Idea

Knowing that BOINC starts processes with CPU priority set to low by default, I don't know if this also applies to IO access priority.

This would be a mechanism similar to the IDLE/ACTIVE switching used for CPU usage. I don't think there is an existing thread about this; a rough sketch of what I mean follows.
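On Linux, a minimal sketch of such a mechanism could look like the snippet below: the worker process (or BOINC before handing off work) drops its own block-I/O priority to the idle class via the `ioprio_set` syscall, which is the same call the `ionice` utility uses. The constants are mirrored from `linux/ioprio.h` because glibc has no wrapper; where exactly this would be hooked into BOINC is an assumption, not a description of current behaviour.

```c++
// io_idle.cpp -- sketch: drop the calling process to the idle block-I/O class on Linux.
// Build: g++ -std=c++17 io_idle.cpp -o io_idle
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

// Constants mirrored from linux/ioprio.h (no glibc wrapper exists for ioprio_set).
constexpr int IOPRIO_WHO_PROCESS = 1;   // "who" refers to a single process/thread
constexpr int IOPRIO_CLASS_IDLE  = 3;   // only gets disk time when nobody else wants it
constexpr int IOPRIO_CLASS_SHIFT = 13;

static int set_io_idle(pid_t pid) {
    // priority value = (class << 13) | data; the data part is ignored for the idle class
    int ioprio = (IOPRIO_CLASS_IDLE << IOPRIO_CLASS_SHIFT) | 0;
    return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, pid, ioprio);
}

int main() {
    if (set_io_idle(0) != 0) {      // pid 0 == the calling process
        perror("ioprio_set");
        return 1;
    }
    std::printf("now running with idle I/O priority\n");
    // ... start the actual disk-heavy work here ...
    return 0;
}
```

Whether this actually changes scheduling depends on the active I/O scheduler honouring priorities (e.g. BFQ/CFQ), which is one of the caveats mentioned later in this thread.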

ChristianBeer commented 9 years ago

The main question here is: is there a way to prioritize memory and I/O access? If so, is it available on all platforms (Windows, Mac, Linux)? If not, is there a general way to get the load information and schedule accordingly?

You would still have to distinguish between BOINC-generated memory and I/O operations and those from other apps. This seems very tricky.
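As one data point rather than a full answer: on Windows there is a single API that lowers CPU, I/O and memory (page) priority together, the "background processing mode". A minimal sketch, with the caveat that this mode can only be applied to the current process, so the science app (or a wrapper around it) would have to call it on itself:

```c++
// background_mode.cpp -- sketch: Windows "background processing mode" lowers CPU,
// I/O and memory (page) priority of the calling process in one call.
// PROCESS_MODE_BACKGROUND_BEGIN only works on the current process handle.
#include <windows.h>
#include <cstdio>

int main() {
    if (!SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN)) {
        std::fprintf(stderr, "SetPriorityClass failed: %lu\n", GetLastError());
        return 1;
    }
    // ... disk- and memory-heavy work runs here at background priority ...

    // Restore normal scheduling (optional; the state ends with the process anyway).
    SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
    return 0;
}
```

Nothing directly equivalent exists as a single call on Linux or macOS, which is part of why the cross-platform question matters.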

dioguerra commented 9 years ago

Since Windows was already answered on dev_mail, I did a quick Google search for Linux and found a few things.

One of these is iotop: http://guichaz.free.fr/iotop/

There is also this thread on checking disk I/O utilisation per process: http://serverfault.com/questions/169676/howto-check-disk-i-o-utilisation-per-process

One could just run the tool and read its stdout!? Have to check the licensing though.
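Rather than parsing iotop's stdout (which would also sidestep the licensing question), one could read the per-process counters the kernel itself exposes under `/proc/<pid>/io`, which is what iotop aggregates. A small sketch, assuming a kernel built with `CONFIG_TASK_IO_ACCOUNTING` and permission to read the target process:

```c++
// proc_io.cpp -- sketch: read the per-process I/O counters from /proc/<pid>/io.
// Linux-only; needs CONFIG_TASK_IO_ACCOUNTING and access to the target process.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <cstdlib>
#include <unistd.h>

// Returns the value of one field (e.g. "read_bytes") from /proc/<pid>/io,
// or -1 if the file or field is unavailable.
long long proc_io_field(int pid, const std::string& field) {
    std::ifstream io("/proc/" + std::to_string(pid) + "/io");
    std::string line;
    while (std::getline(io, line)) {
        std::istringstream ls(line);
        std::string key;
        long long value;
        if (ls >> key >> value && key == field + ":")
            return value;
    }
    return -1;
}

int main(int argc, char** argv) {
    int pid = argc > 1 ? std::atoi(argv[1]) : getpid();
    std::cout << "read_bytes:  " << proc_io_field(pid, "read_bytes") << "\n"
              << "write_bytes: " << proc_io_field(pid, "write_bytes") << "\n";
    return 0;
}
```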

ChristianBeer commented 7 years ago

Reading disk I/O utilization is very cumbersome on Linux, and even tools like ionice and cgroups do not work all the time. I'm keeping this open in case someone wants to dig into it more, but this is probably a time sink.
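For anyone who does dig into it: with cgroup v2, the io controller's weight setting is one possible knob. A rough sketch of moving a worker PID into a low-weight group follows, but note the assumptions: a unified hierarchy mounted at /sys/fs/cgroup, the io controller enabled for the parent group, sufficient permissions (typically root), the group name `boinc_lowio` is made up for illustration, and weights only have an effect with schedulers that support them (e.g. BFQ), which ties back to the "does not work all the time" caveat above.

```c++
// cgroup_io_weight.cpp -- sketch: put a process into a cgroup v2 group with a low
// io.weight so its disk traffic yields to everything else.
#include <fstream>
#include <iostream>
#include <string>
#include <cstdlib>
#include <sys/stat.h>
#include <sys/types.h>

static bool write_file(const std::string& path, const std::string& value) {
    std::ofstream f(path);
    f << value;
    return static_cast<bool>(f);
}

static bool confine_pid(pid_t pid) {
    const std::string cg = "/sys/fs/cgroup/boinc_lowio";   // hypothetical group name
    mkdir(cg.c_str(), 0755);                               // EEXIST is fine, ignored here
    return write_file(cg + "/io.weight", "1")              // 1 = lowest weight (default is 100)
        && write_file(cg + "/cgroup.procs", std::to_string(pid));
}

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: cgroup_io_weight <pid>\n"; return 2; }
    if (!confine_pid(std::atoi(argv[1]))) {
        std::cerr << "failed (io controller missing or insufficient permissions?)\n";
        return 1;
    }
    return 0;
}
```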