starpu-runtime / starpu

This is a mirror of https://gitlab.inria.fr/starpu/starpu where our development happens, but contributions are welcome here too!
https://starpu.gitlabpages.inria.fr/
GNU Lesser General Public License v2.1

Question about stuck condition, probably due to constant GPU memory purging #38

Open Muxas opened 5 months ago

Muxas commented 5 months ago

Hi, again!

As you already know, I like StarPU! This time I ran into a stuck condition. Setting STARPU_WATCHDOG_TIMEOUT=1000000 (1 second) showed that, for some reason, no tasks finish on a server with GPUs for a very long time. I believe the problem is constant loading and purging of memory buffers: a task requires two input buffers, but GPU memory is not large enough, so the memory manager purges buffer 1 to make space for buffer 2, then purges buffer 2 to make space for buffer 1. In the end, what I observe is that no task completes for several minutes (more than 100 watchdog messages, one second apart, about no task finishing in the last second). It starts happening as I increase the problem size (the number of tasks grows while the size of each task stays the same) on the same hardware; the more tasks there are to compute, the more likely the stall becomes. Any advice on how to solve this issue? Have you run into this problem before, and how did you solve it?
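To make the pattern concrete, here is a minimal, hypothetical sketch in the StarPU C API (not my actual code): a set of registered buffers whose total exceeds GPU memory, with each task reading two of them. The buffer count, sizes, and the empty task body are placeholder assumptions.

```c
/* Hypothetical sketch (not the real application) of the access pattern
 * described above: many tasks, each reading two registered buffers.  If the
 * GPU memory node cannot hold the working set, fetching one operand may force
 * the eviction of another that a task still needs -- the suspected ping-pong.
 * Buffer count and sizes are made-up placeholders; scale them so the total
 * exceeds GPU memory to reproduce the pressure. */
#include <stdlib.h>
#include <stdint.h>
#include <starpu.h>

#define NBUFS 16
#define NX    (1u << 22)   /* placeholder: 4M floats (16 MB) per buffer */

/* Empty task body: a real application would launch a kernel consuming both
 * input vectors here. */
static void read_two_buffers(void *buffers[], void *cl_arg)
{
    (void)buffers;
    (void)cl_arg;
}

static struct starpu_codelet cl =
{
    .cpu_funcs  = { read_two_buffers },
    .cuda_funcs = { read_two_buffers },
    .nbuffers   = 2,
    .modes      = { STARPU_R, STARPU_R },
};

int main(void)
{
    starpu_data_handle_t handles[NBUFS];
    float *bufs[NBUFS];
    int i;

    if (starpu_init(NULL) != 0)
        return 1;

    for (i = 0; i < NBUFS; i++)
    {
        bufs[i] = malloc(NX * sizeof(float));
        starpu_vector_data_register(&handles[i], STARPU_MAIN_RAM,
                                    (uintptr_t)bufs[i], NX, sizeof(float));
    }

    /* Each task needs two buffers resident on the executing device at once. */
    for (i = 0; i + 1 < NBUFS; i++)
        starpu_task_insert(&cl, STARPU_R, handles[i], STARPU_R, handles[i + 1], 0);

    starpu_task_wait_for_all();

    for (i = 0; i < NBUFS; i++)
    {
        starpu_data_unregister(handles[i]);
        free(bufs[i]);
    }
    starpu_shutdown();
    return 0;
}
```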

During the stuck period I see no changes in nvidia-smi: memory usage remains the same, while compute utilization stays at 0 percent.

Thank you!

sthibaul commented 5 months ago

To understand what is happening, it would be useful to produce traces, see https://files.inria.fr/starpu/doc/html/OfflinePerformanceTools.html#GeneratingTracesWithFxT

Muxas commented 5 months ago

Here is the trace for such a situation (starpu-1.4 branch, commit f1f915c7e622e8ead7feb7c044947c8bf2b29a3a, remote gitlab.inria.fr/starpu/starpu.git): datarace.paje.trace.tar.gz

sthibaul commented 5 months ago

Mmm, the trace does not show any long period of idleness, only some cases where 500ms are apparently spent in a single allocation or free. How did you end the trace? Normally we do have a SIGINT handler that writes the end of the trace.

Muxas commented 5 months ago

How did you end the trace?

The environment variable STARPU_WATCHDOG_CRASH=1 did it for me. The watchdog timeout was set to 1 second.