daryltucker closed this issue 2 years ago
Thank you for posting the info! That error message is a little misleading - it usually just means that Demucs failed and didn't generate any output (which is why the output folder couldn't be found). I haven't been able to test this app on Linux yet, even though it should work, so I'm sorry that it's unstable. I'll try to test it eventually. Did it fail immediately or did it spend a long time processing before failing?
Same issue here! It processes for ~15 seconds after the split button is clicked and then fails...
Yes, it seems to be doing something for some time (reads "Processing") before I receive this error. Thanks for your insight.
Are there any commands I can run using only demucs, to troubleshoot any problems there? I'd be happy to test a bit, but I'd need some guidance, as I've never really used demucs before.
Thank you for your interest in supporting Linux; just want to help where I can.
demucs /tmp/ID.mp3 -n mdx_extra_q -j 4
RuntimeError: CUDA out of memory. Tried to allocate 480.00 MiB (GPU 0; 3.95 GiB total capacity; 2.03 GiB already allocated; 243.94 MiB free; 2.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
demucs /tmp/ID.mp3 -n mdx_extra_q -j 4
Selected model is a bag of 4 models. You will see that many progress bars per track.
Separated tracks will be stored in /home/daryl/separated/mdx_extra_q
Separating track /tmp/ID.mp3
100%|████████████████████████████████████████████████████████████████████████| 297.0/297.0 [00:22<00:00, 13.22seconds/s]
100%|████████████████████████████████████████████████████████████████████████| 297.0/297.0 [00:23<00:00, 12.41seconds/s]
100%|████████████████████████████████████████████████████████████████████████| 297.0/297.0 [00:23<00:00, 12.45seconds/s]
100%|████████████████████████████████████████████████████████████████████████| 297.0/297.0 [00:22<00:00, 13.18seconds/s]
My computer wants to die, though.
Seems this isn't enough of a fix to get stemroller working all the way, but I felt it was good progress to share.
Interesting, thanks for sharing. Looks like it's an issue with Demucs running out of memory; it would probably work better running on the CPU (no CUDA) with a large swapfile. I'm hoping to add support for switching between CPU and GPU mode soon; for now it just uses whichever your build of Demucs defaults to, but this should be a toggle switch in a future version.
Yes, demucs did work in CPU mode (I believe -d cpu or something).
I didn't test if using CPU with demucs fixed stemroller.
I did test if exporting the variable that worked with demucs fixed stemroller, and it wasn't enough.
I am interested if there is something more that can be done to get GPU working on Linux. Seems like this variable gets lost within stemroller somewhere. Please let me know if you can think of anything I can try out.
I think that whether or not Demucs works on your GPU is basically determined by how much VRAM you have available. So if your GPU doesn't have enough memory to run Demucs, it will crash. (Unless you run it with -d cpu, of course...)
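For reference, the CPU run should just be the same command with the device flag added, something like:
demucs -d cpu /tmp/ID.mp3 -n mdx_extra_q -j 4   # force CPU inference; slower, but uses system RAM (and swap) instead of VRAM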
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
This allowed me to run demucs successfully when using demucs directly. It stops working when using stemroller. It seems like the problem is that this env var gets eaten up by stemroller and not provided to the subshell running demucs (?).
So, I'm not sure if your project would prefer to make a check for minimum requirements, and provide an error to the user if their graphics card doesn't have enough RAM...
Or, detect low VRAM and provide demucs with proper configuration values to allow stemroller to run on lower-end GPUs.
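For what it's worth, on NVIDIA hardware there is at least a quick manual way to see how much VRAM is available, though it's NVIDIA-specific, so probably not something the app itself could rely on cross-platform:
nvidia-smi --query-gpu=memory.total,memory.free --format=csv,noheader   # prints total and free VRAM per GPU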
Thanks for the info. I'm glad you pointed out the issue with the env var not being seen; I'll need to make sure those get passed to demucs correctly at some point. Checking for GPU memory is more involved than just checking regular RAM (it's difficult to make cross-platform), so I'm not sure that will be a practical solution, although I like the idea. Whenever GPU support lands officially, I'll add a toggle switch so it can be enabled or disabled manually (in case it consistently fails, you can fall back to CPU).
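In the meantime, one thing that might be worth ruling out (it may be exactly what you already tried): an exported variable is only inherited by processes started from that same shell, so it has to be present in the environment of the StemRoller process itself before it can ever reach the demucs subprocess. Launching the app from the exporting terminal is one way to test that; the launch command below is only a guess, since I'm not sure how the Linux build is started:
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
./StemRoller   # guess: substitute however you normally start the Linux build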
I even created ~/Music/StemRoller, but I think it's complaining about /tmp directories. Not really sure what could be causing this issue. I tried looking but ran out of time.
entries on processQueue.cjs:140 is an empty list, thus no entries are evaluated.