stemrollerapp / stemroller

Isolate vocals, drums, bass, and other instrumental stems from any song
https://www.stemroller.com
2.62k stars 103 forks

Error: Unable to find Demucs output directory (Linux) #22

Closed daryltucker closed 2 years ago

daryltucker commented 2 years ago
 [ develop | ✚ 1 ]
✘ 11:09 daryl@nifflheim ~/src/stemroller $ npm run dev

> stemroller@1.1.1 dev
> cross-env NODE_ENV=dev STEMROLLER_RUN_FROM_SOURCE=true npm run dev:all

> stemroller@1.1.1 dev:all
> concurrently -k -n=svelte,electron -c='#ff3e00',blue "npm run dev:svelte" "npm run dev:electron"

[electron] 
[electron] > stemroller@1.1.1 dev:electron
[electron] > electron .
[electron] 
[svelte] 
[svelte] > stemroller@1.1.1 dev:svelte
[svelte] > vite dev
[svelte] 
[svelte] 
[svelte]   VITE v3.0.7  ready in 345 ms
[svelte] 
[svelte]   ➜  Local:   http://localhost:5173/
[svelte]   ➜  Network: use --host to expose
[svelte] files in the public directory are served at the root path.
[svelte] Instead of /static/fonts/Mukta/Mukta-Bold.ttf, use /fonts/Mukta/Mukta-Bold.ttf.
[svelte] files in the public directory are served at the root path.
[svelte] Instead of /static/fonts/Mukta/Mukta-Regular.ttf, use /fonts/Mukta/Mukta-Regular.ttf.
[electron] BEGIN processing video "39acedf8e552cea8" - "ID"
[electron] Splitting video "39acedf8e552cea8"; 4 jobs using model "mdx_extra_q"...
[electron] Trace: Error: Unable to find Demucs output directory
[electron]     at findDemucsOutputDir (/home/daryl/src/stemroller/main-src/processQueue.cjs:148:9)
[electron]     at async _processVideo (/home/daryl/src/stemroller/main-src/processQueue.cjs:205:26)
[electron]     at async processVideo (/home/daryl/src/stemroller/main-src/processQueue.cjs:267:5)
[electron]     at processVideo (/home/daryl/src/stemroller/main-src/processQueue.cjs:269:13)

I even created ~/Music/StemRoller, but I think it's complaining about directories under /tmp, which is mounted as tmpfs here:

tmpfs                                           /tmp            tmpfs       defaults,noatime,nosuid,mode=1777      0 0

I tried looking into it but ran out of time. entries at processQueue.cjs:140 is an empty list, so no entries are ever evaluated.
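
For context, judging by the stack trace the failing check is presumably something along these lines (a rough sketch only - I'm guessing at everything except the function name and the error string):

const fs = require('fs')
const path = require('path')

// Hypothetical reconstruction: scan the folder Demucs should have written
// stems into. If Demucs crashed before writing anything, the directory
// listing comes back empty and no output directory can be found.
function findDemucsOutputDir(baseDir) {
  const entries = fs.readdirSync(baseDir, { withFileTypes: true }).filter((entry) => entry.isDirectory())
  if (entries.length === 0) {
    throw new Error('Unable to find Demucs output directory')
  }
  return path.join(baseDir, entries[0].name)
}

If that's roughly right, the error fires whenever the expected folder is empty or missing, regardless of why Demucs produced nothing.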

iffyloop commented 2 years ago

Thank you for posting the info! That error message is a little misleading - it usually just means that Demucs failed and didn't generate any output (which is why the output folder can't be found). I haven't been able to test this app on Linux yet - it should work, but I'm sorry it's unstable. I'll try to test it eventually. Did it fail immediately, or did it spend a long time processing before failing?

EGMartins commented 2 years ago

Same issue here! It processes for ~15 seconds after the split button is clicked and then fails...

daryltucker commented 2 years ago

Yes, it seems to be doing something for some time (reads "Processing") before I receive this error. Thanks for your insight.

Are there any commands I can run with demucs directly to troubleshoot problems on that side? I'd be happy to test a bit, but I'd need some guidance, as I've never really used demucs before.

Thank you for your interest and support running on Linux. Just want to help where I can.

daryltucker commented 2 years ago
demucs /tmp/ID.mp3 -n mdx_extra_q -j 4
RuntimeError: CUDA out of memory. Tried to allocate 480.00 MiB (GPU 0; 3.95 GiB total capacity; 2.03 GiB already allocated; 243.94 MiB free; 2.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
daryltucker commented 2 years ago
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
demucs /tmp/ID.mp3 -n mdx_extra_q -j 4
Selected model is a bag of 4 models. You will see that many progress bars per track.
Separated tracks will be stored in /home/daryl/separated/mdx_extra_q
Separating track /tmp/ID.mp3
100%|████████████████████████████████████████████████████████████████████████| 297.0/297.0 [00:22<00:00, 13.22seconds/s]
100%|████████████████████████████████████████████████████████████████████████| 297.0/297.0 [00:23<00:00, 12.41seconds/s]
100%|████████████████████████████████████████████████████████████████████████| 297.0/297.0 [00:23<00:00, 12.45seconds/s]
100%|████████████████████████████████████████████████████████████████████████| 297.0/297.0 [00:22<00:00, 13.18seconds/s]

My computer wants to die, though.

Seems this isn't enough of a fix to get stemroller working all the way, but it felt like good progress to share.

iffyloop commented 2 years ago

Interesting, thanks for sharing. Looks like it's an issue with Demucs running out of memory; it would probably work better running on the CPU (no CUDA) with a large swapfile. I'm hoping to add support for switching between CPU and GPU mode soon; for now it just uses whichever device your build of Demucs defaults to, but this should be a toggle switch in a future version.
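
To give an idea of what I mean, the toggle would mostly come down to how Demucs gets spawned - roughly like this (a sketch only; spawnDemucs and useCpu are made-up names for illustration, but -d cpu is Demucs's actual device flag):

const { spawn } = require('child_process')

// Hypothetical sketch of a CPU/GPU toggle: when useCpu is set, pass
// Demucs's -d flag to force CPU inference; otherwise Demucs picks its
// default device (CUDA when the build supports it).
function spawnDemucs(inputPath, useCpu) {
  const args = [inputPath, '-n', 'mdx_extra_q', '-j', '4']
  if (useCpu) {
    args.push('-d', 'cpu')
  }
  return spawn('demucs', args)
}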

daryltucker commented 2 years ago

Yes, demucs did work in CPU mode (I believe with -d cpu or something similar). I didn't test whether using CPU mode with demucs fixed stemroller. I did test whether exporting the variable that worked with demucs fixed stemroller, and it wasn't enough.

I'm interested in whether there's anything more that can be done to get the GPU working on Linux. It seems like this variable gets lost somewhere within stemroller. Please let me know if you can think of anything I can try out.

iffyloop commented 2 years ago

I think that whether or not Demucs works on your GPU is basically determined by how much VRAM you have available. So if your GPU doesn't have enough memory to run Demucs, it will crash. (Unless you run it with -d cpu, of course...)

daryltucker commented 2 years ago
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

This allowed me to run demucs successfully when invoking it directly, but it stops working when going through stemroller. It seems like this env var gets eaten up by stemroller and never makes it to the subprocess that runs demucs (?).
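
If that's what's happening, in Node terms it would look something like this (hypothetical - I haven't checked what processQueue.cjs actually does when it spawns demucs):

const { spawn } = require('child_process')
const args = ['/tmp/ID.mp3', '-n', 'mdx_extra_q', '-j', '4']

// If stemroller builds its own env object for the child process, anything
// exported in the parent shell (like PYTORCH_CUDA_ALLOC_CONF) is dropped:
spawn('demucs', args, { env: { PATH: process.env.PATH } })

// Omitting env entirely (or spreading process.env) inherits the parent
// environment, so the exported variable would reach demucs:
spawn('demucs', args, { env: { ...process.env } })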


So, I'm not sure whether your project would prefer to check for minimum requirements and show the user an error if their graphics card doesn't have enough VRAM...

Or to detect low VRAM and pass demucs the proper configuration values to let stemroller run on lower-end GPUs.

iffyloop commented 2 years ago

Thanks for the info. I'm glad you pointed out the issue with the env var - I'll need to make sure those get passed through to demucs correctly at some point. Checking GPU memory is more involved than checking regular RAM (it's difficult to make cross-platform), so I'm not sure that will be a practical solution, although I like the idea. Whenever GPU support lands officially, I'll add a toggle switch so it can be enabled or disabled manually (that way, if GPU mode consistently fails, you can fall back to CPU).
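
For the curious, the query itself is easy on NVIDIA hardware - it's the cross-vendor part that's hard. A sketch like this covers NVIDIA only (AMD and Apple GPUs would each need their own mechanism):

const { execFile } = require('child_process')

// NVIDIA-only sketch: ask the driver for free VRAM in MiB via nvidia-smi.
// --query-gpu and --format are standard nvidia-smi options.
execFile('nvidia-smi', ['--query-gpu=memory.free', '--format=csv,noheader,nounits'], (err, stdout) => {
  if (err) {
    console.log('nvidia-smi not available - assume no NVIDIA GPU, fall back to CPU')
    return
  }
  const freeMiB = parseInt(stdout.trim().split('\n')[0], 10)
  console.log('Free VRAM: ' + freeMiB + ' MiB')
})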