Closed MethanJess closed 4 months ago
Hi there! Thank you for the feedback! First of all, you have CUDA, but has it worked at all with other ML libraries/projects? I've had a hell of a time making it work with PyTorch, but I will update the readme and this issue when I have a more robust process for getting it to work on Windows. Next, some videos have subtitles, but sometimes they're in a bitmap-based format instead of a text-based one. I need to add a feature that uses OCR to turn these subs into text at some point, or add a proper error message in the meantime.
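For what it's worth, the text-vs-bitmap distinction can be detected up front with ffprobe before attempting extraction. A minimal sketch of how such a check might look; the function names and codec list are illustrative, not from weeablind's source:

```python
import json
import subprocess

# Subtitle codecs ffmpeg reports as bitmap-based (these cannot be dumped
# straight to .srt; they would need OCR first).
BITMAP_SUB_CODECS = {"hdmv_pgs_subtitle", "dvd_subtitle", "dvb_subtitle", "xsub"}

def classify_subtitle_codec(codec_name):
    """Return 'bitmap' or 'text' for an ffprobe subtitle codec_name."""
    return "bitmap" if codec_name in BITMAP_SUB_CODECS else "text"

def probe_subtitle_streams(video_path):
    """List (index, codec_name, kind) for every subtitle stream in the file."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "s",
         "-show_entries", "stream=index,codec_name", "-of", "json", video_path],
        capture_output=True, text=True, check=True,
    ).stdout
    streams = json.loads(out).get("streams", [])
    return [(s["index"], s["codec_name"], classify_subtitle_codec(s["codec_name"]))
            for s in streams]
```

With that in place, the importer could show "these subtitles are image-based, OCR not yet supported" instead of failing silently.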
Also, which Coqui model are you using? Since the switch to the @idiap/coqui-ai-TTS fork, I've noticed that multispeaker models with speaker-wav support behave slightly differently. I will look into making this easier to use.
And finally, setting up OCR requires the video_ocr library, which requires py-tesserocr which requires a lot of fiddling and research to set up on Windows. I had to install a wheel file and set a path variable, but haven't added any instructions to make it work yet, because I was thinking of rewriting the library or forking it and switching it to use a more modern, compatible tesseract library.
You have CUDA, but has it worked at all with other ML libraries/projects?
Hi, yes, the GPU has worked for other projects; this is the only one that hasn't.
which Coqui model are you using
The default one, tts_models/en/vctk/vits. Which one should I download? I feel like the list is unnecessarily bloated :c
I've never seen a models option on any Coqui repository before... the one called 2.0.2 has always worked pretty well, but I don't see it in the options.
I just tried to install it in WSL hoping that it might fix the issues, but running
pip install -r requirements-linux310.txt
gives me this error at the end:
× Encountered error while trying to install package.
╰─> wxPython
Hi, I'm not sure about that particular Coqui model; the list just shows every model Coqui reports as available, grouped by language, and many of them are pretty lackluster. But I'm not sure why selecting vctk is un-selecting the voice. I changed something related to that while updating for XTTS, so I'll test it again more thoroughly. And I've never tried with WSL. Does your WSL have a DE that supports GTK applications? It might have something to do with that, because on Linux you need to pass this flag to pip to install wxPython:
-f https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04
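For reference, pip also accepts that flag inside a requirements file, so the WSL install could be made to work without typing it each time. A sketch of a requirements fragment, assuming Ubuntu 22.04 under WSL (the wheel URL must match your distro release):

```
--find-links https://extras.wxpython.org/wxPython4/extras/linux/gtk3/ubuntu-22.04
wxPython
```

With this at the top of a requirements file, a plain pip install -r picks up the prebuilt GTK3 wheel instead of trying (and failing) to compile wxPython from source.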
And tonight I'll probably look into CUDA for Windows. I've run out of storage space pretty hard, so reinstalling large things has been a pain these days, haha. Thanks for your patience, and I'm sorry you're having issues. This is truly my first time managing a project of this scale :O
When I run weeablind.py, I get this error: torchvision is not available - cannot save figures
Then it shows that the GPU is not detected and that OCR is not supported (everything else is supported).
Choosing a different Coqui voice shows the voice name being selected for a second, then the name slowly fades out and leaves me with no voice selected:
[!] Looks like you are using a multi-speaker model. You need to define either a speaker_idx or a speaker_wav to use a multi-speaker model.
Clicking "Run Dubbing" outputs this error: UnboundLocalError: local variable 'i' referenced before assignment. Then it locks me into an inescapable progress bar.
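That UnboundLocalError is the classic symptom of reading a loop variable after a for loop that never ran, which fits here: with no voice selected (or no subtitles found), the list being iterated is empty. A minimal illustration of the pattern and the usual fix; the function is illustrative, not weeablind's actual code:

```python
def last_processed_index(entries):
    """Return the index of the last entry processed, or None if there were none."""
    i = None                  # initialize so 'i' exists even if the loop never runs
    for i, entry in enumerate(entries):
        pass                  # ... process entry ...
    # Without the initialization above, an empty `entries` would raise:
    # UnboundLocalError: local variable 'i' referenced before assignment
    return i
```

Guarding the dubbing loop this way (and surfacing "nothing to dub" to the UI) would also prevent the stuck progress bar.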
Also, importing a video gives three errors:
Output file does not contain any stream
Error opening output file C:\Apps\weeablind\output\01 Setting Up an Optimized Environment for Drawing.srt. Error opening output files: Invalid argument
{'status': 'subless'}
(even though I imported a video with subtitles in it)
I have FFmpeg, MSVC Build Tools, and CUDA all installed... (I also did the setup and all.) (I also had problems with generating subtitles.)