DanRuta / xVA-Synth

Machine learning based speech synthesis Electron app, with voices from specific characters from video games
GNU General Public License v3.0

README.md instructions are not sufficient for a fresh install #11

Open harshhpareek opened 3 years ago

harshhpareek commented 3 years ago

I tried setting this up on a fresh Linux install and found that the steps in the README are incomplete for a from-scratch setup. The steps mentioned are:

npm install
npm start
# source $VIRTUALENV_HOME/bin/activate # optional
pip3 install -r requirements.txt

I had to take the following additional steps to get it working:

Without doing this, the "loading fastpitch model" dialog appears but never progresses. Electron should show an error dialog here instead of only printing the error to a log file, but that is a separate issue from the one discussed here. I believe the above steps are sufficient because the DEBUG* files and the xVA-Synth\server.log file contain no errors when I try to generate sounds. (Side note: I am on Linux, so the \ above caused the file to be created in the parent directory; use pathlib in Python 3 to make this work across platforms.)
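For the side note above, a minimal sketch of building that log path with pathlib instead of a hard-coded backslash (the logging setup here is illustrative, not the project's actual code):

from pathlib import Path
import logging

# Join the path with pathlib so it resolves correctly on both Windows and Linux,
# instead of hard-coding "xVA-Synth\server.log".
app_dir = Path("xVA-Synth")
app_dir.mkdir(exist_ok=True)
log_path = app_dir / "server.log"

logging.basicConfig(filename=str(log_path), level=logging.DEBUG)
logging.debug("server.log created at %s", log_path)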

Resolution:

harshhpareek commented 3 years ago

In v1.0.3, I had to add models/waveglow_256channels_universal_v4.pt from the nexus page manually to get this working.

DanRuta commented 3 years ago

Thank you for your detailed post regarding this. I admit, this section was a little bare.

It is true that CUDA and PyTorch are necessary dependencies. I am unsure what the Electron quick-start is required for.

Models can be downloaded from the nexus pages, and they have the correct file structure required. The GPU error is caused by an earlier error in the code that prevents some other code from running. That earlier error is the missing waveglow checkpoint, which, as you correctly point out, must be downloaded from NVIDIA and placed in the models folder.
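As a rough illustration of that failure mode (the checkpoint path is the one named earlier in the thread; the loading code is a sketch, not the app's actual code):

from pathlib import Path
import torch

# If the waveglow checkpoint is missing, fail with an explicit message instead
# of letting a later, unrelated-looking GPU error surface.
checkpoint = Path("models") / "waveglow_256channels_universal_v4.pt"
if not checkpoint.is_file():
    raise FileNotFoundError(
        f"Missing waveglow checkpoint: {checkpoint}. "
        "Download it from NVIDIA and place it in the models folder."
    )

waveglow = torch.load(str(checkpoint), map_location="cpu")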

I use v2.0.0 (this is an old project); keep it there for behaviour consistent with mine. I use CUDA 10.1, but this shouldn't matter, so long as it matches the PyTorch version you download.
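A quick way to confirm that the installed PyTorch build and the local CUDA setup agree with each other, as suggested above:

import torch

# Print the PyTorch version, the CUDA version it was built against, and whether
# a GPU is actually visible.
print("PyTorch:", torch.__version__)
print("Built against CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))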

harshhpareek commented 3 years ago

Regarding the Electron quick-start: I meant that some dependency is missing from the README steps, which that project installs.

datatypevoid commented 1 year ago

Thank you @harshhpareek, this really got me rolling. A few more notes for anyone else who steps down this path:

DanRuta commented 1 year ago

For the ffmpeg thing, I ship ffmpeg.exe in the compiled version on Steam / Nexus Mods. For Linux, I suppose it wouldn't help anyway even if I did include it in the repo. I don't use Linux, but rather than changing code, an easier option might be to place a symlink between the location where the exe would be and wherever ffmpeg is installed on your Linux system (maybe; I'm not sure).
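A rough sketch of that symlink idea, assuming a system ffmpeg is on the PATH; the expected ffmpeg.exe location is a hypothetical placeholder, so point it at wherever the app actually looks on your install:

import shutil
from pathlib import Path

# Find the system ffmpeg and link it to the (hypothetical) location where the
# app would expect the bundled ffmpeg.exe to live.
system_ffmpeg = shutil.which("ffmpeg")
expected_location = Path("ffmpeg.exe")  # hypothetical path inside the app folder

if system_ffmpeg and not expected_location.exists():
    expected_location.symlink_to(system_ffmpeg)
    print(f"Linked {expected_location} -> {system_ffmpeg}")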

The waveglow models are a legacy thing; they are no longer needed as of v2.0. You can still use them if you wish, however. Nowadays, the per-voice HiFi-GAN vocoders are preferable, as they are fine-tuned specifically to each voice rather than relying on an off-the-shelf, one-size-fits-all waveglow vocoder (and they are faster). In upcoming model versions, I think this will go away too, with the models becoming end-to-end.

I'm working on the next major version (v3), and I'll finally update the Electron version to make that easier. Are you sure about tqdm not being available on Linux?

datatypevoid commented 1 year ago

Yeah, you're probably right about symlinking; I was going for quick and dirty to make sure it would work on my setup. You are also correct about tqdm; that misunderstanding was the result of something else, which I realized after installing on a second machine this morning.

On the topic of waveglow: I don't have options for any vocoders besides WaveGlow, Big WaveGlow, and quick-and-dirty. How can I enable the HiFi-GAN vocoders (sorry if I've just missed the instructions; I will have another look, of course)? For testing, I am only using the Female Dunmer voice from Morrowind, just to note in case that has something to do with it.

EDIT: Also, how rude of me not to lead with this, but thank you for open-sourcing this incredible project, and for sharing your models and research tidbits too. That is mighty generous; words can't even express the gratitude.

DanRuta commented 1 year ago

The HiFi-GAN models are the xxxxx.hg.pt files, where xxxxx is the voice ID. Most, if not all, new voices since about v1.1 of the app have been trained and released together with this additional file (on Nexus, the description will show "Model: FastPitch1.1+HiFi"). When using the app, the default behaviour (which can be changed in settings) is to use HiFi-GAN when the voice is loaded, if available, though you can change the vocoder in the "Vocoder" dropdown (the lightning bolt next to it shows that it's available).
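A small helper to check which installed voices ship a HiFi-GAN vocoder, based on the "voice ID".hg.pt naming above; the models directory path is an assumption, so adjust it to wherever your voice files live:

from pathlib import Path

# Scan the models folder (assumed location) for HiFi-GAN vocoder files and
# print the voice IDs they belong to.
models_dir = Path("models")

for hifi_file in sorted(models_dir.rglob("*.hg.pt")):
    voice_id = hifi_file.name[: -len(".hg.pt")]
    print(f"{voice_id}: HiFi-GAN vocoder available ({hifi_file})")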

datatypevoid commented 1 year ago

Thank you kindly. I will try to route any other questions through Discord to prevent cluttering GitHub with non-issues.