daniilrobnikov / vits2

VITS2: Improving Quality and Efficiency of Single-Stage Text-to-Speech with Adversarial Learning and Architecture Design
https://vits-2.github.io/demo/
MIT License

ModuleNotFoundError | Step 4 of Custom Dataset #7

Open 641i130 opened 1 year ago

641i130 commented 1 year ago

I've gotten to step 4 of making a custom dataset (skipping the LJ Speech and VCTK steps) and I've stumbled across a ModuleNotFoundError. I'm not too sure how this is happening.

(vits2) root@hugeserver:/mnt/vits2# python preprocess/mel_transform.py --data_dir audio/ -c datasets/custom_james_voices/config.yaml
Traceback (most recent call last):
  File "/mnt/vits2/preprocess/mel_transform.py", line 13, in <module>
    from utils.hparams import get_hparams_from_file, HParams
ModuleNotFoundError: No module named 'utils.hparams'; 'utils' is not a package
(vits2) root@hugeserver:/mnt/vits2# ls
audio     data_utils.py  inference_batch.ipynb  LICENSE    model       README.md         text         train.py
datasets  figures        inference.ipynb        losses.py  preprocess  requirements.txt  train_ms.py  utils
(vits2) root@hugeserver:/mnt/vits2# 

Other useful information that might help: Ubuntu 22.04.3 LTS, RTX 3090. We're using conda, following the steps in the README.

# echo $PYTHONPATH results in: /mnt/vits
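For reference: the message "'utils' is not a package" means Python resolved utils to a plain utils.py module somewhere on sys.path instead of the utils/ directory shown in the ls output, and the PYTHONPATH above points at /mnt/vits rather than /mnt/vits2. A minimal fix, assuming the checkout path from the traceback:

export PYTHONPATH=/mnt/vits2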

641i130 commented 1 year ago

Additionally, it seems config.yaml is still being parsed as a JSON file.

p0p4k commented 1 year ago

Make an empty __init__.py file under the utils folder.
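For reference, a minimal way to do that from the repo root, assuming the layout from the ls output above:

touch utils/__init__.py

An empty __init__.py marks utils/ as a regular package so that from utils.hparams import ... can resolve; it won't help, though, if a different utils.py earlier on sys.path is shadowing the package.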

K2O7I commented 10 months ago

You can try putting sys.path.append("/mnt/vits2") (together with import sys) inside mel_transform.py.
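For instance, near the top of preprocess/mel_transform.py, before the failing import (a sketch; the /mnt/vits2 path comes from the original report, so adjust it to your own checkout):

import sys
sys.path.append("/mnt/vits2")  # repo root, so the utils/ package becomes importable
from utils.hparams import get_hparams_from_file, HParams  # the import that failed above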

HuuHuy227 commented 9 months ago

Any solution for this?

brambox commented 5 months ago

If it's a Jupyter notebook or Colab, try:

import os
os.environ['PYTHONPATH'] = "/path/to/your/vits2/folder"
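One caveat: PYTHONPATH is only read when an interpreter starts, so setting it with os.environ affects subprocesses launched afterwards (e.g. !python preprocess/mel_transform.py ...), not imports in the already-running kernel. For the current notebook session, appending to sys.path works directly; a minimal sketch with a placeholder path:

import sys
sys.path.append("/path/to/your/vits2")  # placeholder; use your actual checkout location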