nguyenhoanganh2002 / XTTSv2-Finetuning-for-New-Languages


Custom dataset #7

Closed. Utk-bot closed this issue 3 days ago.

Utk-bot commented 6 days ago

@nguyenhoanganh2002 @anhnhorai

CUDA_VISIBLE_DEVICES=0 python train_dvae_xtts.py --output_path=checkpoints/ --train_csv_path=datasets/metadata_train.csv --eval_csv_path=datasets/metadata_eval.csv --language="hi" --num_epochs=5 --batch_size=512 --lr=5e-6

/disk/XTTSv2-Finetuning-for-New-Languages/train_dvae_xtts.py:89: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  dvae.load_state_dict(torch.load(dvae_pretrained), strict=False)
/disk/XTTSv2-Finetuning-for-New-Languages/TTS/tts/layers/tortoise/arch_utils.py:336: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  self.mel_norms = torch.load(f)
[BaseDatasetConfig(formatter='coqui', dataset_name='large', path='datasets', meta_file_train='metadata_train.csv', ignored_speakers=None, language='hi', phonemizer='', meta_file_val='metadata_eval.csv', meta_file_attn_mask='')]
This modu;e is <module 'TTS.tts.datasets' from '/disk/XTTSv2-Finetuning-for-New-Languages/TTS/tts/datasets/__init__.py'>
 | > [!] 4808 files not found
The data in meta data formatiing is []
Traceback (most recent call last):
  File "/disk/XTTSv2-Finetuning-for-New-Languages/train_dvae_xtts.py", line 208, in <module>
    trainer_out_path = train(
  File "/disk/XTTSv2-Finetuning-for-New-Languages/train_dvae_xtts.py", line 96, in train
    train_samples, eval_samples = load_tts_samples(
  File "/disk/XTTSv2-Finetuning-for-New-Languages/TTS/tts/datasets/__init__.py", line 125, in load_tts_samples
    assert len(meta_data_train) > 0, f" [!] No error her ehre her hertraining samples found in {root_path}/{meta_file_train}"
AssertionError:  [!] No error her ehre her hertraining samples found in datasets/metadata_train.csv

nguyenhoanganh2002 commented 5 days ago


Are your metadata_train.csv and metadata_eval.csv files formatted as described in my README? They should look like the sample below (a small validation sketch follows it):

audio_file|text|speaker_name
wavs/xxx.wav|How do you do?|@X
wavs/yyy.wav|Nice to meet you.|@Y
wavs/zzz.wav|Good to see you.|@Z
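
For readers hitting the same "[!] 4808 files not found" / "No training samples found" output: that usually means either the delimiter/column layout is off or the audio_file paths do not resolve relative to the dataset folder. Below is a minimal, hedged sanity-check sketch (not part of this repo); the dataset_root and metadata filenames are assumptions taken from the command above, so adjust them to your layout.

```python
# Hedged sketch: validate a pipe-delimited metadata CSV before training.
# Assumes the layout shown above (audio_file|text|speaker_name) with
# audio_file paths relative to the dataset root, e.g. "wavs/xxx.wav".
import csv
from pathlib import Path

dataset_root = Path("datasets")                 # assumption: folder passed to the training script
metadata_path = dataset_root / "metadata_train.csv"

missing, bad_rows = [], []
with open(metadata_path, encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="|")
    header = next(reader)                       # expected: ["audio_file", "text", "speaker_name"]
    for line_no, row in enumerate(reader, start=2):
        if len(row) != 3:                       # wrong column count, e.g. a stray "|" in the text
            bad_rows.append((line_no, row))
            continue
        wav_path = dataset_root / row[0]
        if not wav_path.is_file():
            missing.append(str(wav_path))

print(f"rows with wrong column count: {len(bad_rows)}")
print(f"missing wav files: {len(missing)}")
print("first few missing:", missing[:5])
```

If nearly every file shows up as missing, the audio_file column probably needs to be rewritten so it is relative to the folder passed as the dataset path (datasets/ in the command above).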
Utk-bot commented 5 days ago

@nguyenhoanganh2002 @anhnhorai Thank you for the reply and for making this repo available, it has been a huge help. Yes, I was able to train the model in Hindi, but I want to improve the results. Can you tell me which hyperparameters to choose? The defaults are only 5 epochs for both DVAE training (batch size = 512) and GPT training. I want results comparable to a Bark voice clone; creating presets in Bark for a new speaker does not work well. I will send the training logs shortly.

nguyenhoanganh2002 commented 5 days ago


The default hyperparameters for DVAE and GPT training were chosen based on my experience. Additionally, checkpoints are saved based on the best loss on the validation set (metadata_eval.csv).
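
For readers following along, the "best checkpoint by validation loss" policy he describes boils down to a loop like the one below. This is an illustrative, self-contained sketch with dummy stand-ins, not the repo's actual trainer code; train_one_epoch and evaluate here are hypothetical placeholders for the real passes over metadata_train.csv and metadata_eval.csv.

```python
# Illustration of "checkpoints are saved based on the best validation loss".
# The model and the two helper functions are dummies so the loop runs on its own.
import random
import torch
import torch.nn as nn

model = nn.Linear(4, 4)          # placeholder for the real DVAE/GPT model

def train_one_epoch(m):          # hypothetical: one pass over the training set
    pass

def evaluate(m):                 # hypothetical: loss on the validation set
    return random.random()

best_loss = float("inf")
for epoch in range(5):           # default num_epochs=5 in the commands above
    train_one_epoch(model)
    val_loss = evaluate(model)
    if val_loss < best_loss:     # only the best-scoring weights are kept on disk
        best_loss = val_loss
        torch.save(model.state_dict(), "best_model.pth")
```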

Utk-bot commented 1 day ago

@nguyenhoanganh2002 @anhnhorai Hi again. The original model file is 1.9 GB, but after training the checkpoint is 5.6 GB and I cannot use it with Coqui TTS.

nguyenhoanganh2002 commented 1 day ago


After training, Coqui saves additional tensors (the DVAE, among other things). However, when loading the model for inference, only the GPT and HiFi-GAN weights are loaded. Could you provide the code and error logs showing the issues you're experiencing with the model after training with Coqui TTS?
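
For anyone hitting the same size jump: the extra gigabytes typically come from training-only tensors stored alongside the model weights (optimizer state, the DVAE he mentions, and so on). Below is a hedged sketch for inspecting the checkpoint and, if the layout matches, saving a slimmed copy. The "model" and "optimizer" key names and the file paths are assumptions about the saved dict, not confirmed by this repo, so print the keys first and adapt as needed.

```python
# Hedged sketch: inspect a post-training checkpoint and optionally strip
# training-only entries so only the model weights remain.
import torch

ckpt_path = "checkpoints/your_run/best_model.pth"   # assumption: adjust to your output path
ckpt = torch.load(ckpt_path, map_location="cpu")

if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))      # e.g. may include "model", "optimizer", ...

# If the weights live under a "model" key, keep only that entry and drop the
# rest (optimizer state etc.), which is usually where the extra size comes from.
if isinstance(ckpt, dict) and "model" in ckpt:
    slim = {"model": ckpt["model"]}
    torch.save(slim, "checkpoints/your_run/model_only.pth")
    print("saved slimmed checkpoint")
```

If inference still fails after that, posting the exact load call and the resulting stack trace (as he asks above) will make the mismatch much easier to pin down.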