doryashar opened 1 month ago
Which model did you use?
I don't see any specific model that you are referring to. If you are talking about the pretrained_model, then I use CosyVoice-300M.
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
  echo "Run inference. Please make sure utt in tts_text is in prompt_data"
  for mode in sft zero_shot; do
    python cosyvoice/bin/inference.py --mode $mode \
      --gpu 0 \
      --config conf/cosyvoice.yaml \
      --prompt_data data/test-clean/parquet/data.list \
      --prompt_utt2data data/test-clean/parquet/utt2data.list \
      --tts_text `pwd`/tts_text.json \
      --llm_model $pretrained_model_dir/llm.pt \
      --flow_model $pretrained_model_dir/flow.pt \
      --hifigan_model $pretrained_model_dir/hift.pt \
      --result_dir `pwd`/exp/cosyvoice/test-clean/$mode
  done
fi
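For reference, the echo above implies that tts_text is keyed by utterance ids that also appear in prompt_data. A minimal sketch of writing such a file, assuming that {utt_id: [text, ...]} layout (the exact schema may differ):

# Hypothetical tts_text.json generator; the {utt_id: [text, ...]} structure is an
# assumption based on the "utt in tts_text is in prompt_data" hint above.
import json

tts_text = {
    "1089-134686-0004": [
        "Hello, this is a test sentence for CosyVoice inference."
    ]
}

with open("tts_text.json", "w", encoding="utf-8") as f:
    json.dump(tts_text, f, ensure_ascii=False, indent=2)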
Actually, I don't even see a model.train function anywhere in this repo. Is that expected?
I couldn't find model.train either. I understand that the transition between train and inference modes is handled differently. Would you try other variants of CosyVoice-300M for your problem?
I would expect to see the backprop/train logic inside the model definition.
If I remember right, these are in executor.py.
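If it helps: in PyTorch projects laid out like this, model.train() is just nn.Module's mode switch, and the optimization loop usually lives in a separate executor/trainer rather than in the model class. A rough sketch of what such a loop typically looks like (illustrative names, not the repo's exact code):

# Hypothetical executor-style training step; names are illustrative only.
def train_one_epoch(model, optimizer, scheduler, dataloader, device):
    model.train()  # nn.Module mode switch; the only "train" call on the model itself
    for batch in dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        info = model(batch, device)  # forward pass, assumed to return a dict with 'loss'
        loss = info['loss']
        optimizer.zero_grad()
        loss.backward()              # backprop happens here, outside the model definition
        optimizer.step()
        scheduler.step()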
Change model.inference to model.tts; we will fix it.
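For anyone hitting this before the fix lands, the change on the caller side would look roughly like this; model_input is a placeholder for whatever arguments inference.py already builds per utterance:

# Sketch of the suggested one-line change in cosyvoice/bin/inference.py.
# model_input is a placeholder; keep the arguments the script already passes.
model = CosyVoiceModel(configs['llm'], configs['flow'], configs['hift'])

# before: raises AttributeError, CosyVoiceModel has no 'inference'
# model_output = model.inference(**model_input)

# after, per the suggestion above
model_output = model.tts(**model_input)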
This issue is stale because it has been open for 30 days with no activity.
I am trying to run training (CosyVoice/examples/libritts/cosyvoice/run.sh). During the inference step I get this error: 'CosyVoiceModel' object has no attribute 'inference'. Looking inside inference.py I see model = CosyVoiceModel(configs['llm'], configs['flow'], configs['hift']), while in cli/model.py the CosyVoiceModel class has no inference method.
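In case it helps with debugging, a quick way to confirm which entry points the class actually exposes (assuming configs is loaded the same way inference.py loads conf/cosyvoice.yaml):

# Hypothetical sanity check; 'configs' should be loaded exactly as in
# cosyvoice/bin/inference.py.
from cosyvoice.cli.model import CosyVoiceModel

model = CosyVoiceModel(configs['llm'], configs['flow'], configs['hift'])
print([m for m in dir(model) if not m.startswith('_')])    # list public methods
print(hasattr(model, 'inference'), hasattr(model, 'tts'))  # which entry point exists?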