Darcy0218 opened this issue 3 months ago
I can't quite make sense of this. I see from your codebase that a style embedding is required, which is loaded from a .style.pt file. Did you use a model to generate the style embedding (.style.pt) before training? If so, which model was it?
OK, I took a closer look (I ran this baseline to help out the second author). This should be the style embedding, i.e. a style representation extracted with BERT. I think there are two ways to handle it: one is to pre-extract the relevant representations following the original logic and save them; the other is to load the pretrained BERT model directly during training. A reference sketch is shown below.
In addition, having to work overtime on weekends is frustrating.
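A minimal sketch of the first option (pre-extracting style embeddings and caching them as .style.pt files). This is an assumption-heavy illustration, not the repo's actual pipeline: the encoder checkpoint ("bert-base-uncased"), the mean pooling, and the output filename are placeholders; the original code may use a different model, pooling, or text preprocessing.

```python
# Sketch: extract a style embedding from a style prompt with a BERT-style
# encoder and save it as <utt_id>.style.pt (filename is a placeholder).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def extract_style_embed(style_prompt: str) -> torch.Tensor:
    inputs = tokenizer(style_prompt, return_tensors="pt", truncation=True)
    outputs = encoder(**inputs)
    # Mean-pool the token embeddings (masked) into a single style vector.
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    embed = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
    return embed.squeeze(0)

style_embed = extract_style_embed("A calm female voice speaking slowly.")
torch.save(style_embed, "utt_id.style.pt")  # replace with the dataset's naming scheme
```

The second option would skip the caching step and call an encoder like this inside the training loop instead, at the cost of extra compute per step.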
Haha, thanks for your effort, man. I appreciate the reply.
Hello, may I ask: is style_embed obtained directly from the get_style_embed function in inference.py? Is any preprocessing applied to the style_prompt text?
FileNotFoundError: [Errno 2] No such file or directory: '/data4/wuyikai/data/TextrolSpeech/LibriTTS/LibriTTS/train-clean-360/6300/39660/6300_39660_000014_000000.style.pt'