-
### Steps to reproduce
Install `espeak-ng`, run `spd-conf -u`, and set `espeak-ng` as the default speech synthesis engine.
### Obtained behavior
`spd-say -O` lists `espeak-ng-mbrola`. I never installed `…
-
I am trying to convert a DeepVoice3 single-speaker TTS model to ONNX format. I obtained a pre-trained model, trained on the LJSpeech dataset, from the deepvoice3_pytorch GitHub repository: https://github.com/r9y9/deepvoice3_pytorch.
Refe…
-
### Describe the bug
I trained a Tacotron2 GST model on the LJSpeech dataset and my own emotional dataset for 100k steps, using the `use_gst=True,
gst=GSTConfig(),` options in training.
### To Reproduce
…
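For context, here is a hedged sketch of how those GST options typically sit in a Coqui TTS Tacotron2 training config; the class names and import paths assume a recent Coqui TTS release and may differ in the version used here.

```python
# Hedged config fragment (assumes a recent Coqui TTS release):
# enabling Global Style Tokens in a Tacotron2 training config.
from TTS.tts.configs.shared_configs import GSTConfig
from TTS.tts.configs.tacotron2_config import Tacotron2Config

config = Tacotron2Config(
    use_gst=True,     # enable the GST reference encoder / style token layer
    gst=GSTConfig(),  # default GST hyperparameters (token count, heads, ...)
    run_name="tacotron2_gst_ljspeech",  # hypothetical run name
)
```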
-
Hello. I ran the code to check the inference results of the pre-trained model, but I get an error because `cmudict` does not exist in the `text` folder. How should I resolve this?
Traceback (most recent call last):
File "synthesis.py", line 1,…
-
### News
- Conferences
- CVPR 2022: 6.19 ~ 24 (New Orleans)
- Large-company-driven AI investment (SK, LG, KT, etc.)
- [Scatter Lab to develop an AI ethics checklist with the government](https://n.news.naver.com/mnews/article/092/0002259047?sid=105)
### …
-
How do I train a multi-emotional vocoder?
I have English multi-emotional audio data and a Chinese TTS model. How can I transfer the emotional style to the Chinese model using the English data?
wac81 updated
2 years ago
-
For keeping track of proposed overlay and effect ideas to be used in VN Mode.
-
Hi, Mr. Zhou,
I have read the paper; the idea of emotion density control is very attractive.
I am not a native English speaker, but I feel the proposed samples at https://kunzhou9646.github.io/Emovox…
-
Hi, thanks for sharing your work!
Could you provide a README in English?
-
### News
- Conferences
- ICLR 2022:
- NAVER CLOVA presentation schedule: https://naver-career.gitbook.io/en/teams/clova-cic/events/naver-clova-iclr-2022
- ML in Korea: https://naver-career.gitbook.io/en/teams/cl…