-
The extension currently supports only English. To make it accessible to a broader global audience, implement multi-language support using Chrome’s i18n API. This will allow users to inter…
-
This is something I _haven’t_ implemented … yet.
It would be great to have epresent open two frames, so the main one could be put on a second display and a secondary frame could stay on the laptop, sh…
-
### Describe the feature you'd like to request
To make our blog website more inclusive and accessible to a global audience, we propose adding multi-lingual support. This feature will allow users to…
-
Hi @Wendison, thank you so much for your excellent work. Very nice paper.
When I saw your reply on the issue below, it motivated me to go further.
https://github.com/Wendison/VQMIVC/i…
-
## Paper title (as written)
TRANSFORMER-BASED MULTI-ASPECT MULTI-GRANULARITY NON-NATIVE ENGLISH SPEAKER PRONUNCIATION ASSESSMENT
## In one sentence
Proposal and evaluation of a Transformer-based method that assesses the pronunciation of non-native English speakers across multiple aspects and at multiple granularities.
### Paper link
…
-
I'm trying to fine-tune from the pretrained model pflow-2000.ckpt on a custom multi-speaker dataset in German. For the first 16 epochs training ran without issue, but now I get many RuntimeErrors like the one below. Does…
-
See: https://github.com/eschulte/epresent/issues/37
-
This issue contains the discussion about supporting multiple tracks.
In my eyes, multi-track support has multiple facets:
## Purpose
What are multi-track recordings used for / what use-cas…
-
Hi, I read about your multi_speaker implementation of Tacotron2. It means different speakers correspond to different text inputs, and you did not use a speaker embedding. Am I right? If so, the speak…
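For reference, the kind of speaker-embedding conditioning I have in mind looks roughly like the sketch below. This is my own illustration of a common approach, not code from this repository; the names (`SpeakerConditioning`, `n_speakers`, `spk_dim`, `enc_dim`) and the assumption that encoder outputs are shaped `(batch, time, enc_dim)` are hypothetical.

```python
import torch
import torch.nn as nn


class SpeakerConditioning(nn.Module):
    """Hypothetical sketch: add a learned speaker embedding to encoder
    outputs so one Tacotron2 model can serve several speakers.
    Not the implementation used in this repo."""

    def __init__(self, n_speakers: int, spk_dim: int, enc_dim: int):
        super().__init__()
        self.spk_table = nn.Embedding(n_speakers, spk_dim)  # one vector per speaker
        self.proj = nn.Linear(spk_dim, enc_dim)              # match encoder width

    def forward(self, encoder_outputs: torch.Tensor,
                speaker_ids: torch.Tensor) -> torch.Tensor:
        # encoder_outputs: (batch, time, enc_dim); speaker_ids: (batch,)
        spk = self.proj(self.spk_table(speaker_ids))          # (batch, enc_dim)
        return encoder_outputs + spk.unsqueeze(1)             # broadcast over time


if __name__ == "__main__":
    cond = SpeakerConditioning(n_speakers=8, spk_dim=64, enc_dim=512)
    enc = torch.randn(2, 100, 512)             # fake encoder outputs
    out = cond(enc, torch.tensor([0, 3]))      # two utterances, two speakers
    print(out.shape)                           # torch.Size([2, 100, 512])
```

With this kind of conditioning a single set of weights is shared across speakers, so separate text pools per speaker would not be needed.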
-
First of all, thank you for your excellent work. The dataset I currently have contains 8 speakers, each with about 20 minutes of audio. I trained according to your method, and the results of the mode…