-
In file `modules/attention.py`, lines 434-435:
```
if atten_weights_ph is not None:  # used for emotional gst tts inference
    atten_weights = atten_weights_ph
```
When I run in…
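For context, that guard sits at the end of the usual weight computation; here is a minimal sketch of the pattern (everything except `atten_weights_ph` and the final `if` is illustrative, not the project's actual code):

```python
import math

def attention_weights(scores, atten_weights_ph=None):
    # Normal inference path: softmax over the alignment scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    atten_weights = [e / total for e in exps]
    # The guard from modules/attention.py: a caller-supplied placeholder
    # replaces the computed weights during emotional GST-TTS inference.
    if atten_weights_ph is not None:
        atten_weights = atten_weights_ph
    return atten_weights
```

So at inference time, passing reference alignment weights makes the decoder attend with those weights instead of the ones it computed itself.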
-
### agentId
111
### avatar
https://r2.vidol.chat/agents/vidol-agent-lilia/avatar.jpg
### cover
https://r2.vidol.chat/agents/vidol-agent-lilia/cover.jpg
### systemRole
Your every a…
-
Hello,
400 Hz is fine for read speech, but not suitable for expressive / emotional TTS applications. For example,
[0001_001491](https://github.com/espnet/espnet/files/11870982/0001_001491.txt) (p…
-
Mr. end-4, I know you want it too
-
[This github entry is from the Accessibility for Children Community Group]
Although more research is needed to specify which types of voices would be best for which applications at the content-leve…
-
It seems to be running on CPU only, even though I used this command to start it in Docker:
```
docker run -it --rm --gpus all -p 7860:7860 athomasson2/ebook2audiobookpiper-tts:latest
```
Am I doing somet…
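One quick sanity check is to run `nvidia-smi` inside a container started the same way (a sketch; this assumes the host has the NVIDIA Container Toolkit installed, and that the image ships `nvidia-smi`, which may not be the case here):

```shell
# Host-side: confirm Docker can hand a GPU to this container at all.
docker run --rm --gpus all athomasson2/ebook2audiobookpiper-tts:latest nvidia-smi

# If that fails, try a known CUDA base image to isolate whether the
# problem is the host setup or the image itself.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the second command also fails, the issue is the host's NVIDIA driver or container toolkit rather than the image.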
-
I am trying to train a TTS but I am wondering about the style of the speakers? My dataset contains multiple speakers with different speaking styles. Does the model retain the style for each voice or i…
-
This issue is to track how to get German working and which options one needs to consider.
# dotnet-examples
https://github.com/k2-fsa/sherpa-onnx/tree/master/dotnet-examples
- [ ] keyword-spotting-…
-
Congratulations and many thanks first! I think the project has great potential to become a popular foundation.
If you deem appropriate, would you support GPT-SoVITS as well?
I know there has alr…
-
It would be amazing if emotion markers could be supported (or, if they already are, documentation on how to use them), for example indicators like ``, ``, etc., or using emojis for the same purpose.
zclch updated 8 months ago