-
I replicated the results of VITS and Matcha-TTS on a single-speaker Chinese dataset and found that the timbre similarity of Matcha-TTS is lower than that of VITS, especially in the high-frequency deta…
-
app/src/main/assets
├── frontend
│ ├── final.ort
│ ├── frontend.flags
│ ├── g2p_en
│ │ ├── README.md
│ │ ├── cmudict.dict
│ │ ├── model.fst
│ │ └── phones.sym
│ ├── le…
-
Hi everyone,
I run gpt-vits and metahuman-stream on the same RTX 3090 machine, using about 20 GB of VRAM, which is acceptable. But gpt-vits takes 3-4x as long as edgetts, so once a sentence is spoken, the character has to wait for the next sentence's TTS to finish before it can continue.
I tested gpt-vits on its own: nvidia-smi shows 10-12% utilization and 3-4 GB of VRAM, and the speed is not noticeably different from running alongside metahuman, so it is probably not a resource con…
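One common way to hide that TTS latency (assuming sentence-level chunks) is to synthesize the next sentence while the avatar is still playing the current one. A minimal producer/consumer sketch; `synthesize` and `play` are hypothetical placeholders for the GPT-SoVITS call and the metahuman-stream playback, not the projects' real APIs:

```python
import queue
import threading

def synthesize(sentence):
    # Placeholder: call the TTS engine here and return audio bytes.
    return f"audio({sentence})"

def play(audio):
    # Placeholder: hand audio to the avatar for lip-synced playback.
    print(audio)

def pipeline(sentences):
    q = queue.Queue(maxsize=2)  # small buffer: next sentence is ready early

    def producer():
        for s in sentences:
            q.put(synthesize(s))  # TTS overlaps with playback of earlier audio
        q.put(None)               # sentinel: no more sentences

    threading.Thread(target=producer, daemon=True).start()
    played = []
    while (audio := q.get()) is not None:
        played.append(audio)
        play(audio)
    return played
```

With this shape, the 3-4x slower TTS only delays the very first sentence; every later sentence is already synthesized (or in progress) by the time the previous one finishes playing.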
-
With the official bert-vits package, memory keeps growing during inference, but this project's does not. What improvement was made?
Also, if I only want to use the bert-vits part when starting this project, without loading the other modules, how should I do that?
Thanks to the author for open-sourcing this.
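For context (an assumption about the cause, not necessarily this project's actual change): steadily growing memory during PyTorch inference is often autograd bookkeeping accumulating because the model is called outside a no-grad context. A minimal sketch of the usual fix, with `torch.nn.Linear` standing in for the real model:

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the bert-vits model

def infer(x):
    # inference_mode() disables autograd, so no computation graph is
    # retained between calls and memory stays flat across repeated runs.
    with torch.inference_mode():
        return model(x)

out = infer(torch.randn(1, 8))
```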
-
The onnx_export.py script cannot export the v2 model:
```shell
python onnx_export.py
```
Output:
```text
G:\GPT-SoVITS\.venv\Lib\site-packages\gradio_client\documentation.py:103: UserWarning: Could not get documentation grou…
-
It might be helpful to integrate the code from vits.cpp into your project:
https://github.com/maxilevi/vits.cpp
This would allow you to run VITS / Piper models.
-
The text GPT returns contains too many spaces that are useless to VITS; they hurt VITS performance and can even make the bot reply directly with "返回文本消息过长" ("returned text message is too long").
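A simple pre-processing step can strip that padding before the text reaches VITS (and before any length check). A minimal sketch; `clean_for_tts` is a hypothetical helper, not part of either project:

```python
import re

def clean_for_tts(text: str) -> str:
    # Collapse runs of spaces/tabs and squeeze blank-line padding so the
    # text handed to VITS reflects actual content, not GPT formatting.
    text = re.sub(r"[ \t]+", " ", text)
    text = re.sub(r"\n{2,}", "\n", text)
    return text.strip()
```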
-
Hello, I followed the tutorial below for inference:
https://github.com/DepthAnything/Depth-Anything-V2/tree/main/metric_depth
The depth output is all zeros; is something wrong?
Here is the code:
import cv2
import torch
from depth_anything_v2.dpt import DepthAnythingV2…
-
--> Config model
done
--> Loading model
I It is recommended onnx opset 19, but your onnx model opset is 13!
I Model converted from pytorch, 'opset_version' should be set 19 in torch.onnx.export fo…
-
Hey, thank you for this project. It works great, and really fast with Home Assistant.
I wanted to let my family members use voice assist, but they are not English speakers, so I was wondering if p…