-
Hello author, I ran inference following the tutorial below:
https://github.com/DepthAnything/Depth-Anything-V2/tree/main/metric_depth
The depth output is all zeros. Is something wrong somewhere?
Here is the code:
```python
import cv2
import torch
from depth_anything_v2.dpt import DepthAnythingV2…
```
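For comparison, here is a minimal metric-depth inference sketch following the README in that folder; the encoder config, checkpoint path, and image path below are assumptions, so adjust them to your setup. One thing worth checking is whether the checkpoint is the metric one and whether `max_depth` matches it.
```python
import cv2
import torch
from depth_anything_v2.dpt import DepthAnythingV2

# Assumed ViT-L config from the metric_depth README; max_depth should match
# the checkpoint (20 for the indoor Hypersim model, 80 for outdoor VKITTI).
model = DepthAnythingV2(encoder='vitl', features=256,
                        out_channels=[256, 512, 1024, 1024], max_depth=20)

# Assumed checkpoint path: this should be a *metric* checkpoint
# (e.g. depth_anything_v2_metric_hypersim_vitl.pth), not the relative-depth one.
model.load_state_dict(torch.load(
    'checkpoints/depth_anything_v2_metric_hypersim_vitl.pth', map_location='cpu'))
model.eval()

raw_img = cv2.imread('your/image.jpg')   # BGR HxWx3, as infer_image expects
depth = model.infer_image(raw_img)       # HxW numpy array, depth in meters

print(depth.min(), depth.max())          # sanity check: should not all be 0
```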
-
Hi @Muzammal-Naseer, @cgarbin, and @kahnchana,
I've been reading the paper and looking into the code too, but I was not able to find the code related to the visualization of figures like F…
-
Hello, I'm trying to convert this VITS PyTorch model to a TFLite version. I'm getting stuck while converting to ONNX and then to TF. The issues are listed below with all the required details; can anyone solve this …
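Not sure which step is failing for you, but for reference here is a minimal sketch of the usual PyTorch → ONNX → TF SavedModel → TFLite pipeline; `TinyModel`, the dummy input, and all paths are placeholders for the actual VITS export, which would need its real input signature (phoneme IDs and lengths) and dynamic axes.
```python
import torch
import torch.nn as nn
import onnx
from onnx_tf.backend import prepare   # pip install onnx-tf
import tensorflow as tf

# Stand-in for the real VITS model: the actual export inputs would be
# phoneme-ID tensors and their lengths, with dynamic sequence axes.
class TinyModel(nn.Module):
    def forward(self, tokens):
        return torch.sin(tokens.float()).unsqueeze(1)  # fake "audio"

model = TinyModel().eval()
dummy = torch.randint(0, 100, (1, 50))  # hypothetical phoneme-ID input

# 1) PyTorch -> ONNX
torch.onnx.export(model, (dummy,), 'model.onnx',
                  input_names=['tokens'], output_names=['audio'],
                  dynamic_axes={'tokens': {1: 'seq_len'}},
                  opset_version=13)

# 2) ONNX -> TensorFlow SavedModel via onnx-tf
tf_rep = prepare(onnx.load('model.onnx'))
tf_rep.export_graph('saved_model')

# 3) SavedModel -> TFLite; Flex ops cover ops the TFLite builtins lack,
#    which VITS (flows, stochastic duration predictor) usually hits.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
with open('model.tflite', 'wb') as f:
    f.write(converter.convert())
```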
-
Just a quick FYI..
I've VERY QUICKLY (so bugs beware) added sherpa-onnx to this Python tts-wrapper:
https://github.com/willwade/tts-wrapper?tab=readme-ov-file#sherpa-onnx
We do fun things l…
-
How does the accuracy compare with VITS? Is it faster and more accurate?
-
![sovtis_test_training_time_gt](https://github.com/user-attachments/assets/ac0d4041-da25-4c6d-ad27-15dadc9d2976)
![sovtis_test_training_time_16000](https://github.com/user-attachments/assets/2b949c07…
-
I know I previously mentioned edge-tts, which is cloud-based, fast, and free, but under the GPL. I have recently been trying out https://github.com/rhasspy/piper/, which uses the VITS model and is under…
-
I really enjoyed reading your work! I was curious whether you plan on open-sourcing the code? I would love to test this in a VITS setting, replacing MAS.
-
>=36k steps, with a cappella slicing of samples from here (about 60 or more)
-
Hi all, I noticed that these two model tags link to the same download. Is there a pre-trained LJSpeech VITS model with space/pauses?
kan-bayashi/ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_s…
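As a way to check what a tag actually resolves to (and to hear whether pauses survive), here is a minimal loading sketch via ESPnet's Text2Speech interface; the short tag kan-bayashi/ljspeech_vits below is an assumption, so substitute the exact tags you are comparing.
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Hypothetical tag: swap in each of the two tags being compared to see
# whether they download and synthesize identically.
tts = Text2Speech.from_pretrained(model_tag="kan-bayashi/ljspeech_vits")

wav = tts("Hello , this is a pause test .")["wav"]
sf.write("out.wav", wav.view(-1).numpy(), tts.fs)
```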