-
PS D:\Documents\GPT-SoVITS-beta0706> & d:/Documents/GPT-SoVITS-beta0706/runtime/python.exe d:/Documents/GPT-SoVITS-beta0706/GPT_SoVITS/onnx_export.py
kmeans start ...
100%|██████████████████████…
-
Hi, I was trying to adapt K-BERT for RoBERTa and tried using the pre-trained model for RoBERTa from Huggingface for that. But somehow, the model never seems to converge at all and gives very poor scor…
-
When I try running Word2Vec (BERT) on my corpus, I get a KeyError in the terminal regarding my k-means minimum. Regardless of what my k-means value is, I keep getting the same error.
To replicate:
…
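As a hypothetical illustration of how this kind of KeyError typically arises (the actual K-BERT/Word2Vec code is not shown in the report): looking up a cluster id or token that the k-means step never assigned raises a KeyError, while `dict.get` with a fallback does not.

```python
# Toy k-means word-to-cluster assignment (hypothetical names).
cluster_of_word = {"cat": 0, "dog": 1}

# Direct indexing on a missing key raises KeyError:
#   cluster_of_word["fish"]  ->  KeyError: 'fish'
# A guarded lookup returns a sentinel instead.
unknown_cluster = cluster_of_word.get("fish", -1)
print(unknown_cluster)  # → -1
```

Checking where the missing key comes from (vocabulary mismatch vs. an off-by-one in the cluster count) is usually more informative than just suppressing the error.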
-
After pretraining with transformer_iwslt_de_en in fairseq using the iwslt14 dataset, I tried to fine-tune with transformer_s2_iwsltde_en in bert-nmt, but got Exception: Cannot load model parameters fr…
-
I have followed all the steps mentioned. I am able to execute the `./rag_demo --help` command, but when I try to run the main command to interact with the model, it reports an illegal instruction.
![image…
-
### 🐛 Describe the bug
### Summary
This issue had been discussed in https://github.com/pytorch/pytorch/pull/113004#discussion_r1383666731.
The pattern-matching process seems to be sensitive to t…
-
- Version: V2
- Splitting method: "no split" in the webui; the api is not passed any split delimiters, which is also effectively "no split"
- All other parameters are identical
- Symptom: the wav audio produced by the api has noticeably more noise than the audio generated in the webui
- Testing: adding the webui's audio normalization to api.py did not help either; the audio generated in the webui sounds best, and even copying all of the webui's inference code over did not help
- Hoping for an answer: what should api.py do to match the webui's…
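For reference, a minimal sketch of the kind of peak normalization being discussed (a hypothetical stand-in; the actual GPT-SoVITS webui may normalize differently, and normalization alone evidently did not close the gap here):

```python
def peak_normalize(samples, target_peak=0.95):
    """Scale a float waveform so its largest absolute sample equals target_peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    scale = target_peak / peak
    return [s * scale for s in samples]

wav = [0.1, -0.5, 0.25]          # toy waveform in [-1, 1]
out = peak_normalize(wav)        # loudest sample now at 0.95
```

Since normalization did not explain the difference, comparing the two code paths' sampling rate, dtype conversion, and wav-writing settings would be the next place to look.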
OriX0 updated 3 weeks ago
-
Writeup on prover for machine learning
- https://github.com/zkonduit/ezkl, proving nano-chatGPT
- https://github.com/ddkang/zkml, proving GPT-2, BERT and CLIP
- https://medium.com/@danieldkang/ve…
-
I used K-BERT's run_ner.py but cannot get results as high as the paper reports, and I don't know how to change the dataset for fine-tuning.
![image](https://user-images.githubusercontent.com/60349378/219858129-7a9…
-
### 🐛 Describe the bug
CI test `dynamo\test_model_output.py` is failing on aarch64 platform because the device is hardcoded to cuda in one of the subtests.
Reproducer:
` python test/dynamo/test_…
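The usual fix for this class of failure is to select the device from availability instead of hardcoding `"cuda"`. A minimal sketch of that pattern (the helper name is hypothetical; in PyTorch one would feed it `torch.cuda.is_available()`):

```python
def pick_device(cuda_available: bool) -> str:
    """Return the device string a test should target: CUDA when present,
    otherwise CPU, so the subtest also passes on CPU-only aarch64 runners."""
    return "cuda" if cuda_available else "cpu"

# In a PyTorch test this would typically be:
#   device = pick_device(torch.cuda.is_available())
#   tensor = torch.zeros(4, device=device)
```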