-
Building on what was shared in https://github.com/RVC-Boss/GPT-SoVITS/issues/1175, this further adds support for the neutral tone (轻音).
A polyphone_dict (polyphonic-character dictionary) parameter is added to the arguments of the function call.
```
inputs={
"gpt_path":new_gpt_path,
"sovits_path":sovits_path,
…
-
I want to use CLIP for a fine-grained classification task, but the results are not very good, so I would like to freeze the text_encoder weights and train only the ViT part. The visual-encoder code I referred to is
for k, v in model.visual.named_parameters():
v.requires_grad = False
Following the approach above, I want to freeze self.tokenizer, self.bert and…
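(Not part of the original question, added only for concreteness.) A minimal sketch of the analogous freezing of the text side, assuming the text encoder is exposed as `model.bert` as the question suggests; the tokenizer itself normally holds no trainable parameters, so it should not need freezing:
```
import torch

# Hypothetical sketch: `model` is assumed to be the already-loaded CLIP-style model.
# Freeze the text encoder (mirroring the visual-encoder snippet above) ...
for name, param in model.bert.named_parameters():
    param.requires_grad = False

# ... and leave only the ViT trainable.
for name, param in model.visual.named_parameters():
    param.requires_grad = True

# Give the optimizer only the parameters that are still trainable.
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable_params, lr=1e-5)
```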
-
After fine-tuning a pretrained model on my custom data, I tried to evaluate it.
My code is below:
```
from neuspell.seq_modeling.helpers import load_data, train_validation_split
from neuspell.se…
-
Thank you for creating a great repository.
I wonder why there is no BERT model when converting a PyTorch model of MeloTTS to an ONNX model.
https://github.com/k2-fsa/sherpa-onnx/blob/963aaba82b01a425ae8…
-
### 🐛 Describe the bug
issue:
When I use torch.nn.parallel to distribute data across multiple GPUs, I found that the DataParallel object `filter_model` only returns the output of one GPU.
```
def train_mode…
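# --- Not part of the original report: a minimal sketch of the behaviour we would
# --- expect from nn.DataParallel, for comparison. It scatters the input batch
# --- across the visible GPUs, runs the replicas, and gathers the outputs back
# --- onto device[0], so the returned tensor should cover the whole batch rather
# --- than a single GPU's share (the toy module below is only a placeholder).
import torch
import torch.nn as nn

toy_model = nn.Linear(16, 4).cuda()              # stand-in for the real filter_model
filter_model = nn.DataParallel(toy_model)        # wrap for multi-GPU execution

x = torch.randn(32, 16).cuda()                   # full batch, placed on cuda:0
out = filter_model(x)                            # scatter -> forward -> gather
print(out.shape, out.device)                     # expected: torch.Size([32, 4]) cuda:0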
-
Hello author, thanks for the great work! I want to use ALBEF to train another language-image multimodal model, but I am a little confused about the fine-tuning procedure.
Here are my options:
1. load your r…
-
Thank you for such a good code implementation; it has helped me a lot. However, when using the load_from_name function I found that flash-attn is not supported, so I implemented this part myself. I am not sure whether the implementation is correct, although it runs fine.
Below is the code snippet:
```
###### ------- ps: add use_flash_attention keyword ------- ######
def load_fro…
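###### ------- Not part of the original snippet: a hypothetical sketch of one way a
###### ------- use_flash_attention flag could be threaded into the attention math.
###### ------- It uses torch.nn.functional.scaled_dot_product_attention (PyTorch >= 2.0),
###### ------- which dispatches to a FlashAttention kernel when one is available; it is
###### ------- not the project's actual load_from_name implementation. ------- ######
import torch
import torch.nn.functional as F

def attention(q, k, v, use_flash_attention: bool = False):
    # q, k, v: [batch, num_heads, seq_len, head_dim]
    if use_flash_attention:
        # Fused kernel path (FlashAttention / memory-efficient attention when supported).
        return F.scaled_dot_product_attention(q, k, v)
    # Reference path: plain softmax attention, useful for checking the fused path's output.
    scale = q.shape[-1] ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale
    attn = attn.softmax(dim=-1)
    return attn @ v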
-
Hi there,
We are trying to use the new TensorFlow Hub module for BERT-base (https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1) in Keras, similar to what we were already able to do for ELM…
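(Not part of the original question, added only as context for what we are attempting.) A rough sketch of the TF 1.x-style pattern we had in mind: wrap the Hub module in a custom Keras layer and feed it the three standard BERT inputs. This is an untested sketch, not a confirmed integration, and the layer and variable names are our own:
```
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers

BERT_URL = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"

class BertLayer(layers.Layer):
    """Thin Keras wrapper around the TF-Hub BERT module (TF 1.x hub.Module API)."""

    def build(self, input_shape):
        # trainable=False keeps BERT frozen; fine-tuning would additionally require
        # registering the module's trainable variables with this layer.
        self.bert = hub.Module(BERT_URL, trainable=False, name="bert_module")
        super().build(input_shape)

    def call(self, inputs):
        input_ids, input_mask, segment_ids = inputs   # each: int32 [batch, seq_len]
        bert_inputs = dict(input_ids=input_ids,
                           input_mask=input_mask,
                           segment_ids=segment_ids)
        outputs = self.bert(bert_inputs, signature="tokens", as_dict=True)
        return outputs["pooled_output"]               # float32 [batch, 768]

seq_len = 128
input_ids = layers.Input(shape=(seq_len,), dtype="int32", name="input_ids")
input_mask = layers.Input(shape=(seq_len,), dtype="int32", name="input_mask")
segment_ids = layers.Input(shape=(seq_len,), dtype="int32", name="segment_ids")

pooled = BertLayer()([input_ids, input_mask, segment_ids])
output = layers.Dense(1, activation="sigmoid")(pooled)
model = tf.keras.Model(inputs=[input_ids, input_mask, segment_ids], outputs=output)
```
With TF 1.x, the Hub module's variables and lookup tables still need to be initialized in the session before training (tf.global_variables_initializer() and tf.tables_initializer()).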
-
Needed
-
I've built two TensorRT engine plans for BERT Offline as follows:
```
$ make generate_engines RUN_ARGS="--benchmarks=bert --scenarios=offline"
$ make generate_engines RUN_ARGS="--benchmarks=bert --…