-
import torch
from PIL import Image
import cn_clip.clip as clip
from cn_clip.clip import load_from_name, available_models
print("Available models:", available_models())
# Available models: ['…
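For context on what happens after the features are extracted: a CLIP-style model such as CN-CLIP scores image–text pairs by cosine similarity of L2-normalized embeddings, scaled by a learned logit scale and passed through a softmax. A minimal stdlib-only sketch of that scoring step, using made-up 4-dimensional feature vectors (real CN-CLIP features are much higher-dimensional, and the logit scale of 100 is a typical value, not taken from this repo):

```python
import math

def normalize(v):
    # L2-normalize a feature vector, as CLIP-style models do before comparing modalities
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def softmax(scores):
    # Convert similarity logits into probabilities over the candidate texts
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Dummy image feature and three dummy text features (illustrative values only)
image_feat = normalize([0.2, 0.9, 0.1, 0.4])
text_feats = [normalize(t) for t in ([0.1, 0.8, 0.2, 0.5],
                                     [0.9, 0.1, 0.3, 0.2],
                                     [0.4, 0.4, 0.4, 0.4])]

# Cosine similarity reduces to a dot product once both sides are normalized;
# the learned logit scale sharpens the softmax over candidate texts
logits = [100.0 * sum(a * b for a, b in zip(image_feat, t)) for t in text_feats]
probs = softmax(logits)
print(probs)  # highest probability on the most similar text (the first one here)
```

The same pattern underlies the zero-shot classification examples in the CN-CLIP README, with the model's image/text encoders producing the feature vectors.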
-
Hi,
Thanks for providing the code. :)
I have a question regarding training the classifiers. What do you mean by replacing GPT2-large embeddings with roberta-base? I'm not sure if I totally und…
-
### 🐛 Describe the bug
## Description
Some combinations of arguments lead to errors of `train_prompts.py`.
## Details
- Error of `train_prompts.py`
These errors can be reproduced by m…
-
### This issue is part of our **Doc Test Sprint**. If you're interested in helping out, come [join us on Discord](https://discord.gg/J8bW9u5abB) and talk with other contributors!
Docstring examples …
-
While compiling models like [HuggingFace protectai/xlm-roberta-base-language-detection-onnx](https://huggingface.co/protectai/xlm-roberta-base-language-detection-onnx) or [mistralai/Mistral-7B-v0.1](h…
-
Hello. Is it possible to train (using the official Google Colab) GPT-SoVITS-V2 with audios longer than 10 seconds, without splitting them?
Also, what about inference? Why is it limited to `3
-
I ran the code in "gts", but I found a mistake when using the RoBERTa model in "contextual_embeddings.py".
`
class RobertaEncoder(nn.Module):
def __init__(self, roberta_model = 'roberta-base', de…
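The `RobertaEncoder` above is truncated, but encoders of this kind typically run tokens through the pretrained model and then pool the per-token hidden states into fixed-size vectors. A minimal stdlib-only sketch of one common pooling choice, masked mean pooling (the function name and dummy values are illustrative, not taken from the gts code):

```python
def masked_mean_pool(token_embeddings, attention_mask):
    """Average token vectors, ignoring padding positions.

    token_embeddings: list of per-token vectors (each of length hidden_dim)
    attention_mask: list of 1/0 flags (1 = real token, 0 = padding)
    """
    dim = len(token_embeddings[0])
    totals = [0.0] * dim
    count = 0
    for vec, keep in zip(token_embeddings, attention_mask):
        if keep:
            count += 1
            for i, x in enumerate(vec):
                totals[i] += x
    return [t / count for t in totals]

# Dummy 3-token sequence with one padding slot (hidden_dim = 2 for illustration)
emb = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]
mask = [1, 1, 0]
print(masked_mean_pool(emb, mask))  # [2.0, 3.0]
```

In a real encoder the embeddings come from the model's last hidden state and the mask from the tokenizer, but the pooling arithmetic is the same.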
-
Hi @NielsRogge,
in your notebook [Fine_tune_LiLT_on_a_custom_dataset%2C_in_any_language.ipynb](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LiLT/Fine_tune_LiLT_on_a_custom_datas…
piegu updated 4 months ago
-
https://huggingface.co/distilroberta-base
https://huggingface.co/roberta-base
Both are cased
-
I have tried with my current installation, and here is the error:
C:\punctuation-restoration\src>python inference.py --pretrained-model=roberta-large --weight-path=roberta-large-en.pt --language=en --in…