-
My server cannot connect to the Hugging Face website, so I manually downloaded the pretrained model used in the code and placed it in the `img2img-turbo-main` folder. After executing the command `pyth…
-
Cool package!
I wanted to try this with better and newer models.
-
Hello author, and thank you very much for open-sourcing this great model.
I would like to continue pretraining xlm-roberta-large with RetroMAE, and I have two questions:
(1) In the config.json of xlm-roberta-large, **max_position_embeddings** is 514. To extend it to 8192 tokens, should I simply set it to 8194, or is something else required?
(2) When training locally I ran out of memory; the GP…
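To illustrate what I mean by "extending" in (1): the value 514 reflects RoBERTa's position offset of 2, so an 8192-token window would correspond to 8194 position rows, and the position embedding matrix itself would need to grow, not just the config value. A rough sketch of that copy-and-extend step (NumPy with placeholder shapes, not the actual model code):

```python
import numpy as np

def extend_position_embeddings(old_emb: np.ndarray, new_positions: int) -> np.ndarray:
    """Copy the trained position rows into a larger matrix.

    Rows beyond the original length get a small random init; the
    pretrained rows are preserved unchanged.
    """
    old_positions, hidden = old_emb.shape
    rng = np.random.default_rng(0)
    new_emb = rng.normal(0.0, 0.02, size=(new_positions, hidden))
    new_emb[:old_positions] = old_emb  # keep the pretrained rows
    return new_emb

# Placeholder shapes: xlm-roberta-large has 514 positions x 1024 hidden.
old = np.zeros((514, 1024))
new = extend_position_embeddings(old, 8194)  # 8192 tokens + 2 position offset
print(new.shape)
```

The freshly initialized rows would of course still need to be trained (or interpolated from the old ones) before they are useful.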
-
My steps:
```
git clone https://github.com/microsoft/LoRA.git
cd LoRA
pip install -e .
cd examples/NLU
pip install -e .
```
Change `export num_gpus=8` to `export num_gpus=1` in `roberta_la…
-
Hi there,
I'm having trouble loading your backdoored model from Hugging Face. I get errors like the one below:
```
Traceback (most recent call last):
File "src/glue.py", line 715, in
ma…
-
When creating a vector database, we use embedding models such as bge-m3. The problem is that if the text sent for vectorization does not fit into the model's context window, the data …
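A common workaround is to split the text into overlapping chunks that each fit the context window and embed the chunks separately. A minimal sketch, where naive whitespace splitting stands in for the model's real tokenizer and the limits are placeholders:

```python
def chunk_text(text: str, max_tokens: int = 8192, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks that fit a model's context window.

    Whitespace splitting only approximates the model's own token counts;
    a real pipeline would count tokens with the model's tokenizer.
    """
    tokens = text.split()
    step = max_tokens - overlap
    return [" ".join(tokens[i:i + max_tokens]) for i in range(0, len(tokens), step)]

# Tiny placeholder limits to show the overlap between consecutive chunks:
chunks = chunk_text("a b c d e f g h i j", max_tokens=4, overlap=1)
print(chunks)
```

Each chunk shares its last `overlap` tokens with the start of the next, so sentences cut at a boundary still appear whole in at least one chunk.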
-
Tried to load a finetuned roberta model with
```
import mii
pipe = mii.pipeline("roberta fine-tuned model path")
```
But it raises the error:
```
ValueError: Unsupported model type roberta
```
-
### Feature request
This RFC proposes integrating a lossless compression method called ZipNN into Hugging Face Transformers to reduce latency and traffic for downloading models. ZipNN is specifically…
-
### Feature request
I am trying to train offline RL using a Decision Transformer and convert it to .onnx.
```
from pathlib import Path
from transformers.onnx import FeaturesManager
feature = "seq…