-
After converting `distilbart-mnli-12-1` to ONNX, I get this error while testing the ONNX model:
```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_A…
```
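An `InvalidArgument` from onnxruntime usually means the feed names or dtypes don't match what the exported graph expects. A minimal sketch for checking this (the ONNX file path is an assumption; point it at your converted model):

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# File path is an assumption; adjust to wherever your export landed.
sess = ort.InferenceSession("distilbart-mnli-12-1.onnx")

# Print what the graph actually expects; feed names, dtypes, and shapes must match.
for inp in sess.get_inputs():
    print(inp.name, inp.type, inp.shape)

tokenizer = AutoTokenizer.from_pretrained("valhalla/distilbart-mnli-12-1")
enc = tokenizer("A premise sentence.", "A hypothesis.", return_tensors="np")

# onnxruntime is strict about dtypes: exported transformer graphs usually want int64.
expected = {i.name for i in sess.get_inputs()}
feed = {k: v.astype(np.int64) for k, v in enc.items() if k in expected}
print(sess.run(None, feed)[0])
```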
-
transformers ver: 4.8.2
torch: 1.9.1
The `pipeline` method does not offer a `cache_dir` option, so users can only cache the model in the default path.
```
from transformers import pipeline
nli…
```
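As a workaround, the model and tokenizer can be cached to a custom path with `from_pretrained(..., cache_dir=...)` and then handed to `pipeline` as already-loaded objects; a minimal sketch (the checkpoint id and cache path are assumptions, since the original call is cut off):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "valhalla/distilbart-mnli-12-1"  # assumed checkpoint
cache_dir = "/my/custom/cache"              # assumed path

tokenizer = AutoTokenizer.from_pretrained(model_id, cache_dir=cache_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_id, cache_dir=cache_dir)

# pipeline accepts pre-loaded objects, so the default cache path is never touched.
nli = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
print(nli("I love this movie", candidate_labels=["positive", "negative"]))
```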
-
I'm using this BART tokenizer code:
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model_name = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(m…
```
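For reference, a complete version of this load-and-use flow for summarization typically looks like the following (the input text is a placeholder):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

article = "Long input text to summarize goes here."  # placeholder input
inputs = tokenizer(article, max_length=1024, truncation=True, return_tensors="pt")

# Generate a summary and decode it back to text.
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=142)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```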
-
https://huggingface.co/facebook/bart-large-mnli
This page is about the
> facebook/bart-large-mnli model.

However, in the section
> With manual PyTorch

the example shows the loaded …
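For context, the "With manual PyTorch" section on that page builds zero-shot classification out of a plain NLI forward pass; a sketch of that pattern (the premise and candidate label are placeholders):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

nli_model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")

premise = "One day I will see the world."  # placeholder sequence to classify
hypothesis = "This example is travel."     # hypothesis built from a candidate label

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = nli_model(**inputs).logits

# Drop the neutral class and renormalize entailment vs. contradiction;
# for this checkpoint the label order is contradiction (0), neutral (1), entailment (2).
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
print(prob_label_is_true)
```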
-
- `transformers` version: 4.11.0.dev0
- Platform: Linux-4.9.253-rt168-tegra-aarch64-with-debian-buster-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU…
-
Hi
How do I make sure I am utilising the GPU?
Code:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import ZeroShotClassificationExplaine…
```
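To confirm the GPU is actually in use, check availability and the device the weights live on, and move the model explicitly; a minimal sketch (the checkpoint id is an assumption, since the original code is cut off):

```python
import torch
from transformers import AutoModelForSequenceClassification

print(torch.cuda.is_available())  # True means a usable GPU is visible to PyTorch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Checkpoint id is an assumption; substitute the one from the truncated code above.
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")
model.to(device)  # nothing runs on the GPU unless the model (and its inputs) are moved there

print(next(model.parameters()).device)  # should print "cuda:0" when the GPU is in use
```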
-
# 🚀 Feature request
* I'd like to implement a feature to export KoBART to ONNX Runtime
* Of course, transformers officially supports exporting to ONNX Runtime [here](https://huggingface.co/t…
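Until such support lands, a plain `torch.onnx.export` of the checkpoint is a possible starting point; a hedged sketch (the KoBART checkpoint id and export settings are assumptions, and this only covers a single static forward pass, not incremental generation):

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

# The KoBART checkpoint id is an assumption; substitute the one you actually use.
model_name = "gogamza/kobart-base-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
model.config.use_cache = False    # skip past-key-value outputs in the exported graph
model.config.return_dict = False  # tuple outputs trace more cleanly
model.eval()

enc = tokenizer("안녕하세요", return_tensors="pt")

torch.onnx.export(
    model,
    (enc["input_ids"], enc["attention_mask"]),
    "kobart.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=13,
)
```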
-
Hi
Running the code below, I get an error when using multiprocessing. Please help.
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret i…
```
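The error itself is cut off above, but multiprocessing with transformers models commonly fails under the default `fork` start method once CUDA state exists in the parent process. A hedged sketch of the usual guard (the checkpoint, labels, and texts are placeholder assumptions):

```python
import torch.multiprocessing as mp
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import ZeroShotClassificationExplainer

MODEL_NAME = "facebook/bart-large-mnli"  # assumed checkpoint; the original is cut off


def explain(text):
    # Load inside the worker so no CUDA state crosses the process boundary.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
    explainer = ZeroShotClassificationExplainer(model, tokenizer)
    return explainer(text, labels=["finance", "technology", "sports"])


if __name__ == "__main__":
    # "spawn" avoids the classic "Cannot re-initialize CUDA in forked subprocess" failure.
    mp.set_start_method("spawn", force=True)
    with mp.Pool(2) as pool:
        print(pool.map(explain, ["First example text.", "Second example text."]))
```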
-
Hi,
Code:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import ZeroShotClassificationExplainer
tokenizer = AutoTokenizer.from…
```
-
I was using AWS Batch Transform with this model and it worked fine, but since last week it somehow fails with this
message:
```
valhalla/distilbart-mnli-12-3' is a correct model identifier …
```
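That message usually appears when the checkpoint cannot be fetched at runtime (a network or Hub hiccup inside the job), not because the identifier itself changed. A quick sanity check, plus saving the files locally so the job never downloads at runtime (the local path is an assumption):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "valhalla/distilbart-mnli-12-3"

# Verify the identifier still resolves from this environment.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Bundle the files with the job instead of downloading at runtime,
# so a Hub/network hiccup can't break the batch transform.
tokenizer.save_pretrained("./distilbart-mnli-12-3")
model.save_pretrained("./distilbart-mnli-12-3")
```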