-
Similar to the very helpful NLI-based zero-shot classification pipeline built on a ModelForSequenceClassification, it would be great to have zero-shot classification on audio data.
Pass wav file(s) with candidate l…
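For context, the NLI trick behind the existing text pipeline can be sketched without any model at all: each candidate label becomes an entailment hypothesis, and a softmax over the per-label entailment logits yields label probabilities. The logits below are invented for illustration.

```python
import math

def label_probs(entailment_logits):
    # Softmax over per-label entailment logits (numerically stabilized).
    m = max(entailment_logits)
    exps = [math.exp(x - m) for x in entailment_logits]
    total = sum(exps)
    return [e / total for e in exps]

candidate_labels = ["music", "cooking", "sports"]
logits = [3.1, -1.2, 0.4]  # hypothetical per-label entailment logits
probs = label_probs(logits)
print(candidate_labels[probs.index(max(probs))])  # → music
```

An audio variant would presumably keep this scoring step and only swap the encoder that produces the per-label scores.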
-
Hi
I want to use Captum to interpret the bart-large-mnli pre-trained model for the zero-shot classification task.
I am only using the model for inference; no training is involved.
Can you please hel…
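Not an answer from the thread, but the attribution pattern Captum implements can be sketched on a toy model first: plain gradient×input saliency on a linear stand-in (for BART one would typically use Captum's `LayerIntegratedGradients` on the embedding layer instead).

```python
import torch

# Toy stand-in for a classifier used only at inference (no training).
model = torch.nn.Linear(4, 3)
model.eval()

x = torch.rand(1, 4, requires_grad=True)
score = model(x)[0, 0]            # score of the target class (class 0)
score.backward()                  # gradients of the score w.r.t. the input
saliency = (x.grad * x).detach()  # gradient x input attribution
print(saliency.shape)             # torch.Size([1, 4])
```

Integrated Gradients generalizes this by averaging gradients along a path from a baseline input, which is what Captum's `IntegratedGradients` / `LayerIntegratedGradients` compute.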
-
```
def get_text_classifications(text, ref_labels):
    # Delegate to the `labels` callable (defined elsewhere) and
    # return its result dict directly.
    return labels(text, ref_labels)
```
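The zero-shot pipeline's result dict holds parallel `labels` and `scores` lists sorted by descending score, so picking the top prediction from `text_dict` is straightforward. The example values below are illustrative, not real model output.

```python
def top_label(text_dict):
    # "labels" and "scores" are parallel lists, sorted best-first.
    return text_dict["labels"][0], text_dict["scores"][0]

example = {
    "sequence": "Game of Thrones Season 7 trailer",
    "labels": ["entertainment", "politics"],
    "scores": [0.94, 0.06],  # illustrative numbers, not real output
}
print(top_label(example))  # ('entertainment', 0.94)
```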
```
text = "Game of Thrones Season 7: Long Walk Official…
-
Hi there,
Thanks for a fantastic repo.
I'm running biencoder training with bart-large on the MNLI dataset.
To start with, it trains fairly well, if slowly, but then I get a…
-
The last line of the `main` function in example/ner/few_shot/run.py is `writer.close()`, but the `wandb.Run` object has no `close` method.
Also, the second-to-last dependency line in setup.py, `'wandb==0.12.7'`, is missing a trailing comma.
-
## Environment info
- `transformers` version: 4.4.dev / 4.3.3 / 4.3.2
- Platform: Ubuntu 18.04/ Windows 10
- Python version: 3.6.2
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version…
-
After converting `distilbart-mnli-12-1` to ONNX, I get this issue while testing the ONNX model:
```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_A…
-
Hi, I am trying to train/fine-tune the BART-large model pretrained on MNLI on the Financial Phrasebank dataset, but I'm completely lost as I'm just a beginner.
from transformers import AutoModelForSequenceClassifica…
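Before any fine-tuning, the dataset's sentiment strings need integer ids for a sequence-classification head. A minimal sketch of that mapping (the label names and example sentences are assumptions, not taken from the thread):

```python
# Map Financial Phrasebank-style sentiment strings to integer ids
# for a 3-way sequence-classification head (label names assumed).
label2id = {"negative": 0, "neutral": 1, "positive": 2}
id2label = {v: k for k, v in label2id.items()}

examples = [("Profit rose 30%", "positive"), ("Sales were flat", "neutral")]
encoded = [(text, label2id[label]) for text, label in examples]
print(encoded)  # [('Profit rose 30%', 2), ('Sales were flat', 1)]
```

With ids in place, the model would be loaded with `num_labels=3` so the NLI head is replaced by a fresh sentiment head.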
-
I'm using this BART tokenizer code
```python
from transformers import BartForConditionalGeneration, BartTokenizer
model_name = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(m…
-
transformers ver: 4.8.2
torch: 1.9.1
the `pipeline` method does not offer a `cache_dir` option, so users can only cache the model in the default path.
```
from transformers import pipeline
nli…