-
**Describe the bug**
I am following the [GPT-2 Model conversion](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers#gpt-2-model-conversion) guide and trying to convert [dial…
-
Hello,
Is there a way to use a custom reranker with the 90M model? If I understand correctly, under the hood the generative model produces a number of candidate responses and then returns only one. If s…
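The selection step described above (generate several candidates, then keep the best one) can be sketched generically: the `rerank` helper and the toy scorer below are purely illustrative placeholders, not the library's actual reranker API.

```python
from typing import Callable, List

def rerank(candidates: List[str], score: Callable[[str], float]) -> str:
    # Return the highest-scoring candidate according to a user-supplied scorer.
    return max(candidates, key=score)

# Hypothetical toy scorer: prefer longer responses, penalize questions.
def toy_score(response: str) -> float:
    return len(response.split()) - 5.0 * response.count("?")

candidates = [
    "I don't know.",
    "Sure, that sounds like a great plan!",
    "Why?",
]
best = rerank(candidates, toy_score)
# best == "Sure, that sounds like a great plan!"
```

In practice `candidates` would come from the generator (e.g. beam or sampled outputs) and `score` would wrap whatever custom reranker you want to plug in.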
-
First of all, kudos on this nice work; I really liked it.
I am trying to reproduce the results of your paper. It would be very helpful if you could share the evaluation script for the automate…
-
It would be an interesting challenge, with some potentially fun results, to get the bot to try to hold a conversation. It could possibly even offer multiple personality selections. I've done zero resear…
-
Are the steps the same when using DialoGPT?
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
There are several dead links in this repository; see the list below (some of the result…
-
In this issue, I will maintain a list of ML and NLP research papers that I have identified as worth summarizing. Read more about this initiative in #23.
If you are interested in working together o…
-
Hello,
I'm using the DialoGPT model:
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "microsoft/DialoGPT-small"
tokenizer = AutoTokenizer.from_pretr…
```
-
Posted this issue to the HuggingFace forums without a response.
I'm having a weird issue with DialoGPT Large model deployment. On PyTorch 1.8.0 and Transformers 4.3.3, using model.save_pretrained and …
-
**Describe the bug**
GPT2-XL cannot be quantized when converting to int8:
`ValueError: Message onnx.ModelProto exceeds maximum protobuf size of 2GB: 6552187990`
**Urgency**
I would like to get …
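For context on the error above: protobuf caps a single serialized message at roughly 2 GiB (`INT32_MAX` bytes), and the reported model size puts it about three times over that ceiling, which is why serializing the quantized GPT2-XL as one `onnx.ModelProto` fails. A quick check of the numbers:

```python
INT32_MAX = 2**31 - 1          # protobuf's hard per-message size ceiling (~2 GiB)
model_bytes = 6_552_187_990    # size reported in the ValueError above

over_limit = model_bytes > INT32_MAX   # True: the model cannot fit in one message
ratio = model_bytes / INT32_MAX        # ~3.05: roughly 3x over the limit
```

This is a serialization limit rather than a quantization bug, so large models generally need their tensors stored outside the main protobuf (ONNX's external-data format).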