neuralmagic / deepsparse

Sparsity-aware deep learning inference runtime for CPUs
https://neuralmagic.com/deepsparse/

Purpose of exporter.export_onnx(sample_batch=torch.randn(1, 1, 28, 28)) #1632

Closed pradeepdev-1995 closed 7 months ago

pradeepdev-1995 commented 8 months ago

To convert a fine-tuned RoBERTa model to ONNX format, I ran the following code:

import os
import torch
from sparseml.pytorch.utils import ModuleExporter
from transformers import AutoModelForSequenceClassification

# load the fine-tuned RoBERTa classifier and set up the sparseml ONNX exporter
model = AutoModelForSequenceClassification.from_pretrained('/finetuned roberta pytorch model path')
exporter = ModuleExporter(model, output_dir=os.path.join(".", "onnx-export"))

Then, according to the official documentation, there should be one more line:

exporter.export_onnx(sample_batch=torch.randn(1, 1, 28, 28))

What should I provide in my case? What is the purpose of this line?

bfineran commented 7 months ago

Hi @pradeepdev-1995, the purpose of the sample batch is to give PyTorch a representative tensor for tracing the dynamic graph during export to ONNX, which is a static graph. You should provide a tensor with the same requirements as an input to your model (shape, dtype, values, etc.). For your model, you should be able to generate this by running an empty string through your tokenizer (tokenizer("")).
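
As a minimal sketch of what building such a sample batch could look like for a RoBERTa classifier (the padding strategy and max_length of 128 below are illustrative assumptions, not requirements stated in this thread):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
# return_tensors="pt" yields torch tensors instead of Python lists,
# which is what the ONNX tracer needs for the sample batch
encoded = tokenizer("", return_tensors="pt", padding="max_length", max_length=128)
sample_batch = dict(encoded)  # {'input_ids': LongTensor, 'attention_mask': LongTensor}
print({name: tuple(t.shape) for name, t in sample_batch.items()})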

pradeepdev-1995 commented 7 months ago

@bfineran I tried the suggested commands:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
tokenizer("")

and I got this output:

{'input_ids': [0, 2], 'attention_mask': [1, 1]}

So what should I put in the export_onnx call now?

exporter.export_onnx(sample_batch=torch.randn("<dimension>"))

bfineran commented 7 months ago

Hi @pradeepdev-1995, you should set sample_batch to the dictionary you get back from the tokenizer.
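
Putting the thread's pieces together, a minimal end-to-end sketch (assuming export_onnx accepts the tokenizer's dictionary once it is converted to torch tensors with return_tensors="pt"; the model path is the placeholder from the original post):

import os
from sparseml.pytorch.utils import ModuleExporter
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = '/finetuned roberta pytorch model path'  # placeholder path from the question
model = AutoModelForSequenceClassification.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# dict of torch tensors: {'input_ids': ..., 'attention_mask': ...}
sample_batch = dict(tokenizer("", return_tensors="pt"))

exporter = ModuleExporter(model, output_dir=os.path.join(".", "onnx-export"))
exporter.export_onnx(sample_batch=sample_batch)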

jeanniefinks commented 7 months ago

Hi @pradeepdev-1995, as there is no further update here, I am going to go ahead and close out this issue. Feel free to re-open if you would like to continue the conversation. Regards, Jeannie / Neural Magic