Sorry @himanshuitis. We are unfamiliar with ONNX and are not able to solve this problem.
Is it possible to convert the model using TorchScript? I am running into some errors when using torch.jit.script.
When trying to use torch.jit.trace(mymodel, (sample_source_id, sample_source_mask)), I am getting TracerWarnings:
```
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
```
The warning is raised at each of the following lines in the beam-search code:
```
if self.nextYs[-1][i] == self._eos:
if self.nextYs[-1][0] == self._eos:
if self.nextYs[-1][i] == self._eos:
self.finished.sort(key=lambda a: -a[0])
if tok == self._eos:
```
Though we do get a converted model after tracing, we always get the same prediction for different examples, and it is the prediction for the example that was used as the sample input during torch.jit.trace.
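For context, torch.jit.trace only records the operations executed for the sample input, so data-dependent branches such as if tok == self._eos: are frozen to whatever that sample triggered, which matches the behaviour described above. Below is a minimal sketch with a toy module (not the actual beam-search code) showing how tracing bakes in a branch while scripting keeps it:
```
import torch

class ToyDecoder(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Data-dependent branch: tracing freezes this decision to whatever the
        # sample input triggered, while scripting keeps it as real control flow.
        if bool(x.sum() > 0):
            return x * 2
        return torch.zeros_like(x)

m = ToyDecoder()
traced = torch.jit.trace(m, torch.ones(3))   # emits the same TracerWarning
scripted = torch.jit.script(m)               # compiles the if/else instead

neg = -torch.ones(3)
print(traced(neg))    # replays the path recorded for torch.ones -> tensor([-2., -2., -2.])
print(scripted(neg))  # takes the correct branch              -> tensor([0., 0., 0.])
```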
Hi @himanshudhawale, I have asked my colleagues, but none of them are familiar with torch.jit.script, so we can't help you with this issue.
But you have used the flag for TorchScript support at https://github.com/microsoft/CodeXGLUE/blob/3e7bfe6dc4a88534c7803ce1bd8d1733c1d16888/Code-Text/code-to-text/code/model.py (line 42).
The code is directly copied from here. I guess the problem comes from beam search. I will try to use greedy search and do a test.
I tried to reproduce the error you mentioned, but it seems I can load the model model.onnx.zip in ONNXRuntime successfully and don't encounter any errors on my server.
Can you share the code you used to save the PyTorch model, convert it into ONNX, and load it into ONNXRuntime? Also, could you share the versions of the relevant libraries? I want to debug which part is causing the error in my case.
Also, can you verify that you get different predictions for different inputs, to confirm that the model has been converted to ONNX properly? Thanks.
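For reference, a quick sanity check along those lines might look like the sketch below; it assumes the exported file is model.onnx and the input names source_ids/source_mask used later in this thread, and the dummy tensors are placeholders for real tokenizer output:
```
import numpy as np
import onnxruntime as ort

# Placeholder inputs; real ids/masks would come from the code-to-text tokenizer.
ids_a = np.ones((1, 256), dtype=np.int64)
ids_b = np.zeros((1, 256), dtype=np.int64)
mask = np.ones((1, 256), dtype=np.int64)

session = ort.InferenceSession("model.onnx")
out_a = session.run(None, {"source_ids": ids_a, "source_mask": mask})
out_b = session.run(None, {"source_ids": ids_b, "source_mask": mask})

# A correctly exported model should generally not return identical
# predictions for clearly different inputs.
print(np.array_equal(out_a[0], out_b[0]))
```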
I am currently working on ONNX, too, and there are errors when I try to convert the fine-tuned code-to-text model into ONNX. Here is my code for exporting to ONNX.
```
import torch

# `model`, `inputs` (a dict of source_ids / source_mask tensors) and
# `export_model_path` are assumed to be defined earlier.
symbolic_names = {0: 'batch_size', 1: 'max_seq_len'}
torch.onnx.export(model,                        # model being run
                  args=tuple(inputs.values()),  # model input (or a tuple for multiple inputs)
                  f=export_model_path,          # where to save the model (can be a file or file-like object)
                  export_params=True,           # store the trained parameter weights inside the model file
                  opset_version=11,             # the ONNX version to export the model to
                  do_constant_folding=True,     # whether to execute constant folding for optimization
                  input_names=['source_ids',    # the model's input names
                               'source_mask'],
                  output_names=['output'],      # the model's output names
                  dynamic_axes={'source_ids': symbolic_names,  # variable-length axes
                                'source_mask': symbolic_names,
                                'output': symbolic_names})
```
The exported model does not look right when I visualize it in Netron, so I thought I might have set the wrong parameters here.
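Besides Netron, a programmatic check can help confirm whether the exported graph is structurally valid and whether the input names and dynamic axes came through as intended. A minimal sketch, reusing export_model_path from the snippet above:
```
import onnx

onnx_model = onnx.load(export_model_path)  # same path passed to torch.onnx.export above
onnx.checker.check_model(onnx_model)       # raises if the graph is structurally invalid

# Print the declared inputs and outputs to verify the names and dynamic axes.
for value in list(onnx_model.graph.input) + list(onnx_model.graph.output):
    print(value.name, value.type.tensor_type.shape)
```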
I tried to reproduce the error you mentioned, but it seems I can load the model model.onnx.zip in ONNXRuntime successfully and don't encounter any errors on my server.
Could you share the code you used to save the PyTorch model, convert it into ONNX, and load it into ONNXRuntime? Also, could you share the versions of the relevant libraries? I want to debug which part is causing the error in my case. Thank you!
We are not familiar with ONNX. However, I am not sure whether you have used a GPU. In beam search, the model uses the GPU by default; if you use a CPU, it will raise an error.
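For reference, this kind of CPU failure typically comes from tensors being allocated directly on CUDA inside the beam-search state. A device-agnostic pattern (a sketch, not the repository's actual Beam implementation) would look like:
```
import torch

class Beam(object):
    """Sketch of beam-search state that follows a caller-supplied device
    instead of assuming CUDA is available (not the repository's actual code)."""

    def __init__(self, size, sos, eos, device):
        self.size = size
        self._eos = eos
        # Allocate score and hypothesis buffers on the given device so the
        # same code runs on CPU and GPU.
        self.scores = torch.zeros(size, dtype=torch.float, device=device)
        self.nextYs = [torch.full((size,), sos, dtype=torch.long, device=device)]

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
beam = Beam(size=10, sos=0, eos=2, device=device)
```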
Thank you so much for your quick reply. Would you mind telling me how to generate the model in the ONNX format you attached? I am also confused about what kind of dummy input I should use. Could you give me some hints about that?
Thank you for your quick reply. By the way, what shapes and types of input does the code-to-text model accept? Is [source_ids, source_mask] (from run.py) a valid input?
The inputs are source_ids and source_mask. Their shapes are [batch_size, max_length] and their types are torch.long.
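For example, dummy inputs matching that description could be built as in this sketch (max_length = 256 is an arbitrary choice; real ids and masks come from the tokenizer):
```
import torch

batch_size, max_length = 1, 256  # max_length here is an arbitrary example value

# Token ids padded to max_length and a 0/1 attention mask, both torch.long,
# matching the [batch_size, max_length] shape described above.
dummy_source_ids = torch.ones(batch_size, max_length, dtype=torch.long)
dummy_source_mask = torch.ones(batch_size, max_length, dtype=torch.long)

inputs = {"source_ids": dummy_source_ids, "source_mask": dummy_source_mask}
# These can then be passed as args=tuple(inputs.values()) to torch.onnx.export.
```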
Thank you so much for the clarification!
Trying to convert the model to ONNX using -
The model does get converted to model.onnx, but loading it in ONNXRuntime throws an error:
```
Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from model.onnx failed:Type Error: Type parameter (T) of Optype (Concat) bound to different types (tensor(int64) and tensor(float) in node (Concat335).
```
code used to load the model in ONNXRuntime -
A similar issue raised at https://github.com/microsoft/onnxruntime/issues/1764 suggests a problem with the model or with the conversion process. Kindly help, thanks!
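One way to narrow this down (a sketch using the onnx Python package; the node name comes from the error message above) is to locate the reported Concat node and inspect which tensors feed it:
```
import onnx

onnx_model = onnx.load("model.onnx")

# Locate the node reported in the error and list the tensors it concatenates,
# to see which part of the graph produces int64 and which produces float.
for node in onnx_model.graph.node:
    if node.op_type == "Concat" and node.name == "Concat335":
        print(node.name, list(node.input))
```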