Ask a Question
Question
text/machine_comprehension/gpt-2/dependencies/GPT2-export.py
I succeeded in extracting the output tensor values for the example input text using the script above. Now I need advice on how to run text generation from those output tensor values. Is there code or a link I can refer to? (PyTorch or Python code.)
The code I tried is as follows, but it didn't work. ![image](https://user-images.githubusercontent.com/42932221/126604698-df03ff77-a70c-46d0-b00f-4852cbfeb133.png)
The `ort_outputs_exmodel` in the image above is the same as `res` at the link below: https://github.com/onnx/models/blob/ad5c181f1646225f034fba1862233ecb4c262e04/text/machine_comprehension/gpt-2/dependencies/GPT2-export.py#L110
My final goal for this project is to load the ONNX model with ONNX Runtime's C/C++ API and write C/C++ code that generates text from the output tensor values.
I'm looking forward to your reply. Thank you very much.
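To make the goal concrete, here is a minimal greedy-decoding sketch of what I am trying to do, in plain NumPy. It assumes the exported GPT-2 model returns logits of shape `(batch, sequence_length, vocab_size)`, as in the export script above; `run_model` is an illustrative stand-in for the ONNX Runtime session call (e.g. `lambda ids: sess.run(None, {"input1": ids})[0]`, where the input name is an assumption and depends on the export):

```python
import numpy as np

def greedy_next_token(logits: np.ndarray) -> int:
    """Pick the most likely next token from GPT-2 output logits.

    `logits` has shape (batch, sequence_length, vocab_size); the
    prediction for the *next* token lives at the last sequence position.
    """
    return int(np.argmax(logits[0, -1, :]))

def generate(run_model, input_ids, steps):
    """Greedy loop: run the model, take the argmax of the last-position
    logits, append that token to the input, and repeat.

    `run_model` maps an int64 array of shape (1, seq_len) to logits of
    shape (1, seq_len, vocab_size) -- a stand-in for the ONNX Runtime call.
    """
    ids = list(input_ids)
    for _ in range(steps):
        logits = run_model(np.array([ids], dtype=np.int64))
        ids.append(greedy_next_token(logits))
    return ids
```

The same loop should port directly to C/C++: each iteration is one `Run()` call, followed by an argmax over the last `vocab_size` slice of the output tensor, and the new token is appended to the input ids for the next call. (Real generation would decode the ids back to text with the tokenizer, and sampling/top-k could replace the argmax.)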
Further information
Relevant Area (e.g. model usage, backend, best practices, pre-/post- processing, converters):
Is this issue related to a specific model?
Model name (e.g. mnist): gpt-2
Model opset (e.g. 7):
Notes
Any additional information, code snippets.