Closed: thetonus closed this issue 2 years ago
This is an OnnxRuntime limitation: OnnxJavaType doesn't have a uint8 data type.
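As an aside, this is easy to verify against whatever onnxruntime version is on the classpath. The snippet below is a sketch, not code from this thread; it just enumerates the `ai.onnxruntime.OnnxJavaType` constants, and on versions predating uint8 support, `UINT8` will not appear:

```kotlin
import ai.onnxruntime.OnnxJavaType

fun main() {
    // List the Java-side tensor types this onnxruntime build supports.
    // Before uint8 support landed, UINT8 is absent, so uint8 model inputs
    // end up surfaced as INT8.
    OnnxJavaType.values().forEach { println(it) }
}
```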
@frankfliu Thanks. This is quite unfortunate, though. This issue affects float16 (half-precision) values as well. 😞
@hammacktony I created a PR in onnxruntime to add uint8 support: https://github.com/microsoft/onnxruntime/pull/8401
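For reference, once that support is available, a uint8 tensor can be constructed explicitly through the ONNX Runtime Java API by passing `OnnxJavaType.UINT8`. This is a hedged sketch rather than code from the PR, and the shape and buffer contents are illustrative only:

```kotlin
import ai.onnxruntime.OnnxJavaType
import ai.onnxruntime.OnnxTensor
import ai.onnxruntime.OrtEnvironment
import java.nio.ByteBuffer

fun main() {
    val env = OrtEnvironment.getEnvironment()
    // Illustrative 1x3x224x224 uint8 buffer; real data would come from the pipeline.
    val shape = longArrayOf(1, 3, 224, 224)
    val data = ByteBuffer.allocateDirect(shape.reduce(Long::times).toInt())
    OnnxTensor.createTensor(env, data, shape, OnnxJavaType.UINT8).use { tensor ->
        println(tensor.info) // typed as uint8 rather than int8
    }
}
```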
Description
I am using a heavily quantized ONNX model where ONNX Runtime expects a uint8 tensor. I believe I transformed the pipeline tensor to uint8, but the error says the model input is int8 instead of uint8.
Other models work fine; pipelines that produce float32 model input cause no issues.
_Note: The code posted below is Kotlin. For reading and debugging purposes, the conversion from Java should not be an issue._
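The original snippet is not reproduced here; the sketch below only illustrates the kind of cast described, assuming DJL's NDArray API. The shape is hypothetical:

```kotlin
import ai.djl.ndarray.NDManager
import ai.djl.ndarray.types.DataType
import ai.djl.ndarray.types.Shape

fun main() {
    NDManager.newBaseManager().use { manager ->
        // Dummy image-shaped input standing in for the real pipeline output.
        val input = manager.ones(Shape(1, 3, 224, 224))
        // Cast to uint8, as the pipeline is meant to do before inference.
        val cast = input.toType(DataType.UINT8, false)
        println(cast.dataType) // DJL reports uint8, yet ORT receives int8
    }
}
```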
Expected Behavior
Casting the pipeline tensor to uint8 should allow a uint8 model to process the input.
Error Message
How to Reproduce?
Steps to reproduce
What have you tried to solve it?
Tried casting at other points in the input pipeline.
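One concrete form of that attempt (a sketch assuming DJL's `Transform` interface; the class name is hypothetical) is to append the cast as the final pipeline step, after all other transforms have run:

```kotlin
import ai.djl.ndarray.NDArray
import ai.djl.ndarray.types.DataType
import ai.djl.translate.Pipeline
import ai.djl.translate.Transform

// Hypothetical final pipeline step that casts whatever the earlier
// transforms produce to uint8.
class CastToUint8 : Transform {
    override fun transform(array: NDArray): NDArray =
        array.toType(DataType.UINT8, false)
}

val pipeline = Pipeline().add(CastToUint8())
```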