Open Nicolas-Gsln opened 2 days ago
:octocat: cibot: Thank you for posting issue #4645. The person in charge will reply soon.
(uint8) 0-255 --> (int8) -128-127 conversion is not a trivial C type cast, nor a trivial type conversion; calling it a "format conversion" is actually misleading. You will need some arithmetic (adding 128 with overflow ignored) or a bitwise op (flipping the most significant bit, e.g., x ^= 0x80 for every x) anyway.
You may create another op mode for tensor-transform and upstream it.
However, if you want a quick solution that works efficiently, you may try "NNS_custom_easy_register": http://ci.nnstreamer.ai/nnstreamer/html/tensor__filter__custom__easy_8h.html, which allows you to slap a C function directly into a pipeline (e.g., a uint8->int8 custom-easy function that does output[i] = input[i] ^ 0x80;).
@anyj0527 : Please consider adding input-type=arbitrary-type for tensor-converter with media types, or adding an alternative mode for typecast or bitwise ops in tensor-transform, after reviewing this issue. (Adding xor to arith-op should be enough if we really need this.)
We need to see whether this is generally required and whether we will need this in the future. Otherwise, we can wait for someone who needs this to implement and upstream it.
Hello,
I am trying to run a pipeline containing a model with an int8 input format, but it seems that the uint8 -> int8 conversion is not direct and a bit under-optimized.
I can only get the uint8 tensor format from tensor_converter for an RGB image, even when using the input-type=int8 option. So I assume the only way to get the uint8 -> int8 conversion is to use a tensor_transform element.
First, I tried naive typecasting:
... ! tensor_converter ! tensor_transform mode=typecast option=int8 ! tensor_filter ...
which runs without error in the pipeline, but the result is not correct because it is not a real linear mapping from [0, 255] to [-128, 127]: values between 128 and 255 become values between -128 and -1, while values between 0 and 127 stay the same.

To do the conversion correctly, 128 must be subtracted from all the values while still in uint8 format, before the typecast to int8. So I tried to subtract 128 before converting to int8 using the arithmetic mode:
... ! tensor_converter ! tensor_transform mode=arithmetic option=add:-128,typecast:int8 ! tensor_filter ...
but this way the pipeline couldn't be launched, because the typecast can't be done after the operation:

** (gst-launch-1.0:1376): CRITICAL **: 22:30:38.952: tensortransform0: arithmetic: [typecast:TYPE,] should be located at the first to prevent memory re-allocation: typecast(s) in the middle of 'add:-128,typecast:int8' will be ignored
So I tried to do it in 2 steps:
tensor_transform mode=arithmetic option=add:-128 ! tensor_transform mode=typecast option=int8 !
But the input tensor is still not correct, and if we convert it back to an image it looks whiter/brighter than the original image (which seems to show that pixel values are increased by this -128 operation).

I looked at the tensor_transform code and, if I have understood correctly, the add method only handles additions, which allows subtraction for signed integers (by adding a negative number) but not for unsigned ones, as there are no negative numbers in that format. So adding a negative number makes no sense for a uint8 number here, and a positive number seems to be added instead. I think it would be interesting to have an arithmetic SUB method to allow this operation for unsigned tensors (assuming my understanding is correct and sub is not currently supported). If you agree, I could try some tests on my side to see if it improves the processing.
Note: reversing the operation order works (no pipeline error):
... ! tensor_converter ! tensor_transform mode=arithmetic option=typecast:int8,add:-128 ! tensor_filter ...
But the result is not a linear uint8 -> int8 conversion, so the input tensors are still wrong.

I found a way to do the correct conversion: typecast to int16, then apply the -128, and finally typecast back to int8. This way I get the correct input tensor I expect to have. However, it couldn't be done in one line, or the second typecast would be ignored:
** (gst-launch-1.0:1408): CRITICAL **: 22:31:18.702: tensortransform0: arithmetic: [typecast:TYPE,] should be located at the first to prevent memory re-allocation: typecast(s) in the middle of 'typecast:int16,add:-128,typecast:int8' will be ignored

(the latest typecast needs to be done in a second tensor_transform)
So I have done it in two lines (which is not the optimized way, according to the doc):

... ! tensor_converter ! tensor_transform mode=arithmetic option=typecast:int16,add:-128 ! tensor_transform mode=typecast option=int8 ! tensor_filter ...
I am using a pretty fast model (a few ms per inference), and this extra pre-processing step also slows down the pipeline by a few ms in my case, which is why I'm asking if there is any quicker method to do this uint8 -> int8 format conversion.

Thanks.