I'm using the "PytorchDemoApp" example to run a custom classifier model instead of resnet18 in the "Vision" section. The model is converted correctly, because I tested it on desktop. I have a doubt about the preprocessing phase: on desktop, the ToTensor() operation converts the image so that the values lie in the range [0, 1.0]; then normalization is applied with mean [0.485, 0.456, 0.406] and std [0.229, 0.224, 0.225].
On Android, when the function "imageYUV420CenterCropToFloatBuffer" is called, I don't understand whether the ToTensor() step is also applied. Is it called or not?
I also noticed that if I position the phone so that the camera "sees" only black, the output of "imageYUV420CenterCropToFloatBuffer" (visible via the "getDataAsFloatArray()" function) contains only "0" values. How is this possible? In theory, shouldn't I get ([Pixel] - [Mean]) / [Std], i.e. ([0, 0, 0] - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]?
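To make the expectation concrete, here is a minimal sketch of the arithmetic above: for an all-black input (all pixels 0 after scaling to [0, 1]), normalizing with the given mean and std should produce -mean/std per channel, not zeros. The values and formula are from the question; this snippet only checks the expected result.

```python
# Per-channel mean and std from the desktop preprocessing pipeline.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# An all-black pixel after ToTensor() scaling to [0, 1].
black = [0.0, 0.0, 0.0]

# Normalize: (pixel - mean) / std for each channel.
expected = [(p - m) / s for p, m, s in zip(black, mean, std)]
print(expected)  # roughly [-2.118, -2.036, -1.804], i.e. non-zero
```

So if the Android buffer really contains only zeros for a black frame, either the normalization parameters differ or the normalization step is not being applied at that point.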