Closed krn-sharma closed 2 years ago
Hi @krn-sharma,
Thank you for reaching out. I am guessing you are following this guide? You should be able to load your floating-point model in the same way as the guide: `armnn::INetworkPtr network = armnnparser->CreateNetworkFromBinaryFile(yourModelPath);`. The only difference might be how you prepare the inputs. "For floating-point models, you must scale the input image values to a range of -1 to 1. For example, if the input image values are between 0 to 255, you must divide the image values by 127.5 and subtract 1." This is taken from point 2 in the "Load and pre-process an input image for the quantized model" section of the same guide, so more information can be found there.
Have you tried running it with your floating-point model? If so, did you get any errors? Thanks!
Kind regards,
Matthew
Thanks for your reply. Following are my doubts:
During training, I normalize the input to the range 0 to 1. Shouldn't I also normalize the input to the range 0 to 1 at inference, instead of -1 to 1?
Are the following normalization parameters correct?
In the guide, the input is loaded using TContainer:
using TContainer = boost::variant<std::vector<uint8_t>>;
Should I replace uint8_t with some floating-point data type?
Also, I think the data type should be changed in the output data container:
std::vector<TContainer> outputDataContainers = { std::vector<uint8_t>(outputNumElements) };
I get the following error when I replace uint8_t with float:
mobilenetv1_quant_tflite.cpp: In function ‘int main(int, char**)’:
mobilenetv1_quant_tflite.cpp:127:5: error: no matching function for call to ‘std::vector<boost::variant<std::vector~
/usr/include/c++/8/bits/stl_vector.h:543:2: note: template argument deduction/substitution failed:
mobilenetv1_quant_tflite.cpp:127:5: note: candidate expects 3 arguments, 1 provided
};
^
In file included from /usr/include/c++/8/vector:64,
from /usr/include/boost/filesystem/path_traits.hpp:26,
from /usr/include/boost/filesystem/path.hpp:25,
from /usr/include/boost/filesystem.hpp:16,
from model_output_labels_loader.hpp:4,
from mobilenetv1_quant_tflite.cpp:6:
/usr/include/c++/8/bits/stl_vector.h:515:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::initializer_list<_Tp>, const allocator_type&) [with _Tp = boost::variant<std::vector~
/usr/include/c++/8/bits/stl_vector.h:515:7: note: no known conversion for argument 1 from ‘std::vector~
/usr/include/c++/8/bits/stl_vector.h:490:7: note: candidate expects 2 arguments, 1 provided
/usr/include/c++/8/bits/stl_vector.h:480:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(const std::vector<_Tp, _Alloc>&, const allocator_type&) [with _Tp = boost::variant<std::vector~
/usr/include/c++/8/bits/stl_vector.h:480:7: note: candidate expects 2 arguments, 1 provided
/usr/include/c++/8/bits/stl_vector.h:476:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>&&) [with _Tp = boost::variant<std::vector~
/usr/include/c++/8/bits/stl_vector.h:476:7: note: no known conversion for argument 1 from ‘std::vector~
/usr/include/c++/8/bits/stl_vector.h:458:7: note: no known conversion for argument 1 from ‘std::vector~
/usr/include/c++/8/bits/stl_vector.h:427:7: note: candidate expects 3 arguments, 1 provided
/usr/include/c++/8/bits/stl_vector.h:415:7: note: candidate: ‘std::vector<_Tp, _Alloc>::vector(std::vector<_Tp, _Alloc>::size_type, const allocator_type&) [with _Tp = boost::variant<std::vector~
/usr/include/c++/8/bits/stl_vector.h:415:7: note: no known conversion for argument 1 from ‘std::vector~
/usr/include/c++/8/bits/stl_vector.h:402:7: note: no known conversion for argument 1 from ‘std::vector~
/usr/include/c++/8/bits/stl_vector.h:391:7: note: candidate expects 0 arguments, 1 provided
inference_test_image.cpp:9:10: fatal error: ../third-party/stb/stb_image.h: No such file or directory
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated. make: *** [Makefile:10: mobilenetv1_quant_tflite] Error 1
Hi @krn-sharma,
Have you been able to overcome this issue? We have removed Boost from Arm NN.
Kindest Regards
Hi @krn-sharma,
I am going to close this issue as it has been over one month since the last activity. If you are still experiencing problems, please do not hesitate to reopen this ticket or create a new issue.
Kind Regards, Cathal.
The documentation has source code for running inference on a quantized TFLite model. I want to know how to run a floating-point TFLite model in Arm NN.