Closed PrinceP closed 4 years ago
Hi @fanzhanggoogle, is there any issue with the TFLite calculator? Is the pre-processing fine?
Hi, I don't see any reason it won't work. It should be fine, at least on CPU, unless you have custom TFLite ops. On GPU it might be trickier if some ops are not supported. Could you share your log/error so that I can provide some guidance?
The code doesn't crash.
For CPU, I have to check and get back.
For GPU, `tflite_inference_calculator.cc` gets stuck at the following code:
```cpp
if (gpu_input_) {
  // Get input image sizes.
  const auto& input_indices = interpreter_->inputs();
  gpu_data_in_.resize(input_indices.size());
  for (int i = 0; i < input_indices.size(); ++i) {
    const TfLiteTensor* tensor = interpreter_->tensor(input_indices[i]);
    gpu_data_in_[i] = absl::make_unique
```
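For reference, the logic of that loop can be sketched as a runnable Python snippet with stand-in types (`FakeInterpreter` and `FakeTensor` are hypothetical, not MediaPipe APIs): one GPU-side buffer is allocated per model input, looked up through the interpreter's input-index list.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class FakeTensor:
    # Stand-in for TfLiteTensor: just a name and a byte size (hypothetical).
    name: str
    num_bytes: int


class FakeInterpreter:
    # Minimal stand-in for tflite::Interpreter with two inputs (hypothetical).
    def __init__(self) -> None:
        self._tensors = {0: FakeTensor("image", 150528), 3: FakeTensor("aux", 16)}

    def inputs(self) -> List[int]:
        # Tensor indices of the model's inputs.
        return sorted(self._tensors)

    def tensor(self, idx: int) -> FakeTensor:
        return self._tensors[idx]


def allocate_gpu_inputs(interp: FakeInterpreter) -> List[FakeTensor]:
    # Mirrors the C++ loop: look up each input by its own index.
    # Note it must be input_indices[i], not input_indices[0]; otherwise
    # every GPU buffer would be sized for the first input tensor.
    input_indices = interp.inputs()
    return [interp.tensor(input_indices[i]) for i in range(len(input_indices))]
```

If a model's second input were sized using the first input's tensor, uploads for that input could block or corrupt data, which is one thing worth ruling out when the calculator appears stuck here.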
I was able to run EfficientNet on the pipeline; the FP32 model works fine.
The model's input is not getting loaded properly. The MnasNet model was trained using AutoML. Attached are the sample graph and the TFLite model; the model is renamed to .zip to work around the upload restriction.
hand_gesture_gpu.txt
mobile_net.zip
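Since the model was re-uploaded with a .zip extension, one quick sanity check before debugging the graph is to confirm that the file actually being loaded is a TFLite FlatBuffer: FlatBuffer files carry a 4-byte file identifier at offset 4, which is `TFL3` for TFLite models. A minimal sketch (the helper name is mine):

```python
def looks_like_tflite(data: bytes) -> bool:
    # FlatBuffer files store a 4-byte file identifier at bytes 4..8;
    # TensorFlow Lite models use b"TFL3". A zip archive would instead
    # start with the b"PK" magic at offset 0.
    return len(data) >= 8 and data[4:8] == b"TFL3"
```

Reading the first few bytes of the file this way distinguishes a genuine .tflite model from, say, a zip archive that was never unpacked after download.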