Open eduardkieser opened 5 years ago
Hi there, has anyone had a chance to look at this yet? Is there any additional info you need from my end to help resolve this?
@eduardkieser did you solve this?
Nope, but I also haven't looked at it in a while.
Attachments: dragon_labels_33.txt, Resnet20_0_to_20.tflite.zip, data20_mini.zip
I have made a couple of TF models to recognise numbers. After converting them to tflite, saving, and loading them back into Python, I benchmark them in Python and get classification accuracies of >99%. If I load the same tflite model into a Flutter app and benchmark it there, I get accuracies of around 10% to 60% on the same images. I have noticed that deeper networks seem more affected by this than shallow ones: shallower networks that benchmark at around 96% in Python get roughly 75% in Flutter, while the deeper ResNet-based model (attached), which benchmarks at >99%, only gets around 60% on the phone for a 21-class classification task (0-20). The model has three one-hot outputs of length 11 (0-9 plus nan), which correspond to the hundreds, tens, and ones digits.
My python benchmarking code is as follows:
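(The code block itself did not survive in this copy of the issue. As a point of reference, a minimal sketch of what such a tflite benchmark typically looks like is below — the file names, the head ordering, and the `decode_heads` helper are my own assumptions, not the original code.)

```python
# Hypothetical sketch of a tflite benchmarking loop (not the original code).
# Assumes a model with three one-hot heads of length 11 (digits 0-9 plus a
# "nan" slot at index 10) for the hundreds, tens, and ones digits, as
# described above. The head ordering is an assumption.

def decode_heads(hundreds, tens, ones, nan_index=10):
    """Turn three length-11 score vectors into an integer label (nan -> 0)."""
    digits = []
    for head in (hundreds, tens, ones):
        idx = max(range(len(head)), key=lambda i: head[i])  # argmax
        digits.append(0 if idx == nan_index else idx)
    return digits[0] * 100 + digits[1] * 10 + digits[2]

def benchmark(model_path, images, labels):
    """Run a tflite interpreter over (image, label) pairs; return accuracy."""
    import numpy as np
    import tensorflow as tf  # requires a TensorFlow install

    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    outs = interpreter.get_output_details()

    correct = 0
    for image, label in zip(images, labels):
        interpreter.set_tensor(inp["index"], np.asarray(image, dtype=np.float32))
        interpreter.invoke()
        heads = [interpreter.get_tensor(o["index"])[0] for o in outs]
        if decode_heads(*heads) == label:
            correct += 1
    return correct / len(labels)
```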
The Flutter benchmarking is a bit more involved, but at its core looks as follows:
I have tried various combinations of `imageMean` and `imageStd`. I'm a bit out of my depth here; any help would be greatly appreciated. E
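(For anyone hitting the same symptom: a mismatch between the normalization used at training time and the `imageMean`/`imageStd` applied in the app is the classic cause of exactly this kind of accuracy drop. A small illustration of how far apart two common conventions land — the specific values here are assumptions for illustration, not taken from this model:)

```python
# Illustration of how a wrong imageMean/imageStd shifts the input range.
# Two common (hypothetical) conventions for 8-bit pixel values:
#   training:  x / 255.0            -> values in [0, 1]
#   app side:  (x - 127.5) / 127.5  -> values in [-1, 1]
# If they disagree, the network sees inputs it was never trained on.

def normalize(pixel, image_mean, image_std):
    return (pixel - image_mean) / image_std

pixel = 200  # an arbitrary 8-bit pixel value

trained_with = normalize(pixel, image_mean=0.0, image_std=255.0)
app_feeds = normalize(pixel, image_mean=127.5, image_std=127.5)

# The same pixel lands at quite different activations under the two schemes.
print(trained_with, app_feeds)
```

Deeper networks tend to amplify such an input-distribution shift layer by layer, which would be consistent with the ResNet degrading more than the shallow models.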