If your input is not a single tensor, try interpreter.runForMultipleInputs(inputs, outputs) rather than interpreter.run([input], [output]).
And if it is a single tensor, use interpreter.run(input, output) directly, and do not wrap the arguments in [].
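A minimal sketch of the two call styles, assuming a single-input model (the asset path, shapes, and variable names here are placeholders, not from the original post):

// Assumes: import 'package:tflite_flutter/tflite_flutter.dart';
Future<void> runExamples() async {
  final interpreter = await Interpreter.fromAsset('assets/model.tflite');

  // Single input tensor, single output tensor: pass them directly,
  // without an extra wrapping [].
  final input = [List<double>.filled(4, 0.0)];   // e.g. shape [1, 4]
  final output = [List<double>.filled(2, 0.0)];  // e.g. shape [1, 2]
  interpreter.run(input, output);

  // Multiple output tensors: inputs go in a List, outputs in a Map
  // keyed by output-tensor index.
  final outputs = <int, Object>{
    0: [List<double>.filled(2, 0.0)],
    1: [List<double>.filled(2, 0.0)],
  };
  interpreter.runForMultipleInputs([input], outputs);

  interpreter.close();
}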
I also encountered the same issue, but for object detection models. I am following the example in the tflite_flutter repo. After playing around a bit, I realized my outputs are null beyond index 0. Is there something I am missing?
// Assumes: import 'dart:io'; import 'package:image/image.dart' as img;
Future<void> processImage() async {
  if (imagePath != null) {
    final imageData = File(imagePath!).readAsBytesSync();
    image = img.decodeImage(imageData);
    setState(() {});

    // Resize to the model's expected input size.
    final imageInput = img.copyResize(
      image!,
      width: WIDTH,
      height: HEIGHT,
    );

    // Convert to a [height][width][3] matrix of RGB values.
    final imageMatrix = List.generate(
      imageInput.height,
      (y) => List.generate(
        imageInput.width,
        (x) {
          final pixel = imageInput.getPixel(x, y);
          return [img.getRed(pixel), img.getGreen(pixel), img.getBlue(pixel)];
        },
      ),
    );

    runInference(imageMatrix);
  }
}
And here is the runInference method:
Future<void> runInference(
  List<List<List<num>>> imageMatrix,
) async {
  // Add the batch dimension: input shape [1, HEIGHT, WIDTH, 3].
  final input = [imageMatrix];

  // Output shape [1, 100, 4].
  final output = [
    List.generate(
      100,
      (index) => List<double>.filled(4, 0.0),
    )
  ];

  // Run inference.
  interpreter.run(input, output);

  // Get the first output tensor.
  final result = output.first;

  setState(() {});
  interpreter.close();
}
@Mroles
I believe it's a bug in the code; I forked it and fixed it. The problem is that it tries to load a single buffer for each input even though there are multiple buffers for the output, so runForMultipleInputs is expecting something like
Map<int, Buffer>
but object detection needs
Map<int, Map<int, Buffer>>
I forked and fixed it for my own purposes, but I hope that helps. This repo is no longer maintained and the TF team at Google has a new repo, though the fix isn't there yet either...
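For reference, a minimal sketch of running a detection model with one pre-allocated buffer per output tensor. The four output shapes below are typical of SSD-style models but are assumptions; check interpreter.getOutputTensors() for your own model:

// Sketch under assumptions: the output count and shapes are typical for
// SSD-style detectors, not taken from the original post.
// Assumes: import 'package:tflite_flutter/tflite_flutter.dart';
Future<void> runDetection(List<Object> input) async {
  final interpreter = await Interpreter.fromAsset('assets/detector.tflite');

  // One buffer per output tensor, keyed by output index.
  final outputs = <int, Object>{
    0: [List.generate(100, (_) => List<double>.filled(4, 0.0))], // boxes [1,100,4]
    1: [List<double>.filled(100, 0.0)],                          // classes [1,100]
    2: [List<double>.filled(100, 0.0)],                          // scores [1,100]
    3: [0.0],                                                    // count [1]
  };

  interpreter.runForMultipleInputs([input], outputs);

  final boxes = outputs[0];
  final scores = outputs[2];
  print('boxes: $boxes, scores: $scores');

  interpreter.close();
}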
I'm new to the TensorFlow world. I followed a tutorial and created this method for text embedding; the printed output looks OK, like this:
Here is my method:
But now I get this error:
I have no idea where this error comes from. Here is how I use this method. I tried mobilebert_1_default.tflite, albert_lite_base_squadv1_1.tflite, and lite-model_universal-sentence-encoder-qa-ondevice.tflite for text embedding, and all of them produce this error.
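Without the screenshots it's hard to say, but a mismatch between the prepared buffers and the model's tensor shapes is a common cause. A quick way to check, sketched below (the asset path is a placeholder):

// Sketch: prints each model's tensor shapes so the prepared input/output
// buffers can be compared against them.
// Assumes: import 'package:tflite_flutter/tflite_flutter.dart';
Future<void> inspectModel(String assetPath) async {
  final interpreter = await Interpreter.fromAsset(assetPath);
  for (final t in interpreter.getInputTensors()) {
    print('input:  ${t.name} shape=${t.shape} type=${t.type}');
  }
  for (final t in interpreter.getOutputTensors()) {
    print('output: ${t.name} shape=${t.shape} type=${t.type}');
  }
  interpreter.close();
}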