phamquyhai opened this issue 10 months ago
Hey - this is currently not supported in Fast TFLite. Shouldn't be too tricky to add though
Do you plan to add this, or do you have a quick tip on how to implement it myself? I'm not that familiar with C++ and couldn't find where the input tensors are fed into the interpreter / where the tensors are allocated. Thanks so much already!
The link from the docs should be a hint on how to get started. I personally don't have any plans to implement this right now unless someone pays me to :)
I managed to fix this using this patch:
diff --git a/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp b/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
index fbdc44f..81372c7 100644
--- a/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
+++ b/node_modules/react-native-fast-tflite/cpp/TensorflowPlugin.cpp
@@ -170,13 +171,6 @@ TensorflowPlugin::TensorflowPlugin(TfLiteInterpreter* interpreter, Buffer model,
std::shared_ptr<react::CallInvoker> callInvoker)
: _interpreter(interpreter), _delegate(delegate), _model(model), _callInvoker(callInvoker) {
// Allocate memory for the model's input/output `TFLTensor`s.
- TfLiteStatus status = TfLiteInterpreterAllocateTensors(_interpreter);
- if (status != kTfLiteOk) {
- [[unlikely]];
- throw std::runtime_error("Failed to allocate memory for input/output tensors! Status: " +
- tfLiteStatusToString(status));
- }
-
log("Successfully created Tensorflow Plugin!");
}
@@ -213,9 +207,17 @@ void TensorflowPlugin::copyInputBuffers(jsi::Runtime& runtime, jsi::Object input
}
for (size_t i = 0; i < count; i++) {
- TfLiteTensor* tensor = TfLiteInterpreterGetInputTensor(_interpreter, i);
auto value = array.getValueAtIndex(runtime, i);
auto inputBuffer = getTypedArray(runtime, value.asObject(runtime));
+ int inputDimensions[] = {static_cast<int>(inputBuffer.length(runtime))};
+ TfLiteInterpreterResizeInputTensor(_interpreter, i, inputDimensions, 1);
+ TfLiteStatus status = TfLiteInterpreterAllocateTensors(_interpreter);
+ if (status != kTfLiteOk) {
+ [[unlikely]];
+ throw std::runtime_error("Failed to allocate memory for input/output tensors! Status: " +
+ tfLiteStatusToString(status));
+ }
+ TfLiteTensor* tensor = TfLiteInterpreterGetInputTensor(_interpreter, i);
TensorHelpers::updateTensorFromJSBuffer(runtime, tensor, inputBuffer);
}
}
@@ -230,6 +232,7 @@ jsi::Value TensorflowPlugin::copyOutputBuffers(jsi::Runtime& runtime) {
TensorHelpers::updateJSBufferFromTensor(runtime, *outputBuffer, outputTensor);
result.setValueAtIndex(runtime, i, *outputBuffer);
}
+
return result;
}
However, this solution has a drawback: only one-dimensional input data is supported for each input tensor, as I wasn't able to determine the dimensions of the inputBuffer.
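For what it's worth, here is a minimal sketch of how the resize could support arbitrary shapes if the caller passed the intended dimensions explicitly (the helper resizeAndAllocate and its parameters are made up for illustration; the TfLite* calls are the same public C API functions the patch above already uses):

#include <functional>
#include <numeric>
#include <stdexcept>
#include <vector>
#include "tensorflow/lite/c/c_api.h"

// Sketch only: the JS TypedArray exposes just its flat element count,
// so the caller has to provide the intended shape separately.
void resizeAndAllocate(TfLiteInterpreter* interpreter, int inputIndex,
                       const std::vector<int>& shape, size_t flatLength) {
  // The flattened buffer must contain exactly prod(shape) elements.
  size_t expected = std::accumulate(shape.begin(), shape.end(), (size_t)1,
                                    std::multiplies<size_t>());
  if (expected != flatLength) {
    throw std::runtime_error("Buffer length does not match the requested shape!");
  }

  TfLiteStatus status = TfLiteInterpreterResizeInputTensor(
      interpreter, inputIndex, shape.data(), static_cast<int>(shape.size()));
  if (status != kTfLiteOk) {
    throw std::runtime_error("Failed to resize input tensor!");
  }

  // Tensors have to be re-allocated after every resize.
  status = TfLiteInterpreterAllocateTensors(interpreter);
  if (status != kTfLiteOk) {
    throw std::runtime_error("Failed to allocate tensors after resize!");
  }
}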
Interesting - yeah, maybe we could expose this differently by making the tensors in inputs read-write as well? Not sure if that'd be a good API design though, as it's probably not what users expect:
const model = loadTensorFlowModel(..)
model.inputs[0].size = 4 // resize it
Or something like:
const model = loadTensorFlowModel(..)
model.resizeInputTensors([4])
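If the resizeInputTensors([...]) route were taken, the native side could be little more than a loop over TfLiteInterpreterResizeInputTensor followed by a single re-allocation. A rough sketch (the method name and signature are hypothetical, not existing plugin API; it relies on the plugin's existing _interpreter member):

// Hypothetical plugin method backing a JS-side resizeInputTensors(...) call.
// newShapes[i] is the desired shape for input tensor #i.
void TensorflowPlugin::resizeInputTensors(const std::vector<std::vector<int>>& newShapes) {
  for (size_t i = 0; i < newShapes.size(); i++) {
    const std::vector<int>& shape = newShapes[i];
    TfLiteStatus status = TfLiteInterpreterResizeInputTensor(
        _interpreter, static_cast<int>(i), shape.data(), static_cast<int>(shape.size()));
    if (status != kTfLiteOk) {
      throw std::runtime_error("Failed to resize input tensor #" + std::to_string(i));
    }
  }
  // One allocation after all resizes is enough, and cheaper than the
  // per-tensor allocation the patch above does inside copyInputBuffers.
  TfLiteStatus allocStatus = TfLiteInterpreterAllocateTensors(_interpreter);
  if (allocStatus != kTfLiteOk) {
    throw std::runtime_error("Failed to allocate tensors after resizing inputs!");
  }
}

An explicit resize API like this would also move the reallocation out of the per-inference copyInputBuffers path, which the patch above currently pays on every call.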
Any updates on this?
Hi, I have a model that supports dynamic shapes. TFLite doesn't support them when converting the model, but it does support them from code.
Docs from TFLite: https://www.tensorflow.org/lite/guide/inference#run_inference_with_dynamic_shape_model
How can I do this with react-native-fast-tflite?
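For reference, the flow described in that guide looks roughly like this with the plain TFLite C API, independent of react-native-fast-tflite (a sketch assuming a float32 model with a single input resized to [1, 128]; the model path and shape are placeholders, and error handling is omitted for brevity):

#include <vector>
#include "tensorflow/lite/c/c_api.h"

int main() {
  // Load the model and build an interpreter.
  TfLiteModel* model = TfLiteModelCreateFromFile("model.tflite");
  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);

  // 1. Resize input tensor 0 to the shape of this particular input.
  std::vector<int> dims = {1, 128};
  TfLiteInterpreterResizeInputTensor(interpreter, 0, dims.data(),
                                     static_cast<int>(dims.size()));

  // 2. (Re-)allocate tensors after the resize.
  TfLiteInterpreterAllocateTensors(interpreter);

  // 3. Copy the input data into the now correctly sized input tensor.
  std::vector<float> input(1 * 128, 0.0f);
  TfLiteTensor* inputTensor = TfLiteInterpreterGetInputTensor(interpreter, 0);
  TfLiteTensorCopyFromBuffer(inputTensor, input.data(),
                             input.size() * sizeof(float));

  // 4. Run inference and read the output back out.
  TfLiteInterpreterInvoke(interpreter);
  const TfLiteTensor* outputTensor = TfLiteInterpreterGetOutputTensor(interpreter, 0);
  std::vector<float> output(TfLiteTensorByteSize(outputTensor) / sizeof(float));
  TfLiteTensorCopyToBuffer(outputTensor, output.data(),
                           TfLiteTensorByteSize(outputTensor));

  // Clean up.
  TfLiteInterpreterDelete(interpreter);
  TfLiteInterpreterOptionsDelete(options);
  TfLiteModelDelete(model);
  return 0;
}

In react-native-fast-tflite terms, steps 1–2 are exactly what the patch above wedges into copyInputBuffers, and what a dedicated resize API would expose instead.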