tensorflow / tflite-micro

Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).
Apache License 2.0

Compilation error: no matching function for call to 'tflite::MicroInterpreter::MicroInterpreter(const tflite::Model*&, tflite::AllOpsResolver&, uint8_t [2048], const int&, tflite::ErrorReporter*&)' #1763

Closed pv-98 closed 1 year ago

pv-98 commented 1 year ago

I am currently trying to run a quantized MNIST classification model on an Arduino Nano 33 BLE Sense. While compiling my Arduino sketch I get the error above. Can someone please help me? I have provided my sketch below:

```cpp
#include "TensorFlowLite.h"

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/tflite_bridge/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/system_setup.h"
#include "tensorflow/lite/schema/schema_generated.h"

#include "image_data.h"
#include "model_data.h"  /* quantized model */

// Define the input shape and number of classes for the MNIST model
const int kInputTensorSize = 1 * 28 * 28 * 1;
const int kNumClasses = 10;

namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;

constexpr int kTensorArenaSize = 2 * 1024;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

void setup() {
  Serial.begin(115200);
  tflite::InitializeTarget();
  memset(tensor_arena, 0, kTensorArenaSize * sizeof(uint8_t));

  // Set up logging.
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // Map the model into a usable data structure.
  model = tflite::GetModel(model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    Serial.println("Model provided is schema version "
    // ... (the remainder of setup(), including the MicroInterpreter
    // construction referenced in the error message, was cut off in the post)
}

void loop() {
  // Define the input image array
  const uint8_t* kImageDataPtr = kImageData;  // Pointer to start of image data
  uint8_t input_image[kInputTensorSize];
  for (int i = 0; i < kInputTensorSize; i++) {
    input_image[i] = *(kImageDataPtr++);
  }

  // Normalize the uint8 pixels to floats in [0, 1]
  for (int i = 0; i < kInputTensorSize; i++) {
    input->data.f[i] = (float)input_image[i] / 255.0;
  }

  // Run inference
  interpreter->Invoke();

  // Print the predicted class (argmax over the output scores)
  int predicted_class = -1;
  float max_score = -1;
  for (int i = 0; i < kNumClasses; i++) {
    float score = output->data.f[i];
    if (score > max_score) {
      predicted_class = i;
      max_score = score;
    }
  }
  Serial.println(predicted_class);
}
```

ddavis-2015 commented 1 year ago

Thank you for submitting this issue regarding the TFLM Arduino examples.

Your codebase appears to be out of date. The ErrorReporter class was removed from TFLM, which is why no matching MicroInterpreter constructor is found. The MicroPrintf function should now be used for all application logging. Please update from the official TFLM Arduino examples repository: https://github.com/tensorflow/tflite-micro-arduino-examples
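For illustration, after updating, the ErrorReporter-based setup maps to something like the fragment below. This is a hedged sketch against the current TFLM API, not code from the examples repository: header paths, the exact MicroInterpreter constructor signature, and the `model_data` symbol (assumed to come from the poster's model_data.h) should all be checked against tflite-micro-arduino-examples.

```cpp
// Sketch of the post-ErrorReporter TFLM API (verify names/paths against
// the current tflite-micro-arduino-examples before relying on them).
#include "tensorflow/lite/micro/micro_log.h"          // MicroPrintf
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

constexpr int kTensorArenaSize = 2 * 1024;
uint8_t tensor_arena[kTensorArenaSize];

void setup() {
  const tflite::Model* model = tflite::GetModel(model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    // MicroPrintf replaces error_reporter->Report(...)
    MicroPrintf("Model schema version %d != supported version %d",
                model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  static tflite::AllOpsResolver resolver;
  // Note: no ErrorReporter argument in the constructor any more.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  // ... allocate tensors, fetch input/output, etc., as in the examples
}
```

The key change for this compile error is the four-argument MicroInterpreter construction: dropping the trailing `error_reporter` argument matches the current constructor.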

The README for the tflite-micro-arduino-examples repository contains the commands to clone and update your local copy of the repository.