pv-98 opened 1 year ago
I ran into the same error; it's not solved yet...
I get the same error when using micro_log (MicroPrintf) instead of micro_error_reporter (MicroErrorReporter), with TensorFlow 2.15:

```
tensorflow/lite/micro/micro_log.cpp:31: undefined reference to `DebugLog'
```
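For reference, a minimal sketch like this reproduces the link error (my repro, assuming the stock Arduino core; the message and argument are arbitrary):

```cpp
#include "tensorflow/lite/micro/micro_log.h"

void setup() {
  Serial.begin(115200);
  // Any MicroPrintf call pulls in micro_log.cpp, which forwards to
  // DebugLog(format, args) -- the symbol the linker cannot resolve.
  MicroPrintf("hello from TFLM: %d", 42);
}

void loop() {}
```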
The problem comes from the Arduino_TensorFlowLite library, which is quite outdated compared to the newest TFLM features.
DebugLog is defined in tensorflow/lite/micro/system_setup.cpp as:

```cpp
extern "C" void DebugLog(const char* s) { DEBUG_SERIAL_OBJECT.print(s); }
```

but is declared in tensorflow/lite/micro/debug_log.h as:

```cpp
void DebugLog(const char* format, va_list args);
```

and used in tensorflow/lite/micro/micro_log.cpp as:

```cpp
DebugLog(format, args);
```

The implementation in tensorflow/lite/micro/system_setup.cpp therefore looks wrong, although adding the va_list args parameter does not by itself resolve the undefined reference error.
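For what it's worth, a minimal implementation matching the newer declaration could look like this (a sketch, assuming Serial as the output device; the 256-byte scratch buffer size is my choice, not from the library):

```cpp
#include <Arduino.h>
#include <cstdarg>
#include <cstdio>

// Matches the declaration in tensorflow/lite/micro/debug_log.h.
extern "C" void DebugLog(const char* format, va_list args) {
  char buffer[256];                                 // scratch buffer (assumed size)
  vsnprintf(buffer, sizeof(buffer), format, args);  // expand the format arguments
  Serial.print(buffer);                             // forward to the serial port
}
```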
I found a solution, although it is not fully tested yet: my version of micro_speech.ino now compiles without errors, just with many warnings. I hope this helps others trying to solve this error.
The problem comes from the Arduino_TensorFlowLite library, which is quite outdated compared to the newest TFLM features.
In system_setup.cpp, my first problem, and the reason for the undefined reference to `DebugLog` error, was these lines:

```cpp
#if defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE)
#define ARDUINO_EXCLUDE_CODE
#endif  // defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE)
```
On my board this meant ARDUINO_EXCLUDE_CODE was defined, so the rest of the file, including the DebugLog definition, was compiled out. After correcting the guard to make sure ARDUINO_EXCLUDE_CODE is not defined (see the sketch below), I made two changes.
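For example, the exclusion list can be extended so the guard no longer fires for your board (ARDUINO_MY_BOARD is a hypothetical placeholder, not a real macro; substitute the define your core sets):

```cpp
// ARDUINO_MY_BOARD is a placeholder; use your board's actual macro here.
#if defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE) && \
    !defined(ARDUINO_MY_BOARD)
#define ARDUINO_EXCLUDE_CODE
#endif
```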
First, I modified the implementation to:
extern "C" void DebugLog(const char* s, va_list args) { DEBUG_SERIAL_OBJECT.print(s); }
to match the declaration in tensorflow/lite/micro/debug_log.h
Second, I added an explicit include for the header file where RingBufferN is defined. Without it I was getting error: expected template-name before '<' token. I added:

```cpp
#include <api/RingBuffer.h>
```

right after the line:

```cpp
#include <Arduino.h>
```
I am working with the Arduino_TensorFlowLite-2.4.0-ALPHA-precompiled library and trying to compile my Arduino sketch, but I keep getting this error:
```
Library Arduino_TensorFlowLite has been declared precompiled:
Using precompiled library in C:\Users\prane\Documents\Arduino\libraries\Arduino_TensorFlowLite-2.4.0-ALPHA-precompiled\src\cortex-m4\fpv4-sp-d16-softfp
C:\Users\prane\Documents\Arduino\libraries\Arduino_TensorFlowLite-2.4.0-ALPHA-precompiled\src\cortex-m4\fpv4-sp-d16-softfp\libtensorflowlite.a(micro_error_reporter.cpp.o): In function `tflite::MicroErrorReporter::Report(char const*, std::__va_list)':
/home/arduino/workspace/Libraries-Google-Tensorflow-scraper/Arduino/libraries/tensorflow_lite_mirror/src/tensorflow/lite/micro/micro_error_reporter.cpp:35: undefined reference to `DebugLog'
/home/arduino/workspace/Libraries-Google-Tensorflow-scraper/Arduino/libraries/tensorflow_lite_mirror/src/tensorflow/lite/micro/micro_error_reporter.cpp:36: undefined reference to `DebugLog'
collect2.exe: error: ld returned 1 exit status

exit status 1
Compilation error: exit status 1
```
I've included my sketch below. Any help would be greatly appreciated. Thanks.
`#include "TensorFlowLite.h"
include "tensorflow/lite/micro/all_ops_resolver.h"
include "tensorflow/lite/micro/micro_error_reporter.h"
include "tensorflow/lite/micro/micro_interpreter.h"
//#include "tensorflow/lite/micro/system_setup.h"
include "tensorflow/lite/schema/schema_generated.h"
include "tensorflow/lite/version.h"
include "image_data.h"
include "model_data.h"
const int kInputTensorSize = 1 28 28 1; const int kNumClasses = 10; namespace{ tflite::ErrorReporter error_reporter = nullptr; const tflite::Model model = nullptr; tflite::MicroInterpreter interpreter = nullptr; TfLiteTensor input = nullptr; TfLiteTensor output = nullptr; int inference_count = 0;
constexpr int kTensorArenaSize = 2*1024; uint8_t tensor_arena[kTensorArenaSize]; }
void setup() { Serial.begin(115200); // tflite::InitializeTarget(); // memset(tensor_arena, 0, kTensorArenaSize*sizeof(uint8_t));
// Set up logging. static tflite::MicroErrorReporter micro_error_reporter; error_reporter = µ_error_reporter;
model = tflite::GetModel(model_data); if (model->version() != TFLITE_SCHEMA_VERSION) { Serial.println("Model provided is schema version "
String(TFLITE_SCHEMA_VERSION)); return; } else { Serial.println("Model version: " + String(model->version())); }
// This pulls in all the operation implementations we need. static tflite::AllOpsResolver resolver;
// Build an interpreter to run the model with. static tflite::MicroInterpreter static_interpreter( model, resolver, tensor_arena, kTensorArenaSize, error_reporter); interpreter = &static_interpreter;
// Build an interpreter to run the model with. // tflite::MicroInterpreter* static_interpreter_ptr = new tflite::MicroInterpreter( // model, resolver, tensor_arena, kTensorArenaSize, error_reporter); // interpreter = static_interpreter_ptr;
// Allocate memory from the tensor_arena for the model's tensors. TfLiteStatus allocate_status = interpreter->AllocateTensors(); if (allocate_status != kTfLiteOk) { Serial.println("AllocateTensors() failed"); return; } else { Serial.println("AllocateTensor() Success"); }
size_t used_size = interpreter->arena_used_bytes(); Serial.println("Area used bytes: " + String(used_size)); input = interpreter->input(0); output = interpreter->output(0);
/ check input / if (input->type != kTfLiteFloat32) { Serial.println("input type mismatch. expected input type is float32"); return; } else { Serial.println("input type is float32"); }
Serial.println("Model input:"); Serial.println("input->type: " + String(input->type)); Serial.println("dims->size: " + String(input->dims->size)); for (int n = 0; n < input->dims->size; ++n) { Serial.println("dims->data[n]: " + String(input->dims->data[n])); }
Serial.println("Model output:"); Serial.println("dims->size: " + String(output->dims->size)); for (int n = 0; n < output->dims->size; ++n) { Serial.println("dims->data[n]: " + String(output->dims->data[n])); }
} void loop() {
// Define the input image array const uint8_t kImageDataPtr = kImageData; // Pointer to start of image data uint8_t input_image[kInputTensorSize]; for (int i = 0; i < kInputTensorSize; i++) { input_image[i] = (kImageDataPtr++); }
for(int i=0; i<kInputTensorSize; i++){ input->data.f[i] = (float)input_image[i] / 255.0; }
// Run inference interpreter->Invoke();
// Print the predicted class int predicted_class = -1; float max_score = -1; for (int i = 0; i < kNumClasses; i++) { float score = output->data.f[i]; if (score > max_score) { predicted_class = i; max_score = score; } } Serial.println(predicted_class);
}`
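In case it helps: with the precompiled library, the linker error means nothing in the link provides the DebugLog symbol that libtensorflowlite.a expects. One possible workaround (untested here, and assuming the single-argument signature that 2.4-era debug_log.h declares) is to define the symbol directly in the sketch:

```cpp
// Provides the symbol that micro_error_reporter.cpp.o references.
// Signature assumed from the 2.4-era tensorflow/lite/micro/debug_log.h.
extern "C" void DebugLog(const char* s) {
  Serial.print(s);
}
```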