PacktPublishing / TinyML-Cookbook_2E

TinyML Cookbook, 2E, published by Packt
MIT License

Chapter 3: 09_classifier.ino can't compile on Raspberry Pi Pico #4

Closed: zoldaten closed this issue 4 months ago

zoldaten commented 5 months ago

Hello. Firstly, it was hard to find the Adafruit library that the DHT sensor library depends on in the Arduino IDE. To fix it, install Adafruit_Unified_Sensor from the Arduino IDE Library Manager.
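For reference, a minimal sketch to confirm that both libraries resolve and the sensor responds; the DHT22 type and the GPIO 10 data pin match the sketch later in this thread and are otherwise just assumptions:

#include <Adafruit_Sensor.h>  // provided by Adafruit_Unified_Sensor
#include <DHT.h>              // provided by the "DHT sensor library"

// Assumed wiring: DHT22 with its data pin on GPIO 10
DHT dht(10, DHT22);

void setup() {
  Serial.begin(9600);
  while (!Serial);
  dht.begin();
}

void loop() {
  Serial.print("Temperature = ");
  Serial.println(dht.readTemperature(), 2);
  delay(2000);
}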

Secondly, I spent hours searching for the Arduino_TensorFlowLite library, as it has been removed from the Arduino IDE Library Manager. I eventually found it here: https://www.ardu-badge.com/Arduino_TensorFlowLite/zip

Thirdly, I can't compile 09_classifier.ino, as it gives:

In file included from C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\09_classifier.ino:11:0:
C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\model.h:1:50: error: redefinition of 'const unsigned char snow_model_tflite []'
 alignas(8) const unsigned char snow_model_tflite[] = {
                                                  ^
In file included from C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\09_classifier.ino:7:0:
C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\model.h:1:32: note: 'const unsigned char snow_model_tflite [2088]' previously defined here
 alignas(8) const unsigned char snow_model_tflite[] = {
                                ^~~~~~~~~~~~~~~~~
In file included from C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\09_classifier.ino:11:0:
C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\model.h:177:14: error: redefinition of 'unsigned int snow_model_tflite_len'
 unsigned int snow_model_tflite_len = 2088;
              ^~~~~~~~~~~~~~~~~~~~~
In file included from C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\09_classifier.ino:7:0:
C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\model.h:177:14: note: 'unsigned int snow_model_tflite_len' previously defined here
 unsigned int snow_model_tflite_len = 2088;
              ^~~~~~~~~~~~~~~~~~~~~
C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\09_classifier.ino: In function 'void setup()':
C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\09_classifier.ino:132:11: error: no matching function for call to 'tflite::MicroInterpreter::MicroInterpreter(const tflite::Model*&, tflite::AllOpsResolver&, uint8_t [4096], const int&)'
       t_sz);
           ^
In file included from C:\Users\Al\Desktop\TinyML-Cookbook_2E-main\Chapter03\ArduinoSketches\09_classifier\09_classifier.ino:16:0:
C:\Users\Al\Documents\Arduino\libraries\Arduino_TensorFlowLite\src/tensorflow/lite/micro/micro_interpreter.h:92:3: note: candidate: tflite::MicroInterpreter::MicroInterpreter(const tflite::Model*, const tflite::MicroOpResolver&, tflite::MicroAllocator*, tflite::ErrorReporter*, tflite::Profiler*)
   MicroInterpreter(const Model* model, const MicroOpResolver& op_resolver,
   ^~~~~~~~~~~~~~~~
C:\Users\Al\Documents\Arduino\libraries\Arduino_TensorFlowLite\src/tensorflow/lite/micro/micro_interpreter.h:92:3: note:   no known conversion for argument 3 from 'uint8_t [4096] {aka unsigned char [4096]}' to 'tflite::MicroAllocator*'
C:\Users\Al\Documents\Arduino\libraries\Arduino_TensorFlowLite\src/tensorflow/lite/micro/micro_interpreter.h:82:3: note: candidate: tflite::MicroInterpreter::MicroInterpreter(const tflite::Model*, const tflite::MicroOpResolver&, uint8_t*, size_t, tflite::ErrorReporter*, tflite::Profiler*)
   MicroInterpreter(const Model* model, const MicroOpResolver& op_resolver,
   ^~~~~~~~~~~~~~~~
C:\Users\Al\Documents\Arduino\libraries\Arduino_TensorFlowLite\src/tensorflow/lite/micro/micro_interpreter.h:82:3: note:   candidate expects 6 arguments, 4 provided
C:\Users\Al\Documents\Arduino\libraries\Arduino_TensorFlowLite\src/tensorflow/lite/micro/micro_interpreter.h:73:7: note: candidate: constexpr tflite::MicroInterpreter::MicroInterpreter(const tflite::MicroInterpreter&)
 class MicroInterpreter {
       ^~~~~~~~~~~~~~~~
C:\Users\Al\Documents\Arduino\libraries\Arduino_TensorFlowLite\src/tensorflow/lite/micro/micro_interpreter.h:73:7: note:   candidate expects 1 argument, 4 provided

exit status 1

Compilation error: redefinition of 'const unsigned char snow_model_tflite []'

But the 'hello world' example from Arduino_TensorFlowLite works well on the Raspberry Pi Pico:


#include <TensorFlowLite.h>

#include "main_functions.h"

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "constants.h"
#include "model.h"
#include "output_handler.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;

constexpr int kTensorArenaSize = 2000;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

// The name of this function is important for Arduino compatibility.
void setup() {
  // Set up logging. Google style is to avoid globals or statics because of
  // lifetime uncertainty, but since this has a trivial destructor it's okay.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::AllOpsResolver resolver;

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  output = interpreter->output(0);

  // Keep track of how many inferences we have performed.
  inference_count = 0;
}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Calculate an x value to feed into the model. We compare the current
  // inference_count to the number of inferences per cycle to determine
  // our position within the range of possible x values the model was
  // trained on, and use this to calculate a value.
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x = position * kXrange;

  // Quantize the input from floating-point to integer
  int8_t x_quantized = x / input->params.scale + input->params.zero_point;
  // Place the quantized input in the model's input tensor
  input->data.int8[0] = x_quantized;

  // Run inference, and report any error
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed on x: %f\n",
                         static_cast<double>(x));
    return;
  }

  // Obtain the quantized output from model's output tensor
  int8_t y_quantized = output->data.int8[0];
  // Dequantize the output from integer to floating-point
  float y = (y_quantized - output->params.zero_point) * output->params.scale;

  // Output the results. A custom HandleOutput function can be implemented
  // for each supported hardware target.
  HandleOutput(error_reporter, x, y);

  // Increment the inference_counter, and reset it if we have reached
  // the total number per cycle
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}

How do I fix this?

zoldaten commented 5 months ago

OK, I fixed it. (My sensor is indoors, so the temperature stays above 0 °C.)
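The redefinition errors were caused by model.h being included twice (sketch lines 7 and 11 in the log above) with no include guard, and the "no matching function" error by the MicroInterpreter constructor in this library version expecting the error_reporter argument. Removing the duplicate include is enough; alternatively, a minimal include guard in model.h prevents the problem no matter how many times the header is included (the macro name MODEL_H_ below is just an illustrative choice):

// model.h
#ifndef MODEL_H_
#define MODEL_H_

alignas(8) const unsigned char snow_model_tflite[] = {
  // ... model bytes ...
};
unsigned int snow_model_tflite_len = 2088;

#endif  // MODEL_H_

The fixed sketch below also passes error_reporter as the fifth argument of the MicroInterpreter constructor.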

09_classifier.ino:

// DHT sensor library (requires Adafruit_Unified_Sensor)
#include <DHT.h>

// Note: Make sure you have tweaked the DHT sensor library, as reported
// in the TinyML Cookbook 2E

//#include "model.h"

#include <TensorFlowLite.h>
#include "model.h"

//#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include <tensorflow/lite/micro/all_ops_resolver.h>
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

// Set to 1 if you want to see whether your model can forecast the snow
#define DEBUG_SNOW 0

// Set to 1 if you are using the Arduino Nano 33 BLE Sense Rev2
#define ARDUINO_ARDUINO_NANO33BLE_REV2 0

tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* tflu_model            = nullptr;
tflite::MicroInterpreter* tflu_interpreter = nullptr;
TfLiteTensor* tflu_i_tensor                = nullptr;
TfLiteTensor* tflu_o_tensor                = nullptr;

float tflu_i_scale = 1.0f;
int32_t tflu_i_zero_point = 0;
float tflu_o_scale = 1.0f;
int32_t tflu_o_zero_point = 0;

constexpr int t_sz = 2088; // tensor arena size, reduced from the original 4096
uint8_t tensor_arena[t_sz] __attribute__((aligned(16)));

tflite::AllOpsResolver tflu_ops_resolver;

constexpr int num_hours = 3;
int8_t t_vals [num_hours] = {0};
int8_t h_vals [num_hours] = {0};
int cur_idx = 0;

#if defined(ARDUINO_ARDUINO_NANO33BLE)
#if ARDUINO_ARDUINO_NANO33BLE_REV2 == 0
#include <Arduino_HTS221.h>
#define SENSOR HTS
#else
#include <Arduino_HS300x.h>
#define SENSOR HS300x
#endif

void setup() {
  Serial.begin(9600);

  while (!Serial);

  if (!SENSOR.begin()) {
    Serial.println("Failed sensor initialization!");
    while (1);
  }

  Serial.print("Test Temperature = ");
  Serial.print(SENSOR.readTemperature(), 2);
  Serial.println(" °C");
  Serial.print("Test Humidity = ");
  Serial.print(SENSOR.readHumidity(), 2);
  Serial.println(" %");

  tflu_model = tflite::GetModel(snow_model_tflite);

  static tflite::MicroInterpreter static_interpreter(
      tflu_model,
      tflu_ops_resolver,
      tensor_arena,
      t_sz);

  tflu_interpreter = &static_interpreter;

  tflu_interpreter->AllocateTensors();
  tflu_i_tensor = tflu_interpreter->input(0);
  tflu_o_tensor = tflu_interpreter->output(0);

  const auto* i_quant = reinterpret_cast<TfLiteAffineQuantization*>(tflu_i_tensor->quantization.params);
  const auto* o_quant = reinterpret_cast<TfLiteAffineQuantization*>(tflu_o_tensor->quantization.params);

  tflu_i_scale      = i_quant->scale->data[0];
  tflu_i_zero_point = i_quant->zero_point->data[0];
  tflu_o_scale      = o_quant->scale->data[0];
  tflu_o_zero_point = o_quant->zero_point->data[0];
}
#endif

#if defined(ARDUINO_RASPBERRY_PI_PICO)

#include <DHT.h>

// Arduino pin number
const int gpio_pin_dht_pin = 10;
DHT dht(gpio_pin_dht_pin, DHT22);

#define SENSOR dht

void setup() {

  Serial.begin(9600);
  while(!Serial);
  SENSOR.begin();
  delay(2000);

  Serial.print("Test Temperature = ");
  Serial.print(SENSOR.readTemperature(), 2);
  Serial.println(" °C");
  Serial.print("Test Humidity = ");
  Serial.print(SENSOR.readHumidity(), 2);
  Serial.println(" %");

  tflu_model = tflite::GetModel(snow_model_tflite);

  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  if (tflu_model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         tflu_model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  static tflite::MicroInterpreter static_interpreter(
      tflu_model,
      tflu_ops_resolver,
      tensor_arena,
      t_sz,
      error_reporter);

  tflu_interpreter = &static_interpreter;

  tflu_interpreter->AllocateTensors();
  tflu_i_tensor = tflu_interpreter->input(0);
  tflu_o_tensor = tflu_interpreter->output(0);

  const auto* i_quant = reinterpret_cast<TfLiteAffineQuantization*>(tflu_i_tensor->quantization.params);
  const auto* o_quant = reinterpret_cast<TfLiteAffineQuantization*>(tflu_o_tensor->quantization.params);

  tflu_i_scale      = i_quant->scale->data[0];
  tflu_i_zero_point = i_quant->zero_point->data[0];
  tflu_o_scale      = o_quant->scale->data[0];
  tflu_o_zero_point = o_quant->zero_point->data[0];
}
#endif

constexpr int num_reads = 3;

void loop() {
  float t = 0.0f;
  float h = 0.0f;

#if DEBUG_SNOW == 1
  t = -10.0f;
  h = 100.0f;
#else

  for(int i = 0; i < num_reads; ++i) {
    t += SENSOR.readTemperature();
    h += SENSOR.readHumidity();
    delay(3000);
  }

  t /= (float)num_reads;
  h /= (float)num_reads;
#endif

  // Use the mean and standard deviation
  // extracted from your dataset
  constexpr float t_mean = 2.08993f;
  constexpr float h_mean = 87.22773f;
  constexpr float t_std  = 6.82158f;
  constexpr float h_std  = 14.21543f;
  t = (t - t_mean) / t_std;
  h = (h - h_mean) / h_std;

  t = (t / tflu_i_scale);
  t += (float)tflu_i_zero_point;
  h = (h / tflu_i_scale);
  h += (float)tflu_i_zero_point;

  t_vals[cur_idx] = t;
  h_vals[cur_idx] = h;

  cur_idx = (cur_idx + 1) % num_hours;

  int idx0 = cur_idx;
  int idx1 = (cur_idx - 1 + num_hours) % num_hours;
  int idx2 = (cur_idx - 2 + num_hours) % num_hours;
  tflu_i_tensor->data.int8[0] = t_vals[idx2];
  tflu_i_tensor->data.int8[1] = t_vals[idx1];
  tflu_i_tensor->data.int8[2] = t_vals[idx0];
  tflu_i_tensor->data.int8[3] = h_vals[idx2];
  tflu_i_tensor->data.int8[4] = h_vals[idx1];
  tflu_i_tensor->data.int8[5] = h_vals[idx0];

  tflu_interpreter->Invoke();

  float out_int8 = tflu_o_tensor->data.int8[0];
  float out_f = (out_int8 - tflu_o_zero_point);
  out_f *= tflu_o_scale;

  if (out_f > 0.5) {
    Serial.println("Yes, it snows");
  }
  else {
    Serial.println("No, it does not snow");
  }

  delay(2000);
}

Additionally, I had to include debug_log.cpp from Documents\Arduino\libraries\Arduino_TensorFlowLite\examples\hello_world.
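That debug_log.cpp essentially routes TFLM's DebugLog() to the serial port. A minimal sketch of it, assuming the library declares DebugLog with C linkage in tensorflow/lite/micro/debug_log.h, as the bundled version does:

#include <Arduino.h>
#include "tensorflow/lite/micro/debug_log.h"

// Forward TFLM debug output to the Arduino serial port
extern "C" void DebugLog(const char* s) {
  Serial.print(s);
}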


jomoengineer commented 4 months ago

The Adafruit DHT sensor library should work; it is listed as "DHT sensor library" in the Library Manager, and it works with the Pico.

jomoengineer commented 4 months ago

You can find the Arduino_TensorFlowLite.zip under the following: https://github.com/PacktPublishing/TinyML-Cookbook_2E/tree/main/ArduinoLibs

To install them, follow the Arduino IDE Install Libraries doc: https://docs.arduino.cc/software/ide-v1/tutorials/installing-libraries/

gmiodice commented 4 months ago

Thanks a lot, @jomoengineer, for your support. Much appreciated! @zoldaten, I hope you managed to fix your issue. I believe the main problem was the missing Arduino TensorFlow Lite library, which you can find in this repo. Do not hesitate to open a new issue if you need any further help.

zoldaten commented 3 months ago

> You can find the Arduino_TensorFlowLite.zip under the following: https://github.com/PacktPublishing/TinyML-Cookbook_2E/tree/main/ArduinoLibs
>
> To install them, follow the Arduino IDE Install Libraries doc: https://docs.arduino.cc/software/ide-v1/tutorials/installing-libraries/

This library gives an error:

C:\Users\{user}\Desktop\TinyML-Cookbook\Chapter05\ArduinoSketches\07_indoor_scene_recognition\07_indoor_scene_recognition.ino:9:10: fatal error: tensorflow/lite/micro/micro_error_reporter.h: No such file or directory
 #include <tensorflow/lite/micro/micro_error_reporter.h>
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
exit status 1

Compilation error: tensorflow/lite/micro/micro_error_reporter.h: No such file or directory
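If that bundled library is a newer TFLM build, micro_error_reporter.h was removed upstream in favor of MicroPrintf. Assuming that build ships tensorflow/lite/micro/micro_log.h and that schema_generated.h still defines TFLITE_SCHEMA_VERSION, the schema check would become something like:

#include "tensorflow/lite/micro/micro_log.h"          // replaces micro_error_reporter.h
#include "tensorflow/lite/schema/schema_generated.h"

// Sketch of the schema check without an ErrorReporter
void check_schema(const tflite::Model* model) {
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    MicroPrintf("Model schema version %d not equal to supported version %d.",
                model->version(), TFLITE_SCHEMA_VERSION);
  }
}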