tensorflow / tflite-micro

Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).
Apache License 2.0

Strange outputs #287

Closed AdamMiltonBarker closed 3 years ago

AdamMiltonBarker commented 3 years ago

Arduino 33 BLE Sense Tensorflow 2.5.0

I have the following model; on Linux it classifies all 20 images in an Acute Lymphoblastic Leukemia test set correctly:

micro_op_resolver.AddAveragePool2D();
micro_op_resolver.AddConv2D();
micro_op_resolver.AddDepthwiseConv2D();
micro_op_resolver.AddReshape();
micro_op_resolver.AddFullyConnected();
micro_op_resolver.AddSoftmax();

On the Arduino 33 BLE Sense I am using JpegDecoder to decode the images from an SD card and pass them to input->data.int8. I have also tested with the instructions for converting to a C array in the following link: https://github.com/tensorflow/tflite-micro/blob/f583f92992c3c9dfb8e10f36d66b2fe7267cf7bc/tensorflow/lite/micro/examples/person_detection/person_image_data.h#L17-L20 with exactly the same results.

It is not working well :D On the Arduino I always receive an integer for the positive score and the negation of that integer for the negative score, e.g.:

Positive image: Positive score: 111 Negative score: -111

Negative image: Positive score: 74 Negative score: -74

Nearly always 74, -74 for negative and nearly always 111, -111 for positive.

At first I was simply reading the image in using SD.open ... jpegFile.read() and looping through, assigning to input->data.int8. I then made a version that uses decodeSdFile, with a script based on the person detector project that does the grayscaling etc., but either way I get exactly the same results as above.

Do you have any suggestions as to why this is happening? I don't have the experience with Arduino ML to work this out. Thanks in advance.

AdamMiltonBarker commented 3 years ago

This is nuts. I am processing the same image each time, which is a negative sample. In loop() I have:

  if (kTfLiteOk != getImage(images[6], input->data.int8)) {
    TF_LITE_REPORT_ERROR(error_reporter, "Image capture failed.");
  }

  if (kTfLiteOk != interpreter->Invoke()) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed.");
  }

  TfLiteTensor* output = interpreter->output(0);

  int8_t all_score = output->data.int8[kAllIndex];
  int8_t no_all_score = output->data.int8[kNotAllIndex];
  toggle(all_score, no_all_score, images[6]);
  delay(2000);

Here getImage uses decodeSdFile; the result is then passed to DecodeAndProcessImage from the person detector, but without the JpegDec.decodeArray(jpeg_buffer, jpeg_length); call, as the decoded image is passed to it directly. Below is the output:

8:37:56.486 -> Im099_0.jpg
8:37:56.519 -> ===============
8:37:56.580 -> ALL positive score: -74
8:37:56.698 -> ALL negative score: 74
8:37:56.775 -> 
8:37:56.802 -> Im099_0.jpg
8:38:01.981 -> ===============
8:38:02.018 -> ALL positive score: 127
8:38:02.098 -> ALL negative score: -128
8:38:02.194 -> 
8:38:02.219 -> Im099_0.jpg
8:38:07.457 -> ===============
8:38:07.519 -> ALL positive score: -40
8:38:07.623 -> ALL negative score: 40
8:38:07.684 -> 
8:38:07.714 -> Im099_0.jpg
8:38:12.935 -> ===============
8:38:12.997 -> ALL positive score: -74
8:38:13.075 -> ALL negative score: 74
8:38:13.152 -> 
8:38:13.179 -> Im099_0.jpg
8:38:18.426 -> ===============
8:38:18.482 -> ALL positive score: -74
8:38:18.578 -> ALL negative score: 74
8:38:18.657 -> 
8:38:18.685 -> 

This then stays consistent; it is correctly identifying a negative Acute Lymphoblastic Leukemia image, but why the changes at the beginning: 127, -128, then -40, 40?

Next I pass it another negative image:

8:42:55.970 -> Im101_0.jpg
8:42:56.021 -> ===============
8:42:56.086 -> ALL positive score: -74
8:42:56.202 -> ALL negative score: 74
8:42:56.292 -> 
8:42:56.322 -> Im101_0.jpg
8:43:01.479 -> ===============
8:43:01.546 -> ALL positive score: 127
8:43:01.635 -> ALL negative score: -128
8:43:01.724 -> 
8:43:01.754 -> Im101_0.jpg
8:43:06.969 -> ===============
8:43:07.055 -> ALL positive score: 0
8:43:07.136 -> ALL negative score: 0
8:43:07.223 -> 
8:43:07.223 -> Im101_0.jpg
8:43:12.460 -> ===============
8:43:12.525 -> ALL positive score: -40
8:43:12.629 -> ALL negative score: 40
8:43:12.701 -> 
8:43:12.730 -> Im101_0.jpg
8:43:17.921 -> ===============
8:43:17.970 -> ALL positive score: -40
8:43:18.072 -> ALL negative score: 40
8:43:18.169 -> 
8:43:18.198 -> Im101_0.jpg
8:43:23.435 -> ===============
8:43:23.497 -> ALL positive score: -40
8:43:23.609 -> ALL negative score: 40
8:43:23.716 -> 
8:43:23.745 -> 

All good, but what happens at the beginning again? Now another negative; this image always does this, but the model in Python doesn't get a single one wrong. Again, what happens at the beginning? It is an incorrect classification anyway, but why does it suddenly jump to 127, -128? This happens for a few of the negative images:

8:49:53.900 -> Initialising SD card...
8:49:53.901 -> Initialisation done.
8:49:53.904 -> Im041_0.jpg
8:49:59.442 -> ===============
8:49:59.442 -> ALL positive score: 74
8:49:59.442 -> ALL negative score: -74
8:49:59.442 -> 
8:49:59.442 -> Im041_0.jpg
8:50:02.582 -> ===============
8:50:02.582 -> ALL positive score: 127
8:50:02.582 -> ALL negative score: -128
8:50:02.582 -> 
8:50:02.582 -> Im041_0.jpg
8:50:08.045 -> ===============
8:50:08.045 -> ALL positive score: 127
8:50:08.045 -> ALL negative score: -128
8:50:08.045 -> 
8:50:08.045 -> Im041_0.jpg
8:50:13.510 -> ===============
8:50:13.545 -> ALL positive score: 127
8:50:13.600 -> ALL negative score: -128
8:50:13.658 -> 
8:50:13.658 -> Im041_0.jpg
8:50:18.947 -> ===============
8:50:19.021 -> ALL positive score: 127
8:50:19.140 -> ALL negative score: -128
8:50:19.275 -> 
8:50:19.314 -> Im041_0.jpg
8:50:24.458 -> ===============
8:50:24.533 -> ALL positive score: 127
8:50:24.639 -> ALL negative score: -128

Now a positive:

8:53:15.367 -> Im028_1.jpg
8:53:19.946 -> ===============
8:53:19.946 -> ALL positive score: 111
8:53:19.946 -> ALL negative score: -111
8:53:19.946 -> 
8:53:19.946 -> Im028_1.jpg
8:53:24.131 -> ===============
8:53:24.164 -> ALL positive score: 127
8:53:24.194 -> ALL negative score: -128
8:53:24.194 -> 
8:53:24.194 -> Im028_1.jpg
8:53:29.593 -> ===============
8:53:29.675 -> ALL positive score: 127
8:53:29.767 -> ALL negative score: -128
8:53:29.866 -> 
8:53:29.903 -> Im028_1.jpg
8:53:35.053 -> ===============
8:53:35.123 -> ALL positive score: 127
8:53:35.216 -> ALL negative score: -128
8:53:35.338 -> 
8:53:35.377 -> 

There it is again: it jumps to 127, -128. Still a correct classification, but why the jump?

8:55:39.736 -> Im026_1.jpg
8:55:43.757 -> ===============
8:55:44.181 -> ALL positive score: 111
8:55:44.983 -> ALL negative score: -111
8:55:46.634 -> 
8:55:46.634 -> Im026_1.jpg
8:55:48.597 -> ===============
8:55:48.657 -> ALL positive score: 127
8:55:48.737 -> ALL negative score: -128
8:55:48.846 -> 
8:55:48.874 -> Im026_1.jpg
8:55:54.056 -> ===============
8:55:54.099 -> ALL positive score: 127
8:55:54.168 -> ALL negative score: -128

Again a correct classification, but why the jump to 127, -128 again? If I remove the code from loop() and create a loop in setup() to loop through an array of the file names:

  for (int i = 0; i < 8; i++) {
    getImage(images[i], input->data.int8);
    TfLiteTensor* output = interpreter->output(0);
    int8_t all_score = output->data.int8[kAllIndex];
    int8_t no_all_score = output->data.int8[kNotAllIndex];
    toggle(all_score, no_all_score, images[i]);
    delay(2000); 
  }

This is the output:

9:9:45.662 -> Im006_1.jpg
9:9:46.140 -> ===============
9:9:46.227 -> ALL positive score: -7
9:9:46.319 -> ALL negative score: -18
9:9:46.431 -> 
9:9:46.464 -> Im028_1.jpg
9:9:48.661 -> ===============
9:9:48.747 -> ALL positive score: 13
9:9:48.869 -> ALL negative score: 18
9:9:49.002 -> 
9:9:49.052 -> Im024_1.jpg
9:9:51.198 -> ===============
9:9:51.267 -> ALL positive score: 18
9:9:51.364 -> ALL negative score: 24
9:9:51.463 -> 
9:9:51.498 -> Im026_1.jpg
9:9:53.701 -> ===============
9:9:53.774 -> ALL positive score: 27
9:9:53.874 -> ALL negative score: 24
9:9:53.954 -> 
9:9:53.991 -> Im031_1.jpg
9:9:56.260 -> ===============
9:9:56.293 -> ALL positive score: -13
9:9:56.416 -> ALL negative score: -16
9:9:56.541 -> 
9:9:56.571 -> Im088_0.jpg
9:9:58.777 -> ===============
9:9:58.847 -> ALL positive score: -21
9:9:58.994 -> ALL negative score: -24
9:9:59.122 -> 
9:9:59.155 -> Im041_0.jpg
9:10:01.353 -> ===============
9:10:01.430 -> ALL positive score: 14
9:10:01.529 -> ALL negative score: 6
9:10:01.609 -> 
9:10:01.640 -> Im099_0.jpg
9:10:03.858 -> ===============
9:10:03.932 -> ALL positive score: -46
9:10:04.053 -> ALL negative score: -22
9:10:04.170 -> 
9:10:04.170 -> Im095_0.jpg
9:10:06.373 -> ===============
9:10:06.445 -> ALL positive score: -33
9:10:06.595 -> ALL negative score: -38
9:10:06.720 -> 
9:10:06.755 -> Im101_0.jpg
9:10:08.928 -> ===============
9:10:08.997 -> ALL positive score: -17
9:10:09.110 -> ALL negative score: -14

Hoping someone can shed some light, lol. Again, this model in Python classifies everything in this test set correctly, 20 images altogether.

AdamMiltonBarker commented 3 years ago

Full code below. Unfortunately I cannot share the dataset; you have to apply for it: https://homes.di.unimi.it/scotti/all/

#include "Arduino.h"
#include <SPI.h>
#include <SD.h>

#include <TensorFlowLite.h>

#include "main_functions.h"
#include "all_model.h"
#include "model_settings.h"

#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

#include <JPEGDecoder.h>

String images[]={
    "Im006_1.jpg",
    "Im020_1.jpg",
    "Im024_1.jpg",
    "Im026_1.jpg",
    "Im028_1.jpg",
    "Im031_1.jpg",
    "Im035_0.jpg",
    "Im041_0.jpg",
    "Im047_0.jpg",
    "Im053_1.jpg",
    "Im057_1.jpg",
    "Im060_1.jpg",
    "Im063_1.jpg",
    "Im069_0.jpg",
    "Im074_0.jpg",
    "Im088_0.jpg",
    "Im095_0.jpg",
    "Im099_0.jpg",
    "Im101_0.jpg",
    "Im106_0.jpg"
};

int tp = 0;
int fp = 0;
int tn = 0;
int fn = 0;

namespace {
  tflite::ErrorReporter* error_reporter = nullptr;
  const tflite::Model* model = nullptr;
  tflite::MicroInterpreter* interpreter = nullptr;
  TfLiteTensor* input = nullptr;
  constexpr int kTensorArenaSize = 136 * 1024;
  static uint8_t tensor_arena[kTensorArenaSize];
} 

void setup() {

  Serial.begin(9600);
  while (!Serial) {
    ; 
  }

  Serial.println(F("Initialising SD card..."));
  if (!SD.begin(10)) {
    Serial.println(F("Initialisation failed!"));
    return;
  }
  Serial.println(F("Initialisation done."));

  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  model = tflite::GetModel(all_model);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  static tflite::MicroMutableOpResolver<6> micro_op_resolver;
  micro_op_resolver.AddAveragePool2D();
  micro_op_resolver.AddConv2D();
  micro_op_resolver.AddDepthwiseConv2D();
  micro_op_resolver.AddReshape();
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddSoftmax();

  static tflite::MicroInterpreter static_interpreter(
      model, micro_op_resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }

  input = interpreter->input(0);

  for (int i = 0; i < 20; i++) {
    getImage(images[i], input->data.int8);
    TfLiteTensor* output = interpreter->output(0);
    int8_t all_score = output->data.int8[kAllIndex];
    int8_t no_all_score = output->data.int8[kNotAllIndex];
    toggle(all_score, no_all_score, images[i]);
    delay(2000); 
  }

  Serial.print("True Positives: ");
  Serial.println(tp);
  Serial.print("False Positives: ");
  Serial.println(fp);
  Serial.print("True Negatives: ");
  Serial.println(tn);
  Serial.print("False Negatives: ");
  Serial.println(fn);
}

TfLiteStatus getImage(String filepath, int8_t* image_data){
  File jpegFile = SD.open(filepath, FILE_READ);  

  if ( !jpegFile ) {
    Serial.print("ERROR: File not found!");
    return kTfLiteError;
  }

  boolean decoded = JpegDec.decodeSdFile(jpegFile);
  processImage(filepath, image_data);

  return kTfLiteOk;
}

void getInputInfo(TfLiteTensor* input){

  Serial.println("");
  Serial.println("Model input info");
  Serial.println("===============");
  Serial.print("Dimensions: ");
  Serial.println(input->dims->size);
  Serial.print("Dim 1 size: ");
  Serial.println(input->dims->data[0]);
  Serial.print("Dim 2 size: ");
  Serial.println(input->dims->data[1]);
  Serial.print("Dim 3 size: ");
  Serial.println(input->dims->data[2]);
  Serial.print("Dim 4 size: ");
  Serial.println(input->dims->data[3]);
  Serial.print("Input type: ");
  Serial.println(input->type);
  Serial.println("===============");
  Serial.println("");

}

void processImage(String filename, int8_t* image_data){

  // Crop the image by keeping a certain number of MCUs in each dimension
  const int keep_x_mcus = kNumCols / JpegDec.MCUWidth;
  const int keep_y_mcus = kNumRows / JpegDec.MCUHeight;

  // Calculate how many MCUs we will throw away on the x axis
  const int skip_x_mcus = JpegDec.MCUSPerRow - keep_x_mcus;
  // Roughly center the crop by skipping half the throwaway MCUs at the
  // beginning of each row
  const int skip_start_x_mcus = skip_x_mcus / 2;
  // Index where we will start throwing away MCUs after the data
  const int skip_end_x_mcu_index = skip_start_x_mcus + keep_x_mcus;
  // Same approach for the columns
  const int skip_y_mcus = JpegDec.MCUSPerCol - keep_y_mcus;
  const int skip_start_y_mcus = skip_y_mcus / 2;
  const int skip_end_y_mcu_index = skip_start_y_mcus + keep_y_mcus;

  // Pointer to the current pixel
  uint16_t* pImg;
  // Color of the current pixel
  uint16_t color;

  // Loop over the MCUs
  while (JpegDec.read()) {
    // Skip over the initial set of rows
    if (JpegDec.MCUy < skip_start_y_mcus) {
      continue;
    }
    // Skip if we're on a column that we don't want
    if (JpegDec.MCUx < skip_start_x_mcus ||
        JpegDec.MCUx >= skip_end_x_mcu_index) {
      continue;
    }
    // Skip if we've got all the rows we want
    if (JpegDec.MCUy >= skip_end_y_mcu_index) {
      continue;
    }
    // Pointer to the current pixel
    pImg = JpegDec.pImage;

    // The x and y indexes of the current MCU, ignoring the MCUs we skip
    int relative_mcu_x = JpegDec.MCUx - skip_start_x_mcus;
    int relative_mcu_y = JpegDec.MCUy - skip_start_y_mcus;

    // The coordinates of the top left of this MCU when applied to the output
    // image
    int x_origin = relative_mcu_x * JpegDec.MCUWidth;
    int y_origin = relative_mcu_y * JpegDec.MCUHeight;

    // Loop through the MCU's rows and columns
    for (int mcu_row = 0; mcu_row < JpegDec.MCUHeight; mcu_row++) {
      // The y coordinate of this pixel in the output index
      int current_y = y_origin + mcu_row;
      for (int mcu_col = 0; mcu_col < JpegDec.MCUWidth; mcu_col++) {
        // Read the color of the pixel as 16-bit integer
        color = *pImg++;
        // Extract the color values (5 red bits, 6 green, 5 blue)
        uint8_t r, g, b;
        r = ((color & 0xF800) >> 11) * 8;
        g = ((color & 0x07E0) >> 5) * 4;
        b = ((color & 0x001F) >> 0) * 8;
        // Convert to grayscale by calculating luminance
        // See https://en.wikipedia.org/wiki/Grayscale for magic numbers
        float gray_value = (0.2126 * r) + (0.7152 * g) + (0.0722 * b);

        // Convert to signed 8-bit integer by subtracting 128.
        gray_value -= 128;
        // The x coordinate of this pixel in the output image
        int current_x = x_origin + mcu_col;
        // The index of this pixel in our flat output buffer
        int index = (current_y * kNumCols) + current_x;
        image_data[index] = static_cast<int8_t>(gray_value);
      }
    }
  }
}

void toggle(int8_t all_score, int8_t no_all_score, String filename){

  Serial.println(filename);
  Serial.println("===============");
  Serial.print("ALL positive score: ");
  Serial.println(all_score);
  Serial.print("ALL negative score: ");
  Serial.println(no_all_score);
  if(all_score > no_all_score && filename.indexOf("_1") > 0){
    Serial.println("True Positive");
    tp = tp + 1;
  }
  else if(all_score > no_all_score && filename.indexOf("_0") > 0){
    Serial.println("False Positive");
    fp = fp + 1;
  }
  else if(all_score < no_all_score && filename.indexOf("_1") > 0){
    Serial.println("False Negative");
    fn = fn + 1;
  }
  else if(all_score < no_all_score && filename.indexOf("_0") > 0){
    Serial.println("True Negative");
    tn = tn + 1;
  }
  Serial.println("");

  static bool is_initialized = false;
  if (!is_initialized) {
    pinMode(LEDR, OUTPUT);
    pinMode(LEDG, OUTPUT);
    pinMode(LEDB, OUTPUT);
    is_initialized = true;
  }

  digitalWrite(LEDG, HIGH);
  digitalWrite(LEDR, HIGH);

  digitalWrite(LEDB, LOW);
  delay(100);
  digitalWrite(LEDB, HIGH);

  if (all_score > no_all_score) {
    digitalWrite(LEDG, HIGH);
    digitalWrite(LEDR, LOW);
    digitalWrite(LEDR, HIGH);
    digitalWrite(LEDR, LOW);
    digitalWrite(LEDR, HIGH);
    digitalWrite(LEDR, LOW);
  } else {
    digitalWrite(LEDG, LOW);
    digitalWrite(LEDR, HIGH);
    digitalWrite(LEDG, LOW);
    digitalWrite(LEDG, HIGH);
    digitalWrite(LEDG, LOW);
  }

}

void jpegInfo() {

  Serial.println("JPEG image info");
  Serial.println("===============");
  Serial.print("Width      :");
  Serial.println(JpegDec.width);
  Serial.print("Height     :");
  Serial.println(JpegDec.height);
  Serial.print("Components :");
  Serial.println(JpegDec.comps);
  Serial.print("MCU / row  :");
  Serial.println(JpegDec.MCUSPerRow);
  Serial.print("MCU / col  :");
  Serial.println(JpegDec.MCUSPerCol);
  Serial.print("Scan type  :");
  Serial.println(JpegDec.scanType);
  Serial.print("MCU width  :");
  Serial.println(JpegDec.MCUWidth);
  Serial.print("MCU height :");
  Serial.println(JpegDec.MCUHeight);
  Serial.println("===============");
  Serial.println("");
}

void loop() {
}

Output:

18:7:10.586 -> Im006_1.jpg
18:7:10.639 -> ===============
18:7:10.639 -> ALL positive score: -7
18:7:10.640 -> ALL negative score: -18
18:7:10.640 -> True Positive
18:7:10.640 -> 
18:7:10.640 -> Im020_1.jpg
18:7:14.918 -> ===============
18:7:16.074 -> ALL positive score: -14
18:7:17.955 -> ALL negative score: -6
18:7:19.432 -> False Negative
18:7:19.440 -> 
18:7:19.446 -> Im024_1.jpg
18:7:19.453 -> ===============
18:7:19.453 -> ALL positive score: 18
18:7:19.453 -> ALL negative score: 24
18:7:19.453 -> False Negative
18:7:19.453 -> 
18:7:19.453 -> Im026_1.jpg
18:7:19.463 -> ===============
18:7:19.463 -> ALL positive score: 27
18:7:19.472 -> ALL negative score: 24
18:7:19.482 -> True Positive
18:7:19.482 -> 
18:7:19.482 -> Im028_1.jpg
18:7:20.777 -> ===============
18:7:20.904 -> ALL positive score: 13
18:7:21.111 -> ALL negative score: 18
18:7:21.287 -> False Negative
18:7:21.349 -> 
18:7:21.349 -> Im031_1.jpg
18:7:23.298 -> ===============
18:7:23.298 -> ALL positive score: -13
18:7:23.298 -> ALL negative score: -16
18:7:23.369 -> True Positive
18:7:23.453 -> 
18:7:23.514 -> Im035_0.jpg
18:7:25.870 -> ===============
18:7:25.978 -> ALL positive score: 12
18:7:26.109 -> ALL negative score: 20
18:7:26.173 -> True Negative
18:7:26.273 -> 
18:7:26.331 -> Im041_0.jpg
18:7:28.370 -> ===============
18:7:28.370 -> ALL positive score: 14
18:7:28.579 -> ALL negative score: 6
18:7:28.635 -> False Positive
18:7:28.736 -> 
18:7:28.789 -> Im047_0.jpg
18:7:30.869 -> ===============
18:7:30.950 -> ALL positive score: 25
18:7:31.070 -> ALL negative score: 20
18:7:31.171 -> False Positive
18:7:31.245 -> 
18:7:31.281 -> Im053_1.jpg
18:7:33.457 -> ===============
18:7:33.583 -> ALL positive score: 39
18:7:33.719 -> ALL negative score: 5
18:7:33.843 -> True Positive
18:7:33.932 -> 
18:7:33.976 -> Im057_1.jpg
18:7:36.024 -> ===============
18:7:36.122 -> ALL positive score: 6
18:7:36.223 -> ALL negative score: -1
18:7:36.383 -> True Positive
18:7:36.504 -> 
18:7:36.582 -> Im060_1.jpg
18:7:38.571 -> ===============
18:7:38.702 -> ALL positive score: 25
18:7:38.875 -> ALL negative score: 12
18:7:39.031 -> True Positive
18:7:39.138 -> 
18:7:39.193 -> Im063_1.jpg
18:7:41.080 -> ===============
18:7:41.162 -> ALL positive score: 23
18:7:41.359 -> ALL negative score: -52
18:7:41.588 -> True Positive
18:7:41.674 -> 
18:7:41.723 -> Im069_0.jpg
18:7:43.663 -> ===============
18:7:43.759 -> ALL positive score: -4
18:7:43.908 -> ALL negative score: 34
18:7:44.001 -> True Negative
18:7:44.095 -> 
18:7:44.139 -> Im074_0.jpg
18:7:46.145 -> ===============
18:7:46.218 -> ALL positive score: 22
18:7:46.342 -> ALL negative score: 18
18:7:46.501 -> False Positive
18:7:46.636 -> 
18:7:46.685 -> Im088_0.jpg
18:7:48.678 -> ===============
18:7:48.772 -> ALL positive score: -21
18:7:48.914 -> ALL negative score: -24
18:7:49.091 -> False Positive
18:7:49.178 -> 
18:7:49.226 -> Im095_0.jpg
18:7:51.192 -> ===============
18:7:51.289 -> ALL positive score: -33
18:7:51.444 -> ALL negative score: -38
18:7:51.624 -> False Positive
18:7:51.727 -> 
18:7:51.765 -> Im099_0.jpg
18:7:53.755 -> ===============
18:7:53.831 -> ALL positive score: -46
18:7:53.969 -> ALL negative score: -22
18:7:54.101 -> True Negative
18:7:54.195 -> 
18:7:54.235 -> Im101_0.jpg
18:7:56.305 -> ===============
18:7:56.343 -> ALL positive score: -17
18:7:56.478 -> ALL negative score: -14
18:7:56.614 -> True Negative
18:7:56.685 -> 
18:7:56.720 -> Im106_0.jpg
18:7:58.832 -> ===============
18:7:58.935 -> ALL positive score: -42
18:7:59.072 -> ALL negative score: -45
18:7:59.218 -> False Positive
18:7:59.316 -> 
18:7:59.362 -> True Positives: 7
18:8:00.961 -> False Positives: 6
18:8:01.059 -> True Negatives: 4
18:8:01.164 -> False Negatives: 3

The classifications are the same each time, so at least it is stable :)

advaitjain commented 3 years ago

This looks like something in the Arduino-specific bits, which unfortunately we won't be able to help much with debugging.

Some pointers that may help:

AdamMiltonBarker commented 3 years ago

Thanks for the reply. I will look into your suggestions. By renode, do you mean the following? https://github.com/renode/renode

The version of TFLM is 2.4.0 ALPHA.

advaitjain commented 3 years ago

TFLM already makes use of renode. The following command:

make -f tensorflow/lite/micro/tools/make/Makefile OPTIMIZED_KERNEL_DIR=cmsis_nn TARGET=stm32f4 test_person_detection_test

will run the test using renode.

It might be worthwhile to use the latest TFLM Arduino library (built from the tip of tree).

@petewarden can likely help with that.

AdamMiltonBarker commented 3 years ago

Just to update on this: to make sure, I retrained the classifier on two separate machines and they all produce the same results in terms of metrics and classification, so it definitely is something related to the Arduino / TFLM side. I will do the tests you suggested today. Thanks for the advice.

AdamMiltonBarker commented 3 years ago

@advaitjain is there any documentation on making a custom project and adding it to the Makefile? Not as straightforward as just making the example based on person_detection_test, is it!

I have downloaded the person detector zip specified in the Makefile / download shell script. If my understanding is correct, to get the make target sorted I will do the following:

Is this the only way to do it? Have I missed anything?

I can't find test_person_detection_test_int8 in the Makefile or in the code, so how is this translated?

AdamMiltonBarker commented 3 years ago

Version one is published: https://github.com/AMLResearchProject/ALL-Arduino-Nano-33-BLE-Sense-Classifier. In version two I will include the testing above in the documentation, and hopefully I can get to the bottom of the misclassification on Arduino. I linked to this issue and gave you credit for your help :)

AdamMiltonBarker commented 3 years ago

Hi @advaitjain, sorry to pester; I appreciate you are busy. Are the steps above correct for adding a new project to the tests? It is a little time-sensitive :D

advaitjain commented 3 years ago

You are on the right track. Following the same pattern as the existing examples is the way to go.

The person_detection example setup is a bit more complex because we did not want to commit the large images and model files into the repo. For the purposes of testing, you should be able to directly have all the model and inputs be in the source tree, similar to hello_world.

Alternately, take a look at the person_detection_benchmark.

Once you have the example / benchmark building for x86, you should be able to also build and run the same binary on stm32f4 without changing any code. That would be the easiest way to verify if the discrepancy is in the TFLM code or in the Arduino-specific bits.

AdamMiltonBarker commented 3 years ago

OK, thank you for the information. I will get to this ASAP. Thanks again. The hello_world link is dead, by the way; most are now, and it has been really hard to find examples to debug against.

petewarden commented 3 years ago

From reading the conversation, it looks like you've managed to get past the initial issue that started this bug? If not, please update with more information. I'm closing this for now.

Thanks for the heads up about the broken links too. I'm working with O'Reilly to try to update their URL shortener to ensure the book links are fixed at least.