arduino-libraries / ArduinoBLE

ArduinoBLE library for Arduino
GNU Lesser General Public License v2.1

Arduino BLE stuck on running IMU Tiny ML #188

Open fullmakeralchemist opened 3 years ago

fullmakeralchemist commented 3 years ago

Hi, I was following this tutorial for the Arduino Nano 33 BLE Sense to use it as a controller for Street Fighter on a Raspberry Pi. I tried to use the ArduinoBLE library to make it wireless with a battery, but I have had problems running it for about two weeks: basically it gets stuck while running the model. I'm not sure if it's because of the Bluetooth or something else, but it doesn't show me the output from the gestures. It connects, but after that the serial monitor only shows this:

Connecting ... Connected

In my Python code that receives the data, I tried sending a simple int and it works, but when I use the model it doesn't give me any result from gestureid.

#include <ArduinoBLE.h>
#include <Arduino_LSM9DS1.h>

#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/all_ops_resolver.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>
#include <tensorflow/lite/version.h>

#include "model2.h"

BLEService sensorService("00001101-0000-1000-8000-00805f9b34fb");
// TX characteristic
BLEStringCharacteristic txChar("00001143-0000-1000-8000-00805f9b34fb", BLERead | BLENotify,15);     

// last sensor data    
float oldXLevel = 0; 
float oldYLevel = 0; 
float oldZLevel = 0; 
long previousMillis = 0; 

const float accelerationThreshold = 2.5; // threshold of significant motion, in G's (unused here: the tutorial's wait-for-motion loop was removed)
const int numSamples = 119;

// start at 0: the tutorial initializes this to numSamples because it pairs it
// with a wait-for-motion loop that resets it; without that loop, starting at
// numSamples would write past the end of the input tensor on the first sample
int samplesRead = 0;
int gestureid = 0;

// global variables used for TensorFlow Lite (Micro)
tflite::MicroErrorReporter tflErrorReporter;

// Pull in all the TFLM ops. You can remove this line and pull in
// only the ops you need if you would like to reduce the compiled
// size of the sketch.
tflite::AllOpsResolver tflOpsResolver;

const tflite::Model* tflModel = nullptr;
tflite::MicroInterpreter* tflInterpreter = nullptr;
TfLiteTensor* tflInputTensor = nullptr;
TfLiteTensor* tflOutputTensor = nullptr;

// Create a static memory buffer for TFLM, the size may need to
// be adjusted based on the model you are using
constexpr int tensorArenaSize = 8 * 1024;
byte tensorArena[tensorArenaSize] __attribute__((aligned(16)));

// array to map gesture index to a name
const char* GESTURES[] = {
  "punch",
  "defend",
  "summon"
};

#define NUM_GESTURES (sizeof(GESTURES) / sizeof(GESTURES[0]))

void setup() {
  Serial.begin(115200);
  while (!Serial); // note: blocks forever when powered from battery with no USB serial attached

  if (!IMU.begin()) {
    Serial.println("Failed to initialize IMU!");
    while (1);
  }

  pinMode(LED_BUILTIN, OUTPUT); 

  if (!BLE.begin()) {
    Serial.println("starting BLE failed!");
    while (1);
  }

  // get the TFL representation of the model byte array
  tflModel = tflite::GetModel(model);
  if (tflModel->version() != TFLITE_SCHEMA_VERSION) {
    Serial.println("Model schema mismatch!");
    while (1);
  }

  // Create an interpreter to run the model
  tflInterpreter = new tflite::MicroInterpreter(tflModel, tflOpsResolver, tensorArena, tensorArenaSize, &tflErrorReporter);

  // Allocate memory for the model's input and output tensors
  tflInterpreter->AllocateTensors();

  // Get pointers for the model's input and output tensors
  tflInputTensor = tflInterpreter->input(0);
  tflOutputTensor = tflInterpreter->output(0);

  BLE.setLocalName("NanoBLE33");
  BLE.setAdvertisedService(sensorService);

  sensorService.addCharacteristic(txChar);
  BLE.addService(sensorService); 

  // initialize default data
  txChar.writeValue(String(0)); 

  // start advertising
  BLE.advertise();
  Serial.println("Bluetooth device active, waiting for connections...");
}

void loop() {
  // wait for a BLE central
  BLEDevice central = BLE.central();
  if (central) {
    Serial.print("Connected to central: ");
    Serial.println(central.address());
    digitalWrite(LED_BUILTIN, HIGH);

    while (central.connected()) {
      //long currentMillis = millis();
      updateGyroscopeLevel();
      delay(300);
    }

    digitalWrite(LED_BUILTIN, LOW);
    Serial.print("Disconnected from central: ");
    Serial.println(central.address());
  }
}

void updateGyroscopeLevel() {

  float aX, aY, aZ, gX, gY, gZ;

  // check if new acceleration AND gyroscope data is available
  if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable()) {
    // read the acceleration and gyroscope data
    IMU.readAcceleration(aX, aY, aZ);
    IMU.readGyroscope(gX, gY, gZ);

    // normalize the IMU data between 0 to 1 and store in the model's
    // input tensor
    tflInputTensor->data.f[samplesRead * 6 + 0] = (aX + 4.0) / 8.0;
    tflInputTensor->data.f[samplesRead * 6 + 1] = (aY + 4.0) / 8.0;
    tflInputTensor->data.f[samplesRead * 6 + 2] = (aZ + 4.0) / 8.0;
    tflInputTensor->data.f[samplesRead * 6 + 3] = (gX + 2000.0) / 4000.0;
    tflInputTensor->data.f[samplesRead * 6 + 4] = (gY + 2000.0) / 4000.0;
    tflInputTensor->data.f[samplesRead * 6 + 5] = (gZ + 2000.0) / 4000.0;

    samplesRead++;

    if (samplesRead == numSamples) {
      // Run inference
      TfLiteStatus invokeStatus = tflInterpreter->Invoke();
      if (invokeStatus != kTfLiteOk) {
        Serial.println("Invoke failed!");
        while (1); // halt; a return after this would never be reached
      }

      // Loop through the output tensor values from the model
      for (int i = 0; i < NUM_GESTURES; i++) {
        if (tflOutputTensor->data.f[i] > 0.7) {
          gestureid = i + 1;
          // write once; don't also wrap writeValue() in Serial.print(),
          // or the value gets sent to the central twice
          txChar.writeValue(String(gestureid));
        }
      }

      Serial.println(gestureid);
      gestureid = 0;
      samplesRead = 0; // reset for the next window of samples
    }
  }
}
HieronymusLex commented 1 year ago

Did you ever solve this? I have experienced a similar issue. Through some searching I found that if your model array is defined as const, execution will hang when invoking your TFLite model. If you remove const from your model file, it should work. My understanding is that this moves the model from flash to SRAM, but I don't know why this is needed, given that it only happens when using ArduinoBLE together with TFLite Micro. It also limits the size of model you can use.

fullmakeralchemist commented 1 year ago

> Did you ever solve this? I have experienced a similar issue. Through some searching I found that if your model is defined as a const then execution will hang when invoking your TFLite model. If you remove const from your model file it should work.

Hi, I tried multiple times with different code, and I tried erasing the const from the model.h file, right? But it still doesn't work. Or what did you mean?

HieronymusLex commented 1 year ago

Yeah, so I meant that in your model.h you should have unsigned char model[] = {...} rather than const unsigned char model[] = {...}.

Some other things to note:

  1. Don't block the main loop with while (central.connected()), otherwise ArduinoBLE seems to stop working. Just check whether it's connected, then check for motion. If not, return; otherwise gather enough samples, invoke the model, update the characteristic, then return.
  2. Ensure your tensor arena size is big enough.
  3. Throw in some additional debugging statements to find out where the hard fault is occurring (if that is what's happening), e.g. log right before you invoke your model and again afterwards.

As an aside, I believe I read somewhere that you need to define your BLE services/characteristics like this:

BLEStringCharacteristic * txChar = nullptr;
BLEService * sensorService = nullptr;

then set them in the setup function:

  sensorService = new BLEService(UUID_GEN("1000"));
  txChar = new BLEStringCharacteristic(UUID_GEN("1001"), BLERead | BLENotify, 512);