raspberrypi / pico-tflmicro

Pico TensorFlow Lite Port

Unable to get output from model #13

Closed · risb21 closed this issue 8 months ago

risb21 commented 8 months ago

I've been trying to get an output from the sequential model that I'm running on a Raspberry Pi Pico W, but the output is always the same.

It is best to check out the code in the repository I'm currently working on: https://github.com/risb21/pico-shape-detection/tree/main

Here I've defined a method to predict values, which calls Invoke() on the micro interpreter and then returns a pointer to the output tensor's data: https://github.com/risb21/pico-shape-detection/blob/c50be45e462dbe6fd0bb65b4ae5ed76494c5db7a/src/tflite_wrapper.cpp#L94-L108

void* TFLMicro::predict() {
    TfLiteStatus invoke_status = _interpreter->Invoke();

    if (invoke_status != kTfLiteOk) {
        MicroPrintf("Could not Invoke interpreter\n");
        return nullptr;
    }

    _output_tensor = _interpreter->output(0);

    // float y_quantized = _output_tensor->data.f[0];
    // float y = (y_quantized - _output_tensor->params.zero_point) *
    //           _output_tensor->params.scale;
    return _output_tensor->data.data;
}
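As an aside, the commented-out dequantization above reads the float view (data.f) but applies quantization params, which only make sense for a quantized tensor. A minimal sketch of the usual int8 pattern, using only standard TfLiteTensor fields (this is not code from the repo):

// Sketch: dequantize element i of an int8 output tensor.
// data.int8 and params.{scale, zero_point} are standard
// TfLiteTensor fields in tflite-micro.
float dequantize_output(const TfLiteTensor* t, int i) {
    return (t->data.int8[i] - t->params.zero_point) * t->params.scale;
}

// For a float32 output there is no dequantization step:
//   float y = t->data.f[i];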

But when I read from it, the data stays the same, even though the input accelerometer data is different every time. https://github.com/risb21/pico-shape-detection/blob/c50be45e462dbe6fd0bb65b4ae5ed76494c5db7a/src/main.cpp#L209-L235

        if (flags & Flag::predict) {
            // Unset predict flag
            flags &= 0xFF ^ Flag::predict;

            float scale = model.input_scale();
            int32_t zp = model.input_zero_point();
            for (int line = 0; line < MAX_RECORD_LEN; line++) {
                input[line*3] = rec_data[line].x;
                input[line*3 + 1] = rec_data[line].y;
                input[line*3 + 2] = rec_data[line].z;
            }

            float *pred = reinterpret_cast<float *>(model.predict());

            if (pred == nullptr) {
                printf("Error in predicting shape\n");
                continue;
            }

            printf("+----------+----------+----------+\n"
                   "|  Circle  |  Square  | Triangle |\n"
                   "+----------+----------+----------+\n"
                   "| %8.3f | %8.3f | %8.3f |\n"
                   "+----------+----------+----------+\n",
                   pred[0], pred[1], pred[2]);

        }
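One thing worth noting in the snippet above: scale and zp are fetched but never applied when filling input. For an int8-quantized model, the float samples have to be quantized into the input tensor before Invoke(). A minimal sketch, reusing the names above; model.input_tensor() is an assumed accessor on the wrapper, not an existing method:

// Sketch (hypothetical): quantize float samples into an int8 input
// tensor before Invoke(). Requires <cmath> for std::lroundf;
// model.input_tensor() is an assumed accessor, while the tensor fields
// (data.int8, params.scale, params.zero_point) are standard tflite-micro.
TfLiteTensor* in = model.input_tensor();
const float scale = in->params.scale;
const int32_t zp  = in->params.zero_point;

for (int line = 0; line < MAX_RECORD_LEN; line++) {
    const float xyz[3] = {rec_data[line].x, rec_data[line].y, rec_data[line].z};
    for (int c = 0; c < 3; c++) {
        // q = round(x / scale) + zero_point, clamped to the int8 range
        int32_t q = static_cast<int32_t>(std::lroundf(xyz[c] / scale)) + zp;
        if (q < -128) q = -128;
        if (q > 127)  q = 127;
        in->data.int8[line * 3 + c] = static_cast<int8_t>(q);
    }
}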

[image: serial output showing the same prediction values on every run]

The tflite model has an 83 × 3 input and 3 output nodes: [image: model input/output details]

risb21 commented 8 months ago

I would also like to note that I've tried running the hello world example on my RPi Pico W too (with relevant modifications for the LED pin choice), and the output of the model stayed the same: the LED did not pulsate, and the model gave the same output on every iteration.

risb21 commented 8 months ago

UPDATE: I managed to get it working by converting the model I had trained to an int8 quantized model, but I would still like to know if there is a way to run the un-quantized float32 model on the Pico. I went through the TensorFlow Lite documentation; it mentions that fully quantized models are used for microcontrollers, but it does not rule out running un-quantized models on them.
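For reference, tflite-micro's reference kernels do include float32 paths, so an un-quantized model can run on a microcontroller; it is just slower and needs a larger tensor arena. The setup is the same as for the quantized model, and only the tensor access changes. A minimal sketch, with the op list guessed for a small dense classifier:

#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"

// Sketch: op registration is identical for a float32 model; the
// reference kernels for these ops all have float implementations.
// The op list below is a guess for a small dense network.
tflite::MicroMutableOpResolver<3> resolver;
resolver.AddFullyConnected();
resolver.AddRelu();
resolver.AddSoftmax();

// With float32 tensors, data is read and written directly,
// with no (de)quantization step:
//   interpreter.input(0)->data.f[i] = sample;    // write input
//   float y = interpreter.output(0)->data.f[i];  // read output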

risb21 commented 8 months ago

> I would also like to note that I've tried running the hello world example on my RPi Pico W too (with relevant modifications for the LED pin choice), and the output of the model stayed the same: the LED did not pulsate, and the model gave the same output on every iteration.

I also managed to get this running by using the int8 quantized model included in the hello_world directory instead.