I was having trouble getting the Hello World example to work. The code builds and runs on the Raspberry Pi Pico, but the LED stays constantly on.
When debugging, I noticed that the x_quantized value is always 0, despite the x value moving up and down. Digging further, I found that input->params.scale and input->params.zero_point are always 0.
Am I doing something wrong? Looking at the original TensorFlow Lite example I see that the x-value isn't quantised, and wondered if keeping it a float would work for the Pico. It seems to, so putting the code here in case anyone else is encountering this issue (happy to submit a PR if helpful).
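For anyone else debugging this, you can confirm what the interpreter thinks of the input tensor by printing its type and quantization params after AllocateTensors() (a minimal sketch using standard TFLM tensor fields; where you call it is up to you):

  MicroPrintf("input type=%d scale=%f zero_point=%d", input->type,
              static_cast<double>(input->params.scale),
              static_cast<int>(input->params.zero_point));
  // A float model reports type kTfLiteFloat32 with scale == 0; an int8
  // quantized model reports kTfLiteInt8 with a non-zero scale.

Here's the working float version: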
#include "constants.h"
#include "hello_world_float_model_data.h"
#include "main_functions.h"
#include "output_handler.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_log.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/system_setup.h"
#include "tensorflow/lite/schema/schema_generated.h"
// Globals, used for compatibility with Arduino-style sketches.
namespace {
const tflite::Model *model = nullptr;
tflite::MicroInterpreter *interpreter = nullptr;
TfLiteTensor *input = nullptr;
TfLiteTensor *output = nullptr;
int inference_count = 0;
constexpr int kTensorArenaSize = 2000;
uint8_t tensor_arena[kTensorArenaSize];
} // namespace
// The name of this function is important for Arduino compatibility.
void setup() {
  tflite::InitializeTarget();

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_hello_world_float_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    MicroPrintf(
        "Model provided is schema version %d not equal "
        "to supported version %d.",
        model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroMutableOpResolver<1> resolver;
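  // (The <1> template argument is the number of ops this resolver can hold;
  // this model only needs FullyConnected.)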
  TfLiteStatus resolve_status = resolver.AddFullyConnected();
  if (resolve_status != kTfLiteOk) {
    MicroPrintf("Op resolution failed");
    return;
  }

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    MicroPrintf("AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  output = interpreter->output(0);
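
  // Optional sanity check (my addition, not in the upstream example): confirm
  // the model really expects float input before writing floats into it.
  if (input->type != kTfLiteFloat32) {
    MicroPrintf("Unexpected input tensor type %d; expected kTfLiteFloat32",
                input->type);
    return;
  }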

  // Keep track of how many inferences we have performed.
  inference_count = 0;
}
// The name of this function is important for Arduino compatibility.
void loop() {
  // Calculate an x value to feed into the model. We compare the current
  // inference_count to the number of inferences per cycle to determine
  // our position within the range of possible x values the model was
  // trained on, and use this to calculate a value.
  float position = static_cast<float>(inference_count) /
                   static_cast<float>(kInferencesPerCycle);
  float x = position * kXrange;

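  // The original example quantizes x into the int8 input tensor (kept below,
  // commented out). With the float model, input->params.scale reports 0, so
  // x / scale + zero_point never yields a usable value; write the float
  // directly instead.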
  // int8_t x_quantized = x / input->params.scale + input->params.zero_point;
  // Place the quantized input in the model's input tensor
  // input->data.int8[0] = x_quantized;
  input->data.f[0] = x;

  // Run inference, and report any error
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    MicroPrintf("Invoke failed on x: %f\n", static_cast<double>(x));
    return;
  }

  // Read the float model's output directly. (The original quantized example
  // read an int8 value here and dequantized it:)
  // int8_t y_quantized = output->data.int8[0];
  // float y = (y_quantized - output->params.zero_point) * output->params.scale;
  float y = output->data.f[0];

  // Output the results. A custom HandleOutput function can be implemented
  // for each supported hardware target.
  HandleOutput(x, y);

  // Increment inference_count, and reset it if we have reached
  // the total number per cycle.
  inference_count += 1;
  if (inference_count >= kInferencesPerCycle) inference_count = 0;
}
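If you'd rather keep the original quantized (int8) model, something like this guard (a sketch, assuming the int8 hello_world model; a scale of 0 is how an unquantized tensor reports itself) would avoid feeding garbage into the input tensor:

  if (input->params.scale != 0.0f) {
    // Quantize x into the int8 input tensor, as the original example does.
    int8_t x_quantized = static_cast<int8_t>(
        x / input->params.scale + input->params.zero_point);
    input->data.int8[0] = x_quantized;
  } else {
    // Tensor reports no quantization params: treat it as a float input.
    input->data.f[0] = x;
  }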
PS: Thank you @petewarden for all your great work!