henjin0 opened this issue 1 year ago
Hey, I found a thorough solution here: https://samueladesola.medium.com/object-classification-on-arduino-nano-33-ble-sense-using-teachable-machine-3ead7389000a
The logic of the fix is to change the image dimensions and the unsigned-to-signed pixel conversion method (see the sketch below).
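As a rough illustration of that idea, here is a minimal sketch, assuming the Arduino_OV767X camera delivers a 176x144 grayscale QCIF frame and the model wants a `kNumCols` x `kNumRows` int8 input; `CropAndConvert` and the centered crop are hypothetical, not the article's exact code:

```cpp
#include <stdint.h>

// Hypothetical helper: center-crop a grayscale camera frame down to the
// model's input size and convert each unsigned byte to the signed int8
// range a quantized model expects (subtract 128).
void CropAndConvert(const uint8_t* frame, int frame_w, int frame_h,
                    int8_t* model_input, int out_w, int out_h) {
  const int x0 = (frame_w - out_w) / 2;  // left edge of the centered crop
  const int y0 = (frame_h - out_h) / 2;  // top edge of the centered crop
  for (int y = 0; y < out_h; ++y) {
    for (int x = 0; x < out_w; ++x) {
      const uint8_t pixel = frame[(y0 + y) * frame_w + (x0 + x)];
      model_input[y * out_w + x] = static_cast<int8_t>(pixel - 128);
    }
  }
}
```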
**Describe the bug**
The trained model sketch (.ino) exported after training with **Embedded image models** cannot be built and uploaded to the Arduino Nano 33 BLE Sense with the current Arduino_TensorFlowLite library, as recommended in GettingStarted.

**To Reproduce**
1. Open **Embedded Image Model** and train it with some pictures.
2. **Export Model**. At this point, the preview is working.
3. Choose **Tensorflow lite** and check **TensorFlow Lite for Microcontrollers**.
4. **Download my model** and unzip it.
5. Apply the following fixes (without them, build errors happen; see the sketch after this list):
   - tm_template_script.ino (L22): change the include path.
   - tm_template_script.ino (L26): comment out version.h.
   - tm_template_script.ino (L84, L85): remove error_reporter.
   - arduino_image_provider.cpp (L20): change the include path.
   - image_provider.h (L49): add capDataLen.
6. **Upload** from the Arduino IDE.
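For reference, the include-path fixes in step 5 look roughly like this; a sketch only, since the exact paths depend on the Arduino_TensorFlowLite build, but the tflite_bridge/ location matches the code pasted below:

```cpp
// tm_template_script.ino (L22): the error reporter header moved, e.g.
// #include "tensorflow/lite/micro/micro_error_reporter.h"            // old
#include "tensorflow/lite/micro/tflite_bridge/micro_error_reporter.h"  // new

// tm_template_script.ino (L26): comment out the version.h include,
// which the current library no longer provides
// #include "tensorflow/lite/version.h"
```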
**Desktop (please complete the following information):**
- Arduino IDE: 2.0.3
- Arduino library: Arduino_OV767X version 0.0.2
- Device: Arduino Nano 33 BLE Sense
**Relevant link**
https://github.com/googlecreativelab/teachablemachine-community/blob/master/snippets/markdown/tiny_image/GettingStarted.md
**Additional context**
I would appreciate any information on the versions of the Arduino environment that are known to work with this export.
**Relevant Code**

tm_template_script.ino:

```cpp
/* Licensed under the Apache License, Version 2.0 (the "License"); you may not
   use this file except in compliance with the License. Unless required by
   applicable law or agreed to in writing, software distributed under the
   License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
   CONDITIONS OF ANY KIND, either express or implied. See the License for the
   specific language governing permissions and limitations under the License.
==============================================================================*/
#include <TensorFlowLite.h>  // assumption: the angle-bracket include was stripped from the paste

#include "main_functions.h"
#include "image_provider.h"
#include "model_settings.h"
#include "person_detect_model_data.h"
#include "tensorflow/lite/micro/tflite_bridge/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
// In order to use optimized tensorflow lite kernels, a signed int8_t quantized
// model is preferred over the legacy unsigned model format. This means that
// throughout this project, input images must be converted from unsigned to
// signed format. The easiest and quickest way to convert from unsigned to
// signed 8-bit integers is to subtract 128 from the unsigned value to get a
// signed value.
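// For example, a raw pixel value of 200 becomes 200 - 128 = 72 and a raw 0
// becomes 0 - 128 = -128, so the full uint8 range [0, 255] maps onto the
// int8 range [-128, 127].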
// An area of memory to use for input, output, and intermediate arrays.
constexpr int kTensorArenaSize = 136 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];
}  // namespace
// The name of this function is important for Arduino compatibility.
void setup() {
  // Set up logging. Google style is to avoid globals or statics because of
  // lifetime uncertainty, but since this has a trivial destructor it's okay.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;
  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_person_detect_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }
  // Pull in only the operation implementations we need.
  // This relies on a complete list of all the ops needed by this graph.
  // An easier approach is to just use the AllOpsResolver, but this will
  // incur some penalty in code space for op implementations that are not
  // needed by this graph.
  //
  // tflite::AllOpsResolver resolver;
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroMutableOpResolver<6> micro_op_resolver;
  micro_op_resolver.AddAveragePool2D();
  micro_op_resolver.AddConv2D();
  micro_op_resolver.AddDepthwiseConv2D();
  micro_op_resolver.AddReshape();
  micro_op_resolver.AddSoftmax();
  micro_op_resolver.AddFullyConnected();
  // Build an interpreter to run the model with.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroInterpreter static_interpreter(
      model, micro_op_resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;
  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }
  // Get information about the memory area to use for the model's input.
  input = interpreter->input(0);
}
void loop() {
  // Get image from provider.
  if (kTfLiteOk != GetImage(error_reporter, kNumCols, kNumRows, kNumChannels,
                            input->data.int8)) {
    TF_LITE_REPORT_ERROR(error_reporter, "Image capture failed.");
  }
  // Run the model on this input and make sure it succeeds.
  if (kTfLiteOk != interpreter->Invoke()) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed.");
  }
  TfLiteTensor* output = interpreter->output(0);

  // Process the inference results. The model is int8-quantized, so the
  // scores are read through data.int8 rather than the legacy data.uint8.
  int8_t person_score = output->data.int8[kPersonIndex];
  int8_t no_person_score = output->data.int8[kNotAPersonIndex];
  for (int i = 0; i < kCategoryCount; i++) {
    int8_t curr_category_score = output->data.int8[i];
    const char* currCategory = kCategoryLabels[i];
    TF_LITE_REPORT_ERROR(error_reporter, "%s : %d", currCategory,
                         curr_category_score);
  }
  // Serial.write(input->data.int8, bytesPerFrame);
}
```
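On the "remove error_reporter" fix (step 5 above): recent tflite-micro releases retired the ErrorReporter plumbing in favor of MicroPrintf, so the change presumably looks like the following; a sketch under that assumption, not the exact patch from the article:

```cpp
#include "tensorflow/lite/micro/micro_log.h"  // declares MicroPrintf in recent tflite-micro

// Before (tm_template_script.ino L84-L85, legacy API):
//   static tflite::MicroErrorReporter micro_error_reporter;
//   error_reporter = &micro_error_reporter;
// After: drop the error_reporter global and log directly, e.g.
MicroPrintf("AllocateTensors() failed");
```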
image_provider.h:

```cpp
/* Licensed under the Apache License, Version 2.0 (the "License"); you may not
   use this file except in compliance with the License. Unless required by
   applicable law or agreed to in writing, software distributed under the
   License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
   CONDITIONS OF ANY KIND, either express or implied. See the License for the
   specific language governing permissions and limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_IMAGE_PROVIDER_H_
#define TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_IMAGE_PROVIDER_H_
include "tensorflow/lite/c/common.h"
include "tensorflow/lite/micro/tflite_bridge/micro_error_reporter.h"
// This is an abstraction around an image source like a camera, and is
// expected to return 8-bit sample data. The assumption is that this will be
// called in a low duty-cycle fashion in a low-power application. In these
// cases, the imaging sensor need not be run in a streaming mode, but rather
// can be idled in a relatively low-power mode between calls to GetImage().
// The assumption is that the overhead and time of bringing the low-power
// sensor out of this standby mode is commensurate with the expected duty
// cycle of the application. The underlying sensor may actually be put into a
// streaming configuration, but the image buffer provided to GetImage should
// not be overwritten by the driver code until the next call to GetImage().
//
// The reference implementation can have no platform-specific dependencies,
// so it just returns a static image. For real applications, you should
// ensure there's a specialized implementation that accesses hardware APIs.
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
                      int image_height, int channels, int8_t* image_data);
#endif  // TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_IMAGE_PROVIDER_H_
```
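As the header comment notes, a reference implementation can have no platform-specific dependencies and just return a static image. A minimal sketch of such a stub (hypothetical, not the shipped arduino_image_provider.cpp):

```cpp
#include "image_provider.h"

// Fills the model input with a constant mid-gray frame (0 in signed int8,
// i.e. 128 - 128) so the pipeline can be exercised without camera hardware.
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
                      int image_height, int channels, int8_t* image_data) {
  for (int i = 0; i < image_width * image_height * channels; ++i) {
    image_data[i] = 0;
  }
  return kTfLiteOk;
}
```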