Up to this point the model seemed to perform okay, although the "circle" gesture was a bit tricky to trigger.
After commit cbddbc28aa2e9d709f97bf8b067dee70e87ba5b0 the "Data Processor" samples 80 accelerometer data points, as the ML Trainer does. From that point on, when the micro:bit is at rest in the "face up" position, the "circle" ML gesture is constantly triggered.
Changing the orientation of the micro:bit does correctly trigger the "still" gesture when it is not moving, and shaking works well.
Taking the parent commit e0e8e5e1494efe58b0c783a1297c6a1b7d6291aa and changing the `mlDataProcessor.init(inputLen / 3)` call to `mlDataProcessor.init(80)` produces the same result, which rules out a separate issue or bug being introduced by cbddbc28aa2e9d709f97bf8b067dee70e87ba5b0.
We'll need to check whether this is an issue with converting the model with ML4F, or a bug in the implementation of the data processing before it is fed into the model.
Things we need to double check:
- The ML Trainer sampling period is 25 ms
- The ML Trainer processes 80 samples for each model inference
- Before commit cbddbc28aa2e9d709f97bf8b067dee70e87ba5b0 the "Data Processor" was only collecting 8 samples with the built-in model (model input length of 24 divided by 3): https://github.com/microbit-foundation/pxt-ml-runner-poc/blob/e0e8e5e1494efe58b0c783a1297c6a1b7d6291aa/pxtextension.cpp#L109