Closed scottcantreid closed 3 years ago
If your interest is in benchmarks for TinyML, I would like to point you to https://github.com/mlcommons/tiny/tree/e104a3aac401f3e278e81759611da5e94c2fd660/v0.5
Here are more specific answers to your questions:
@advaitjain That's cool. Is there a way to fuse ops into SVDF?
Hi all, I hope it is okay to use your issues channel to ask a question about the benchmark models! I am looking at the benchmark models in `tflite-micro/tensorflow/lite/micro/benchmarks`, namely the person-detection model and the keyword-spotting model. I'm curious to learn more about these model architectures, i.e. what the hidden sizes, kernel sizes, etc. are behind each of them.

I'm able to deduce from training_a_model.md that the person-detection model is a mobilenet_v1 architecture, presumably with grayscale 96x96 input images. Assuming the architecture is just a standard mobilenet_v1 with the given input size (96x96x1) and 2 output classes (`'person'` and `'not a person'`), I should be able to fill in all of the remaining details. Could you confirm that this is indeed the correct architecture?

It is harder to fill in the architectural details for the keyword-spotting model. It appears from keyword_benchmark.cc that the architecture is `FC -> Quantize -> Softmax -> SVDF` (probably using this SVDF layer). However, in this code, the low-latency SVDF architecture seems to be quite different from the above. The `create_low_latency_svdf_model` function provides enough detail for me to work out the architecture, if that is indeed the code used to define the KWS benchmark model. I'd appreciate it if someone could clarify which of the two architectures the KWS benchmark actually tests.

Thank you!
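For what it's worth, here is a quick sketch of the layer shapes you'd get from a standard mobilenet_v1 on a 96x96x1 input. The `alpha=0.25` width multiplier is my assumption (it is commonly used for small visual-wake-words models), not something confirmed from the repo:

```python
def mobilenet_v1_shapes(hw=96, alpha=0.25):
    """Return (spatial size, channels) after each stage of a standard
    MobileNet v1: the stem conv plus 13 depthwise-separable blocks.
    alpha=0.25 is an assumed width multiplier, not confirmed by the repo."""
    # (output channels at alpha=1.0, stride) per stage.
    stages = [(32, 2), (64, 1), (128, 2), (128, 1), (256, 2), (256, 1),
              (512, 2)] + [(512, 1)] * 5 + [(1024, 2), (1024, 1)]
    shapes = []
    for ch, stride in stages:
        hw = -(-hw // stride)  # 'same' padding: ceil division
        shapes.append((hw, int(ch * alpha)))
    return shapes

# Final feature map before global average pooling and the classifier:
print(mobilenet_v1_shapes()[-1])  # -> (3, 256)
```

So with these assumptions the network ends in a 3x3x256 feature map, followed by global average pooling and a 2-class classifier.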
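In case it helps anyone reading this, here is a minimal plain-Python sketch of what a rank-1 SVDF layer computes, which is what the KWS model's SVDF ops boil down to: a feature filter applied to each new frame, plus a time filter over a sliding memory of past activations. The class name, shapes, and weight layout are illustrative, not taken from the tflite-micro kernels:

```python
class SVDF:
    """Illustrative rank-1 SVDF layer (not the tflite-micro implementation).

    feature_weights: per-unit vector of length input_dim
    time_weights:    per-unit vector of length memory_size
    """

    def __init__(self, num_units, memory_size, feature_weights, time_weights):
        self.num_units = num_units
        self.memory_size = memory_size
        self.feature_weights = feature_weights
        self.time_weights = time_weights
        # Per-unit sliding buffer of past feature-filter activations.
        self.state = [[0.0] * memory_size for _ in range(num_units)]

    def __call__(self, x):
        out = []
        for u in range(self.num_units):
            # Feature filtering: project the new frame onto the unit's
            # feature vector (the rank-1 part of the factorization).
            act = sum(w * v for w, v in zip(self.feature_weights[u], x))
            # Shift the activation into the unit's memory.
            self.state[u] = self.state[u][1:] + [act]
            # Time filtering: weighted sum over the memory.
            out.append(sum(w * s
                           for w, s in zip(self.time_weights[u], self.state[u])))
        return out

# Toy usage: one unit, memory of two frames.
svdf = SVDF(num_units=1, memory_size=2,
            feature_weights=[[1.0]], time_weights=[[0.5, 1.0]])
print(svdf([2.0]))  # -> [2.0]
print(svdf([3.0]))  # -> [4.0]
```

The stateful memory is why SVDF is attractive for streaming keyword spotting: each new audio frame reuses the filtered activations of previous frames instead of recomputing over a full window.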