[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
Hi @waquey, thanks for your interest in our work. Our current implementation only supports int8 models; please convert your model to int8 in order to use TinyEngine.
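Since a uint8-quantized tensor and an int8-quantized tensor can encode exactly the same real values (affine quantization uses real = scale * (q - zero_point), so shifting every stored value and the zero point by -128 changes nothing), the conversion is a simple remap. Here is a minimal NumPy sketch; the function name `uint8_to_int8` is my own, not part of the TinyEngine codebase:

```python
import numpy as np

def uint8_to_int8(q_u8: np.ndarray, zero_point_u8: int):
    """Remap a uint8-quantized tensor to the equivalent int8 encoding.

    Shifting both the stored values and the zero point by -128 preserves
    real = scale * (q - zero_point), so the decoded values are unchanged.
    """
    # Widen to int16 first so the subtraction cannot wrap around.
    q_i8 = (q_u8.astype(np.int16) - 128).astype(np.int8)
    return q_i8, zero_point_u8 - 128

# Example: uint8 value 200 at zero_point 128 decodes to scale * 72;
# after conversion, int8 value 72 at zero_point 0 decodes identically.
q, zp = uint8_to_int8(np.array([0, 128, 200, 255], dtype=np.uint8), 128)
```

For TFLite models specifically, the cleaner route is to re-run post-training quantization with the converter's `inference_input_type`/`inference_output_type` set to `tf.int8`, so the whole model (weights and activations) is emitted as int8 from the start.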
Thanks for the great work. As described in the tutorial, the inputs and outputs are all of type "int8".
Are uint8 models supported, or should we just convert them to int8 instead? Thanks