[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
While the paper suggests that patch-based inference generally reduces peak memory usage across models, our experiments show the opposite for some models: for mcunet-vww2, mcunet-in2, and mcunet-in3, patch-based inference exhibits higher peak memory usage than non-patch-based inference.
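To make the comparison concrete, below is a minimal, self-contained sketch (not the TinyEngine memory planner) of how per-layer and patch-based peak activation memory can be estimated. All activation sizes, the split point, patch counts, and the halo-overlap factor are hypothetical assumptions, not measurements of mcunet-vww2, mcunet-in2, or mcunet-in3.

```python
# Toy sketch: contrast peak activation memory of layer-by-layer inference with
# a simple model of patch-based inference. All numbers below are illustrative
# assumptions, not measurements of the released MCUNet models.

def per_layer_peak(acts):
    """Peak when layers run one at a time: largest input + output pair."""
    return max(acts[i] + acts[i + 1] for i in range(len(acts) - 1))


def patch_based_peak(acts, split, num_patches, overlap):
    """Patch-based stage covers layers [0, split). Only a patch of each
    intermediate activation is resident, but the full output of the stage
    (acts[split]) is accumulated while patches are produced; `overlap`
    models halo recomputation inflating the per-patch buffers."""
    patch_bufs = max(
        (acts[i] + acts[i + 1]) * overlap / num_patches for i in range(split)
    )
    stage_peak = patch_bufs + acts[split]      # full stage output stays resident
    head_peak = per_layer_peak(acts[split:])   # remaining layers run per-layer
    return max(stage_peak, head_peak)


if __name__ == "__main__":
    acts = [100, 100, 100, 200, 80, 40]        # hypothetical activation sizes in KB
    print("per-layer peak:               ", per_layer_peak(acts), "KB")
    print("patch-based (8 patches, 1.1x):", patch_based_peak(acts, 3, 8, 1.10), "KB")
    print("patch-based (2 patches, 1.3x):", patch_based_peak(acts, 3, 2, 1.30), "KB")
```

Under these assumptions, patch-based inference wins when many patches keep the per-patch buffers small, but loses when the patch-stage output is large and the halo overlap is heavy, which may help explain why some models show higher peaks with patch-based inference.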