mit-han-lab / tinyengine

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
https://mcunet.mit.edu
MIT License

Using SDRAM of OpenMV instead of SRAM #79

Closed · senceryucel closed 1 year ago

senceryucel commented 1 year ago

Hey @meenchen, it is me again :)

While looking through the OpenMV H7 Plus' specs, I realized that it has 32MB of SDRAM, which is far bigger than its 1MB of SRAM. I was wondering whether there is a way to make TinyEngine use the SDRAM instead.

Just as a reminder, I was trying to handle a memory overflow problem in the firmware at higher resolutions (#75).

I have no limit on how long inference takes; there will be plenty of time for it in my project's pipeline. So, if using SDRAM for higher resolutions is possible, the increased inference time won't be a problem for me.

Thank you so much in advance.

BehicKlncky commented 1 year ago

Hi, @senceryucel

Actually, OpenMV DOES use SDRAM to load models. What happens with SRAM is basically this: if the model you are using is small enough, it is copied into SRAM so it can be processed much faster (SRAM offers around 3.2GB/s of bandwidth). So, even though I don't have all the details of what TinyEngine does to fit the engine into the firmware, the idea of using SRAM only and keeping SDRAM empty does not seem correct to me.
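
In case it helps: on STM32-style firmware such as OpenMV's, placing a large buffer in external SDRAM usually comes down to the linker script. Here is a minimal sketch, assuming the linker script defines a `.sdram` output section mapped to the external SDRAM address range; the section name, buffer name, and size are placeholders for illustration, not TinyEngine's or OpenMV's actual API:

```c
#include <stdint.h>

/* Hypothetical size: too large for the ~1MB of internal SRAM,
 * but comfortable within the 32MB SDRAM of the H7 Plus. */
#define ACTIVATION_BUF_SIZE (4u * 1024u * 1024u)

/* Ask the linker to place this buffer in the ".sdram" section,
 * which must be mapped to external SDRAM in the linker script
 * (the exact section name varies between firmwares). */
static uint8_t activation_buf[ACTIVATION_BUF_SIZE]
    __attribute__((section(".sdram"), aligned(32)));
```

The trade-off is exactly the one you mentioned: external SDRAM on the H7 is accessed through the FMC and is much slower than internal SRAM, which is why OpenMV copies small models into SRAM when they fit.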

senceryucel commented 1 year ago

Oh, @BehicKlncky

I think I got it, thank you for the explanation. Would it be OK if I asked you some more detailed questions about OpenMV via direct message?

BehicKlncky commented 1 year ago

Hi @senceryucel, sure, I'll be waiting for your message.

Have a great day!