espressif / esp-dl

Espressif deep-learning library for AIoT applications
MIT License
548 stars 118 forks

Is it possible to run nn on esp32s2? #42

Closed PureHing closed 3 years ago

PureHing commented 3 years ago

@yehangyang Hi, I see v2_alpha among the git branches, and there are NN lib*.a libraries for esp32s2. Can the S2 run NN too? And which version of IDF does this link require for ESP32, ESP32-S2, or ESP32-S3?

yehangyang commented 3 years ago

Hi, @PureHing Thank you for your attention.

  1. Yes, ESP32-S2 can also run NN functionally. However, as the ESP32-S2 has no instruction acceleration, it is slower than the ESP32-S3.
  2. ESP32, ESP32-S2, and ESP32-C3 are based on ESP-IDF release/v4.3. For ESP32-S3, the corresponding ESP-IDF is not released yet. Once ESP-IDF supports ESP32-S3, ESP-DL will be released officially.

Probably you can try ESP-DL/tutorial or ESP-DL/examples first, with ESP32, ESP32-S2, and ESP32-C3.

Best regards.

PureHing commented 3 years ago

@yehangyang Okay, thanks a lot. If the input is 480x480x1, similar to MobileNet feature extraction, is the speed acceptable on the ESP32-S3? BTW, is the esp-idf master branch at commit 8e3e65a47b suitable?

yehangyang commented 3 years ago

Hi, @PureHing I think it depends on your application's latency requirement. Please try the tutorial to test whether that ESP-IDF version is suitable.

Best regards.

PureHing commented 3 years ago

@yehangyang Hi, this is the output on ESP32-S2 with human face keypoint detection:

MNIST::forward: 797270 us
[0] score: 0.987729, box: [137, 75, 246, 215]
left eye: (157, 131), right eye: (158, 177)
nose: (170, 163)
mouth left: (199, 133), mouth right: (193, 180)

This is a great job! Can you give me an overview of the network structure? What is the maximum number of channels in your network?

yehangyang commented 3 years ago

Hi, @PureHing Thanks. But I'm afraid we cannot share the details of the network structure. Best regards.

PureHing commented 3 years ago

@yehangyang Hi, I have a question for you. According to "output_exponent is effective for bias coefficient converting only by now", does that mean the value of each output_exponent in config.json is log2(max(abs(np.load(f'(unknown)'_bias.npy))) / 2 ^ (element_width - 1))? Is that right?

Also, the About Bit Quantize document is empty.

yehangyang commented 3 years ago

Hi, @PureHing It's because bias_exponent must equal output_exponent, so I merged them into output_exponent.

No, output_exponent is determined by yourself. You can get it with the equation output_exponent = log2(max(abs(output_float)) / 2 ^ (element_width - 1)), where output_float is the output of a layer in floating-point. Or you can settle output_exponent through some other quantization method.
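As a sanity check, the equation can be sketched in plain Python. This is not ESP-DL code; the `element_width` default and the rounding of log2 up to an integer exponent are assumptions, not confirmed by the thread:

```python
import math

def output_exponent(output_float, element_width=16):
    # Smallest integer exponent such that every value fits a signed
    # element_width-bit integer under value_float = value_quant * 2**exponent.
    # (Rounding log2 up to an integer is an assumption.)
    max_abs = max(abs(v) for v in output_float)
    return math.ceil(math.log2(max_abs / 2 ** (element_width - 1)))
```

For example, a layer whose float outputs peak at ±6.2 gives an exponent of −12 for int16, so the quantized values stay within the signed 16-bit range.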

And output_exponent is set in Layer initialization, for example in the tutorial.

We plan to generate a Model from the JSON file in the future, so I reserved "output_exponent" in config.json. That may be misleading.

We're working on writing About Bit Quantize. Briefly, all quantization (int16 and int8) follows the equation value_float = value_quant * 2 ^ exponent. There are some differences between int16 and int8, which will be released soon. For now, you could use int16, which is much more stable.
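To make that equation concrete, here is a minimal int16/int8 round-trip sketch in plain Python. It is not the ESP-DL API; the round-to-nearest and clipping behavior are assumptions:

```python
def quantize(value_float, exponent, element_width=16):
    # value_quant = round(value_float / 2**exponent), clipped to the
    # signed element_width-bit integer range.
    lo, hi = -2 ** (element_width - 1), 2 ** (element_width - 1) - 1
    return max(lo, min(hi, round(value_float / 2 ** exponent)))

def dequantize(value_quant, exponent):
    # value_float = value_quant * 2**exponent
    return value_quant * 2 ** exponent
```

For instance, 0.15 quantized with exponent −12 dequantizes back to within 2^−13 of the original, while a value outside the representable range saturates at the clip boundary.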

Best regards.

PureHing commented 3 years ago

Okay. Thanks!