tauri-apps / tauri-docs

The source for all Tauri project documentation.
https://tauri.app

[request] Why is Rust-based YOLO inference in Tauri 2.0 so slow? #2992

Closed lqxisok closed 1 week ago

lqxisok commented 1 week ago

Question you want answered

Why is Rust-based YOLO inference in Tauri 2.0 so slow?

Where did you look for an answer?

I am trying to embed YOLO inference in a Tauri app. At first I used PyO3 to call Python files for the inference and measured roughly 160 ms per image at 1280x1280 resolution. Then I found another implementation written in pure Rust. After switching to it, I was surprised to measure roughly 800 ms per image at the same resolution. However, when I benchmark that crate on its own, without Tauri as the frontend framework, it only takes about 60 ms per image. This overhead seriously hurts the deployment of neural networks with Tauri.

Links: https://github.com/jamjamjon/usls -> Rust-based YOLO inference crate
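
For reference, a minimal timing sketch that separates the model's own inference time from everything else the command does; `run_yolo` is a hypothetical stand-in, not the real usls API:

```rust
use std::time::Instant;

// Hypothetical stand-in for the usls-based inference call; the real API differs.
// It exists only so this timing skeleton compiles.
fn run_yolo(image: &[u8]) -> Vec<u8> {
    image.to_vec()
}

#[tauri::command]
fn detect(image: Vec<u8>) -> Vec<u8> {
    let start = Instant::now();
    let result = run_yolo(&image);
    // If this prints ~60 ms, the remaining ~700 ms per image is spent outside
    // the model: IPC (de)serialization, a debug build, or transferring the
    // 1280x1280 image between JS and Rust.
    println!("inference inside the command: {:?}", start.elapsed());
    result
}
```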

Page URL

No response

Additional context

No response

Are you willing to work on this yourself?

FabianLars commented 1 week ago

Are you sending the data back and forth between JS and Rust? The IPC will always add at least some delay. I'd recommend trying release mode and, if possible, sending binary data as described in https://v2.tauri.app/develop/calling-rust/#returning-array-buffers / https://v2.tauri.app/develop/calling-rust/#accessing-raw-request.
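
As a rough sketch of the array-buffer approach from the linked page (`encode_detections` is a hypothetical helper, not part of usls or Tauri):

```rust
use tauri::ipc::Response;

// Hypothetical helper that packs detection results into a compact byte format.
fn encode_detections(image: &[u8]) -> Vec<u8> {
    image.to_vec()
}

// Returning `tauri::ipc::Response` sends the bytes over the IPC as a raw
// array buffer instead of JSON, which avoids serializing large payloads.
// The incoming `image` argument is still JSON-serialized here; see the
// "accessing raw request" link above for the raw request body.
#[tauri::command]
fn detect_raw(image: Vec<u8>) -> Response {
    let bytes = encode_detections(&image);
    Response::new(bytes)
}
```

On the JS side the result of `invoke` then arrives as an ArrayBuffer rather than a parsed JSON value.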

Speaking of debug vs. release: this alone makes a huge difference when comparing the Rust implementation to the Python implementation, even before counting the IPC overhead.
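
For example, a trivial way to gauge the debug/release gap on your machine (the loop is just a stand-in for compute-heavy code):

```rust
use std::time::Instant;

fn main() {
    let start = Instant::now();
    // Stand-in for compute-heavy work; debug builds of number-crunching Rust
    // code are typically several times slower than release builds.
    let mut acc = 0u64;
    for i in 0..100_000_000u64 {
        acc = acc.wrapping_add(i ^ (i >> 3));
    }
    println!("took {:?} (acc = {acc})", start.elapsed());
    // Compare `cargo run` vs `cargo run --release`; for the app itself,
    // `cargo tauri build` produces a release build, while `cargo tauri dev`
    // runs a debug build by default.
}
```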