sub2sub2 / AI-LINUX-RUST


How to launch the AI model only when a D-Bus request occurs. #18

Open hyunsube opened 4 months ago

hyunsube commented 4 months ago

Keeping the AI model running all the time seems inefficient, because AI inference may not be requested very frequently. So the AI model should be launched on demand and terminated when it is no longer needed.

sseoreo commented 4 months ago

Do you mean keeping AI models resident in memory (or GPU memory)? Loading a model into GPU memory causes some delay, although for some apps that delay may be negligible, as you said.

How about apps providing their expected AI-request frequency when they are registered on MCA?

hyunsube commented 4 months ago

No, I mean launching the AI model binary automatically.

hyunsube commented 1 month ago

I've implemented D-Bus auto-launching using systemd, so the AI model is started when a D-Bus request is received from MCA.
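For reference, D-Bus activation via systemd is typically wired up with two files: a D-Bus service activation file that maps a bus name to a systemd unit, and the unit itself with `Type=dbus`. The sketch below is an assumption about how this could look for this project; the bus name `org.example.AiModel`, the unit name `ai-model.service`, and the binary path are hypothetical placeholders, not the actual names used in the repo.

A D-Bus service file, e.g. `/usr/share/dbus-1/services/org.example.AiModel.service`:

```ini
; Tells the bus daemon which systemd unit to start
; when a message arrives for this bus name.
[D-BUS Service]
Name=org.example.AiModel
Exec=/usr/bin/ai-model-daemon
SystemdService=ai-model.service
```

And the corresponding systemd unit, e.g. `ai-model.service`:

```ini
[Unit]
Description=AI model daemon (launched on demand via D-Bus activation)

[Service]
; Type=dbus: systemd considers the service started
; once it has acquired the BusName below.
Type=dbus
BusName=org.example.AiModel
ExecStart=/usr/bin/ai-model-daemon
```

With this setup, the daemon does not run at boot; the first method call from MCA to `org.example.AiModel` causes the bus daemon to ask systemd to start the unit. Terminating the model after an idle period would still need to be handled by the daemon itself (e.g. an inactivity timeout that exits the process).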