AIRLegend / aitrack

6DoF Head tracking software
MIT License

CUDA acceleration #141

Open Deamons2006 opened 2 years ago

Deamons2006 commented 2 years ago

I don't know a whole lot about coding for this kind of stuff, but would it be possible to use the CUDA cores on NVIDIA GPUs to accelerate the AI computation and decrease CPU usage? Being able to choose which GPU is used would be nice as well.

searching46dof commented 1 year ago

This is not available in the Ort::SessionOptions configuration that AITrack currently uses.

https://onnxruntime.ai/docs/api/c/struct_ort_1_1_session_options.html#afea37bbd589b4acd151ce2cc49ac7844

AITrack would need to upgrade ONNX Runtime to a version that supports AppendExecutionProvider_CUDA.

The current session options do, however, support running with the optimal number of parallel threads (= number of CPU cores) instead of the single thread used now:

    auto session_options = Ort::SessionOptions();
    session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);
    session_options.SetInterOpNumThreads(0);   // 0 = default optimal parallel threads
    session_options.SetIntraOpNumThreads(0);   // 0 = default optimal parallel threads
    session_options.SetExecutionMode(ExecutionMode::ORT_PARALLEL);
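
For illustration, if AITrack were upgraded to an ONNX Runtime package that ships the CUDA execution provider (plus the matching CUDA/cuDNN runtime libraries), enabling it would look roughly like the sketch below. This is only a sketch against the public Ort C++ API, not AITrack code; the helper name and the device_id parameter (which would let the user pick a GPU) are assumptions.

    #include <onnxruntime_cxx_api.h>

    // Sketch only: assumes an ONNX Runtime build that includes the CUDA
    // execution provider. Operators it cannot run fall back to the CPU provider.
    Ort::SessionOptions MakeSessionOptions(bool use_cuda, int device_id)
    {
        Ort::SessionOptions session_options;
        session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);

        if (use_cuda)
        {
            OrtCUDAProviderOptions cuda_options{};
            cuda_options.device_id = device_id;   // lets the user choose which GPU runs inference
            session_options.AppendExecutionProvider_CUDA(cuda_options);
        }
        else
        {
            // CPU path, as in the snippet above.
            session_options.SetInterOpNumThreads(0);
            session_options.SetIntraOpNumThreads(0);
            session_options.SetExecutionMode(ExecutionMode::ORT_PARALLEL);
        }
        return session_options;
    }

The GPU package of ONNX Runtime also needs the CUDA and cuDNN libraries available at runtime, which is part of the extra dependency weight discussed below.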

AIRLegend commented 1 year ago

This has been suggested before (#64). It sounds like an interesting idea that should be explored, but I don't think it should be a priority for now because:

  1. It requires adding a dependency on the TensorRT SDK.
  2. It would only support NVIDIA cards with real-time inference capabilities (I don't know how many AITrack users have "modern" RTX cards).
  3. I suspect the real-time video pipeline would still put a significant load on the CPU, because on each frame the data has to be copied to and from the card, reducing the available bus bandwidth for the games (I haven't measured it, though; see the per-frame sketch after this list).

I'll mark this as an IDEA/ENHANCEMENT in case someone wants to benchmark this.
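
For anyone who does want to benchmark it, here is a minimal sketch of the per-frame path that point 3 refers to. The input shape (1×3×224×224) and the tensor names are placeholders, not AITrack's actual model interface; with the CUDA execution provider registered, the input buffer is uploaded to the GPU and the output copied back on every Run() call.

    #include <onnxruntime_cxx_api.h>
    #include <array>
    #include <vector>

    // Placeholder shape and tensor names for illustration; the real models may differ.
    void RunPerFrame(Ort::Session& session, std::vector<float>& frame_chw)
    {
        const std::array<int64_t, 4> shape{1, 3, 224, 224};
        const char* input_names[]  = {"input"};
        const char* output_names[] = {"output"};

        Ort::MemoryInfo mem_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);

        // CPU-side tensor wrapping the preprocessed camera frame. With the CUDA
        // provider this means a host-to-device copy on every frame, and the
        // landmark output is a device-to-host copy on the way back.
        Ort::Value input = Ort::Value::CreateTensor<float>(
            mem_info, frame_chw.data(), frame_chw.size(), shape.data(), shape.size());

        auto outputs = session.Run(Ort::RunOptions{nullptr},
                                   input_names, &input, 1,
                                   output_names, 1);
        // ... turn outputs[0] into landmarks / head pose ...
    }

As a rough, unmeasured estimate, a 224×224×3 float32 input is about 0.6 MB, so at webcam frame rates the raw transfer volume is only tens of MB/s; the per-frame copy and synchronization latency is probably the more interesting thing to measure.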

MisterFabulous commented 1 year ago

unsubscribe


RedSnt commented 1 year ago

Not sure how to report this, but I'm sure MisterFabulous here wouldn't like his address and phone numbers shared.