Project-HAMi / HAMi

Heterogeneous AI Computing Virtualization Middleware
http://project-hami.io/
Apache License 2.0

When a pod has exclusive use of a GPU device, we can skip hami-core.so interception #603

Open lengrongfu opened 1 week ago

lengrongfu commented 1 week ago

What would you like to be added: Currently, even when a pod has exclusive use of a GPU device, libcuda.so is still called through libvgpu.so, which hurts call efficiency. I would like libcuda.so to be called directly in this scenario.
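For reference, a minimal sketch of the scenario being described, assuming HAMi's default resource name `nvidia.com/gpu` and that omitting the memory/core-slicing resources (`nvidia.com/gpumem`, `nvidia.com/gpucores`) means the pod gets the whole device; even in this exclusive case, CUDA calls currently pass through the libvgpu.so interception layer:

```yaml
# Hypothetical pod spec: requests one whole GPU via HAMi's default
# resource name (no nvidia.com/gpumem / nvidia.com/gpucores limits,
# so the device is not shared). Even here, CUDA calls are routed
# through libvgpu.so today.
apiVersion: v1
kind: Pod
metadata:
  name: exclusive-gpu-pod
spec:
  containers:
    - name: cuda-app
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/gpu: 1   # one full device, no slicing requested
```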

What type of PR is this?

/kind feature

What this PR does / why we need it:

Which issue(s) this PR fixes: Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

lengrongfu commented 1 week ago

/assign

archlitchi commented 1 week ago

You can add the env 'CUDA_DISABLE_CONTROL' to the container to skip hami-core manually.
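A minimal sketch of that workaround, assuming the value `"true"` is what enables the bypass (the exact expected value should be confirmed against the HAMi docs):

```yaml
# Hypothetical pod fragment: setting CUDA_DISABLE_CONTROL on the container
# so hami-core's interception is skipped. The value "true" is an assumption;
# check the HAMi documentation for the exact semantics.
spec:
  containers:
    - name: cuda-app
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      env:
        - name: CUDA_DISABLE_CONTROL
          value: "true"
      resources:
        limits:
          nvidia.com/gpu: 1
```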

Note that if you skip hami-core, you won't be able to monitor your pod via port 31992.