diablodale opened 4 years ago
Just from my personal experience with using the SDK on Ubuntu: I used the order of the NVidia GPUs listed by the "nvidia-smi" command. The first GPU there is 0, the second GPU is 1, etc.
GPUs from other vendors don't seem to get a number.
Hi,
Is the Azure Kinect Sensor SDK supported on LattePanda (Windows OS), Raspberry Pi 4 (Windows ARM OS), and the Intel Compute Stick? Is the Azure Kinect in Microsoft HoloLens the same as the new Azure Kinect camera? Thank you in advance.
We are aware of customers successfully running the Sensor SDK on LattePanda boards. There are currently no plans to support Raspberry Pi 4 with Windows; however, we are considering expanding Ubuntu support to Raspberry Pi 4. There are also currently no plans to support the Intel Compute Stick. You can open feedback at https://feedback.azure.com/forums/920053-azure-kinect-dk asking for support of specific additional platforms.
Whilst the depth modules in HoloLens 2 and Azure Kinect are identical, they are run differently. There is no equivalent of the Sensor SDK API on HoloLens 2.
@qm13 thank you for the answer.
Clarity on this is also needed across the new engines in body tracking v1.1.0. How do we identify all the possible IDs as they map to GPU devices? How do we use those IDs to specify the specific CUDA, DirectML, TensorRT, etc. device?
For example, my laptop has an integrated Intel GPU, and an Nvidia RTX2070Super. I need to be able to generate a list with two items, and the KinectSDK ids for those items.
Something like vector<tuple<int32_t, string>> k4a::getGPUDevices(), which on my laptop would return a vector with size = 2, each tuple holding the int32_t needed for k4abt_tracker_configuration_t.gpu_device_id and some string or other struct that can describe the GPU device.
Here is the updated documentation. It will be published online with the next release of the SDK.
/** Specify the GPU device ID to run the tracker.
*
* The setting is not effective if the processing_mode setting is set to K4ABT_TRACKER_PROCESSING_MODE_CPU.
*
* For K4ABT_TRACKER_PROCESSING_MODE_GPU_CUDA and K4ABT_TRACKER_PROCESSING_MODE_GPU_TENSORRT modes,
* the ID of the graphics card can be retrieved using the CUDA API.
*
* When processing_mode is K4ABT_TRACKER_PROCESSING_MODE_GPU_DIRECTML,
* the device ID corresponds to the enumeration order of hardware adapters as given by IDXGIFactory::EnumAdapters.
*
* A device_id of 0 always corresponds to the default adapter, which is typically the primary display GPU installed on the system.
*
* More information can be found in the ONNX Runtime Documentation.
*/
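For the CUDA and TensorRT modes, the enumeration the comment refers to can be sketched with the CUDA runtime API. This is a sketch only, untested here, and assumes the CUDA toolkit is installed; the printed id is the value a caller would place in k4abt_tracker_configuration_t.gpu_device_id:

```cpp
// Sketch: list CUDA devices and the id usable as gpu_device_id.
// Requires the CUDA toolkit; compile with nvcc or link cudart.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("no CUDA-capable devices found\n");
        return 1;
    }
    for (int id = 0; id < count; ++id) {
        cudaDeviceProp prop{};
        if (cudaGetDeviceProperties(&prop, id) == cudaSuccess) {
            // 'id' is the value to assign to
            // k4abt_tracker_configuration_t.gpu_device_id for the
            // CUDA (and, per this thread, TensorRT) processing modes.
            std::printf("gpu_device_id %d: %s\n", id, prop.name);
        }
    }
    return 0;
}
```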
Hi.
Since the Body Tracker already implements these APIs, it would make sense to provide a simple wrapper.
A simple function that returns the names of the available/applicable GPUs for a specified processing mode, which we could reference (or present in a GUI) to select which GPU ID (or the default) is used.
That would spare everyone from needing to link against the different APIs and learn how to use them, especially since some APIs only support certain GPU types while other APIs support others.
So, for example, say a system has an NVidia 1xxx GPU: calling the function specifying CUDA will return a single GPU; DirectML returns the NVidia GPU as well as an Intel GPU that is part of the CPU (if one exists); TensorRT returns nothing (assuming that is only for 2xxx/3xxx GPUs).
The SDK can then internally map to the IDs used in the APIs while the user can work with much more predictable GPU names.
Just a thought.
"What 'CUDA API'? There are too many APIs": the CUDA runtime API. There is thorough documentation that describes how to obtain any information about the device, not only the name. For example, cudaGetDeviceProperties can be used to retrieve device properties.
"I recommend you remove this whole sentence, or correct it to consider the following". This sentence was taken from the ONNX Runtime documentation. We will investigate it and create an issue if required.
"Since the Body Tracker implements these APIs". The device ID is required by ONNX Runtime Library. It is not used for body tracking.
I look forward to more clarity and forward movement on this issue. @Brekel has some good suggestions about the Microsoft SDK wrapping compute devices. Both a vector<computeInfo> getComputeInfoByType(k4abt_tracker_processing_mode_t computeType)
and a vector<computeInfo> getComputeInfoAll()
would be useful.
The device ID is required by ONNX Runtime Library. It is not used for body tracking.
😵 That is not true from any reasonable perspective. Please see the Microsoft documentation and the SDK header files for the struct and gpu_device_id. The gpu_device_id is relevant for all Body Tracking use: it could be the default value 0 or some other value. Therefore, the gpu_device_id is needed for Body Tracking.
If the gpu_device_id is not needed for body tracking, please remove it from the documentation and deprecate that field in the struct. 😂
If the ONNX runtime is not needed for the Body Tracking SDK, then please remove onnxruntime.dll, onnxruntime_providers_shared.dll, and onnxruntime_providers_tensorrt.dll from the /tools and sdk\windows-desktop\amd64\release\bin directories of the SDK. Remember also to remove them from REDIST.txt in your Body Tracking SDK. 😂
Meanwhile... for the CUDA provider (does this also apply to TensorRT?), are you writing that we should...
cudaGetDeviceCount()
cudaGetDeviceProperties(..., countId)
If Microsoft chooses not to provide an API to determine the values of the id for its sibling API call, then I recommend Microsoft write an example in the Body Tracking SDK examples repo showing how to generate ids and friendly names for the 4 main types (cpu, cuda, tensorrt, directml).
@diablodale, @HlibKazakov2000 was right that 'gpu_device_id' is not directly required by the BT SDK. The BT SDK uses ONNX Runtime for its inference, and ONNX Runtime uses the CUDA, DirectML, etc. execution providers at a lower level. Up to BT SDK v1.1 only the CUDA execution provider was available, and it has a 'device_id' parameter (hence 'gpu_device_id' in the body tracker parameters). See https://www.onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html
'gpu_device_id' was added after a customer request, to provide an option to utilize an extra GPU for the BT inference in order to free the resources of the primary GPU. If you leave it at 0, the default GPU will be used by any of the ONNX execution providers (except the CPU one, of course).
I recommend we agree to disagree on the semantics and word definitions regarding the need, use, or requirement of gpu_device_id. That topic is slightly off track of the OP. The field is in the struct, it is not deprecated, and it must be initialized to its default 0 or set to one of 4 billion other ints.
Microsoft, do you have interest in pursuing ideas like those from @Brekel, or a new example in your BT examples repo?
Everyone, do you have difficulty discerning which people are "official" voices of Microsoft? People like @HlibKazakov2000 have almost zero participation on GitHub and no employer listed, yet their post above suggests some level of inside knowledge.
You said there is consideration of extending the SDK's Ubuntu support to the Raspberry Pi 4. How is that going now? Our project needs to use Raspberry Pi 4 + ROS + Azure Kinect, but we don't need body tracking; we just need to read color images and depth images normally. Is this currently possible, or can you provide some help?
I request documentation clarification of the integer identifier in Body Tracking k4abt_tracker_configuration_t.gpu_device_id. What is this id, and how is it discerned? I see it is int32_t. That means the id can be 4 billion possibilities. How do I know what integer to set here? Trial and error of 4 billion? ;-) Is it some number seeable in the Windows registry? An id that can be retrieved from DirectX APIs? CUDA? OpenCL? OpenGL?
Scenario
Imagine I want to present the end-customer with a list of GPUs currently installed in the computer. That list has friendly names like "Intel Graphics 123", "AMD Radeon Super 5", "Nvidia RTX2001". The end-customer can then select one of these GPUs, and my code needs to set k4abt_tracker_configuration_t.gpu_device_id. I likely used APIs to get all the GPU friendly names. How do I get this k4abt_tracker_configuration_t.gpu_device_id integer?
Related #992