Open · nietras opened this issue 2 years ago
@nietras, I looked into this a little bit.
It looks like the C++ code in LibTorch directly manipulates Python objects to add the necessary references from the CUDA backend.
As far as I can tell, we'd have to link against the CUDA libraries directly in order to reach these APIs; libtorch doesn't seem to export anything we can use from TorchSharp's native interop layer.
PyTorch exposes this as `torch.cuda.get_device_properties` (https://pytorch.org/docs/stable/generated/torch.cuda.get_device_properties.html). The idea is to do the same for TorchSharp, with the goal of being able to query available memory and other properties of CUDA devices.
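For reference, this is roughly what the PyTorch side looks like today — a minimal sketch of querying device properties via the linked Python API, which is the behavior TorchSharp would mirror (the guard on `torch.cuda.is_available()` is there because the call requires a CUDA device):

```python
import torch

if torch.cuda.is_available():
    # get_device_properties returns an object with fields such as
    # name, total_memory, major, minor, multi_processor_count
    props = torch.cuda.get_device_properties(0)
    print(props.name)
    print(props.total_memory)       # total device memory in bytes
    print(props.major, props.minor) # compute capability
else:
    print("No CUDA device available")
```

A TorchSharp equivalent would presumably surface the same fields through the C# API, but as noted above, the underlying properties appear to come from the CUDA runtime rather than from anything libtorch exports.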
I don't know exactly what would need to change to allow this, but it should be fairly straightforward.