Open DrewRidley opened 2 years ago
@Adele101 @smk2007 @PatriceVignola - can you respond, since this is PyTorch/TensorFlow-specific rather than about DirectML itself?
(though, my hunch is that using the Python API or C++ API of each framework won't make much difference to performance, as the bulk of the work is in the tensor processing on the GPU rather than the API calls)
@fdwr the point of using C++ APIs has to do with not needing to deploy Python.
Additionally, I would like to use https://github.com/dotnet/TorchSharp , which "links" against torch.dll
Currently we are targeting PyTorch for training in Python, on top of DX12-compatible GPUs via DirectML. As such, we don't support a native distribution that is compatible with libtorch.
@Adele101, we should look into producing a native distribution to facilitate integration for libtorch customers and other torch projections (e.g., the TorchSharp mentioned above).
DirectML for libtorch would be amazing. We would like to start shipping some machine learning tools with our product (written in C++) and would love to use libtorch, but using the GPU for training will be required. Compiling libtorch with GPU (CUDA) support would add an extra 1.2 GB to the install package of our product, plus the extra ~200 MB for the CPU libraries. This would quadruple the install package size of our product, which is a bit rough, as these tools may not be used by the majority of our users. Plus, DirectML would not restrict our users to NVIDIA GPUs, which would be great.
We need a plugin that does not depend on Python.
Any updates on this thread? I am really hoping to have DirectML work with libtorch and torchsharp.
ML.NET has recently introduced GenAI packages, implementing popular Large Language Models (LLMs) such as Phi, Llama, and Mistral using TorchSharp. These implementations are 1:1 ports from huggingface transformers and can load the same .safetensors weights that the transformers models load. If DirectML could work with libtorch and TorchSharp, inference on these models could also be accelerated on non-CUDA devices.
Hello,
I was wondering if it is possible to use the C++ APIs of either PyTorch (libtorch) or TensorFlow with DirectML for enhanced performance. If it is possible, could the documentation be updated to reflect that?
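For context, DirectML acceleration for PyTorch is currently exposed through the Python-only torch-directml package rather than through libtorch, which is what the comments above are asking to change. A minimal sketch of the existing Python path, assuming the torch-directml package is installed on a Windows machine with a DX12-capable GPU:

```python
# Requires: pip install torch torch-directml (Windows, DX12-capable GPU)
import torch
import torch_directml

# torch_directml.device() returns a torch.device bound to the
# default DirectML adapter (any DX12-compatible GPU, not just NVIDIA)
dml = torch_directml.device()

# Tensors moved to this device are processed on the GPU via DirectML
x = torch.randn(4, 4).to(dml)
y = torch.randn(4, 4).to(dml)
z = x @ y  # the matrix multiply executes on the DirectML backend
print(z.shape)
```

There is no equivalent of this today for the C++ (libtorch) or TorchSharp side, which is the gap this issue tracks.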