Closed amirvenus closed 5 months ago
Thanks for your feedback @amirvenus. I believe this would mostly be up to the other libraries we depend on for that kind of functionality, i.e. OnnxRuntime/TorchSharp/etc.
Thoughts here @luisquintanilla?
Correct. We can improve our APIs for ONNX Execution Provider support in ML.NET, but hardware support, especially for deep learning workflows that make the best use of specialized hardware, will for the most part depend on the underlying libraries.
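To make the dependency concrete, here is a minimal sketch using the Python onnxruntime package as a stand-in for the native ONNX Runtime library that ML.NET binds to: which execution providers exist, and therefore which hardware is reachable, is determined by the ONNX Runtime build itself, not by ML.NET. (The try/except fallback is only so the snippet runs in environments without onnxruntime installed.)

```python
# Sketch: the available execution providers come from the ONNX Runtime
# build, not from the consuming framework (ML.NET in this thread).
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime not installed in this environment

# CoreMLExecutionProvider is ONNX Runtime's Apple (Core ML) backend; it
# only appears in macOS builds compiled with Core ML support enabled.
print("CoreMLExecutionProvider" in providers)
```

So even with improved ML.NET APIs for selecting an execution provider, an Apple GPU path only exists once the underlying ONNX Runtime binary ships with it.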
Closing this for now since there is nothing we can do on the ML.NET side. We will keep our underlying library versions updated so that when this support does land we can take advantage of it.
Is your feature request related to a problem? Please describe. I'm always frustrated when I hear that, despite the portability of .NET and the large amounts of GPU memory available on recent Apple SoC machines, ML.NET only supports NVIDIA GPUs.
Describe the solution you'd like Support for the integrated GPU in Apple's M-series SoCs.
Describe alternatives you've considered There is currently no other viable .NET alternative.
Additional context Open-source models such as Mistral can already be loaded onto the Apple SoC GPU (e.g. via Ollama), so it would be great to add support for Apple's GPU (i.e. Metal) and/or the Neural Engine.
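The claim above can be illustrated with a short sketch, using the Python torch package as a stand-in for the libtorch native library that TorchSharp binds to: libtorch's MPS (Metal Performance Shaders) backend already targets the Apple GPU, which is exactly the kind of underlying-library support the maintainers say ML.NET would pick up. (The try/except fallback is only so the snippet runs where torch is absent or predates the MPS backend.)

```python
# Sketch: libtorch's MPS backend reaches the Apple GPU via Metal; a .NET
# binding such as TorchSharp can only surface hardware that the native
# library underneath it already supports.
try:
    import torch
    # True only on Apple Silicon macOS builds with the MPS backend.
    mps_ready = torch.backends.mps.is_available()
except (ImportError, AttributeError):
    mps_ready = False  # torch missing or too old in this environment

device = "mps" if mps_ready else "cpu"
print(device)
```

On an M-series Mac with a recent PyTorch this prints `mps`; elsewhere it falls back to `cpu`, mirroring how Apple GPU support would flow through to ML.NET only via the underlying libraries.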