Closed: dansimon363 closed this issue 3 months ago
Yes, if you are using TorchSharp with ML.NET, you do have to use the TorchSharp version that ML.NET was built against. TorchSharp is still in preview, so breaking binary compatibility is still possible.
Ok, thanks for confirming.
Should the issue be closed, or is there an action item here?
I'm not sure if it should be closed. I'll defer to your expertise.
If this is expected, it would be nice if there were some way for a dev to find out about binary-compatibility breaks when using TorchSharp with ML.NET (if there isn't one already). I had assumed that using the latest version of every package would work. I don't know whether RELEASENOTES.md, README.md, or somewhere else would be the right place (if anywhere). I understand that being in preview allows breaking changes, so this is just a suggestion.
When you make a reference to ML.NET in your project, it should pull in the right version of TorchSharp. Is that not what you're seeing? If 0.102.7 is pulled in by ML.NET, I would think that's something that has to be fixed in ML.NET's package dependency definitions.
When I add a reference to Microsoft.ML.TorchSharp, it pulls in TorchSharp version 0.101.5.
I must have added a TorchSharp-cpu reference at some point while trying to get the different ML.NET examples working in my project, and then updated all the packages. So it seems it's behaving as it should.
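For anyone hitting the same mismatch: `dotnet list package --include-transitive` shows which TorchSharp version actually gets resolved. If a direct TorchSharp-cpu reference is really needed, one way to avoid the conflict is to pin it in the `.csproj` to the version ML.NET depends on. A minimal sketch, using the version numbers from this thread (they will be outdated for later releases):

```xml
<ItemGroup>
  <!-- If TorchSharp-cpu must be referenced directly alongside Microsoft.ML.TorchSharp,
       pin it to the version ML.NET depends on (0.101.5 in this thread) rather than
       updating it to the latest (0.102.7), which is not binary-compatible. -->
  <PackageReference Include="TorchSharp-cpu" Version="0.101.5" />
</ItemGroup>
```

Dropping the direct reference entirely and letting Microsoft.ML.TorchSharp pull in its own TorchSharp dependency avoids the problem altogether.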
The problem you ran into is that we are not guaranteeing binary compatibility between releases, so when another package takes a dependency on TorchSharp, it depends on a specific version of it.
The update to cuda() was to enable asynchronous moves, which aligns with PyTorch. Since ML.NET pulls in the right version, it seems to me that there's no further action to be taken (besides moving TorchSharp beyond preview status :-))
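For context, the cuda() call in question is the device-move extension on tensors and modules. A minimal TorchSharp sketch of what callers compile against (illustrative only; the exact signature is what changed between 0.101.x and 0.102.x):

```csharp
using TorchSharp;
using static TorchSharp.torch;

// Create a tensor on the CPU.
var t = rand(2, 3);

// Move it to the default CUDA device when one is available.
// Newer TorchSharp releases changed this call to support
// asynchronous transfers (aligning with PyTorch) - the signature
// change that breaks code compiled against the older binary.
if (cuda.is_available())
{
    var gpu = t.cuda();
}
```

Because the change is to a method signature, code compiled against 0.101.5 throws MissingMethodException at runtime when 0.102.7 is loaded, even though the source still compiles against either version.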
Thanks for the explanation @NiklasGustafsson.
Visual Studio 2022 (version 17.10.4, x64), TorchSharp-cpu 0.102.7
Using the following code throws a MissingMethodException in TorchSharp-cpu version 0.102.7 but works correctly in version 0.101.5.
Error occurs on this line:
ITransformer mlModel = mlContext.Model.Load(mlNetModelPath, out var _);
I was informed of a workaround via this post: https://stackoverflow.com/questions/78630239/missingmethodexception-method-not-found-0-torchsharp-moduleextensionmethods.