Thank you for the report.
You are correct: the C++ implementation calls the wrong libtorch function -- mse_loss() instead of l1_loss().
I can fix it, or you can submit a PR and get yourself on our 'Contributor' board... :-)
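For context, here is a minimal sketch of the intended change, expressed in plain libtorch calls. The real binding in THSLoss.cpp wraps these in the repo's own helpers, and the wrapper below is illustrative, not the actual signature:

```cpp
#include <torch/torch.h>

// Illustrative shape of the fix: the l1_loss binding should forward to
// torch::nn::functional::l1_loss rather than ::mse_loss. (Hypothetical
// wrapper; the real binding also maps TorchSharp's reduction argument
// onto L1LossFuncOptions.)
torch::Tensor l1_loss_fixed(const torch::Tensor& input, const torch::Tensor& target)
{
    // Before the fix, the binding called torch::nn::functional::mse_loss here.
    return torch::nn::functional::l1_loss(
        input, target, torch::nn::functional::L1LossFuncOptions());
}
```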
@RensOliemans -- take a look at #1360
Thanks, I had the same diff in https://github.com/RensOliemans/TorchSharp/commit/1f7e50b177cc68e0d2473b5da59c4baeb9e30b8a but didn't push it yet: I haven't figured out how to build TorchSharp so that I get a .nupkg that I can test locally. Is that possible on Linux?
Does dotnet pack not work?
You can look in the azure_pipeline.yaml file for the exact CLI arguments to provide to dotnet pack.
Thanks, this works. Specifically, the /p:IncludeTorchSharpPackage=true and /p:IncludeLibTorchCpuPackages=true flags for dotnet pack.
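For anyone reproducing this on Linux, a sketch of a full invocation, assuming it is run from the repository root (the authoritative arguments live in azure_pipeline.yaml, as noted above; the output directory is my choice):

```sh
# Produce local .nupkg files, including the TorchSharp and libtorch-cpu
# packages; -o sets where the packages are written.
dotnet pack /p:IncludeTorchSharpPackage=true /p:IncludeLibTorchCpuPackages=true -o ./local-packages
```

The resulting directory can then be registered as a local NuGet source (e.g. with dotnet nuget add source) so a test project can consume the locally built packages.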
Closed #1359 as completed via #1360.
Thanks!
L1Loss should create a criterion that measures the MAE (mean absolute error), but in TorchSharp it measures the MSE (mean squared error) instead.
I believe this is incorrect. See the following screenshot comparing TorchSharp and PyTorch, which should produce identical results but do not:

[screenshot: TorchSharp vs. PyTorch L1Loss outputs]
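The discrepancy is easy to reproduce directly against libtorch. A minimal C++ sketch (the example tensors are illustrative, not the ones from the screenshot; reduction is the default mean):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
    auto input  = torch::tensor({1.0, 2.0, 3.0});
    auto target = torch::tensor({2.0, 4.0, 6.0});

    // MAE: mean(|input - target|) = (1 + 2 + 3) / 3 = 2.0
    auto mae = torch::nn::functional::l1_loss(input, target);
    // MSE: mean((input - target)^2) = (1 + 4 + 9) / 3 ≈ 4.667
    auto mse = torch::nn::functional::mse_loss(input, target);

    std::cout << "l1_loss:  " << mae.item<double>() << '\n'   // 2.0
              << "mse_loss: " << mse.item<double>() << '\n';  // ~4.667
}
```

TorchSharp's L1Loss should match the l1_loss value here; with the bug it matches mse_loss instead.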
See the call to functional::mse_loss here: https://github.com/dotnet/TorchSharp/blob/26240d52c81d86c69af7e14faed9e11af6e37567/src/Native/LibTorchSharp/THSLoss.cpp#L94-L102

If this is indeed incorrect, I would be happy to submit a fix in a PR, but I am still figuring out how to make my projects use a locally built version of TorchSharp; I'm not the biggest C# / dotnet / nuget expert.