pkese opened 1 year ago
@pkese
Thanks for raising this issue:
This issue aligns with the recent .NET Interactive initiative to evaluate new features (e.g. adding Python support).
With your suggestion, we'd be closer to e.g. providing a .NET Interactive polyglot notebook by porting the pure PyTorch code from the PyTorch Fundamentals course using a combination of Python.NET and TorchSharp.
FYI, the PyTorch Fundamentals notebooks:
https://github.com/MicrosoftDocs/pytorchfundamentals/tree/main/intro-to-pytorch
Sure.
I think it would be practical if we kept the focus here on this particular feature. It could probably be discussed (or added) orthogonally to the Polyglot notebook discussion.
I don't immediately see how this can be made remotely safe without copying the data. There are several memory-management schemes involved already: the .NET garbage collector (plus TorchSharp's dispose scopes), libtorch's own C++ reference counting, and Python's reference counting.
Add to that the complication that if the tensor is a view, things get more complicated still; a Python object holding a raw pointer would add yet another layer on top.
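Python's buffer protocol offers a small analogy for both problems: a view shares the base object's storage, and the runtime must refuse operations that would invalidate outstanding pointers. A minimal sketch using only the standard library (no PyTorch or .NET involved):

```python
# Analogy only: bytearray/memoryview stand in for tensor storage and views.
base = bytearray(b"\x01\x02\x03\x04")   # the "storage"
view = memoryview(base)[1:3]            # a "view": shares memory, no copy

base[1] = 9
print(view[0])                          # 9 -- the view sees writes to the base

# While the view (an exported buffer) is alive, resizing is refused:
try:
    base.pop()
except BufferError:
    print("resize blocked while a view exists")

view.release()
base.pop()                              # fine once the view is released
```

Any scheme that hands a raw tensor pointer across the runtime boundary would need a similar guarantee that the owning storage cannot be freed or resized while the pointer is in use.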
I'm not rejecting the idea, but it will require some serious thought in order to make it work safely, and then a carefully crafted implementation.
@pkese Generally speaking, I am also very interested. I am also a maintainer of Python.NET.
I would like to see the necessary bits implemented on PyTorch side though: https://discuss.pytorch.org/t/mixing-c-and-python-types/171129
Unfortunately, that thread has no responses. Perhaps requesting the feature in the official PyTorch repository, and potentially implementing it too, is the way forward.
Of course, as .NET folks it would be easier for us to implement this in TorchSharp, but that is architecturally worse, IMHO. In PyTorch, the Python-specific bits are guarded with #ifdef, and if the TorchSharp native bits are built with them enabled, I am unsure the library would still load against a plain C++ LibTorch install.
I think the way they do it in Python is: the PyTorch tensor structure contains a slot for a Storage object, which is a reference to the tensor's storage. That storage PyObject has a free() method that gets called when Python's reference count on the PyTorch tensor drops to zero. So they install a custom PyObject implementation as the tensor's Storage and unlink their own storage reference when free() gets called.
https://github.com/pytorch/pytorch/blob/master/torch/csrc/utils/object_ptr.cpp
In our case we could implement our own custom storage object mimicking a PyObject, holding a pointer reference to the .NET managed tensor, with a custom implementation of free() that releases that .NET reference.
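A rough sketch of that idea in plain Python, with ctypes as a stand-in for both runtimes. The KeepAliveStorage class and the ".NET owner" here are assumptions for illustration, not TorchSharp or PyTorch API:

```python
import ctypes

class KeepAliveStorage:
    """Mimics PyTorch's Storage deleter idea: hold a strong reference to
    the object that owns the memory, and drop it when free() is called."""
    def __init__(self, owner, data_ptr, nbytes):
        self._owner = owner      # e.g. the .NET tensor (hypothetical)
        self.data_ptr = data_ptr
        self.nbytes = nbytes

    def free(self):
        # Would be invoked when Python's refcount on the tensor hits zero.
        self._owner = None

# Demo: a ctypes array stands in for .NET-owned tensor memory.
buf = (ctypes.c_float * 4)(1.0, 2.0, 3.0, 4.0)
storage = KeepAliveStorage(buf, ctypes.addressof(buf), ctypes.sizeof(buf))

# A consumer reads through the raw pointer while storage keeps buf alive:
view = (ctypes.c_float * 4).from_address(storage.data_ptr)
print(list(view))  # [1.0, 2.0, 3.0, 4.0]

storage.free()  # the owning reference is released; memory may be reclaimed
```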
A simpler approach would be for whoever calls the unsafe getRawLibTorchTensorPtr() method to be responsible for keeping the .NET reference to the owning tensor alive until Python is done using it.
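The caller-side contract could look roughly like this. getRawLibTorchTensorPtr() does not exist yet, so a ctypes buffer simulates the pointer handoff; the point is only that the owner stays referenced while the pointer is in use:

```python
import ctypes
from contextlib import contextmanager

@contextmanager
def pinned(owner):
    """Hold a strong reference to `owner` for the duration of the block,
    so a raw pointer obtained from it stays valid while Python uses it."""
    try:
        yield owner
    finally:
        del owner  # reference released only after the block exits

# A ctypes array stands in for the .NET tensor we got a pointer from.
data = (ctypes.c_double * 3)(0.5, 1.5, 2.5)

with pinned(data) as t:
    ptr = ctypes.addressof(t)  # what getRawLibTorchTensorPtr() would return
    view = (ctypes.c_double * 3).from_address(ptr)
    total = sum(view)          # safe: `t` is pinned for the whole block

print(total)  # 4.5
```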
As long as we don't introduce a Python VM dependency into TorchSharp...
Yes. The idea is to get the raw pointer, and then it's up to me (or whoever the user is) to provide the rest.
Maybe it's possible to use Python.NET to wrap a .NET tensor reference (as the PyTorch Storage object) instead of pulling in PyTorch/Python library dependencies.
@lostmsu could this be done through Python.NET?
@pkese unsure. The storage mechanism in PyTorch seems multilayered and spans both the Python and C++ sides. Unless someone can come up with a concise Python interface that would need to be implemented, I don't see how we could use it.
Until PyTorch has some kind of opaque tensor handle visible from both its Python side and its C++ side (which it might already have; we just don't know about it), we are out of luck.
As long as we don't introduce a Python VM dependency into TorchSharp...
Yeah, my feed is full of links to LeCun's recent comment about Python slowing deep-learning research down due to the GIL. I also noticed that they tried and failed to make simple multithreading work efficiently with nn.DataParallel because of it, and now recommend DistributedDataParallel, which is an order of magnitude more complicated to set up and likely uses some arcane magic internally, like passing Python objects across process boundaries, with big rapid-firing footguns.
I have nothing against Python (although I'm personally a big fan of Julia for numeric workloads) and I think Python.NET is really great for the scenarios it's intended to serve.
That said, I want TorchSharp to remain simple and straightforward. It's a library that is aware of its role and not trying to be something more than it is -- .NET bindings to libtorch. Therefore, it must remain free of any runtime dependencies beyond .NET and whatever supported accelerator HW requires.
When will C# have a library like PyTorch, not just a wrapper? That would eliminate the need to worry about interacting with external language facilities and about how to marshal objects. Unfortunately, C# is not yet strong at machine learning.
Although TorchSharp is just a wrapper, it is at least a good start. I hope that C# can develop well in the field of machine learning in the future without having to form a strong dependence on languages like Python.
I'd like to share tensors between Python and .NET,
e.g. have a batch generator written in .NET and pass those batches to a Python model loaded through Python.NET.
Currently there's no way to unsafely get a raw C++ Torch tensor pointer out of TorchSharp and pass it to Python.
A workaround is to access the raw data (tensor.bytes) and use it to construct a tensor in Python, but that results in unnecessary copying and is inefficient, especially if the source and target tensors are on the GPU. So the ask in this ticket is access to the raw tensor pointer.
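The difference between the two paths can be sketched with ctypes standing in for both runtimes (illustrative only; TorchSharp's tensor.bytes is mimicked here by a plain bytes copy):

```python
import ctypes

# "TorchSharp" tensor data, simulated with a ctypes array.
src = (ctypes.c_float * 3)(1.0, 2.0, 3.0)

# Today's workaround: serialize to bytes, rebuild on the Python side.
# This is one full copy (what going through tensor.bytes amounts to).
raw = bytes(bytearray(src))
copied = (ctypes.c_float * 3).from_buffer_copy(raw)

# What this ticket asks for: hand over the raw pointer, zero copies.
shared = (ctypes.c_float * 3).from_address(ctypes.addressof(src))

src[0] = 9.0
print(copied[0], shared[0])  # 1.0 9.0 -- the copy is stale, the view is live
```

For CPU tensors the copy is merely wasteful; for GPU tensors the bytes round-trip additionally forces device-to-host and host-to-device transfers.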
My actual target is to get a full-blown Python PyTorch tensor (the PyObject wrapper of the C++ tensor), but I assume that's beyond the scope of TorchSharp.