**Is your feature request related to a problem? Please describe.**
The current method of hooking tensors doesn't interoperate smoothly with PyTorch's JIT, which makes it difficult, if not impossible, to convert a Torch model's backward pass to Torchscript for serialization to mobile devices.
**Describe the solution you'd like**
Rework the way tensor types are defined and hooked to use PyTorch's new affordances for extending `torch.Tensor`, for smoother interoperation with the JIT tracer.
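For concreteness, here is a minimal sketch of the kind of extension point this presumably refers to: the `__torch_function__` protocol on `torch.Tensor` subclasses (PyTorch 1.7+). `LoggingTensor` is a hypothetical stand-in for a Syft tensor type, not part of the actual proposal.

```python
import torch

# Minimal sketch, assuming the "new affordances" are the __torch_function__
# protocol for torch.Tensor subclasses. LoggingTensor is a hypothetical
# stand-in for a Syft tensor type, not the actual design.
class LoggingTensor(torch.Tensor):
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        # Every torch op on this type passes through here, then defers to the
        # stock implementation, so the JIT tracer sees ordinary tensor ops.
        print(f"intercepted {func.__name__}")
        return super().__torch_function__(func, types, args, kwargs)

x = torch.tensor([1.0, 2.0, 3.0]).as_subclass(LoggingTensor)
y = x * 2 + 1  # both ops are intercepted; y is still a LoggingTensor
```

Because the subclass is a real `torch.Tensor`, it participates in dispatch natively instead of relying on monkey-patched hooks, which is what trips up the tracer today.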
**Describe alternatives you've considered**
Try to mash the existing code into some form that works with the JIT tracer. We tried for a few hours and eventually decided that reworking the custom tensor types was a saner approach.
Build code into Plans that constructs a reverse Plan using autograd. This might be possible, but it doesn't account for optimization, which would still need to be Torchscript-ified somehow.
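A rough illustration of that second alternative (all names here are hypothetical; `training_plan` stands in for a Plan body, not any actual Syft API): the backward pass can be built explicitly with `torch.autograd.grad`, but the optimizer step remains plain Python that would still need to be Torchscript-ified separately.

```python
import torch

# Rough sketch of the rejected alternative: a Plan body that computes its own
# backward pass via torch.autograd.grad. The manual SGD step below is the
# "optimization" part that autograd does not build for us.
model = torch.nn.Linear(3, 1)

def training_plan(x, target, lr=0.1):
    loss = ((model(x) - target) ** 2).mean()
    # Explicit "reverse Plan": ask autograd for the parameter gradients.
    grads = torch.autograd.grad(loss, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p -= lr * g  # optimizer step, outside the autograd-built graph
    return loss

loss = training_plan(torch.randn(2, 3), torch.randn(2, 1))
```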
**Additional context**
Rough roadmap of the core primitives:
- [x] (PyTorch) AbstractTensor - should work with .grad, nn.Parameter, and nn.Module (including learning)
- [x] (Tensorflow) AbstractTensor - should work with Variable and Tensor
- [x] (Numpy) AbstractTensor - should work with np.ndarray
- [x] (Tensorflow) AbstractVariable - should work with gradients
- [ ] (PyTorch) AbstractParameter - should work with nn.Module
- [ ] (Tensorflow) AbstractTensor - should work with learning stuff
- [ ] (Syft) SyftTensor - should automatically create the above tensors in PyTorch, Tensorflow, and Numpy
- [x] FixedPrecisionTensor (a simple one) - same as before - it should inherit from SyftTensor and automatically populate the 3 frameworks
- [ ] Tensor chaining - .child should work automatically in all 3 (see the sketch after this list)
- [ ] PointerTensor
- [ ] (Syft) PlaceholderTensor - should inherit from SyftTensor and automatically populate the 3 frameworks
- [ ] Plans
- [ ] Plans -> Torchscript
- [ ] Plans -> Torchscript over tensor chains
- [ ] AutogradTensor - should allow us to have our own Python-level autograd
- [ ] Promise (which I think we should simply rename "Graph", and which we can "stream" executions through)
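For the chaining item above, a toy sketch of the `.child` convention (class shapes are illustrative, mirroring existing Syft naming rather than the final design): each abstract tensor wraps the next layer down, terminating in a concrete framework tensor.

```python
import torch

# Toy sketch of ".child" chaining. Class names mirror the Syft convention
# but the implementations here are placeholders, not the proposed API.
class SyftTensor:
    def __init__(self, child):
        self.child = child  # next tensor in the chain

class FixedPrecisionTensor(SyftTensor):
    def __init__(self, child, base=10, precision=3):
        super().__init__(child)
        self.base = base
        self.precision = precision

# chain: FixedPrecisionTensor -> SyftTensor -> torch.Tensor
fpt = FixedPrecisionTensor(SyftTensor(torch.tensor([1.0, 2.0])))
assert isinstance(fpt.child.child, torch.Tensor)
```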