Closed: hossein1387 closed this issue 3 years ago.
Hi everyone, as far as I know Caffe2 fully supports ONNX import/export, and Caffe2 recently became part of PyTorch. However, I cannot import an ONNX model file into PyTorch and run inference properly. How can I resolve this issue?
Agreed. I was kind of shocked when I learned PyTorch doesn't have that.
@soumith, any plans to roll out this functionality anytime soon?
+1 to the motivation: I need to convert an ONNX model to TorchScript (.pt) so I can run it on PyTorch Mobile.
How can I do this now? Any workarounds?
+1, not supporting ONNX import feels like half of the ONNX implementation is missing
Is there any ongoing PR for this issue?
As a heavy torch user, this feature sounds nice.
This feature is relevant; it should be implemented.
Looking forward to having it in torch.
This feature is very relevant. My motivation:
@gordinmitya Have you found a solution to your problem? I am trying to do something similar.
@solarflarefx No :( PyTorch Mobile suddenly stopped working without any changes on my side, so I decided to just skip it until the mobile version becomes stable.
onnxruntime may help.
Has anyone figured out a simple way to do this (without onnxruntime)?
@Franzis39 how does onnxruntime help?
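For reference, onnxruntime executes the exported ONNX file directly, which covers inference use cases, but it does not convert the model back into a torch.nn.Module. A minimal sketch, assuming a file resnet18.onnx exported from PyTorch:

import numpy as np
import onnxruntime as ort

# Run the exported model for inference with onnxruntime
# (this executes the graph; it does not yield a PyTorch module).
session = ort.InferenceSession('resnet18.onnx', providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name  # look up the graph's input name
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: x})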
I think it will be a huge challenge to restore the runtime graph in PyTorch. But if we focus only on the model parameters, i.e. load just the weights stored in an ONNX file rather than the full model, it becomes much easier.
Maybe this snippet will help:
import onnx
from onnx import numpy_helper
import torch
from torchvision import models

# Export a reference model to ONNX, then load its weights back.
model = models.resnet18()
torch.onnx.export(model, torch.randn(1, 3, 224, 224), 'resnet18.onnx')

onnx_model = onnx.load('resnet18.onnx')
graph = onnx_model.graph

# Collect the weight tensors (initializers) stored in the ONNX graph.
initializers = dict()
for init in graph.initializer:
    initializers[init.name] = numpy_helper.to_array(init)

# Copy each initializer into the matching PyTorch parameter
# (the .copy() avoids wrapping a read-only numpy buffer).
for name, p in model.named_parameters():
    p.data = torch.from_numpy(initializers[name].copy()).data
Found a not-perfect but working solution: https://gist.github.com/qinjian623/6aa777037534c1c1dccbb66f832e93b8
+1
Now 151 likes but no sign of a step forward. I don't know how the PyTorch project works, but shouldn't we at least expect a reply from the project maintainers?
A temporary, imperfect, and time-consuming idea I'm going to try: convert onnx -> keras (onnx2keras), then keras -> pytorch (MMdnn).
Honestly, it is absolutely ridiculous that this issue is still open. I exported a very time-consuming model to ONNX for use with RT, and now I need to make some modifications to it in PyTorch, only to realize that for some reason this capability is missing. I fully expected that if I can export to ONNX format from PyTorch, I should also be able to import it; rarely have I encountered a framework that lets you export to a format without also being able to read it back, and frankly that is bad practice.
@gchanan any updates on if/when this will be implemented in the future?
+1 to this request, any ETA?
This looks like a pretty new and solid effort at ONNX-to-PyTorch conversion: https://github.com/ToriML/onnx2pytorch
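Basic usage looks roughly like this, based on the repo's README (assumes onnx2pytorch is installed and an exported resnet18.onnx is available):

import onnx
import torch
from onnx2pytorch import ConvertModel

# Convert the ONNX graph into an equivalent torch.nn.Module.
onnx_model = onnx.load('resnet18.onnx')
pytorch_model = ConvertModel(onnx_model)

# Run inference with the converted module.
with torch.no_grad():
    y = pytorch_model(torch.randn(1, 3, 224, 224))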
Any updates? Almost 200 votes already.
Hi folks, I created a repo for generating PyTorch code from ONNX:
https://github.com/fumihwh/onnx-pytorch
Different from ToriML/onnx2pytorch, my repo GENERATES PyTorch code rather than building a Module instance on the fly.
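Usage is roughly as follows, per the repo's README (assumes onnx-pytorch is installed; the file name is just an example):

from onnx_pytorch import code_gen

# Generate a standalone model.py (plus saved weights) from the ONNX file;
# the generated file defines a regular torch.nn.Module you can edit by hand.
code_gen.gen('resnet18.onnx', './')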
Just a heads-up that v0.4 of onnx2pytorch (https://github.com/ToriML/onnx2pytorch) was released today. In addition to now supporting Loop, LSTM, and a lot more, this release has improved memory management by removing activations that are no longer required by any following operations. We're nearing full ONNX operator coverage, and now all the MLPerf Datacenter Inference ONNX models can be successfully imported into PyTorch.
There are no plans to add this directly to PyTorch, but it seems there are at least two solutions for this already linked in this thread.
Another +1 for this feature. I've gotten pretty far into a project on the assumption that if I could export to ONNX, I'd be able to go the other direction. I don't think it's unreasonable to expect this functionality to be included in PyTorch itself.
Another +1 from me, really surprised it isn't a feature.
I'm very surprised this can't be done 😮 +1
Just ran into the issue, and I'm surprised that this hasn't been addressed in more than three years...
There is another lib: onnx2torch. It works for me, and I can use it to continue training my ONNX model.
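A minimal sketch of using it, assuming onnx2torch is installed and an exported resnet18.onnx is available:

import torch
from onnx2torch import convert

# convert() accepts a path to an .onnx file (or a loaded onnx.ModelProto)
# and returns a regular torch.nn.Module that can be trained further.
torch_model = convert('resnet18.onnx')
y = torch_model(torch.randn(1, 3, 224, 224))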
Would love this feature!
+1
+1
Why was this issue closed? It seems like there is a lot of support from the community for it.
What's the current status of this issue? Why was it closed?
This feature is obviously non-trivial, but would be great to have
Any reason why this issue is closed?
I just ran into trouble getting the network structure from an ONNX file. Should I visualize the structure and rewrite it in PyTorch by hand?
Why is this issue closed?! It MUST be resolved!
+1
+1
+1
I would like to use this feature too. Does anyone have any news on this issue?
@BowenBao @neginraoof Bumping this yet again, and echoing that I am not sure why this was closed.
🚀 Feature
Importing ONNX models into PyTorch.
Motivation
Almost all other frameworks already support this. Importing ONNX models into PyTorch makes PyTorch much more flexible.
Pitch
In torch.onnx, a function should be created that takes an ONNX model and returns a PyTorch model.

cc @BowenBao @neginraoof
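For illustration only, a minimal sketch of what such a helper could look like; the import_model name is hypothetical (not part of torch.onnx), and this sketch simply delegates to the community onnx2pytorch package mentioned above:

import onnx
from onnx2pytorch import ConvertModel

# Hypothetical helper illustrating the pitched API; not part of torch.onnx.
def import_model(path):
    """Load an ONNX file and return an equivalent torch.nn.Module."""
    return ConvertModel(onnx.load(path))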