valeriosofi opened this issue 1 year ago
This could also be used to implement ONNX to PyTorch conversion (https://github.com/nebuly-ai/nebullvm/blob/main/nebullvm/operations/conversions/converters.py#L126)
Or is it implemented elsewhere?
Hi @SuperSecureHuman, yep, as you can see the pytorch_conversion
method is not implemented yet. We should test the repository linked above, and if it works well we could use it to implement the method. If you want to contribute, feel free to assign yourself to the issue :)
I've started working on it (by directly importing the onnx2torch module). Not sure if I can assign myself to this, because it's going to take me some time to understand the converters API (and the overall layout of the other APIs).
Edit: I also have 0 experience with ONNX :sweat_smile:
Thanks, your help is very appreciated! Of course take your time to understand the code, and if you have any questions about how it works, just ask ;)
```python
from nebullvm.operations.conversions.onnx import convert_onnx_to_torch
import onnx

# Load the ONNX model and convert it to a serialized PyTorch module
onnx_model_path = '/home/venom/Downloads/mobilenetv2-12.onnx'
onnx_model = onnx.load(onnx_model_path)
output_file_path = '/home/venom/Downloads/model.pt'
device = 'cpu'
convert_onnx_to_torch(onnx_model, output_file_path, device)
```
For now the conversion works without errors.
Will try working further to improve it.
Meanwhile, it would be great if you could edit parts of my commit to make it more "module-like" :)
Great! Maybe also check a couple of other models to see whether they are converted without errors as well. After that you could also implement the pytorch_conversion
method in the TensorflowConverter
class, where the model should first be converted to ONNX and then to PyTorch :)
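As a rough sketch of that suggestion, the TensorFlow path could chain tf2onnx (TensorFlow to ONNX) with onnx2torch (ONNX to PyTorch). The function name and signature below are illustrative only, not nebullvm's actual API, and the imports are deferred because all three dependencies are heavyweight and optional:

```python
# Hedged sketch of a TensorFlow -> ONNX -> PyTorch conversion chain.
# tensorflow_to_pytorch is a hypothetical name, not nebullvm's API.
def tensorflow_to_pytorch(keras_model, output_file_path):
    # Deferred imports: all three are optional, heavyweight dependencies
    import tf2onnx
    import onnx2torch
    import torch

    # Step 1: TensorFlow/Keras model -> ONNX ModelProto
    onnx_model, _ = tf2onnx.convert.from_keras(keras_model)
    # Step 2: ONNX ModelProto -> torch.nn.Module
    torch_model = onnx2torch.convert(onnx_model)
    # Persist the resulting module so it can be loaded with torch.load
    torch.save(torch_model, output_file_path)
    return torch_model
```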
Update: Sorry for the delay
```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # silence TensorFlow logging

from nebullvm.operations.conversions.onnx import convert_onnx_to_torch
import onnx

onnx_model_path = '/home/venom/Downloads/mobilenetv2-7.onnx'
onnx_model = onnx.load(onnx_model_path)
output_file_path = '/home/venom/Downloads/model.pt'
device = 'cpu'

outfile = convert_onnx_to_torch(onnx_model, output_file_path, device)
if outfile is not None:
    print("Converted successfully")
    print(outfile)
else:
    print("Conversion failed")
```
Error handling is now added.
Commit - https://github.com/nebuly-ai/nebullvm/commit/cb74ee8baff286df796eca0e03069ae23c9251b2
The onnx2torch module requires a minimum opset version of 9.
In the example above, the model used opset 7 (hence it failed).
I'm trying to convert all the models in https://github.com/onnx/models,
keeping only non-quantised models that meet the minimum opset requirement.
43 Failed, 24 Converted
Workaround for the opset version:
ONNX Version Conversion - Official Docs
This could be added to the docs and to the error message.
At the moment we support only PyTorch to ONNX and TensorFlow to ONNX conversions. We could test and use this repo to convert an ONNX model to PyTorch in order to support TensorFlow to PyTorch conversion.