NVIDIA / MinkowskiEngine

Minkowski Engine is an auto-diff neural network library for high-dimensional sparse tensors
https://nvidia.github.io/MinkowskiEngine

Any idea on how to export an ME-based model using ONNX #402

Open lombardm opened 3 years ago

lombardm commented 3 years ago

Sorry for the "Bug Report" label. This is not actually a bug in this fantastic work, just a request for help.

I would like to export a model based on ME (specifically FCGF) using the PyTorch torch.onnx functionality. The idea is then to pass it to TensorRT to try to boost the performance of this architecture. I wrote a simple function, reported in the snippet below:

import torch

def export_model_fcgf(torch_model, net_input, onnx_net_file):
    print("Exporting Torch FCGF Model to Onnx")
    torch.onnx.export(
        torch_model,                    # model being run
        net_input,                      # model input (or a tuple for multiple inputs)
        onnx_net_file,                  # where to save the model
        export_params=True,             # store the trained parameter weights inside the model file
        opset_version=10,               # the ONNX version to export the model to
        do_constant_folding=True,       # whether to execute constant folding for optimization
        input_names=['input_pcd'],      # the model's input names
        output_names=['output_fcgf'],   # the model's output names
        dynamic_axes={
            'input_pcd':   {0: 'N'},
            'output_fcgf': {0: 'N'}
        }                               # variable-length axes
    )
    print(f"Torch model exported to {onnx_net_file}.")
    print(f"Torch model exported to {onnx_net_file}.")

Here, net_input is a SparseTensor, created with the Minkowski Engine library from a regular array of coordinates and features. Unfortunately, torch.onnx.export throws: RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: SparseTensor
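For context, the input is constructed along these lines (an illustrative sketch only; the point count and feature dimension are placeholders, not FCGF's actual configuration):

```python
import numpy as np
import torch
import MinkowskiEngine as ME

# Placeholder point cloud: 1000 points with quantized integer coordinates.
coords = torch.from_numpy(np.random.randint(0, 100, size=(1000, 3))).int()
feats = torch.ones(1000, 1)  # FCGF typically uses all-ones features

# ME expects a leading batch-index column on the coordinates.
coords_batched = ME.utils.batched_coordinates([coords])
net_input = ME.SparseTensor(features=feats, coordinates=coords_batched)
```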

How can we deal with this custom input, then? I know this might be better asked on a PyTorch forum, but I have already checked there and only found a couple of unanswered questions similar to mine. I was therefore wondering whether anyone else using this library has already run into this need, and whether anyone knows how to solve the problem.
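One workaround that comes to mind (a sketch only; I have not verified it with FCGF, and the ExportWrapper class below is hypothetical) is to let torch.onnx.export see only plain tensors, and rebuild the SparseTensor inside forward:

```python
import torch
import MinkowskiEngine as ME

class ExportWrapper(torch.nn.Module):
    """Hypothetical wrapper: exposes plain-tensor inputs/outputs so the
    JIT tracer never sees a SparseTensor at the model boundary."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, coords, feats):
        # Rebuild the sparse tensor inside the traced forward pass.
        x = ME.SparseTensor(features=feats, coordinates=coords)
        # Return the dense feature matrix rather than a SparseTensor.
        return self.model(x).F

# Usage sketch:
# torch.onnx.export(ExportWrapper(torch_model), (coords_batched, feats), onnx_net_file, ...)
```

Note that this only clears the input-type check; the tracer will still reach ME's custom autograd Functions inside the network, so the export may fail further in (see the MinkowskiConvolutionFunction error reported below).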

Many thanks in advance and have a nice day, Marco

Magnusgaertner commented 2 years ago

Hi @chrischoy,

is there any plan to provide ONNX support for this library? I would like to use this library in a deployment setting without running a Python interpreter, so ONNX seems to be the only proper option :)

I am also happy to contribute here, let me know what you think.

Best, Magnus

mariusud commented 1 year ago

+1 on this, any updates? Happy to contribute

NUS-WWS commented 1 year ago

Any updates? I get the error: RuntimeError: ONNX export failed: Couldn't export Python operator MinkowskiConvolutionFunction. Happy to contribute here
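For what it's worth, one direction worth exploring (purely a sketch, not a tested solution) is PyTorch's hook for exporting custom autograd Functions: if a Function defines a symbolic staticmethod, the exporter emits that node instead of failing on the opaque Python operator. Everything below except the class name MinkowskiConvolutionFunction (taken from the error) is an assumption, including the import path, the argument order, and the custom ONNX domain:

```python
import torch
# Import path is a guess; the class name comes from the error above.
from MinkowskiEngine.MinkowskiConvolution import MinkowskiConvolutionFunction

def _minkowski_conv_symbolic(g, input_feats, kernel, *rest):
    # Emit an opaque node in a custom ONNX domain. Downstream runtimes
    # (e.g. TensorRT) would need a matching plugin to execute it. The
    # remaining arguments (kernel maps, coordinate manager) have no
    # tensor representation in ONNX and are dropped here.
    return g.op("custom_domain::MinkowskiConvolution", input_feats, kernel)

# The ONNX exporter looks for a `symbolic` staticmethod on autograd Functions.
MinkowskiConvolutionFunction.symbolic = staticmethod(_minkowski_conv_symbolic)
```

Even with such a stub, the coordinate manager and kernel maps live outside the tensor graph, which is probably the fundamental obstacle to a self-contained ONNX export of ME models.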

sivareddy94 commented 1 year ago

Hi @lombardm @NUS-WWS @Magnusgaertner @chrischoy @mariusud
Any updates on ONNX Export?

RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: SparseTensor

Happy to collaborate and contribute here.