yangxy / GPEN

ONNX Port #66

Open JanFschr opened 3 years ago

JanFschr commented 3 years ago

I added an ONNX port and native backups for the custom ops:

check out https://github.com/deepartist/GPEN

puppy9207 commented 3 years ago

> I added an ONNX port and native backups for the custom ops:
>
> check out https://github.com/deepartist/GPEN

Have you tried TensorRT conversion? That conversion path doesn't work either.

xuewengeophysics commented 3 years ago

> I added an ONNX port and native backups for the custom ops:
>
> check out https://github.com/deepartist/GPEN

Thanks very much. I encountered the error "No module named 'fused'".

puppy9207 commented 3 years ago

Did you check this code? Some lines (10-18) are commented out: https://github.com/deepartist/GPEN/blob/main/face_model/op/fused_act.py
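For context, a minimal sketch of the guarded-import-plus-fallback pattern this refers to, assuming the compiled `fused` CUDA extension may be missing (names follow the StyleGAN2-style op layout; treat this as illustrative, not the repo's exact code):

```python
import torch.nn.functional as F

try:
    # Compiled CUDA extension; when it is absent you get "No module named 'fused'".
    import fused  # noqa: F401
except ImportError:
    fused = None

def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
    # Native fallback: bias add + leaky ReLU + rescale. For inference
    # this reproduces the custom kernel using only standard ops.
    rest_dim = [1] * (input.ndim - bias.ndim - 1)
    return scale * F.leaky_relu(
        input + bias.view(1, bias.shape[0], *rest_dim),
        negative_slope=negative_slope,
    )
```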

ajaskier commented 3 years ago

@JanFschr could you please provide the exact versions of torch, torchvision, onnx, and onnxruntime (and any other library relevant to the conversion) that you used to convert the model to ONNX? Without that information, I'm getting ambiguous errors while trying to reproduce your work.

Seanseattle commented 3 years ago

@JanFschr, thank you for your work! When I tried to convert the model to ONNX, I met the following problem: RuntimeError: ONNX export failed: Couldn't export Python operator FusedLeakyReLUFunction. It seems that ONNX does not support FusedLeakyReLUFunction.

ajaskier commented 3 years ago

@Seanseattle could you please share which package versions you are using?

Seanseattle commented 3 years ago

@ajaskier, thank you for your reply. The package versions are as follows:

- onnx 1.10.1
- onnxruntime 1.8.0
- torch 1.6.0
- python 3.8
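For anyone reproducing this setup, a quick sanity check that the installed versions match the ones above (nothing GPEN-specific here):

```python
# Print the installed versions to confirm they match the ones listed above.
import torch, onnx, onnxruntime
print("torch", torch.__version__)
print("onnx", onnx.__version__)
print("onnxruntime", onnxruntime.__version__)
```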

ajaskier commented 2 years ago

@Seanseattle using these package versions I encountered these errors:

Running Environment:
----------------------------------------------
  DEVICE:  cuda
Export model ...
/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py:241: UserWarning: `add_node_names' can be set to True only when 'operator_export_type' is `ONNX`. Since 'operator_export_type' is not set to 'ONNX', `add_node_names` argument will be ignored.
  warnings.warn("`{}' can be set to True only when 'operator_export_type' is "
/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py:241: UserWarning: `do_constant_folding' can be set to True only when 'operator_export_type' is `ONNX`. Since 'operator_export_type' is not set to 'ONNX', `do_constant_folding` argument will be ignored.
  warnings.warn("`{}' can be set to True only when 'operator_export_type' is "
/gpen2/face_model/model.py:275: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out = F.conv2d(input, weight, padding=self.padding, groups=batch)
/gpen2/face_model/model.py:260: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch)
Traceback (most recent call last):
  File "onnx_decoder.py", line 190, in <module>
    export_onnx(model = args.model, path = args.path, force_cpu=args.force_cpu)
  File "onnx_decoder.py", line 68, in export_onnx
    torch.onnx.export(torch_model, dummy_input.to(device), onnx_file_name,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/__init__.py", line 203, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 86, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 526, in _export
    graph, params_dict, torch_out = _model_to_graph(model, args, verbose, input_names,
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 366, in _model_to_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 319, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/__init__.py", line 338, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/__init__.py", line 421, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/__init__.py", line 415, in wrapper
    out_vars, _ = _flatten(outs)
RuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs. Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type NoneType

Did you have a similar experience?

puppy9207 commented 2 years ago

> @Seanseattle using these package versions I encountered these errors:
>
> [same environment log and traceback as quoted in full above]
>
> Did you have a similar experience?

Would you like to try removing the None from the model's outputs?
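What this suggests, sketched out: StyleGAN2-style generators typically return `(image, latent_or_None)`, and the tracer's `_flatten` fails on the `None`. A hypothetical wrapper that keeps only the image output before calling `torch.onnx.export` (the class name and the single-tensor forward are assumptions, not the repo's code):

```python
import torch

class ExportWrapper(torch.nn.Module):
    # Hypothetical wrapper: drop the trailing None that StyleGAN2-style
    # generators return alongside the image, so tracing only sees tensors.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        if isinstance(out, (tuple, list)):
            return out[0]  # keep the image; discard latent/None entries
        return out

# usage sketch:
# torch.onnx.export(ExportWrapper(torch_model), dummy_input, "gpen.onnx", opset_version=11)
```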

ajaskier commented 2 years ago

@puppy9207 Thank you for this advice. Indeed, I got rid of that error, and now I get the same error as @Seanseattle (RuntimeError: ONNX export failed: Couldn't export Python operator FusedLeakyReLUFunction).

FYI, when I use the --force-cpu flag, the conversion completes and produces an ONNX model.

puppy9207 commented 2 years ago

> @puppy9207 Thank you for this advice. Indeed, I got rid of that error, and now I get the same error as @Seanseattle (RuntimeError: ONNX export failed: Couldn't export Python operator FusedLeakyReLUFunction).
>
> FYI, when I use the --force-cpu flag, the conversion completes and produces an ONNX model.

I think it's because FusedLeakyReLU is a custom operator. This can probably only be solved by finding a way to trace custom operators, but that is beyond me right now. In addition, the output of the converted ONNX model under onnxruntime seems to be very different from the original GPEN's.
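One workaround sketch, assuming inference-only export is the goal: rebind the custom op to a native implementation before tracing, so the exporter never meets `FusedLeakyReLUFunction`. The module path and symbol name below follow deepartist/GPEN's apparent layout and are assumptions; note that modules which did `from ... import fused_leaky_relu` keep their old binding and would need patching too.

```python
import torch.nn.functional as F
from face_model.op import fused_act  # assumed module path in deepartist/GPEN

def _native_fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
    # Same math as the custom CUDA kernel, expressed with standard ops
    # that torch.onnx can decompose into ONNX primitives.
    rest_dim = [1] * (input.ndim - bias.ndim - 1)
    return scale * F.leaky_relu(
        input + bias.view(1, bias.shape[0], *rest_dim),
        negative_slope=negative_slope,
    )

# Monkey-patch for export only; training still wants the custom kernel.
fused_act.fused_leaky_relu = _native_fused_leaky_relu
```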

kelisiya commented 2 years ago

> I added an ONNX port and native backups for the custom ops:
>
> check out https://github.com/deepartist/GPEN

The ONNX model does not support dynamic_axes. If I set dynamic_axes, I get the error: RuntimeError: Unsupported: ONNX export of convolution for kernel of unknown shape.
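A plausible explanation: GPEN's modulated convolutions reshape their weights per sample and call `F.conv2d(..., groups=batch)` (visible in the TracerWarnings above), so the kernel shape depends on the batch size and cannot be left symbolic. A sketch of the fixed-shape export that works instead (`torch_model` and the 512 resolution are assumptions):

```python
import torch

# dynamic_axes cannot work here: the modulated convs bake the batch size
# into the kernel shape, so export with everything fixed instead.
dummy = torch.randn(1, 3, 512, 512)  # adjust resolution to your checkpoint
torch.onnx.export(
    torch_model, dummy, "gpen_fixed.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,  # no dynamic_axes: all shapes are static in the graph
)
```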

tongchangD commented 2 years ago

How can this problem be solved?

haiderasad commented 2 years ago

@moderator can we please get a straightforward script to convert the model to ONNX?

haiderasad commented 2 years ago

> I added an ONNX port and native backups for the custom ops:
>
> check out https://github.com/deepartist/GPEN

Hi, can you tell me how to build this ONNX model for the 256 model?

JohnC05 commented 1 year ago

> > I added an ONNX port and native backups for the custom ops: check out https://github.com/deepartist/GPEN
>
> Hi, can you tell me how to build this ONNX model for the 256 model?

Please check the link below for a detailed ONNX conversion walkthrough: https://github.com/yangxy/GPEN/issues/120#issuecomment-1571336269
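As for the earlier 256 question, a minimal sketch: assuming the 256 checkpoint differs only in resolution and in the channel settings configured when the generator is built, the export call itself just needs a 256x256 dummy input (`model_256` is an assumed name for a generator constructed with the matching settings):

```python
import torch

# model_256: a GPEN generator built for the 256 checkpoint
# (assumed to use the matching size/channel configuration).
dummy = torch.randn(1, 3, 256, 256)
torch.onnx.export(
    model_256, dummy, "gpen_bfr_256.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=11,
)
```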