NVIDIA / waveglow

A Flow-based Generative Network for Speech Synthesis
BSD 3-Clause "New" or "Revised" License

Not able to export waveglow_old.pt to ONNX format; keep getting RuntimeError: Only tuples, lists and Variables supported as JIT inputs, but got NoneType #97

Closed ajaysg-zz closed 5 years ago

ajaysg-zz commented 5 years ago
dummy_input = Variable(torch.randn(1, 80, 100))
torch.onnx.export(waveglow, dummy_input, "waveglow.onnx")

and the error is:

RuntimeError                              Traceback (most recent call last)
in <module>()
      5
      6 # Invoke export
----> 7 torch.onnx.export(waveglow, dummy_input, "waveglow.onnx")

~/anaconda3/lib/python3.6/site-packages/torch/onnx/__init__.py in export(*args, **kwargs)
     25 def export(*args, **kwargs):
     26     from torch.onnx import utils
---> 27     return utils.export(*args, **kwargs)
     28
     29

~/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type)
    102         operator_export_type = OperatorExportTypes.ONNX
    103     _export(model, args, f, export_params, verbose, training, input_names, output_names,
--> 104             operator_export_type=operator_export_type)
    105
    106

~/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, propagate)
    279                             training, input_names,
    280                             output_names, operator_export_type,
--> 281                             example_outputs, propagate)
    282
    283     # TODO: Don't allocate a in-memory string for the protobuf

~/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py in _model_to_graph(model, args, f, verbose, training, input_names, output_names, operator_export_type, example_outputs, propagate)
    222             raise RuntimeError('\'forward\' method must be a script method')
    223     else:
--> 224         graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
    225         params = list(_unique_state_dict(model).values())
    226

~/anaconda3/lib/python3.6/site-packages/torch/onnx/utils.py in _trace_and_get_graph_from_model(model, args, training)
    190     # training mode was.)
    191     with set_training(model, training):
--> 192         trace, torch_out = torch.jit.get_trace_graph(model, args, _force_outplace=True)
    193
    194     if orig_state_dict_keys != _unique_state_dict(model).keys():

~/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py in get_trace_graph(f, args, kwargs, _force_outplace)
    194     if not isinstance(args, tuple):
    195         args = (args,)
--> 196     return LegacyTracedModule(f, _force_outplace)(*args, **kwargs)
    197
    198

~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.6/site-packages/torch/jit/__init__.py in forward(self, *args)
    250             trace_inputs = _unflatten(all_trace_inputs[:len(in_vars)], in_desc)
    251             out = self.inner(*trace_inputs)
--> 252             out_vars, _ = _flatten(out)
    253             torch._C._tracer_exit(tuple(out_vars))
    254         except Exception:

RuntimeError: Only tuples, lists and Variables supported as JIT inputs, but got NoneType
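
For context, torch.onnx.export runs the model's forward under the JIT tracer, and the tracer can only flatten Tensors (and tuples/lists of Tensors); a None anywhere in the traced inputs or outputs raises exactly this RuntimeError. WaveGlow's forward (the training path) takes a single argument that is a (mel_spectrogram, audio) pair rather than a lone mel tensor, so it is worth checking that the call works outside the tracer before exporting. A minimal sanity-check sketch, assuming the checkpoint stores the module under the 'model' key and roughly 256 audio samples per mel frame (both assumptions, adjust to your setup):

import torch

waveglow = torch.load('waveglow_old.pt')['model']   # assumed checkpoint layout
waveglow.cuda().eval()

mel = torch.randn(1, 80, 100).cuda()        # (batch, n_mel_channels, frames)
audio = torch.randn(1, 100 * 256).cuda()    # assumed ~256 audio samples per frame

# Run the training forward outside the tracer first; every element of the
# output should be a Tensor or a list/tuple of Tensors, never None.
with torch.no_grad():
    out = waveglow((mel, audio))
print([type(o) for o in (out if isinstance(out, (tuple, list)) else (out,))])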
ajaysg-zz commented 5 years ago

@rafaelvalle please help

csapot commented 5 years ago

I faced the same issue today. Do you have any solution for that?

ajaysg-zz commented 5 years ago

What are the dimensions of the dummy_input you created?

csapot commented 5 years ago

It was like this:

mel_extra = torch.tensor(np.full((1, 80, 222), -10.0)).float()
dummy_input = torch.autograd.Variable(mel_extra).cuda().float()

androidof2008 commented 5 years ago

waveglow_path = './waveglow_256channels.pt'
waveglow = torch.load(waveglow_path)['model']

mel_extra = torch.tensor(np.full((1, 80, 222), -10.0)).float()
dummy_input = torch.autograd.Variable(mel_extra).cuda().float()

audio = torch.tensor(np.full((1, 222), -0.5)).float()
audio_input = torch.autograd.Variable(audio).cuda().float()
torch.onnx.export(waveglow, ((dummy_input, audio_input),), "./waveglow.onnx")
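
The extra level of parentheses matters here: torch.onnx.export interprets its second argument as the tuple of positional arguments passed to the model's forward, so a forward that takes a single tuple-valued argument needs one more level of nesting, roughly:

# export(model, args, f) calls model(*args) under the tracer, so
args = ((dummy_input, audio_input),)      # -> waveglow((dummy_input, audio_input))
torch.onnx.export(waveglow, args, "./waveglow.onnx")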

The export starts to run, but it fails because the logdet operator is not supported by ONNX export.

/home/user/anaconda3/envs/pytts/lib/python3.6/site-packages/torch/onnx/utils.py:501: UserWarning: ONNX export failed on ATen operator logdet because torch.onnx.symbolic.logdet does not exist
......
RuntimeError: ONNX export failed: Couldn't export operator aten::logdet
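
The logdet call only appears in WaveGlow's training forward, where it contributes the log-determinant terms of the flow loss; the inference path (waveglow.infer) runs the flow in reverse and never computes it. One possible workaround is therefore to export a thin wrapper around infer instead of the training forward. This is only a sketch: remove_weightnorm mirrors the repo's inference.py, the shapes are illustrative, and infer itself still contains constructs (noise creation, the inverse of the 1x1 convolutions) that older exporters may also reject, so it avoids the logdet failure but is not guaranteed to export cleanly end to end.

import torch

class WaveGlowInfer(torch.nn.Module):
    """Wrap the inference path so the traced forward sees only tensor inputs/outputs."""
    def __init__(self, waveglow, sigma=0.6):
        super().__init__()
        self.waveglow = waveglow
        self.sigma = sigma

    def forward(self, mel):
        return self.waveglow.infer(mel, sigma=self.sigma)

waveglow = torch.load('./waveglow_256channels.pt')['model']
waveglow = waveglow.remove_weightnorm(waveglow)   # as in the repo's inference.py
waveglow.cuda().eval()

wrapper = WaveGlowInfer(waveglow).cuda().eval()
mel = torch.randn(1, 80, 222).cuda()              # illustrative mel input
torch.onnx.export(wrapper, (mel,), "waveglow_infer.onnx")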

ajaysg-zz commented 5 years ago

@csapot Did you find any way to export the model to ONNX format?

csapot commented 5 years ago

@ajaysg No, in the end I did not use this.

ajaysg-zz commented 5 years ago

@androidof2008 Are you able to export to ONNX?