Closed WeiLi233 closed 3 years ago
Were you able to use the ONNX files with the workaround you've described? If not, could you please try the latest master or https://github.com/NVIDIA/NeMo/pull/232 ?
I didn't try the workaround, but the master branch still has the same issue described above.
By using `inputs_to_drop`/`outputs_to_drop`, I managed to get it to work by putting this code after this line: https://github.com/NVIDIA/NeMo/blob/146a51cb685d463c98b0eae4de4d4aefd32ebfb5/nemo/nemo/backends/pytorch/actions.py#L1121
```python
for input_to_drop in inputs_to_drop:
    input_names.remove(input_to_drop)
for output_to_drop in outputs_to_drop:
    output_names.remove(output_to_drop)
```
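One caveat with that snippet: `list.remove` raises `ValueError` if a name is not present in the list. A slightly more defensive variant (a hypothetical helper for illustration, not part of NeMo) filters with a comprehension instead, so the same drop list can be applied to models that lack some of the names:

```python
def drop_names(names, to_drop):
    """Return a copy of `names` without any entries listed in `to_drop`.

    Unlike list.remove, this does not raise if a name is absent from
    `names`, and it leaves the original list unmodified.
    """
    to_drop = set(to_drop)
    return [n for n in names if n not in to_drop]

# Using the names from this issue as an example:
input_names = drop_names(["audio_signal", "length"], ["length"])
output_names = drop_names(["outputs", "encoded_lengths"], ["encoded_lengths"])
print(input_names, output_names)  # ['audio_signal'] ['outputs']
```

The filtered lists can then be passed as the `input_names`/`output_names` arguments of `torch.onnx.export` as before.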
Closing, as this is related to the old version.
Hello, I tried to train my own Mandarin ASR model on the open corpus AISHELL-1, and everything seemed right; the config file I used is `examples/asr/configs/quartznet10x5.yaml`. But when I attempted to convert the intermediate `JasperEncoder-STEP-30000.pt` and `JasperDecoderForCTC-STEP-30000.pt` checkpoints to ONNX format with the `scripts/export_jasper_to_onnx.py` script, an error occurred while converting the encoder .pt file. After my own check and trace, I think there may be a bug in `nemo/backends/pytorch/actions.py`:
https://github.com/NVIDIA/NeMo/blob/146a51cb685d463c98b0eae4de4d4aefd32ebfb5/nemo/nemo/backends/pytorch/actions.py#L1135-L1136
After I removed "length" from the list `input_names` and "encoded_lengths" from the list `output_names` before calling `torch.onnx.export`, the conversion worked fine. The NeMo version I used is 0.9.0.