FunAudioLLM / CosyVoice

Multi-lingual large voice generation model, providing full-stack capability for inference, training, and deployment.
https://funaudiollm.github.io/
Apache License 2.0

Failed to export onnx model from flow #399

Open zhangyike opened 5 days ago

zhangyike commented 5 days ago

env: torch.__version__ = 2.0.1+cu118, onnx.__version__ = 1.16.0

command:

```
python cosyvoice/bin/export_onnx.py --model_dir $dir
```

error logs:

```
/root/miniconda3/envs/cosyvoice/lib/python3.8/site-packages/diffusers/models/attention_processor.py:645: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if current_length != target_length:
/root/miniconda3/envs/cosyvoice/lib/python3.8/site-packages/diffusers/models/attention_processor.py:660: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attention_mask.shape[0] < batch_size * head_size:
/apdcephfs_cq8/share_784792/users/yikezhang/public_repo/CosyVoice/examples/libritts/cosyvoice/cosyvoice/bin/../../third_party/Matcha-TTS/matcha/models/components/decoder.py:149: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert inputs.shape[1] == self.channels
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 1 ERROR ========================
ERROR: missing-standard-symbolic-function
Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 18 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
None

Traceback (most recent call last):
  File "cosyvoice/bin/export_onnx.py", line 112, in <module>
    main()
  File "cosyvoice/bin/export_onnx.py", line 68, in main
    torch.onnx.export(
  File "/root/miniconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export
    _export(
  File "/root/miniconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/root/miniconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/root/miniconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/onnx/utils.py", line 665, in _optimize_graph
    graph = _C._jit_pass_onnx(graph, operator_export_type)
  File "/root/miniconda3/envs/cosyvoice/lib/python3.8/site-packages/torch/onnx/utils.py", line 1901, in _run_symbolic_function
    raise errors.UnsupportedOperatorError(
torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 18 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
```
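The root cause is that torch 2.0.x has no ONNX symbolic function for `aten::scaled_dot_product_attention` at opset 18. If upgrading torch is not an option, one possible workaround is to temporarily monkey-patch `torch.nn.functional.scaled_dot_product_attention` with an equivalent built from basic ops (matmul, softmax) before calling `torch.onnx.export`, so tracing only emits operators that opset 18 already supports. The sketch below is not part of the repo and is untested here; it assumes the flow/decoder only goes through the standard SDPA path.

```python
# Hypothetical workaround sketch (not from the repo, untested here): replace
# F.scaled_dot_product_attention with an export-friendly equivalent built from
# ops that ONNX opset 18 already supports, then run torch.onnx.export as usual.
import math
import torch
import torch.nn.functional as F

def sdpa_export_friendly(query, key, value, attn_mask=None, dropout_p=0.0,
                         is_causal=False, scale=None):
    # Same math as F.scaled_dot_product_attention (dropout ignored: export is
    # inference-only), written with matmul/softmax so tracing produces plain ops.
    if scale is None:
        scale = 1.0 / math.sqrt(query.size(-1))
    attn = torch.matmul(query, key.transpose(-2, -1)) * scale
    if is_causal:
        L, S = query.size(-2), key.size(-2)
        causal = torch.ones(L, S, dtype=torch.bool, device=query.device).tril()
        attn = attn.masked_fill(~causal, float("-inf"))
    if attn_mask is not None:
        if attn_mask.dtype == torch.bool:
            # Boolean mask: True means "attend", so mask out the False positions.
            attn = attn.masked_fill(~attn_mask, float("-inf"))
        else:
            attn = attn + attn_mask
    attn = torch.softmax(attn, dim=-1)
    return torch.matmul(attn, value)

# Swap in the exportable version just for the export script's lifetime.
F.scaled_dot_product_attention = sdpa_export_friendly
```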
aluminumbox commented 5 days ago

use torch==2.2+, but note that the exported onnx model's RTF (real-time factor) is not very stable
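If it helps, a quick guard at the top of the export script turns the late `UnsupportedOperatorError` into an immediate, clearer failure. A minimal sketch (not part of `cosyvoice/bin/export_onnx.py`), assuming the torch version string follows the usual `MAJOR.MINOR.PATCH[+cuXXX]` form:

```python
# Minimal version guard (a sketch, not part of cosyvoice/bin/export_onnx.py):
# per the reply above, torch 2.2+ can export aten::scaled_dot_product_attention
# to ONNX, so fail fast on older installs instead of deep inside the export.
import torch

major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
if (major, minor) < (2, 2):
    raise RuntimeError(
        f"torch {torch.__version__} cannot export scaled_dot_product_attention "
        "to ONNX; please upgrade to torch>=2.2"
    )
```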