asfiyab-nvidia opened 3 weeks ago
Please test with torch.onnx.export(..., dynamo=True, report=True) using the latest torch-nightly. Attach the generated report if there is an error. Thanks!
I was able to successfully export your model with torch.onnx.export(model, (dict(input_ids=inputs),), dynamo=True). You should use the nightly build.
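A minimal sketch of the suggested call on a nightly build (the checkpoint, prompt, and output filename here are placeholders, not taken from the original report; swap in the actual Flux T5 encoder and inputs):

```python
import torch
from transformers import T5EncoderModel, T5Tokenizer

# Placeholder model and inputs; any T5 encoder checkpoint works the same way.
model = T5EncoderModel.from_pretrained("google/t5-v1_1-small").eval()
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-small")
inputs = tokenizer("a photo of a cat", return_tensors="pt").input_ids

# dynamo=True selects the new exporter; report=True writes a markdown report
# that can be attached to the issue if the export fails.
onnx_program = torch.onnx.export(
    model,
    (dict(input_ids=inputs),),
    dynamo=True,
    report=True,
)
onnx_program.save("t5_encoder.onnx")
```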
🐛 Describe the bug
I'm using the script below to export the Flux T5 model to ONNX using torch.onnx.dynamo_export(). However, I run into an error due to missing support for fused_layer_norm_cuda.PyCapsule.rms_forward_affine. The script below can be used to reproduce the issue:
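A minimal sketch of such a reproduction, assuming the Flux pipeline's T5 text encoder loaded from the black-forest-labs/FLUX.1-dev checkpoint and NVIDIA Apex present in the container (the checkpoint name, prompt, and filenames are assumptions, not the author's original script):

```python
import torch
from transformers import T5EncoderModel, T5Tokenizer

# Assumed checkpoint: the T5 text encoder used by the FLUX.1 pipeline.
# With Apex installed, transformers typically replaces T5LayerNorm with the
# fused CUDA RMSNorm kernel (fused_layer_norm_cuda), which the exporter
# cannot trace.
model = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2"
).to("cuda").eval()
tokenizer = T5Tokenizer.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="tokenizer_2"
)
input_ids = tokenizer("a photo of a cat", return_tensors="pt").input_ids.to("cuda")

# This call fails with the missing-support error for
# fused_layer_norm_cuda.PyCapsule.rms_forward_affine.
onnx_program = torch.onnx.dynamo_export(model, input_ids)
onnx_program.save("flux_t5_encoder.onnx")
```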
The error is pasted below:
Versions
transformers 4.42.2
diffusers 0.31.0.dev0
torch 2.5.0a0+b465a5843b.nv24.9 (NVIDIA NGC 24.09 PyTorch container)