Closed: Mrpatekful closed this issue 4 years ago
I ran into the same problem; have you solved it?
Unfortunately I could not. If I recall correctly, an error originated from the cumsum operation (supported since opset 11, which shipped with onnx 1.5) in https://github.com/pytorch/fairseq/blob/9f25ffb02ab54a443ba181f732d5dcc00db7bea8/fairseq/utils.py#L192 . I may have fixed that, but one more error persisted, so I gave up and implemented my own fairseq model (I don't remember whether I fixed it and still got an error, or just went straight to my own implementation): https://github.com/Mrpatekful/xlmr-finetuning . I tested the outputs of the two versions and they are identical with atol=1e-5, and mine exports without error with onnx>=1.5.
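For reference, the atol=1e-5 comparison I mention amounts to a check like this (the arrays here are illustrative values, not the actual model outputs):

```python
import numpy as np

# Hypothetical logits from the fairseq model and my reimplementation
ref = np.array([0.1234567, -2.5, 3.75])
mine = ref + 1e-6  # small numerical drift between the two versions

# "identical with atol=1e-5" corresponds to this check passing
assert np.allclose(ref, mine, atol=1e-5)
```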
I have tried exporting the xlmr.base model from torch.hub to onnx, but I receive an exception regarding ATen.