learnables / learn2learn

A PyTorch Library for Meta-learning Research
http://learn2learn.net

derivative for aten::_scaled_dot_product_efficient_attention_backward is not implemented #429

Open Darius888 opened 4 months ago

Darius888 commented 4 months ago

Hello,

When trying to apply the Sine Wave example approach to a transformer-based model, I get the following output:

File "/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py", line 767, in _engine_run_backward return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: derivative for aten::_scaled_dot_product_efficient_attention_backward is not implemented

The setup is a regression task with multiple sequences.
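Roughly, the failing setup looks like the sketch below (the model sizes, tensor shapes, and `mse_loss` objective are illustrative placeholders, not my actual code):

```python
import torch
import torch.nn as nn
import learn2learn as l2l

# Illustrative transformer regressor; all sizes are placeholders.
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2,
)
maml = l2l.algorithms.MAML(model, lr=0.01)  # first_order defaults to False

learner = maml.clone()
x, y = torch.randn(8, 16, 32), torch.randn(8, 16, 32)  # (batch, seq, d_model)

# Inner-loop adaptation keeps the gradient graph for second-order MAML.
learner.adapt(nn.functional.mse_loss(learner(x), y))

# The meta-loss backward then needs the double backward of the attention
# kernel, which raises the RuntimeError above on the efficient backend.
meta_loss = nn.functional.mse_loss(learner(x), y)
meta_loss.backward()
```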

Is it possible to work around this somehow?

Thank you,

JingminSun commented 4 months ago

I think this happens when you set `first_order=False`, so the simplest way is to set `first_order=True`.
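In learn2learn, that is the `first_order` argument of the `MAML` wrapper; a minimal sketch (the model and learning rate are placeholders):

```python
import learn2learn as l2l

# First-order MAML detaches the inner-loop gradients, so the meta-update
# never needs the (unimplemented) double backward of the attention kernel.
maml = l2l.algorithms.MAML(model, lr=0.01, first_order=True)
```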

If you really want second-order gradients, see https://github.com/pytorch/pytorch/issues/117974
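If I read that thread correctly, one workaround for true second-order is to force the math SDPA backend, whose backward is itself differentiable; a sketch, assuming PyTorch >= 2.3 for `torch.nn.attention.sdpa_kernel` and reusing the names from the snippet above:

```python
from torch.nn.attention import sdpa_kernel, SDPBackend

# The math backend is slower but supports double backward, so the
# second-order meta-gradient can flow through the attention layers.
with sdpa_kernel(SDPBackend.MATH):
    learner = maml.clone()
    learner.adapt(nn.functional.mse_loss(learner(x), y))
    meta_loss = nn.functional.mse_loss(learner(x), y)
    meta_loss.backward()
```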

Darius888 commented 4 months ago

This was exactly it, thank you so much! @JingminSun

renhl717445 commented 1 week ago

> This was exactly it, thank you so much! @JingminSun

> I think this happens when you set `first_order=False`, so the simplest way is to set `first_order=True`.
>
> If you really want second-order gradients, see pytorch/pytorch#117974

How do I modify it specifically?