orobix / fwdgrad

Implementation of the "Gradients without backpropagation" paper (https://arxiv.org/abs/2202.08587) using functorch
MIT License

No speed-up in my implementation either #7

Open LSC527 opened 2 years ago

LSC527 commented 2 years ago

I implemented this paper with torch.autograd.forward_ad. However, the forward gradient showed no speed-up compared to the standard forward+backward pass.
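
Roughly, the forward-gradient computation with torch.autograd.forward_ad looks like this (a minimal sketch, not my actual code; `loss_fn` is a placeholder for the real model loss):

```python
import torch
import torch.autograd.forward_ad as fwAD

def loss_fn(w):
    # Placeholder loss; a real run would evaluate the model on a batch.
    return (w ** 2).sum()

w = torch.randn(10)
v = torch.randn_like(w)  # random tangent direction, v ~ N(0, I)

with fwAD.dual_level():
    dual_w = fwAD.make_dual(w, v)
    out = loss_fn(dual_w)
    # A single forward pass yields the directional derivative v · ∇L.
    dir_deriv = fwAD.unpack_dual(out).tangent

# Forward gradient estimator from the paper: (v · ∇L) v.
fwd_grad = dir_deriv * v
```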

DavideTr8 commented 2 years ago

It would be interesting for us to see your implementation as well. If you want, you can make a PR to our repo with your code, so we can have multiple implementations available.

LittleWork123 commented 8 months ago

I ran the code from the repository, but I couldn't replicate the results mentioned in the paper, especially regarding the CNN. I used the hyperparameter settings specified in the paper.

May I inquire if there are alternative parameter settings available?

[screenshot of results attached]

DavideTr8 commented 8 months ago

Hi, unfortunately we weren't able to reproduce the same results either. The hyperparameters we used are the same as those reported in the paper, but we don't know if alternative hyperparameter settings are available.

We believe the difference between our implementation and the official one is due to the fact that they did not use functorch.
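
For comparison, the functorch version of the same forward-gradient computation boils down to something like this (a minimal sketch; `loss_fn` again stands in for the actual model loss):

```python
import torch
from functorch import jvp

def loss_fn(w):
    # Placeholder loss; a real run would wrap a functional model call.
    return (w ** 2).sum()

w = torch.randn(10)
v = torch.randn_like(w)  # random tangent direction, v ~ N(0, I)

# jvp evaluates loss_fn(w) and the directional derivative v · ∇L
# together, in a single forward pass.
loss, dir_deriv = jvp(loss_fn, (w,), (v,))

# Forward gradient estimator: (v · ∇L) v.
fwd_grad = dir_deriv * v
```

The math is identical in both sketches; any runtime gap between them would come from how the two APIs dispatch the forward-mode pass, which is why comparing implementations side by side in the repo would be useful.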