koen-dejonghe opened this issue 5 years ago
fdl is univariate. It doesn't support multivariate calculus.
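For readers unfamiliar with the distinction, a univariate automatic-differentiation scheme can be sketched with dual numbers (an illustrative sketch, not fdl's actual API): every value carries exactly one derivative, so there is no way to express partial derivatives over several independent inputs.

```scala
// Illustrative sketch (not fdl's actual API): univariate forward-mode AD
// with dual numbers. Each value carries its derivative with respect to a
// single input, which is exactly the "univariate" limitation.
case class Dual(value: Double, deriv: Double) {
  def +(that: Dual): Dual = Dual(value + that.value, deriv + that.deriv)
  def *(that: Dual): Dual =
    // product rule: (fg)' = f'g + fg'
    Dual(value * that.value, deriv * that.value + value * that.deriv)
}

object Dual {
  // Differentiate f at x by seeding the input's derivative with 1.0.
  def derive(f: Dual => Dual)(x: Double): Double =
    f(Dual(x, 1.0)).deriv
}

object Demo extends App {
  // d/dx (x * x + x) = 2x + 1, so at x = 3 the derivative is 7.
  println(Dual.derive(x => x * x + x)(3.0)) // 7.0
}
```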
Ok, I was not aware of that. You may want to have a look at scorch, where I have implemented autograd in a similar way to PyTorch. Maybe there is a more Scala-esque way of doing this? Much appreciated.
https://github.com/botkop/scorch/tree/master/src/main/scala/scorch/autograd
The implementation technique you've used in Scorch (reification of the expression language) is basically the only one that makes sense for a complete system, unless you have a compiler plugin / macro that allows compilation of source code into some other representation. LMS (https://scala-lms.github.io/) is a project that enables the latter, but I would hesitate to use it in production code as it lags behind the main Scala compiler, and it's not clear to me how well supported it is.
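A minimal sketch of what reifying the expression language looks like (hypothetical names, not scorch's actual implementation): expressions are built as a tree of case classes, and a backward pass walks that tree applying the chain rule, accumulating partial derivatives per variable.

```scala
import scala.collection.mutable

// Hypothetical names, not scorch's actual API: a tiny reified expression
// language. Building an expression constructs a graph; `grad` walks it
// backwards, accumulating partial derivatives into a map.
sealed trait Expr {
  def value: Double
  def +(that: Expr): Expr = Add(this, that)
  def *(that: Expr): Expr = Mul(this, that)

  def grad: Map[Var, Double] = {
    val acc = mutable.Map.empty[Var, Double].withDefaultValue(0.0)
    def go(e: Expr, g: Double): Unit = e match {
      case v: Var    => acc(v) += g
      case Const(_)  => ()
      case Add(a, b) => go(a, g); go(b, g)              // d(a+b) = da + db
      case Mul(a, b) => go(a, g * b.value); go(b, g * a.value) // product rule
    }
    go(this, 1.0)
    acc.toMap
  }
}

final case class Var(name: String, value: Double) extends Expr
final case class Const(value: Double) extends Expr
final case class Add(a: Expr, b: Expr) extends Expr { val value = a.value + b.value }
final case class Mul(a: Expr, b: Expr) extends Expr { val value = a.value * b.value }

object ReifyDemo extends App {
  val x = Var("x", 2.0)
  val y = Var("y", 3.0)
  val f = x * y + x
  println(f.value) // 8.0
  println(f.grad)  // x -> 4.0 (= y + 1), y -> 2.0 (= x)
}
```

Because `Add` and `Mul` are ordinary data, the same graph can be evaluated, differentiated, pretty-printed, or compiled, which is what makes reification attractive compared with differentiating opaque functions.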
Taking the snippet below as a reference:
I would expect the gradients to be as follows:
See http://cs231n.github.io/optimization-2/#backprop
However, I get:
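The snippet and its actual output are not preserved in this thread, but the expected gradients match the linked cs231n worked example, f(x, y, z) = (x + y) * z at (x, y, z) = (-2, 5, -4), which can be checked by hand:

```scala
// Hand-computed backprop for the cs231n example
// (http://cs231n.github.io/optimization-2/#backprop):
// f(x, y, z) = (x + y) * z.
object Cs231nCheck {
  // Returns (f, df/dx, df/dy, df/dz).
  def gradients(x: Double, y: Double, z: Double): (Double, Double, Double, Double) = {
    // forward pass
    val q = x + y
    val f = q * z
    // backward pass, chain rule applied by hand
    val dfDz = q         // ∂f/∂z = q
    val dfDq = z         // ∂f/∂q = z
    val dfDx = dfDq * 1  // ∂q/∂x = 1
    val dfDy = dfDq * 1  // ∂q/∂y = 1
    (f, dfDx, dfDy, dfDz)
  }

  def main(args: Array[String]): Unit =
    // cs231n values: x = -2, y = 5, z = -4
    println(gradients(-2.0, 5.0, -4.0)) // (-12.0,-4.0,-4.0,3.0)
}
```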