nengo / keras-lmu

Keras implementation of Legendre Memory Units
https://www.nengo.ai/keras-lmu/

Feature - parallel training #38

Open Ondrysak opened 3 years ago

Ondrysak commented 3 years ago

Are there any plans to implement training in a parallel manner, as shown in

https://arxiv.org/pdf/2102.11417.pdf ?

arvoelke commented 3 years ago

This has been implemented as keras_lmu.LMUFFT, which will be selected automatically if you use keras_lmu.LMU and satisfy these conditions: https://github.com/nengo/keras-lmu/blob/ab0775791aa73f9d22780539594ef4bd7de0be25/keras_lmu/layers.py#L398-L403

There is still a bit of support that can be added for the RNN flags in #35, but let us know if this works for your use case.
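For readers unfamiliar with the paper, the trick behind LMUFFT can be sketched in plain NumPy: the LMU memory is a linear time-invariant system, so instead of stepping the recurrence sequentially, the whole memory trajectory can be computed at once as a convolution with the system's impulse response, evaluated via FFT. The sketch below uses hypothetical helper names and a simple Euler discretization for brevity (keras-lmu itself uses zero-order-hold); it is an illustration of the idea, not the library's implementation.

```python
import numpy as np

def lmu_state_space(order=8, theta=20.0, dt=1.0):
    """Legendre state-space matrices (Voelker et al.), Euler-discretized.

    Hypothetical helper; keras-lmu uses zero-order-hold discretization.
    """
    q = np.arange(order)
    r = (2 * q + 1) / theta
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r[:, None]
    B = ((-1.0) ** q * r)[:, None]
    Ad = np.eye(order) + dt * A  # Euler step of the continuous system
    Bd = dt * B
    return Ad, Bd

def memory_recurrent(u, Ad, Bd):
    """Sequential LMU memory update: m[t] = Ad m[t-1] + Bd u[t]."""
    m = np.zeros(Ad.shape[0])
    out = []
    for u_t in u:
        m = Ad @ m + Bd[:, 0] * u_t
        out.append(m)
    return np.array(out)

def memory_parallel(u, Ad, Bd):
    """Same memory computed all at once via FFT convolution.

    Because the memory is LTI, m[t] = sum_j Ad^(t-j) Bd u[j], i.e. a
    convolution of the input with the impulse response h[k] = Ad^k Bd.
    """
    n = len(u)
    h = np.zeros((n, Ad.shape[0]))
    P = np.eye(Ad.shape[0])
    for k in range(n):  # impulse response for each timestep
        h[k] = P @ Bd[:, 0]
        P = Ad @ P
    # Zero-pad to 2n so the circular FFT convolution is a linear one.
    U = np.fft.rfft(u, 2 * n)
    H = np.fft.rfft(h, 2 * n, axis=0)
    return np.fft.irfft(H * U[:, None], 2 * n, axis=0)[:n]
```

Both functions produce the same memory states, but the FFT version has no sequential dependency across timesteps, which is what allows the training to be parallelized.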

NarsimhaChilkuri commented 3 years ago

For now, you might want to look at the implementation here. This is essentially the same as keras_lmu.LMUFFT, with two exceptions: 1) it supports multi-dimensional input; and 2) when return_sequences=False, it implements equation (25) from the paper, which is more efficient.
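To see why the return_sequences=False case is cheaper, note that only the final memory state is needed, so the full convolution collapses to a single matrix-vector product against the (reversed) impulse response. A self-contained sketch, using a made-up stable state matrix rather than the actual Legendre system, just to illustrate the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
order, n = 8, 64
# Stand-in discrete state matrices (hypothetical, not the LMU's):
Ad = 0.9 * np.eye(order) + 0.01 * rng.standard_normal((order, order))
Bd = rng.standard_normal((order, 1))
u = rng.standard_normal(n)

# Sequential reference: m[t] = Ad m[t-1] + Bd u[t], keep only the last m.
m = np.zeros(order)
for u_t in u:
    m = Ad @ m + Bd[:, 0] * u_t

# Shortcut: m[n-1] = sum_j Ad^(n-1-j) Bd u[j] = H @ u, where column j of
# H is Ad^(n-1-j) Bd -- one (order x n) matrix applied to the input,
# with no FFT and no per-timestep outputs.
H = np.empty((order, n))
P = np.eye(order)
for j in range(n - 1, -1, -1):  # fill columns from the last input back
    H[:, j] = P @ Bd[:, 0]
    P = Ad @ P

assert np.allclose(m, H @ u)
```

Since H depends only on the state matrices and the sequence length, it can be precomputed once and reused for every batch.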

drasmuss commented 3 years ago

Just a note that multi-dimensional input for LMUFFT is now supported in master.