Training of UDEs with recurrent networks (Open)
disadone opened 1 month ago
The question does not make much sense. Carrying hidden state over to the next call makes the equation not an ODE, and thus not convergent. And if, as you do here, you initialize the hidden state on each call, the model is equivalent to just calling the supposedly recurrent NN directly, so you might as well do that. So I don't quite get what you're trying to do?
Sorry for the confusion. I would like to train a sequence-to-sequence model where an RNN first derives a series of values, and those values are then fed into a sequence-to-sequence neural ODE as the inhomogeneous input. The RNN weights and the neural ODE parameters are trained together.
Maybe the question can be simplified as "How can I train a sequence-to-sequence neural ODE with a series of inputs?"
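For what it's worth, one common way to phrase "a series of values as the inhomogeneous input" is to interpolate the sequence in time and add it as a forcing term `g(t)` in the right-hand side. A minimal sketch (assuming DifferentialEquations.jl and DataInterpolations.jl; `rnn_outputs` here is a stand-in array, not an actual RNN call):

```julia
using DifferentialEquations, DataInterpolations

# Illustrative: pretend these values came from an RNN evaluated at the save times
ts          = 0.0:0.1:1.0
rnn_outputs = sin.(ts)                               # stand-in for the RNN's output sequence
g           = LinearInterpolation(rnn_outputs, ts)   # continuous forcing g(t)

# Inhomogeneous ODE: du/dt = f(u, p, t) + g(t)
function rhs!(du, u, p, t)
    du .= -p[1] .* u .+ g(t)   # f is a simple parameterized decay here
end

prob = ODEProblem(rhs!, [1.0], (0.0, 1.0), [0.5])
sol  = solve(prob, Tsit5(); saveat = ts)
```

Since `g` closes over the RNN outputs, gradients can in principle flow back through the interpolation to the RNN weights when the whole thing is differentiated, which is what lets both sets of parameters train together.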
Maybe @avik-pal has an example
The weight in RNN and parameters in neuralode are trained together.
Do you mean the RNN weights and the neural network weights are shared?
No, I mean the output of the RNN could be the input of the neural ODE at each time point.
But without state? Then it's not an RNN?
Hi! Just wondering how an RNN could be mixed into the `ODEProblem`. In Flux times, it seems a `Recur` layer needed to be created; however, there is already a `Recurrence` layer in Lux.jl. How can Lux.jl do the job now? I self-defined a `GRUCell` and it runs well combined with the beginner tutorial "Training a Simple LSTM". But when I try to transfer my self-defined `GRUCell` to the tutorial "MNIST Classification using Neural ODEs", I don't know how to start. I'd really appreciate it if anyone could help me. Thanks!