gitwukeyi / FSPEN


Hidden states of the model/rnn #5

Open ercandogu-elevear opened 5 months ago

ercandogu-elevear commented 5 months ago

Hello,

I was wondering whether there is a reason why, for the dual-path extension (DPE), you initialize all hidden states to zero rather than passing the hidden states from the previous DPE (i.e., initializing only the first DPE and feeding its out_state into the second DPE's hidden-state input).

Also, do you then reuse the out_hidden_state across each batch or each epoch? I was wondering where this is actually initialized: `in_hidden_state = [[torch.zeros(1, batch * num_bands, inter_hiddensize // groups) for _ in range(groups)] for _ in range(num_modules)]`

Thank you.

gitwukeyi commented 4 months ago

You can use this method to set the initial state of the network. In the training step, each batch is independent, so the initial state is zero; in fact you can skip passing an initial state to the GRU entirely, and it will automatically use zeros. I wrote the code this way so that the same module is compatible with both training and inference. In the inference stage, the audio arrives as a stream, frame by frame, so the hidden state output of the previous step is fed back as the hidden state input of the next step, and the value of the hidden state keeps changing over the course of inference.
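A minimal sketch of the two regimes described above, using a plain `nn.GRU` with hypothetical sizes (not FSPEN's actual module): in training, `h0=None` makes PyTorch allocate a zero initial state per batch; in streaming inference, the hidden state returned for each frame is carried into the next call, so frame-by-frame processing reproduces the full-sequence output.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical sizes standing in for the per-band recurrent layer.
batch, num_bands, hidden = 2, 4, 8
gru = nn.GRU(input_size=hidden, hidden_size=hidden, batch_first=False)

# --- Training: each batch is independent, so the initial state is zero.
# Passing h0=None lets the GRU allocate zeros automatically.
x = torch.randn(10, batch * num_bands, hidden)  # (time, batch*bands, feat)
y_train, _ = gru(x, None)

# --- Streaming inference: feed one frame at a time and carry the
# hidden state forward, so its value evolves across frames.
state = torch.zeros(1, batch * num_bands, hidden)
outputs = []
for t in range(x.shape[0]):
    frame = x[t:t + 1]               # a single time step
    out, state = gru(frame, state)   # previous output state -> next input state
    outputs.append(out)
y_stream = torch.cat(outputs, dim=0)

# Frame-by-frame with a carried state matches the full-sequence pass.
print(torch.allclose(y_train, y_stream, atol=1e-6))
```

If the carried state were reset to zero at every frame instead, the streaming output would diverge from the full-sequence output, which is exactly why the inference path threads the state through.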