I see the following lines in the model construction and I wonder if you would be kind enough to comment on them:
if self.res_connection:
decoded = decoded + input
if return_hiddens:
        return decoded, hidden, output
return decoded, hidden
res_connection: Is this basically a skip/residual connection?
return_hiddens: Would this be better renamed to return_outputs? I am guessing it optionally returns the weighted output of each hidden layer.
Also, the decoder is simply a linear layer. From what I see in RNN-autoencoder tutorials, the decoder is usually also an RNN. Could you comment on this architectural choice?
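For context, here is a minimal NumPy sketch of what I understand the res_connection branch to be doing (the shapes and zero-initialized weights are my own illustrative assumptions, not taken from the actual model): with the skip connection, the linear decoder only has to learn a correction on top of the input, so at initialization the reconstruction is already close to the identity.

```python
import numpy as np

# Hypothetical sizes for illustration: hidden size 4, feature size 3.
W = np.zeros((3, 4))  # decoder weight, zero-initialized for a deterministic demo
b = np.zeros(3)       # decoder bias

def decode(hidden, inp, res_connection=True):
    """Linear decoder with an optional skip/residual connection."""
    decoded = W @ hidden + b
    if res_connection:
        # The decoder now models the *residual* (inp + correction),
        # rather than having to reconstruct the input from scratch.
        decoded = decoded + inp
    return decoded

hidden = np.ones(4)
inp = np.array([1.0, 2.0, 3.0])

# With zero weights, the residual path passes the input through unchanged.
out_res = decode(hidden, inp)                         # equals inp
out_plain = decode(hidden, inp, res_connection=False) # equals zeros here
```

If this reading is right, the residual connection would make sense for time-series prediction, where the next value is usually close to the current one.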