Open Tiacy opened 4 years ago
Hello,
The results in the paper were produced with the dropout value set to 0.7, so that was the optimal value in my experiments.
Best, Maria
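For illustration, here is a generic inverted-dropout sketch using p = 0.7, the value mentioned above. This is the standard textbook formulation, not the repository's actual implementation:

```python
import numpy as np

def dropout(x, p=0.7, rng=None, training=True):
    """Generic inverted dropout (illustrative, not the repo's code).

    Each unit is kept with probability 1 - p and rescaled so the
    expected activation is unchanged; at inference time it is a no-op.
    """
    if not training or p == 0.0:
        return x
    if rng is None:
        rng = np.random.default_rng(0)
    keep = 1.0 - p
    mask = rng.random(x.shape) < keep   # keep each unit with prob 1 - p
    return x * mask / keep              # rescale so E[output] == input
```

With p = 0.7 roughly 70% of activations are zeroed during training, which is an unusually strong regularizer; whether it helps depends on the dataset and model size.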
OK. I see. Thanks for your help!
Hello, I have another question. Why are the hidden features and the input features combined as the input to the input gate in the LSTM module (lines 46-49 in 'Networkl.py')? What happens if the hidden features are removed from the input gate? Did you try that? Thanks for your help!
Hello,
This is how LSTMs work: they combine the current input with information from previous time steps, which is why I do that. Otherwise the temporal relationships between time steps would not be modeled properly.
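As a concrete illustration, here is a minimal NumPy sketch of the standard LSTM step, in which every gate, including the input gate, reads both the current input x_t and the previous hidden state h_{t-1}. The names and shapes here are illustrative, not taken from Networkl.py:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One standard LSTM step (generic sketch, not the repo's code).

    W maps the concatenated [x_t, h_prev] to the four gate
    pre-activations stacked along the last axis; b is their bias.
    """
    z = np.concatenate([x_t, h_prev]) @ W + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gates in (0, 1)
    g = np.tanh(g)                                # candidate cell state
    c_t = f * c_prev + i * g                      # blend past and present
    h_t = o * np.tanh(c_t)
    return h_t, c_t
```

Dropping h_prev from the concatenation would make every gate depend only on the current input, so the gates could no longer modulate information flow based on what the network has already seen.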
Hello, thanks for your work! I found that there is only one pooling operation in the Unet-LSTM model. Why is there not a pooling operation immediately after each convolution operation during downsampling? Thanks for your reply.
Hello,
There are four max-pooling operations during downsampling. Look at the encoder function.
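For example, here is a hypothetical shape trace through a U-Net-style encoder with four 2x2 max-pooling stages. The channel counts and stage layout are assumptions for illustration, not the repository's actual encoder function:

```python
def encoder_shapes(h, w, channels=(64, 128, 256, 512)):
    """Trace feature-map sizes through a hypothetical U-Net encoder.

    Each stage applies 'same'-padded convolutions (spatial size kept)
    followed by one 2x2 max pool, so four stages -> four pools and a
    spatial reduction by 2**4 = 16.
    """
    shapes = []
    for c in channels:
        shapes.append((c, h, w))   # feature map after this stage's convs
        h, w = h // 2, w // 2      # one 2x2 max pool per stage
    return shapes, (h, w)
```

For a 256x256 input this yields stage outputs (64, 256, 256) through (512, 32, 32) and a 16x16 spatial size at the bottleneck.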
Hello, thanks for your work! I have a question: does setting the 'dropout' parameter in the LSTM to 0.7 give the optimal effect? If not, what should the LSTM's dropout parameter be set to for optimal results? Thanks very much!