losDaniel closed this issue 4 years ago
My mistake: this was an error I introduced while adapting the code to the new version of Keras. To get DTC to work with the new Keras, make the following replacements in TAE.py:
encoded = Bidirectional(CuDNNLSTM(n_units[0], return_sequences=True), merge_mode='sum')(encoded)
must become
encoded = Bidirectional(LSTM(n_units[0], return_sequences=True), merge_mode='sum')(encoded)
and
encoded = Bidirectional(CuDNNLSTM(n_units[1], return_sequences=True), merge_mode='sum')(encoded)
must become
encoded = Bidirectional(LSTM(n_units[1], return_sequences=True), merge_mode='sum')(encoded)
I had mistakenly left n_units[0] in the second call as well, which is why I was getting the error above. Closing this issue.
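For anyone hitting the same shape error: with merge_mode='sum', the Bidirectional wrapper adds the forward and backward outputs elementwise, so the layer's output keeps the shape (timesteps, units) rather than doubling the feature axis as 'concat' would. A shape-only sketch with NumPy (hypothetical sizes, not the actual Keras layer):

```python
import numpy as np

# Hypothetical sizes for illustration
timesteps, units = 16, 50

# Forward and backward passes each emit a (timesteps, units) sequence
fwd = np.random.rand(timesteps, units)
bwd = np.random.rand(timesteps, units)

# merge_mode='sum' adds them elementwise: output shape stays (timesteps, units)
merged = fwd + bwd
assert merged.shape == (timesteps, units)
```

This is why passing the wrong n_units index to the second layer changes the feature dimension of the encoded sequence and breaks the layers downstream.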
Glad you could find a solution to your issue! Keras version changes often require small tweaks to the code.
Hello, I'm trying to replicate your examples but I keep getting this error about the output dimensions of the autoencoder.
The autoencoder output is expecting 6400 = 128 (timesteps) x 50 (n_filters). I know the problem is in the autoencoder because I checked the output dimensions of the encoder, decoder, and autoencoder:
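For context, the reported numbers line up as follows (a shape-only illustration with a hypothetical batch size, not the actual printed dimensions): a univariate input sequence has shape (batch, 128, 1), but a decoder whose last layer still carries 50 filters emits (batch, 128, 50), i.e. 6400 values per sample.

```python
import numpy as np

# Hypothetical batch of univariate sequences: (batch, timesteps, channels)
x = np.zeros((4, 128, 1))

# Decoder output before any output head, still carrying 50 filters per step
decoded = np.zeros((4, 128, 50))

# 128 timesteps * 50 filters = 6400 values per sample, the size in the error
assert decoded.shape[1] * decoded.shape[2] == 6400

# The shapes disagree, hence the dimension mismatch against the target
assert decoded.shape != x.shape
```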
I tried replacing it with the
output = Conv1D(1, kernel_size, strides=strides, padding='same', activation='linear', name='output_seq')(decoded)
line that was commented out in TAE.py, but that just returned another error:
ValueError: Input 0 is incompatible with layer output_seq: expected ndim=3, found ndim=4
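One common cause of this particular ndim mismatch (an assumption about your setup, since the traceback isn't shown): Conv1D expects a 3-D input (batch, steps, channels), while a decoder that upsamples with 2-D layers such as Conv2DTranspose leaves a size-1 spatial axis, giving a 4-D tensor. A shape-only NumPy sketch of that situation and the squeeze that would restore ndim=3:

```python
import numpy as np

# Hypothetical 4-D decoder output: (batch, steps, 1, filters) -> ndim=4
decoded = np.zeros((1, 128, 1, 50))

# Conv1D wants (batch, steps, channels) -> ndim=3.
# Dropping the size-1 axis (e.g. a Reshape/squeeze layer in Keras) fixes this:
squeezed = np.squeeze(decoded, axis=2)   # -> (1, 128, 50)
assert squeezed.ndim == 3
```

If the decoded tensor in your run really is 4-D, inserting an equivalent Reshape before the Conv1D output head may resolve the error; check the tensor's shape first rather than taking this sketch as the fix.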
I also tried using
temporal_autoencoder_v2
in TAE.py, but that just returned another shape error:
ValueError: Input 0 is incompatible with layer dense: expected shape=(None, 16, 100), found shape=(None, 16, 2)
I am wary of changing the architecture too much because I want to be able to replicate the results. Any suggestions on what to try?