FlorentF9 / DeepTemporalClustering

:chart_with_upwards_trend: Keras implementation of the Deep Temporal Clustering (DTC) model

Problem with Autoencoder Dimensions #2

Closed losDaniel closed 4 years ago

losDaniel commented 4 years ago

Hello, I'm trying to replicate your examples but keep getting this error on the output dimensions of the autoencoder.

Pretraining...
Traceback (most recent call last):
  File "DeepTemporalClustering.py", line 535, in <module>
    save_dir=args.save_dir)
  File "DeepTemporalClustering.py", line 313, in pretrain
    self.autoencoder.fit(X, X, batch_size=batch_size, epochs=epochs, verbose=verbose)
  File "C:\Users\Computer\Anaconda3\lib\site-packages\keras\engine\training.py", line 1154, in fit
    batch_size=batch_size)
  File "C:\Users\Computer\Anaconda3\lib\site-packages\keras\engine\training.py", line 621, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\Computer\Anaconda3\lib\site-packages\keras\engine\training_utils.py", line 145, in standardize_input_data
    str(data_shape))
ValueError: Error when checking target: expected output_seq to have shape (6400, 1) but got array with shape (128, 1)

The autoencoder output expects 6400 = 128 (timesteps) x 50 (n_filter). I know the problem is in the autoencoder because I checked the output dimensions of the encoder, decoder, and autoencoder:

[screenshot: model summaries showing the output shapes of the encoder, decoder, and autoencoder]
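For reference, this is roughly how I checked the shapes (a minimal sketch; it assumes autoencoder is the model built by TAE.py and X is the array passed to fit):

# Sketch of the shape check (autoencoder and X come from the training script):
autoencoder.summary()
print(autoencoder.output_shape)  # output_seq layer expects length 6400 = 128 * 50
print(X.shape)                   # the actual target data is (n_samples, 128, 1)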

I tried replacing it with the line that was commented out in TAE.py:

output = Conv1D(1, kernel_size, strides=strides, padding='same', activation='linear', name='output_seq')(decoded)

but that just returned another error:

ValueError: Input 0 is incompatible with layer output_seq: expected ndim=3, found ndim=4

I also tried using temporal_autoencoder_v2 in TAE.py but that just returned another shape error:

ValueError: Input 0 is incompatible with layer dense: expected shape=(None, 16, 100), found shape=(None, 16, 2)

I am cautious about playing with the architecture too much, as I want to be able to replicate the results. Any suggestions on what to try?

losDaniel commented 4 years ago

My mistake, this was an error I introduced in my own code while adapting it to the newer version of Keras. To get DTC to work with the newer Keras, you need to make the following replacements in TAE.py:

encoded = Bidirectional(CuDNNLSTM(n_units[0], return_sequences=True), merge_mode='sum')(encoded)

must become

encoded = Bidirectional(LSTM(n_units[0], return_sequences=True), merge_mode='sum')(encoded)

and

encoded = Bidirectional(CuDNNLSTM(n_units[1], return_sequences=True), merge_mode='sum')(encoded)

must become

encoded = Bidirectional(LSTM(n_units[1], return_sequences=True), merge_mode='sum')(encoded)
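Putting both changes together, the recurrent part of the encoder should look like this (a minimal sketch; encoded and n_units come from the surrounding code in TAE.py):

from keras.layers import Bidirectional, LSTM

# Both Bidirectional layers use plain LSTM instead of CuDNNLSTM;
# note n_units[0] in the first layer and n_units[1] in the second.
encoded = Bidirectional(LSTM(n_units[0], return_sequences=True), merge_mode='sum')(encoded)
encoded = Bidirectional(LSTM(n_units[1], return_sequences=True), merge_mode='sum')(encoded)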

I messed up and had n_units[0] in the second layer as well, which is why I was getting the error above. Closing this issue.

FlorentF9 commented 4 years ago

Glad you found a solution to your issue! New Keras versions often require small changes to the code.
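For anyone hitting the same thing, one option is a guarded import that works on both older and newer Keras (just a sketch, not part of TAE.py): fall back to LSTM when CuDNNLSTM is unavailable; recent Keras versions use the cuDNN kernel automatically when the layer configuration allows it.

# Sketch of a version-compatible import (assumption, not part of the repo):
try:
    from keras.layers import CuDNNLSTM as RNN  # older Keras with a GPU build
except ImportError:
    from keras.layers import LSTM as RNN       # newer Keras: LSTM uses the cuDNN kernel when it can

# then build the encoder with RNN(n_units[0], return_sequences=True) inside Bidirectional(...)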