X = np.concatenate([X_train_c, X_train_o, X_test_c, X_test_o], axis=0)
sequence_autoencoder.fit(X[:len(X_train_c) + len(X_train_o)],
                         X[:len(X_train_c) + len(X_train_o)],
                         batch_size=128, epochs=100, verbose=2, shuffle=True)
I was going through the extreme forecasting code and noticed that you use only the sequence of average prices as input to the autoencoder. Why didn't you include the extra features in the autoencoder input as well? As I understand it, an autoencoder learns a compressed representation of the feature space, so feeding it more features should help.
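For reference, this is roughly what I had in mind: stacking the extra features with the price sequence along the feature axis before training. The shapes and the `extra` array are hypothetical stand-ins for whatever additional features are available, not anything from your code.

```python
import numpy as np

# Hypothetical shapes: 1000 windows of 50 timesteps each.
n_windows, seq_len = 1000, 50
rng = np.random.default_rng(0)

# The average-price sequences, as in the original code: (windows, timesteps, 1).
avg_price = rng.random((n_windows, seq_len, 1))

# Hypothetical extra features (e.g. volume, day-of-week), aligned per timestep.
extra = rng.random((n_windows, seq_len, 3))

# Concatenating along the last axis gives a multivariate autoencoder input;
# the encoder's input shape would then be (seq_len, 4) instead of (seq_len, 1).
X_multi = np.concatenate([avg_price, extra], axis=-1)
print(X_multi.shape)  # (1000, 50, 4)
```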
Thanks in advance.