jaungiers / LSTM-Neural-Network-for-Time-Series-Prediction

An LSTM built using the Keras Python package to predict time series steps and sequences. Includes sin wave and stock market data.
GNU Affero General Public License v3.0

In- & Out-memory results differ quite a lot #50

Closed CharlyEmpereurmot closed 5 years ago

CharlyEmpereurmot commented 5 years ago

Hello Jakob, thank you for this inspiring, well-written piece of code!

I noticed that results differ between in-memory training and out-of-memory generative training. I did not modify the code in any file other than run.py: I simply commented out the "out-of-memory generative training" block and un-commented the "in-memory training" block.
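For reference, the swap looks roughly like this (a sketch following the repo's naming; `configs`, `data` and `model` are the objects run.py already builds, and the exact arguments may differ between revisions):

```python
# -- out-of-memory generative training (the block I commented out) --
model.train_generator(
    data_gen=data.generate_train_batch(
        seq_len=configs['data']['sequence_length'],
        batch_size=configs['training']['batch_size'],
        normalise=configs['data']['normalise']
    ),
    epochs=configs['training']['epochs'],
    batch_size=configs['training']['batch_size'],
    steps_per_epoch=steps_per_epoch,  # as set in the script
    save_dir=configs['model']['save_dir']
)

# -- in-memory training (the block I un-commented) --
x, y = data.get_train_data(
    seq_len=configs['data']['sequence_length'],
    normalise=configs['data']['normalise']
)
model.train(
    x, y,
    epochs=configs['training']['epochs'],
    batch_size=configs['training']['batch_size'],
    save_dir=configs['model']['save_dir']
)
```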

Results without any modification (very similar to what you posted in your article, just another run without fixed seeds):

[image: out-of-memory generative training]

Results using in-memory training:

[image: in-memory training]

Looking at the code in model.py, I can't see why in-memory training produces these results. Is this normal? Could generative training really produce much better results, as it seems to here?

I also modified your code to check performance on binary up/down price prediction, and there too the results are better with generative training. I'm wondering what is happening here.
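(For the up/down test I turned consecutive price changes into binary labels, roughly like this; `closing_prices` is just an illustrative array, not data from the repo:)

```python
import numpy as np

# Hypothetical closing prices (illustration only).
closing_prices = np.array([101.2, 101.5, 101.1, 102.0, 102.3])

# 1 if the next close is higher than the current one, else 0.
# The last price has no successor, so it produces no label.
binary_labels = (np.diff(closing_prices) > 0).astype(int)
print(binary_labels)  # -> [1 0 1 1]
```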

Thank you again for sharing your code, it really is the best software and article I could find about LSTMs for stock prediction.

Peppershaker commented 5 years ago

Not an expert, just my 2 cents.

I didn't run the code, but it seems that model.fit_generator() does not use early stopping, whereas model.fit() does, and quite aggressively, with a patience of 2.
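Something like this asymmetry (a sketch, not the repo's exact code; `save_fname`, `x`, `y`, `data_gen`, `batch_size`, `steps_per_epoch` and `epochs` are placeholders):

```python
from keras.callbacks import EarlyStopping, ModelCheckpoint

# In-memory path: fit() stops as soon as the loss fails to improve
# for 2 consecutive epochs (patience=2 -- pretty aggressive).
callbacks = [
    EarlyStopping(monitor='loss', patience=2),
    ModelCheckpoint(filepath=save_fname, monitor='loss', save_best_only=True),
]
model.fit(x, y, epochs=epochs, batch_size=batch_size, callbacks=callbacks)

# Generator path: no EarlyStopping in the callback list, so it always
# runs the full number of epochs.
model.fit_generator(
    data_gen,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    callbacks=[ModelCheckpoint(filepath=save_fname, monitor='loss',
                               save_best_only=True)],
)
```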

Also, fit_generator() asks you for steps_per_epoch, which in the code is set to 1. You might want to check whether that is correct given how the data pipeline is set up, because fit() and fit_generator() may account for epochs differently, and that alone could produce different results at the same epoch number.
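A comparable setting would size steps_per_epoch so one generator epoch covers the whole training set; a sketch, where both input values are placeholders for whatever the pipeline actually yields:

```python
import math

# Make one fit_generator() epoch see every training window once,
# so epoch counts are comparable to fit().
num_train_windows = 4000  # placeholder: total windows in the train set
batch_size = 32           # placeholder: batch size used by the generator
steps_per_epoch = math.ceil(num_train_windows / batch_size)  # -> 125
```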

Lastly, from your in-memory training it looks like the network didn't converge (it's more or less predicting the same thing over and over again). I would train it for longer, i.e. with a higher early-stopping patience.
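E.g. something like this (`x`, `y` and `model` being whatever you already have; restore_best_weights needs Keras >= 2.2.3):

```python
from keras.callbacks import EarlyStopping

# Relax the stopping criterion so the in-memory run has time to
# converge: patience well above 2, and keep the best weights seen.
early_stop = EarlyStopping(monitor='loss', patience=10,
                           restore_best_weights=True)
model.fit(x, y, epochs=100, batch_size=32, callbacks=[early_stop])
```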

CharlyEmpereurmot commented 5 years ago

OK, thank you very much, that makes sense. I didn't notice that at the time.