kieranjwood / trading-momentum-transformer

This code accompanies the paper Trading with the Momentum Transformer: An Intelligent and Interpretable Architecture (https://arxiv.org/pdf/2112.08534.pdf).
https://kieranjwood.github.io/publication/momentum-transformer/
MIT License
432 stars, 188 forks

Problem with Generating Prediction #7

Open kewlmodee opened 1 year ago

kewlmodee commented 1 year ago

Thanks for sharing this code: it's a really great and novel approach, and it is well developed.

I managed to run a full TFT experiment using the command you gave in the last step, with TFT as the architecture. It ran for about 12-14 hours and produced all of the data in "results\experiment_quandl_100assets_tft_cpnone_len252_notime_div_v1" for each year. But when it came to producing output, it just printed "Predicting on test set" and then gave the TensorFlow warnings below.

Where does it generate a future forecast? Or, if it doesn't do that, where in the code can I pick up to understand the practical outputs, either by using the trained architecture directly or by generating a graphical or text output for the time series data? At the moment it just seems to train, test, and deliver that output to the results\ folder and nothing else.
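As a starting point for inspecting the practical outputs, a minimal standard-library sketch that simply enumerates whatever files the experiment wrote under its results folder (the exact filenames and layout depend on the repo version, so nothing beyond the directory path from above is assumed):

```python
import glob
import os

def list_result_csvs(results_dir):
    """Return every CSV file under results_dir (recursively), sorted."""
    pattern = os.path.join(results_dir, "**", "*.csv")
    return sorted(glob.glob(pattern, recursive=True))

# Inspect what the experiment actually wrote out (e.g. per-year
# test-set results) before building any custom plotting on top of it.
results_dir = os.path.join(
    "results", "experiment_quandl_100assets_tft_cpnone_len252_notime_div_v1")
for path in list_result_csvs(results_dir):
    print(path)
```

From there the individual CSVs can be opened with pandas or any spreadsheet tool to see what the run actually produced per year.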

Output:

Best sharpe So Far: 3.190661052421784
Total elapsed time: 04h 01m 38s
Best validation loss = 3.0821421303297503
Best params:
hidden_layer_size = 40
dropout_rate = 0.5
max_gradient_norm = 1.0
learning_rate = 0.001
batch_size = 128
Predicting on test set...
performance (sliding window) = 0.21481784546357402
performance (fixed window) = 0.9377698822801832
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.

signalprime commented 11 months ago

That's because the optimizer was not specified during checkpoint creation; it is not critical for inference.
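This mismatch can be reproduced in isolation: save a checkpoint that tracks both a variable and an optimizer, then restore it into a checkpoint object that omits the optimizer. The `expect_partial()` call mentioned in the warning marks the partial restore as intentional. A self-contained sketch (toy variable and Adam optimizer, not the repo's actual model):

```python
import os
import tempfile

import tensorflow as tf

# Save a checkpoint that tracks a variable *and* an optimizer.
ckpt_dir = tempfile.mkdtemp()
v = tf.Variable(3.0)
opt = tf.keras.optimizers.Adam(learning_rate=0.001)
path = tf.train.Checkpoint(var=v, optimizer=opt).save(
    os.path.join(ckpt_dir, "ckpt"))

# Restore into a Checkpoint that does NOT track the optimizer. This is
# the situation in the log above: TensorFlow warns about unrestored
# optimizer state. expect_partial() declares the mismatch intentional,
# which is fine for inference since only the weights matter then.
v_restored = tf.Variable(0.0)
status = tf.train.Checkpoint(var=v_restored).restore(path)
status.expect_partial()
print(float(v_restored))  # the variable value itself is restored
```

The model weights are fully restored either way; only the optimizer slots (which matter for resuming training, not for prediction) are left unmatched.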