AI4Finance-Foundation / FinRL-Tutorials


Errors in 1-Introduction/Stock_NeurIPS2018_3_Backtest.ipynb #38

Open · chopin opened this issue 1 year ago

chopin commented 1 year ago
  1. Where is 'processed_full' defined?

    ----> 1 data_risk_indicator = processed_full[(processed_full.date<TRAIN_END_DATE) & (processed_full.date>=TRAIN_START_DATE)] 
      2 insample_risk_indicator = data_risk_indicator.drop_duplicates(subset=['date']) 
    NameError: name 'processed_full' is not defined
  2. Where is the file 'agent_a2c.zip'?

    ----> 1 trained_a2c = A2C.load("agent_a2c") if if_using_a2c else None 
      2 trained_ddpg = DDPG.load("agent_ddpg") if if_using_ddpg else None
    FileNotFoundError: [Errno 2] No such file or directory: 'agent_a2c.zip'

I encountered the above errors when I followed the tutorial (https://finrl.readthedocs.io/en/latest/start/first_glance.html). Every Colab notebook example reads files created by the previous example, and sometimes files that do not exist at all, so I am afraid I cannot continue the tutorial.

Can I let the current Colab notebook read files created by the previous Colab notebook example?
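
For reference, the backtest notebook seems to expect something like the following to exist already. This is only my sketch: the file name processed_full.csv and the dates are guesses at what the earlier notebooks should have saved, so adjust them to your own setup.

    import pandas as pd
    from stable_baselines3 import A2C

    # 'processed_full' is built in the data-processing notebook (part 1).
    # If it was saved to CSV there, it can simply be reloaded here:
    processed_full = pd.read_csv("processed_full.csv")  # hypothetical file name

    TRAIN_START_DATE = "2009-01-01"  # example dates; use the tutorial's values
    TRAIN_END_DATE = "2020-07-01"

    data_risk_indicator = processed_full[
        (processed_full.date < TRAIN_END_DATE)
        & (processed_full.date >= TRAIN_START_DATE)
    ]
    insample_risk_indicator = data_risk_indicator.drop_duplicates(subset=["date"])

    # 'agent_a2c.zip' is written by the training notebook (part 2) via
    # model.save("agent_a2c"); A2C.load() appends the '.zip' extension itself.
    trained_a2c = A2C.load("agent_a2c")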

ZiyiXia commented 1 year ago

Hi, thanks for your report. We are working on the reconstruction of these notebooks. Stock_NeurIPS2018_3_Backtest.ipynb was just updated. Please try it out.

Can I let the current Colab notebook read files created by the previous Colab notebook example?

Yeah, the three-part design is aimed at letting users try different settings in each part and use the files created in the previous part in the later parts. You could also use the sample files (data and agent files) we provide to quickly go through the whole process.

chopin commented 1 year ago

Thanks for your quick response. I tried again and barely managed to get through the Colab examples up to part 3.

Can I let the current Colab notebook read files created by the previous Colab notebook example?

Yeah, the three-part design is aimed at letting users try different settings in each part and use the files created in the previous part in the later parts. You could also use the sample files (data and agent files) we provide to quickly go through the whole process.

I found that files cannot be shared between different Colab notebooks: the train_data.csv created in the 1st Colab notebook is not readable from the 2nd one. I tried to download it from the 1st notebook to my local machine and upload it to the 2nd one, but the download failed, maybe because the file is too big.
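
Roughly, this is what I tried, using Colab's built-in files helper (train_data.csv is the file name from part 1):

    # In the 1st Colab notebook: download the file to my local machine.
    from google.colab import files
    files.download("train_data.csv")

    # In the 2nd Colab notebook: upload it again from the local machine.
    # files.upload() opens a file picker and writes the chosen files
    # into the current working directory.
    from google.colab import files
    uploaded = files.upload()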

After many trials and failures, I finally found a way to share the files using Google Drive. This approach works, but it may not be straightforward for beginners because it requires changing the source code.
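
For the record, the Google Drive workaround I ended up with looks roughly like this (the FinRL folder name is my own choice; the file names are the ones used in the earlier parts):

    import os
    from google.colab import drive

    drive.mount("/content/drive")  # prompts for authorization on first use
    save_dir = "/content/drive/MyDrive/FinRL"
    os.makedirs(save_dir, exist_ok=True)

    # In the notebooks that produce the files (parts 1 and 2): save to Drive.
    train_data.to_csv(f"{save_dir}/train_data.csv")
    trained_a2c.save(f"{save_dir}/agent_a2c")  # writes agent_a2c.zip

    # In the next notebook (part 3): mount Drive again and load them back.
    import pandas as pd
    from stable_baselines3 import A2C
    train_data = pd.read_csv(f"{save_dir}/train_data.csv")
    trained_a2c = A2C.load(f"{save_dir}/agent_a2c")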

Are you running and testing with Jupyter notebooks on your own server rather than in Colab? Files are shared between different Jupyter notebooks there, and I wonder whether that is why you don't run into these file-sharing problems.

I hope the tutorial examples run in Colab rather than in Jupyter Notebook on an individual machine, because using a cloud server costs money, and it gets expensive when I use a GPU or TPU. I want to work in Colab most of the time, since it is free and includes GPU and TPU access.