notadamking / RLTrader

A cryptocurrency trading environment using deep reinforcement learning and OpenAI's gym
https://discord.gg/ZZ7BGWh
GNU General Public License v3.0
1.73k stars · 540 forks

Performance after date-issue-fix #54

Closed andre-cavalheiro closed 5 years ago

andre-cavalheiro commented 5 years ago

So I'm a little confused by the repo right now, and I'd like to understand what I'm working with, since I'm not achieving the results mentioned in the article.

#28 pointed out a possible problem with the data's integrity, and I'd like to ask whether anyone has been able to train a profitable agent since. Although people in that issue reported pretty good results with the parameters @notadamking shared in #16, I'd like to confirm this, since I was not able to replicate them: all my agents fail and go bankrupt.

Following that question: has anyone run optimize.py since that same bug fix? If so, would they mind sharing the optimal parameters, to save others from re-running the script?

I tried training my agents for 20 sessions, but that didn't seem to be enough. Should it be?
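Until someone posts their tuned values, one lightweight way to share them without everyone re-running optimize.py is to export the best trial's hyperparameters to a JSON file and attach it to the issue. A minimal stdlib-only sketch; the parameter names and values in the usage line are purely illustrative, not RLTrader's actual search space or the tuned results this thread is asking for:

```python
import json

def save_params(params, path):
    """Write a hyperparameter dict to JSON so it can be shared."""
    with open(path, "w") as f:
        json.dump(params, f, indent=2, sort_keys=True)

def load_params(path):
    """Read shared hyperparameters back, e.g. to construct the model."""
    with open(path) as f:
        return json.load(f)

# Illustrative values only -- not the optimized parameters being requested.
save_params({"n_steps": 128, "gamma": 0.99, "learning_rate": 2.5e-4}, "params.json")
print(load_params("params.json")["gamma"])  # -> 0.99
```

Anyone who has a finished optimization run could dump its best parameters this way and paste the file here.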

PromediaB commented 5 years ago

@andre-cavalheiro Same problem here after fixing the date issue; we are trying many things to make it profitable again.

robinvanleeuwen commented 5 years ago

The comment at https://github.com/notadamking/Bitcoin-Trader-RL/issues/28#issuecomment-500973715 in #28 says he trained the agents with the same hyperparameters and got 108% profit. Of course we should optimize against the corrected dataset, but like you, I haven't gotten any good results so far.

TheExGenesis commented 5 years ago

Agreed. Would be good to have the following:

notadamking commented 5 years ago

Let's keep the conversation in #28.