hugocen / freqtrade-gym

A customized gym environment for developing and comparing reinforcement learning algorithms in crypto trading.
GNU General Public License v3.0

Complete newb. After running, can model.zip immediately be used? #7

Closed brizzbane closed 2 years ago

brizzbane commented 3 years ago

Just wondering if you can give a few sentence crash course on what this actually does?

I've been looking at/messing with the freqtrade bot for a while.

I've had a small introduction into machine learning.

Just wondering, what does this actually do? Is the model.zip something that can be used with a running strategy?

Given that I'm asking these questions, is practical use of this beyond the scope of responding to this issue? :)

Hoping you can provide some insight.

(side note):

What I had hoped this would do, looking at the code, is examine the different TA methods in IndicatorforRL.py under populate_indicators and output the best combo it found.

Any information you can provide would be much appreciated.

hugocen commented 3 years ago

> Just wondering if you can give a few sentence crash course on what this actually does?

This is a simple simulator for training reinforcement learning agents to trade.
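To make "simulator" concrete, here is a minimal, self-contained sketch of the loop such a gym-style environment supports. The class and the tiny buy-low/sell-high rule are illustrative stand-ins, not the repo's actual environment API:

```python
# Hypothetical sketch of a gym-style trading environment: it steps through
# historical candles, and the agent collects rewards from realized trades.
class ToyTradingEnv:
    """Steps through a price series; action 0 = hold, 1 = buy, 2 = sell."""

    def __init__(self, prices):
        self.prices = prices
        self.t = 0
        self.position = 0  # 0 = flat, 1 = long
        self.entry = 0.0

    def reset(self):
        self.t = 0
        self.position = 0
        return self._obs()

    def _obs(self):
        return (self.prices[self.t], self.position)

    def step(self, action):
        price = self.prices[self.t]
        reward = 0.0
        if action == 1 and self.position == 0:    # buy: open a position
            self.position = 1
            self.entry = price
        elif action == 2 and self.position == 1:  # sell: realize the profit
            reward = price - self.entry
            self.position = 0
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self._obs(), reward, done, {}


# A hand-written "policy" that buys below 100 and sells above 100,
# just to exercise the reset/step interface an RL agent would use.
env = ToyTradingEnv([100, 95, 97, 103, 101])
obs, total, done = env.reset(), 0.0, False
while not done:
    price, position = obs
    action = 1 if (position == 0 and price < 100) else (2 if price > 100 else 0)
    obs, reward, done, _ = env.step(action)
    total += reward

print(total)  # one round trip: bought at 95, sold at 103 -> 8.0
```

In the real project, an RL library's agent replaces the hand-written rule and learns which action to take from the observations.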

> Just wondering, what does this actually do? Is the model.zip something that can be used with a running strategy?

Yes! I updated the repo. It now comes with a LoadRLModel strategy. You can try it with your trained model.

> What I had hoped that this would do, looking at the code, is look at the different TA methods in IndicatorforRL.py under populate_indicators, and output what the best combo it found was.

It might, in effect, because the agent adjusts the model weights to achieve that goal. You can also try other algorithms as the agent on this project. For example, I have tried the NEAT (NeuroEvolution of Augmenting Topologies) algorithm, and it works great.
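As a rough intuition for "the agent adjusts the model weights": if the policy is approximately linear in the indicator features, the magnitudes of the learned weights act like an implicit indicator ranking. The indicator names and weight values below are invented purely for illustration:

```python
# Illustrative only: ranking indicator features by the absolute size of
# (pretend) learned policy weights. Large-magnitude weights mark the
# indicators the policy leans on most -- an implicit "best combo".
indicators = ["rsi", "macd", "ema_cross", "bb_width"]
weights = [0.05, 0.72, -0.64, 0.02]  # made-up values, as if from training

ranked = sorted(zip(indicators, weights), key=lambda kv: abs(kv[1]), reverse=True)
for name, w in ranked:
    print(f"{name:10s} {w:+.2f}")
# macd and ema_cross dominate; rsi and bb_width barely contribute
```

Real deep-RL policies are nonlinear, so this is only a first-order intuition, but it is why training can end up favoring some indicators over others without ever reporting a "combo" explicitly.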

Feel free to ask if you have any questions.

And have fun!

Payback80 commented 3 years ago

> Or you can try other algorithms as the agent on this project. For example, I have tried the NEAT (NeuroEvolution of Augmenting Topologies) algorithm, and it works great.
>
> Feel free to ask if you have any questions.
>
> And have fun!

How could you have used NEAT, since it's not in OpenAI Baselines? Interesting project. Do you mind creating an interface for RLlib?

hugocen commented 3 years ago

@Payback80 I used NEAT-Python to create the trading agent. If you are interested, I can add that code to the repo. And sure, I can find some time to work on an RLlib example.

Payback80 commented 3 years ago

> I used NEAT-Python to create the trading agent. If you are interested, I can add that code to the repo. Sure, I can find some time to work on an RLlib example.

Well, I'm not the biggest fan of the NEAT family, but why not? If you want to exchange ideas, feel free to DM me. You did a great job, btw.

brizzbane commented 3 years ago

Thanks for the update with example.

I was able to get it running and reach the backtesting stage. I backtested with the exact timerange dates from config_rl.json (updated to 2021 data). My tests show negative performance, even using the exact dates the model was trained on.

Again, complete newb here. My question is: should this have shown positive performance, or is more work needed to create a profitable model? (I am just trying to verify that what is happening "works". Negative performance when backtesting on the exact dates of the downloaded training data makes me think I might be doing something wrong.)

hugocen commented 3 years ago

> My question is: should this have shown positive performance, or is more work needed to create a profitable model? (I am just trying to verify that what is happening "works". Negative performance when backtesting on the exact dates of the downloaded training data makes me think I might be doing something wrong.)

This is absolutely normal. A profitable model isn't easy to build. You need to do more research on the models, the features, the data, etc., and run more experiments.