kkuette / TradzQAI

Trading environment for RL agents, backtesting and training.
Apache License 2.0
165 stars 47 forks

How to change the agent type? #6

Closed lamhk closed 5 years ago

lamhk commented 5 years ago

Hi, I tried to build (`-b BUILD`) with DQN instead of PPO by editing `agent = "PPO"` at line 10 of run.py. However, when I run run.py, it still creates a `PPO_n` directory. Where should I change it to switch the agent effectively? Thanks.

lamhk commented 5 years ago

Hi, I tried to change (1) session/local.py to "dqn"; (2) run.py to `agent = "DQN"`; and (3) config/agent.json to `"type": "dqn_agent"`, and saw it successfully create the save/DQN_0 directory. However, I received the following error...

```
Traceback (most recent call last):
  File "run.py", line 56, in <module>
    session.loadSession()
  File "/home/lamhk/TradzQAI-master/core/session/local.py", line 47, in loadSession
    self.initAgent()
  File "/home/lamhk/TradzQAI-master/core/session/local.py", line 58, in initAgent
    self.agent = self.agent(env=self.env, device=self.device)._get()
  File "/home/lamhk/TradzQAI-master/agents/DQN.py", line 6, in __init__
    Agent.__init__(self, env=env, device=device)
  File "/home/lamhk/TradzQAI-master/agents/agent.py", line 23, in __init__
    device=device
  File "/home/lamhk/.local/lib/python3.6/site-packages/tensorforce/agents/agent.py", line 283, in from_spec
    kwargs=kwargs
  File "/home/lamhk/.local/lib/python3.6/site-packages/tensorforce/util.py", line 192, in get_object
    return obj(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'gae_lambda'
```
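The error class here is easy to reproduce in isolation: a config built for PPO still carries PPO-only keys such as `gae_lambda`, and Python raises `TypeError` as soon as those keys reach a constructor that doesn't accept them. A minimal sketch with hypothetical names (not TradzQAI's actual classes):

```python
# Hypothetical DQN-style agent: its __init__ has no 'gae_lambda' parameter.
class DQNAgent:
    def __init__(self, memory=1000):
        self.memory = memory

# Config left over from a PPO build; 'gae_lambda' is a PPO-only setting.
ppo_leftover_config = {"memory": 5000, "gae_lambda": 0.97}

try:
    DQNAgent(**ppo_leftover_config)
except TypeError as exc:
    print(exc)  # message names the unexpected keyword 'gae_lambda'
```

This is why rebuilding the config for the new agent type (rather than only renaming it) matters.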

kkuette commented 5 years ago

You just have to delete your current TradzQAI/config/ directory and build a new one with this command: `py run.py -b DQN`.

lamhk commented 5 years ago

Hi, if I delete TradzQAI/config, I receive the following error after running `py run.py -b DQN`. If the config directory exists, there is no error, but the 'gae_lambda' error above happens again. Any idea? Thanks.

```
Traceback (most recent call last):
  File "run.py", line 46, in <module>
    session = Session(mode=args.mode, config=args.config, agent=args.build)
  File "/home/lamhk/TradzQAI-master/core/session/local.py", line 15, in __init__
    self.env = Local_env(mode=mode, gui=gui, contract_type=contract_type, config=config, agent=agent)
  File "/home/lamhk/TradzQAI-master/core/environnement/local_env.py", line 98, in __init__
    self.close()
  File "/home/lamhk/TradzQAI-master/core/environnement/base/base_env.py", line 71, in close
    self.logger.stop()
AttributeError: 'Local_env' object has no attribute 'logger'
```
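This second traceback shows a different failure mode: `__init__` bails out into `close()` before `self.logger` has been assigned, so `close()` hits an `AttributeError` that masks the real configuration problem. A hedged sketch with hypothetical names of a defensive pattern that keeps the original error visible:

```python
class _Logger:
    def stop(self):
        pass

class LocalEnv:
    def __init__(self, config_ok=True):
        if not config_ok:
            self.close()                # bails out before self.logger exists
            raise ValueError("bad config")
        self.logger = _Logger()

    def close(self):
        # Guard with getattr: logger may never have been created if
        # __init__ failed early, and we don't want close() to raise.
        logger = getattr(self, "logger", None)
        if logger is not None:
            logger.stop()

# Raises ValueError (the real problem), not AttributeError.
try:
    LocalEnv(config_ok=False)
except ValueError as exc:
    print(exc)
```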

kkuette commented 5 years ago

I've fixed the problem; it was due to a hard-coded argument. Commit done!
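For readers hitting a similar hard-coded-argument bug elsewhere: one general pattern (a sketch, not the actual commit) is to filter a shared config down to the parameters each agent's constructor actually accepts, so PPO-only keys never reach a DQN constructor:

```python
import inspect

def filter_kwargs(cls, config):
    """Keep only the keys that cls.__init__ actually accepts."""
    params = inspect.signature(cls.__init__).parameters
    return {k: v for k, v in config.items() if k in params}

# Hypothetical agent class for illustration.
class DQNAgent:
    def __init__(self, memory=1000):
        self.memory = memory

shared = {"memory": 5000, "gae_lambda": 0.97}  # mixed PPO/DQN settings
agent = DQNAgent(**filter_kwargs(DQNAgent, shared))
print(agent.memory)  # -> 5000; 'gae_lambda' was silently dropped
```

The trade-off is that silently dropping keys can hide typos in config files, so logging the discarded keys is usually worthwhile.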

lamhk commented 5 years ago

Hi, thanks for the updated code. I tried the DQN agent as described above with 100 episodes. After the first episode, the remaining episodes show no results; the eval report also shows all zeros.

```
2018:10:03 13:49:06 000000 Starting episode : 1
2018:10:03 13:49:44 000001 ######################################################
2018:10:03 13:49:44 000002 Total reward : -238.704
2018:10:03 13:49:44 000003 Average daily reward : -10.850
2018:10:03 13:49:44 000004 Total profit : -240.25
2018:10:03 13:49:44 000005 Total trade : 465
2018:10:03 13:49:44 000006 Sharp ratio : -1.723
2018:10:03 13:49:44 000007 Mean return : -0.013
2018:10:03 13:49:44 000008 Max Drawdown : -0.019
2018:10:03 13:49:44 000009 Max return : 0.002
2018:10:03 13:49:44 000010 Percent return : -0.012
2018:10:03 13:49:44 000011 Trade W/L : 0.495
2018:10:03 13:49:44 000012 Step : 18353
2018:10:03 13:49:44 000013 ######################################################
2018:10:03 13:49:44 000014 Starting episode : 2
2018:10:03 13:50:17 000015 ######################################################
2018:10:03 13:50:17 000016 Total reward : 0.0
2018:10:03 13:50:17 000017 Average daily reward : 0.000
2018:10:03 13:50:17 000018 Total profit : 0
2018:10:03 13:50:17 000019 Total trade : 0
2018:10:03 13:50:17 000020 Sharp ratio : 0.000
2018:10:03 13:50:17 000021 Mean return : 0.000
2018:10:03 13:50:17 000022 Max Drawdown : 0.000
2018:10:03 13:50:17 000023 Max return : 0.000
2018:10:03 13:50:17 000024 Percent return : 0.000
2018:10:03 13:50:17 000025 Trade W/L : 0.000
2018:10:03 13:50:17 000026 Step : 18353
```

kkuette commented 5 years ago

Default values aren't the best values for your agent; you should run some tests to find out the best ones. If it shows results like yours, it's because your model isn't learning well.
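"Run some tests to find the best values" can be as simple as a small grid sweep over candidate hyperparameters. A hedged sketch, where `train_and_evaluate` is a hypothetical stand-in for running a TradzQAI session and reading the episode report (here replaced by a placeholder scoring function):

```python
from itertools import product

def train_and_evaluate(learning_rate, discount):
    # Placeholder score: in practice, train the agent with these values
    # and return e.g. the total reward from the eval report.
    return -(learning_rate - 1e-3) ** 2 - (discount - 0.95) ** 2

grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "discount": [0.9, 0.95, 0.99],
}

# Try every combination and keep the best-scoring config.
best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=lambda cfg: train_and_evaluate(**cfg),
)
print(best)  # -> {'learning_rate': 0.001, 'discount': 0.95}
```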