notadamking / RLTrader

A cryptocurrency trading environment using deep reinforcement learning and OpenAI's gym
https://discord.gg/ZZ7BGWh
GNU General Public License v3.0

Enabled multiprocessing for Optuna. #85

Closed arunavo4 closed 5 years ago

arunavo4 commented 5 years ago

I have checked it and it works as expected! @notadamking please merge this.

notadamking commented 5 years ago

Since this only addresses the multiprocessing aspect of the optimize step, it seems we should move this code inside RLTrader and allow num_jobs or similar to be passed in as a param to the optimize step.
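The suggested num_jobs parameter could be sketched roughly as follows; this is illustrative only, not RLTrader's actual code (the names _optimize_worker and the trial-splitting logic are hypothetical). Note the worker function must live at module level so the pool can pickle it:

```python
import multiprocessing as mp


def _optimize_worker(n_trials):
    # Hypothetical worker: each process would construct its own
    # trader/study here and run n_trials optimization trials.
    # A real implementation would return the best trial's value;
    # this placeholder just echoes the trial count.
    return n_trials


class RLTrader:
    def optimize(self, n_trials=100, num_jobs=1):
        """Split n_trials evenly across num_jobs worker processes."""
        per_job = n_trials // num_jobs
        with mp.Pool(num_jobs) as pool:
            results = pool.map(_optimize_worker, [per_job] * num_jobs)
        return results


if __name__ == "__main__":
    print(RLTrader().optimize(n_trials=8, num_jobs=2))  # -> [4, 4]
```

Because _optimize_worker is a plain module-level function, the pool only has to pickle its arguments, sidestepping the bound-method pickling issue discussed below in this thread.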

arunavo4 commented 5 years ago

OK, but I will have to use a multiprocessing pool if the code is moved inside RLTrader.

arunavo4 commented 5 years ago

I can't figure out how to move the current code into RLTrader, because there is no main function to work with.

notadamking commented 5 years ago

Would the function not be self.optimize?

arunavo4 commented 5 years ago

Yes, but there seems to be some issue. Once I figure it out I will add a commit. Multiprocessing appears to have problems when used outside of main.

arunavo4 commented 5 years ago

Traceback (most recent call last):
  File "/home/ubuntu/PycharmProjects/Trader_AI/test1.py", line 15, in <module>
    parallel_run(n_process, params)
  File "/home/ubuntu/PycharmProjects/Trader_AI/RLTrader.py", line 20, in parallel_run
    pool.starmap(trader.optimize, ((n_trials, n_process, n_parallel_jobs) for _ in range(n_process)))
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 274, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 424, in _handle_tasks
    put(task)
  File "/usr/lib/python3.6/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/usr/lib/python3.6/multiprocessing/reduction.py", line 51, in dumps
    cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects

After some digging, I found that pickling also fails outside the main function, due to the internal threading introduced by n_jobs.
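The failure in the traceback can be reproduced minimally: passing a bound method like trader.optimize to pool.starmap forces pickle to serialize the whole instance, and any attribute holding a thread lock cannot be pickled. Here, a hypothetical Trader with an RLock stands in for whatever lock-holding state the real object carries:

```python
import pickle
import threading


class Trader:
    def __init__(self):
        # Stand-in for internal state that holds a lock, e.g. a logger
        # or the thread machinery created when n_jobs > 1.
        self._lock = threading.RLock()

    def optimize(self):
        pass


trader = Trader()
try:
    # pool.starmap(trader.optimize, ...) pickles the bound method,
    # which drags the whole instance (and its RLock) along with it.
    pickle.dumps(trader.optimize)
except TypeError as e:
    print(e)  # e.g. "can't pickle _thread.RLock objects" (wording varies by Python version)
```

The usual workaround is to pass a module-level function to the pool and have each worker process construct its own trader, so no lock-holding object ever crosses the process boundary.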

arunavo4 commented 5 years ago

Well, I have figured it all out! It works now, but this implementation is slower than my earlier one, so I am not committing it for now.

notadamking commented 5 years ago

@arunavo4 I am going to close this as it does not offer any benefits over the optuna-proposed solution of running multiple instances of optimize.py or cli.py.

arunavo4 commented 5 years ago

Well, I just automated launching multiple instances of optimize.py so that people don't have to do it manually.
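That kind of automation could look roughly like the stdlib-only sketch below. The function name launch_optimizers and its script parameter are hypothetical, not the actual commit; it simply spawns N copies of the optimizer script as independent processes and waits for all of them:

```python
import subprocess
import sys


def launch_optimizers(n_instances, script="optimize.py"):
    """Launch n_instances copies of the optimizer script in parallel
    and wait for all of them; returns their exit codes."""
    procs = [
        subprocess.Popen([sys.executable, script])
        for _ in range(n_instances)
    ]
    return [p.wait() for p in procs]
```

Because each instance is a separate OS process, this sidesteps the pickling issues entirely, which is presumably why running multiple instances is the Optuna-recommended approach mentioned above.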