AI4Finance-Foundation / ElegantRL

Massively Parallel Deep Reinforcement Learning. šŸ”„
https://ai4finance.org

MATD3 & MADDPG cannot be called by train_agent or train_agent_multiprocessing #286

Open niceban opened 1 year ago

niceban commented 1 year ago

AgentMATD3 & AgentMADDPG cannot be called by train_agent or train_agent_multiprocessing when I add them to 'demo_DDPG_TD3_SAC.py'.

The error is as follows:

Traceback (most recent call last):
  File "/Users/c/Downloads/ElegantRL-master/examples/demo_DDPG_TD3_SAC.py", line 238, in <module>
    train_ddpg_td3_sac_for_pendulum()
  File "/Users/c/Downloads/ElegantRL-master/examples/demo_DDPG_TD3_SAC.py", line 43, in train_ddpg_td3_sac_for_pendulum
    train_agent_multiprocessing(args)  # train_agent(args)
  File "/Users/c/Downloads/ElegantRL-master/elegantrl/train/run.py", line 124, in train_agent_multiprocessing
    [process.start() for process in process_list]
  File "/Users/c/Downloads/ElegantRL-master/elegantrl/train/run.py", line 124, in <listcomp>
    [process.start() for process in process_list]
  File "/opt/anaconda3/envs/erl/lib/python3.9/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/opt/anaconda3/envs/erl/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/opt/anaconda3/envs/erl/lib/python3.9/multiprocessing/context.py", line 291, in _Popen
    return Popen(process_obj)
  File "/opt/anaconda3/envs/erl/lib/python3.9/multiprocessing/popen_forkserver.py", line 35, in __init__
    super().__init__(process_obj)
  File "/opt/anaconda3/envs/erl/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/opt/anaconda3/envs/erl/lib/python3.9/multiprocessing/popen_forkserver.py", line 47, in _launch
    reduction.dump(process_obj, buf)
  File "/opt/anaconda3/envs/erl/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'module' object
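For context, a minimal sketch of the kind of edit described above, following the pattern of demo_DDPG_TD3_SAC.py. The import paths, the Config fields, and the env_args are assumptions and may differ between ElegantRL versions; the demo ships its own Pendulum wrapper, so gym.make is only a stand-in here.

```python
# Hypothetical reproduction sketch, not the reporter's exact script.
from elegantrl.agents import AgentMATD3  # or AgentMADDPG
from elegantrl.train.config import Config
from elegantrl.train.run import train_agent_multiprocessing

import gymnasium as gym  # stand-in env factory; the demo uses its own PendulumEnv

env_args = {
    'env_name': 'Pendulum-v1',
    'state_dim': 3,
    'action_dim': 1,
    'if_discrete': False,
}

args = Config(agent_class=AgentMATD3, env_class=gym.make, env_args=env_args)
args.gpu_id = 0

if __name__ == '__main__':
    # Fails while the worker processes are being started and their arguments
    # are pickled: TypeError: cannot pickle 'module' object (see traceback above).
    train_agent_multiprocessing(args)
```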

Yonv1943 commented 1 year ago

After updating the code from a single env to vectorized envs (Pull Request: single env to vectorized env), we have only adapted the single-agent RL algorithms; the adaptation of the multi-agent algorithms is not yet complete.
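The "cannot pickle 'module' object" error itself is generic Python multiprocessing behaviour: with the spawn or forkserver start methods, every object handed to a child process is pickled, and pickle refuses to serialize module objects. A short standalone sketch (not ElegantRL code; AgentHoldingModule is a hypothetical stand-in):

```python
# Illustrates the failure mode only; it does not reproduce ElegantRL's internals.
import pickle
import numpy as np  # any module works for the demonstration


class AgentHoldingModule:
    """Mimics an agent that keeps a module object as an instance attribute."""
    def __init__(self):
        self.np = np  # a module stored on the instance makes it unpicklable


agent = AgentHoldingModule()
try:
    # The same serialization step multiprocessing performs inside process.start().
    pickle.dumps(agent)
except TypeError as err:
    print(err)  # -> cannot pickle 'module' object
```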

We will continue updating the multi-agent (MARL) algorithm code after completing the H-term algorithm. Thank you for the reminder.

niceban commented 1 year ago

When could your team fix these bugs?