AI4Finance-Foundation / ElegantRL

Massively Parallel Deep Reinforcement Learning. 🔥
https://ai4finance.org

Question about GPU Configuration for MAPPO Training #422

Open changyu-hu opened 3 weeks ago

changyu-hu commented 3 weeks ago

Hi,

I hope you’re doing well! I’m currently working with the MAPPO implementation in your repository and have a question regarding GPU configuration for optimal training performance.

Current Setup: My environment has an average response time of about 10 seconds per step. I'm trying to decide between one high-performance GPU and multiple lower-performance GPUs for training.

Question: Based on your experience with this implementation, which option would you recommend? Given the slow environment response, would a single high-performance GPU actually train faster, or could multiple lower-performance GPUs offer an advantage in this scenario?

Additional Context: I'm also interested in how parallelizing the environment might impact the decision and the overall training process.

Thank you for your assistance!

Best regards,

Yonv1943 commented 1 week ago

I recommend using the MAPPO algorithm from this open-source project (no vested interest, purely a recommendation): https://github.com/agi-brain/xuance/tree/master/examples/mappo

Yonv1943 commented 1 week ago

Since updating the vectorized env and multi-GPU support, as of 2024-10-12 I still haven't figured out how to also fit the MADDPG, MAPPO, and QMIX algorithms into the ElegantRL library.