saeed349 / Deep-Reinforcement-Learning-in-Trading

This repository provides the code for a Reinforcement Learning trading agent, together with a trading environment that works with both simulated and historical market data. It was inspired by the OpenAI Gym framework.

Refactor the agents to PyTorch, replace stockstats with TA-Lib in TA-Gen.py, restructure main.py #7

Open labrinyang opened 10 months ago

labrinyang commented 10 months ago

Refactored the code in double_dqn.py, dqn.py, and duelling_dqn.py for PyTorch compatibility, integrated TA-Lib into TA-Gen.py (since stockstats is not working yet), and restructured main.py for clarity.
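For context on what double_dqn.py has to get right: in double DQN the online network picks the next action and the target network scores it, which decouples selection from evaluation. A dependency-free sketch of just that target computation (the Q-values, reward, and discount below are made-up numbers for illustration, not taken from the repo's networks):

```python
def double_dqn_target(reward, gamma, q_online_next, q_target_next, terminated):
    """y = r + gamma * Q_target(s', argmax_a Q_online(s', a)); just r if terminal."""
    if terminated:
        return reward
    # Online network chooses the action, target network evaluates it.
    best_action = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[best_action]

# Online net prefers action 1 (3.0 > 0.5); target net scores action 1 at 2.0.
target = double_dqn_target(1.0, 0.9, [0.5, 3.0], [4.0, 2.0], False)
print(round(target, 4))  # 2.8  (i.e. 1.0 + 0.9 * 2.0)
```

In a PyTorch version the same logic appears as an `argmax` over the online network's output followed by a `gather` on the target network's output, but the arithmetic is identical.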

labrinyang commented 10 months ago

This repository has been a popular resource for beginners learning to use reinforcement learning in quantitative trading. However, the code hasn't been updated for a considerable time. I've refactored the agents into a PyTorch version to improve their universality and stability. Additionally, I've replaced the stockstats package used in TA-Gen.py with TA-Lib, as TA-Lib is more widely used and user-friendly. I've also restructured main.py to enhance its clarity. While I have thoroughly reviewed my code, it may still not fully meet the repository's standards. If any issues arise, please inform me, and I will address them promptly. Finally, I would like to express my gratitude for your code, which has introduced me to the world of reinforcement learning in quantitative trading.😁🎶
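For readers comparing the two libraries: stockstats computes indicators lazily through magic column names on a wrapped DataFrame (e.g. `df['close_14_sma']`), while TA-Lib exposes plain functions over price arrays (e.g. `talib.SMA(close, timeperiod=14)`). A minimal sketch of the underlying indicator math in pure Python, so it runs without either dependency (the function name, window length, and prices here are illustrative, not taken from TA-Gen.py):

```python
def sma(values, period):
    """Simple moving average; None until the window fills
    (analogous to the leading NaNs TA-Lib emits)."""
    out = []
    for i in range(len(values)):
        if i + 1 < period:
            out.append(None)
        else:
            window = values[i + 1 - period : i + 1]
            out.append(sum(window) / period)
    return out

closes = [10.0, 11.0, 12.0, 11.5, 12.5]
print(sma(closes, 3))  # [None, None, 11.0, 11.5, 12.0]
```

The real TA-Lib call returns the same rolling averages as a NumPy array, with NaN in place of the leading `None` values.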

saeed349 commented 10 months ago

Thanks @2665477495 for doing this, and I am happy that you found this useful. A few thoughts on the natural next steps for the project:

  1. OpenAI Gym is obsolete and the latest updates are going into its fork, Gymnasium (https://gymnasium.farama.org/). So I would port the environment to that framework, which would offer you more features.
  2. Instead of developing agents yourself, use a framework like SB3 (https://stable-baselines3.readthedocs.io). If you are inclined to develop the models for the sake of learning, it makes sense to build them from scratch. But a fair warning: after DDQN, the models get quite complex. Look at the implementation of something like A2C or PPO and you will understand how daunting it can be. Sticking to something like SB3 lets you focus on framing the problem rather than the implementation nightmare of the models.
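For anyone attempting the Gymnasium port, the main API change from classic Gym is that `reset` returns `(obs, info)` and `step` returns `(obs, reward, terminated, truncated, info)`, splitting the old `done` flag in two. A toy sketch of a trading environment in that shape, in pure Python so it runs without Gymnasium installed (it does not subclass `gymnasium.Env`, and the prices, observation layout, and action encoding are made up for illustration):

```python
class MiniTradingEnv:
    """Toy long/flat trading env following the Gymnasium reset/step signatures:
    reset -> (obs, info); step -> (obs, reward, terminated, truncated, info)."""

    def __init__(self, prices):
        self.prices = prices
        self.t = 0
        self.position = 0  # 0 = flat, 1 = long

    def reset(self, seed=None, options=None):
        self.t = 0
        self.position = 0
        return self._obs(), {}

    def step(self, action):
        # action: 0 = go flat, 1 = go long
        self.position = action
        price_change = self.prices[self.t + 1] - self.prices[self.t]
        reward = self.position * price_change  # PnL of holding through the step
        self.t += 1
        terminated = self.t >= len(self.prices) - 1  # out of price data
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        return (self.prices[self.t], self.position)


env = MiniTradingEnv([100.0, 101.0, 99.0, 102.0])
obs, info = env.reset()
total, terminated = 0.0, False
while not terminated:
    obs, reward, terminated, truncated, info = env.step(1)  # always long
    total += reward
print(total)  # 2.0 = (101-100) + (99-101) + (102-99)
```

A real port would subclass `gymnasium.Env` and declare `observation_space` and `action_space`, which is what lets SB3 consume the environment directly.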

labrinyang commented 10 months ago

> Thanks @2665477495 for doing this, and I am happy that you found this useful. A few thoughts on the natural next steps for the project:
>
> 1. OpenAI Gym is obsolete and the latest updates are going into its fork, Gymnasium (https://gymnasium.farama.org/). So I would port the environment to that framework, which would offer you more features.
> 2. Instead of developing agents yourself, use a framework like SB3 (https://stable-baselines3.readthedocs.io). If you are inclined to develop the models for the sake of learning, it makes sense to build them from scratch. But a fair warning: after DDQN, the models get quite complex. Look at the implementation of something like A2C or PPO and you will understand how daunting it can be. Sticking to something like SB3 lets you focus on framing the problem rather than the implementation nightmare of the models.

Thank you for your advice😋. I will consider building upon SB3 and focus on conceptualizing the idea rather than dealing with complex coding.🍻 Your guidance has helped me move away from rigid frameworks that require coding every aspect of the model from scratch. Additionally, I plan to adapt the code for Gymnasium.