This is my Final Year Project, completed during my final year of university studies. The project explores the application of Deep Reinforcement Learning (DRL) to stock trading, with a focus on optimizing trading strategies. In other words, it is a trading bot that learns to trade autonomously through trial and error on past experience. The scope of the research covers 10 years of historical stock data for three NASDAQ-listed companies: Amazon, Google, and Microsoft. The chosen model is a Deep Q-Network (DQN) with a Long Short-Term Memory (LSTM) neural network architecture. The model achieved a cumulative return (CR) of 82.73%, a Sharpe ratio (SR) of 0.93, and a maximum drawdown (MDD) of -24.15%. This research contributes to the optimization of stock trading strategies through DRL and provides a structured approach to addressing the complexities of financial markets.
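The three reported metrics can all be derived from a series of portfolio values over the trading period. The sketch below shows one standard way to compute them with NumPy; the function name and the sample values are illustrative, not taken from the project code:

```python
import numpy as np

def evaluate(portfolio_values, risk_free_rate=0.0, periods_per_year=252):
    """Compute cumulative return, annualized Sharpe ratio, and max drawdown."""
    v = np.asarray(portfolio_values, dtype=float)
    cumulative_return = v[-1] / v[0] - 1.0
    daily_returns = np.diff(v) / v[:-1]
    excess = daily_returns - risk_free_rate / periods_per_year
    sharpe = np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)
    running_max = np.maximum.accumulate(v)          # best value seen so far
    max_drawdown = ((v - running_max) / running_max).min()  # reported as a negative number
    return cumulative_return, sharpe, max_drawdown

# Toy example: five days of portfolio values.
cr, sr, mdd = evaluate([100, 104, 102, 110, 108])
print(f"CR={cr:.2%}  SR={sr:.2f}  MDD={mdd:.2%}")
```

Note that MDD is negative by convention here, matching the -24.15% figure above.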
The primary objectives of this project are to:
1. Apply Deep Reinforcement Learning to stock trading and the optimization of trading strategies.
2. Develop a Deep Q-Network model with an LSTM architecture that learns a trading policy from historical data.
3. Evaluate the trained agent on 10 years of historical data for Amazon, Google, and Microsoft using cumulative return, Sharpe ratio, and maximum drawdown.
The Deep Reinforcement Learning (DRL) model used in this project is based on the Deep Q-Network (DQN) architecture with a Long Short-Term Memory (LSTM) neural network. This architecture was chosen to capture the temporal dependencies in stock market data, as historical price movements often exhibit time-series characteristics. The model aims to learn an optimal policy that maximizes the expected cumulative return over a trading period by continuously interacting with the simulated trading environment.
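The learning rule at the core of any DQN is the Bellman (temporal-difference) target. The sketch below illustrates that update for a single transition; the discrete buy/hold/sell action space and all names are assumptions for illustration, and the plain arrays here stand in for the LSTM Q-network's outputs:

```python
import numpy as np

ACTIONS = ["buy", "hold", "sell"]  # hypothetical discrete action space
GAMMA = 0.99                       # discount factor for future rewards

def td_target(reward, q_next, done):
    """Bellman target r + gamma * max_a' Q(s', a'); no bootstrap at episode end."""
    return reward + (0.0 if done else GAMMA * np.max(q_next))

def td_loss(q_pred, action_idx, reward, q_next, done):
    """Squared TD error for one transition; in the full model this error is
    backpropagated through the LSTM Q-network."""
    target = td_target(reward, q_next, done)
    return (q_pred[action_idx] - target) ** 2

# One illustrative transition: the agent bought and earned a small reward.
q_pred = np.array([1.2, 0.4, -0.3])  # Q(s, ·) from the online network
q_next = np.array([0.9, 1.1, 0.2])   # Q(s', ·) from the target network
loss = td_loss(q_pred, action_idx=0, reward=0.05, q_next=q_next, done=False)
print(f"TD loss: {loss:.6f}")
```

Maximizing expected cumulative return corresponds to driving this TD error toward zero across many sampled transitions.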
1. First_Notebook.ipynb - the first notebook of the project (run this one first)
2. Second_Notebook.ipynb - the second notebook (run after the first has finished)
3. AMZN.csv, GOOGL.csv, MSFT.csv - 10 years of historical stock data for Amazon, Google, and Microsoft
4. Stock Trading using Deep Reinforcement Learning.pdf - the full project report
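The CSV files are assumed here to follow the common daily OHLCV layout (Date, Open, High, Low, Close, Volume); loading one and deriving daily returns can be sketched with pandas as below. The inline sample data is a stand-in for the real files, and the numbers are made up:

```python
import io
import pandas as pd

# Stand-in for AMZN.csv; the real files are assumed to share this layout.
sample_csv = io.StringIO(
    "Date,Open,High,Low,Close,Volume\n"
    "2014-01-02,19.85,19.90,19.72,19.75,58000000\n"
    "2014-01-03,19.77,19.80,19.30,19.36,66000000\n"
)

df = pd.read_csv(sample_csv, parse_dates=["Date"], index_col="Date")
df["Return"] = df["Close"].pct_change()  # day-over-day return, a common input feature
print(df)
```

If the real files use different column names, the `parse_dates` and `Close` references would need to be adjusted accordingly.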
To execute the notebooks, follow these steps:
Setup Environment:
Install the required Python packages:
pip install numpy pandas matplotlib scikit-learn
Depending on the notebooks' imports, a deep learning framework such as TensorFlow may also be needed for the LSTM model.
Download the Datasets:
Place AMZN.csv, GOOGL.csv, and MSFT.csv where the notebooks can read them (e.g. the same directory), or update the file paths accordingly.
Running the Notebooks:
Open and run the First_Notebook.ipynb file first. Then run Second_Notebook.ipynb after finishing the first notebook.