XanaduAI / QHack2022

QHack—The one-of-a-kind quantum computing hackathon
https://qhack.ai

Predicting Stock Prices using Quantum Long-Short Term Memory. #37

Closed DikshantDulal closed 2 years ago

DikshantDulal commented 2 years ago

Team Name:

UncertaintyHack

Project Description:

We implement a hybrid quantum-classical QLSTM model by incorporating variational quantum layers into a classical LSTM, improving its efficiency and trainability for stock price prediction.

Introduction

Stock price prediction is one of the most rewarding problems in modern finance, where accurate forecasting of future stock prices can yield significant profit and reduce risk. Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture applicable to a broad range of problems that involve analyzing or classifying sequential data, and it can be used to predict future stock prices from historical data sequences. Recent studies have shown that the efficiency and trainability of LSTM can be improved by moving to a hybrid quantum-classical model: the resulting QLSTM has been shown to learn significantly more information after the first training epoch than its classical counterpart. We therefore implement a variational quantum-classical hybrid algorithm, combining machine learning techniques with the LSTM framework, to predict stock price movement.
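
To make the idea concrete, below is a minimal sketch of one LSTM gate realized as a variational quantum circuit (VQC) using PennyLane's `TorchLayer`. The choice of `AngleEmbedding`, `BasicEntanglerLayers`, and four qubits is an assumption made for illustration; the circuits in our repo may differ.

```python
import pennylane as qml
import torch

n_qubits = 4   # qubits per gate circuit (illustrative choice)
n_layers = 2   # variational layers (illustrative choice)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def vqc(inputs, weights):
    # Encode the classical input vector into single-qubit rotations
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Trainable entangling layers play the role of the gate's weight matrix
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

# TorchLayer turns the QNode into a drop-in replacement for nn.Linear,
# e.g. the forget gate becomes f_t = sigmoid(vqc([h_{t-1}, x_t]))
weight_shapes = {"weights": (n_layers, n_qubits)}
forget_gate = qml.qnn.TorchLayer(vqc, weight_shapes)

x = torch.rand(1, n_qubits)          # stand-in for concatenated [h_{t-1}, x_t]
f_t = torch.sigmoid(forget_gate(x))  # forget-gate activation in (0, 1)
```

The same pattern replaces the linear layers inside the input, update, and output gates, which is where the quantum variational layers enter the otherwise classical LSTM cell.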

Methods

First, we prepare features for the stock price by collecting technical indicators for a given stock, data on correlated assets, and sentiment analysis of related market information sources (news, social media, reports). Next, we test a classical multi-input LSTM on the prepared dataset and analyze its ability to predict prices. Finally, we modify the classical LSTM by introducing variational quantum circuits and compare its results to those of the classical LSTM.
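
As a sketch of the sequence-preparation step, the helper below turns a scaled feature series into fixed-length training windows for the LSTM. The function name `make_sequences` and the window size of 8 are hypothetical, not taken from the project code.

```python
import numpy as np

def make_sequences(series, window=8):
    """Split a 1-D feature series into (window -> next value) pairs
    suitable for LSTM training."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i : i + window])  # input window
        y.append(series[i + window])      # value to predict
    return np.array(X), np.array(y)

# Stand-in for a min-max scaled closing-price series
prices = np.sin(np.linspace(0, 10, 200))
X, y = make_sequences(prices)
print(X.shape, y.shape)  # (192, 8) (192,)
```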

Viability of the algorithm

QLSTM shows better prediction accuracy (roughly 10 times lower RMSE) and trainability than classical LSTM, as shown by Fang et al. (2020). Furthermore, QLSTM requires relatively few qubits, and the depth of the variational quantum circuit grows linearly with both the number of variational layers and the number of qubits. An iterative optimization technique updates the variational parameters of the quantum circuit employed in the hidden layer of the neural network in such a way that each learned parameter can effectively absorb the surrounding noise without any knowledge of the noise's properties. Low gate depth per run and a low qubit count, together with this noise tolerance, make the variational technique viable on NISQ (Noisy Intermediate-Scale Quantum) era devices.
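
As a quick illustration of the linear-depth claim, the sketch below draws the same template at increasing layer counts; `BasicEntanglerLayers` is assumed here as a stand-in for the actual ansatz.

```python
import numpy as np
import pennylane as qml

def make_circuit(n_qubits):
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit(weights):
        qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))

    return circuit

# Each layer contributes one rotation per qubit plus a ring of CNOTs,
# so gate count and depth scale linearly with the number of layers.
circuit = make_circuit(4)
for n_layers in (1, 2, 3):
    print(qml.draw(circuit)(np.zeros((n_layers, 4))))
```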

Source code:

Please refer to our GitHub repo here

Resource Estimate:

For better trainability of the QLSTM, we need to try different numbers of variational layers and qubits, and hence different circuit depths. As such, we will run more trials and use IBM quantum computers with greater qubit counts.

Challenges:

  1. IBM Qiskit Challenge
  2. Amazon Braket Challenge
  3. Quantum Finance Challenge
  4. Quantum Entrepreneur Challenge
  5. Hybrid Algorithms Challenge
isaacdevlugt commented 2 years ago

Thank you for your Power Up submission! As a reminder, the final deadline for your project is February 25 at 17h00 EST. Submissions should be done here: https://github.com/XanaduAI/QHack/issues/new?assignees=&labels=&template=open_hackathon.md&title=%5BENTRY%5D+Your+Project+Title

This issue will be closed shortly.

Good luck!