loocurse / quant-crunch

Project 32 SUTD Capstone

Integrate API to algo #4

Open Banila48 opened 3 years ago

Banila48 commented 3 years ago

I think Piau wants to see some live results soon, and we can't keep showing him backtested results. So in the next two or so weeks, we have to work on this real-time visualization while still working on the various algos.

The rough process will be 1) Data collection from API 2) Data cleaning for respective Algos 3) Predicting buy/sell decisions with live data as test data.

^ We won't be training our model in the steps above.

When the algo starts to underperform a buy-and-hold model, we will retrain it with more historical data (previous live data).

Any thoughts on how we can do this? Is the rough process correct? I'm not too sure how to do it live.
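For the retraining trigger mentioned above, a minimal sketch of one way to check it, assuming we compare cumulative returns over a recent live window against buy-and-hold (the tolerance and dummy numbers are purely illustrative):

```python
import numpy as np

def should_retrain(strategy_returns, asset_returns, tolerance=0.0):
    """True if the strategy's cumulative return over the recent live
    window lags a buy-and-hold baseline by more than `tolerance`."""
    strategy_cum = np.prod(1 + np.asarray(strategy_returns)) - 1
    buy_hold_cum = np.prod(1 + np.asarray(asset_returns)) - 1
    return strategy_cum < buy_hold_cum - tolerance

# Dummy per-period returns, for illustration only
strategy = [0.001, -0.002, 0.0005, -0.003]
asset = [0.002, 0.001, 0.003, 0.004]
print(should_retrain(strategy, asset, tolerance=0.005))  # True -> retrain on updated history
```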

loocurse commented 3 years ago

Can we pick just one algorithm that works and run it through this whole process? We will train our algo in a separate development environment and push the trained model to do the predictions, for simplicity at the start.

Also, we need to consider what the algorithm's outputs are. Some possibilities:

What possible outputs can your algorithm give you? @Mickey1356 @Potatoofan @ktingyew @yinling-tyl

Banila48 commented 3 years ago

We can do that by saving the model in pickle format. We can start off with two algorithms (they don't need to be fully working for now); I just want to see how the buy/sell decisions visualize live alongside the cumulative returns.
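For reference, a minimal sketch of that pickle workflow, using a scikit-learn classifier and dummy data purely as stand-ins for whichever algo we pick:

```python
import pickle
from sklearn.linear_model import LogisticRegression  # stand-in for the actual algo

# --- development environment: train and save ---
X_train = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [0.5, 0.3]]  # dummy features
y_train = [0, 1, 0, 1]                                       # 0 = sell, 1 = buy
model = LogisticRegression().fit(X_train, y_train)

with open("trained_algo.pkl", "wb") as f:
    pickle.dump(model, f)

# --- live environment: load and predict on incoming data ---
with open("trained_algo.pkl", "rb") as f:
    live_model = pickle.load(f)

print(live_model.predict([[0.25, 0.35]]))  # buy/sell decision for the latest bar
```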

I think most algo outputs would be a forecasted price, and they would then buy/sell based on that or on the current balance they have.

leo-dh commented 3 years ago

I think these steps can be done by setting up a backend server.

The rough process will be

1. Data collection from API

The server will first pull data periodically from whatever API we use through HTTP requests (a rough sketch follows below).

2. Data cleaning for respective Algos

The obtained data would then be parsed accordingly to fit the input of the algorithm and stored in memory/db/local file on the server.

3. Predicting buy/sell decisions with live data as test data.

Not so sure about this step. Depends on the format of the model.

Ultimately, the web interface would then pull data from the backend server to get the 'live' data + algorithm predictions.
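A rough sketch of how steps 1–3 could look as a background polling loop on that server; the API URL, payload fields, and parsing are placeholders until we settle on a data source and model format:

```python
import time
import pickle
import requests

POLL_INTERVAL = 60 * 60  # seconds; adjust to whatever bar interval we settle on
DATA_URL = "https://example.com/api/prices"  # placeholder until we pick an API

with open("trained_algo.pkl", "rb") as f:  # model trained in the dev environment
    model = pickle.load(f)

latest = {"timestamp": None, "signal": None}  # in-memory store the web interface reads

def parse_for_algo(raw):
    """Step 2: reshape the raw API payload into the algo's feature vector.
    Placeholder logic; each algo would supply its own version."""
    return [raw["open"], raw["high"], raw["low"], raw["close"]]

def poll_once():
    raw = requests.get(DATA_URL, timeout=10).json()  # step 1: data collection
    features = parse_for_algo(raw)                   # step 2: cleaning/parsing
    latest["timestamp"] = raw.get("timestamp")
    latest["signal"] = model.predict([features])[0]  # step 3: buy/sell decision

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(POLL_INTERVAL)
```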

Server frameworks to consider:

loocurse commented 3 years ago

Sounds good. Can we set up the backend server using Flask? Since it's in Python, it's easier to manipulate data, and the others can import their scripts there if necessary. Django might be a bit heavy for our use.
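For a sense of scale, a minimal Flask sketch of the kind of endpoint the frontend could poll; the route name and payload shape are assumptions, not anything we've decided:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In the real server this dict would be updated by the polling/prediction code
latest = {"timestamp": None, "signal": None}

@app.route("/api/latest")
def latest_prediction():
    # Frontend polls this endpoint for the 'live' data + algorithm prediction
    return jsonify(latest)

if __name__ == "__main__":
    app.run(debug=True)
```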

leo-dh commented 3 years ago

I can set up a simple Flask server. Where should we put it? The same repo as the frontend or another repo?

loocurse commented 3 years ago

I think we can add it to this repo. If we mix it up with the frontend one, it'll mess up the linting.

Banila48 commented 3 years ago

Minutes from 2/6 Wed Meeting

Agenda:

1) Algo Outputs:

Consensus: probability outputs > predicted prices, because probability is more robust; predicted prices still work off a lagging indicator.

Discussion:

Roy mentioned that we can't run away from predicting prices, because even TA indicators like Fib Extensions and the Ichimoku Cloud rely on lagging indicators being converted into leading indicators by predicting future momentum.

Lucas agreed but concluded that price shouldn't be on the chart. Outcome: we don't show predicted prices on the platform, just buy/sell signals.

Test Period: Aug 2020 - May 2021
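For illustration only, mapping a probability output to the buy/sell signals we would show could be as simple as the following; the 0.5 threshold is an assumption, not something decided in the meeting:

```python
def probability_to_signal(p_up, threshold=0.5):
    """Convert the model's probability of an upward move into a platform signal."""
    return "buy" if p_up >= threshold else "sell"

print(probability_to_signal(0.73))  # -> buy
print(probability_to_signal(0.41))  # -> sell
```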

2) API

Discussion:

Evan mentioned that we need to get the prices for when the market is closed, from 4am to 9pm (until the next market-open cycle).

Lucas asked Roy to check which API he is using, its call and rate limits, as well as the different pricing tiers. The group will decide which plan to get based on Roy's research.

Lucas and Ding Hao will be using Flask

3) Roadmap for next few weeks

Discussion:

Roy suggested doing the integration and building the model concurrently.

Lucas asked how hard it is to go deep on the model.

Roy said it's hard for RL because the math behind the buy/sell decisions would be time-consuming to understand if one were to cross-check with research papers.

Jeremy said it's possible given unlimited time, but he recommended setting a hard timeline. Examples of going deep: different forms of LSTM, customizing your layers, changing the underlying architecture of the model.

Evan and YL will be working on models that use TA as their foundation, because TA has been proven to work and Piau just wanted some results (the means don't matter).

Jeremy said that at the end of the day it comes down to meeting school expectations or Piau's expectations.

Outcome:

mickey1356 commented 3 years ago

Other Stuff

1) Which data API are we going to use?

1.1) What does the API's hourly call (since we will be using hourly data) look like? i.e. if we call at 11.30am, which time period does it return, etc.

loocurse commented 3 years ago

Other Stuff

  1. Which data API are we going to use?

We can use either Alpha Vantage or Polygon.io. Alpha Vantage seems easier to use; Polygon is quite large-scale.

1.1) What does the API's hourly call (since we will be using hourly data) look like? i.e. If we call at 11.30am, which time period does it return, etc.

We're going with data at 15-minute intervals. At any given time, our model should make a prediction at time t to either buy or sell. If it's 11:35 am, for example, our model will still use the 11:30 am results.
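A hedged sketch of pulling 15-minute bars and snapping to the most recent completed bar, shown with Alpha Vantage's TIME_SERIES_INTRADAY endpoint since that is one of the two candidates (the symbol and API key are placeholders):

```python
import requests

API_KEY = "YOUR_KEY"  # placeholder
SYMBOL = "SPY"        # placeholder

url = (
    "https://www.alphavantage.co/query"
    f"?function=TIME_SERIES_INTRADAY&symbol={SYMBOL}&interval=15min&apikey={API_KEY}"
)
data = requests.get(url, timeout=10).json()
bars = data["Time Series (15min)"]

# Keys are timestamps like "2021-06-02 11:30:00"; the latest completed
# 15-minute bar is simply the max key, so at 11:35 am we still use the 11:30 am bar.
latest_ts = max(bars)
print(latest_ts, bars[latest_ts]["4. close"])
```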

yinling-tyl commented 3 years ago

Best case scenario given that you buy and sell at the best point.