JJJerome / mbt_gym

mbt_gym is a module which provides a suite of gym environments for training reinforcement learning (RL) agents to solve model-based high-frequency trading problems such as market-making and optimal execution. The module is set up in an extensible way to allow the combination of different aspects of different models. It supports highly efficient implementations of vectorized environments to allow faster training of RL agents.

Fantastic project #32

Closed traderpedroso closed 1 year ago

traderpedroso commented 1 year ago

Thank you so much for sharing your work! It would be great to add a notebook showing how to feed it with external data instead of stochastically generated data.

rahulsavani commented 1 year ago

This repository is designed specifically for learning in environments defined by mathematical models of limit order books. These models are typically highly stylised and, for example, do not even have an explicit representation of a limit order book with multiple levels. As such, this repo is definitely not a good starting point if your interest is in working with real data.
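
To illustrate the design, here is a rough sketch of how an environment is put together in mbt_gym: its dynamics come from stochastic model components, not from a data feed. The module paths, class names, and constructor arguments below are assumptions based on the repo's layout and may not match the current code exactly.

```python
# Rough sketch, not a verbatim quickstart: module paths, class names and
# constructor arguments (e.g. num_trajectories) are assumptions and may
# differ between versions of the repo.
from mbt_gym.gym.TradingEnvironment import TradingEnvironment
from mbt_gym.stochastic_processes.midprice_models import BrownianMotionMidpriceModel
from mbt_gym.stochastic_processes.arrival_models import PoissonArrivalModel
from mbt_gym.stochastic_processes.fill_probability_models import ExponentialFillFunction

# The environment's dynamics come entirely from stochastic model components:
# a midprice process, an order-arrival process and a fill-probability model.
# There is no slot for plugging in a historical data feed.
env = TradingEnvironment(
    midprice_model=BrownianMotionMidpriceModel(),
    arrival_model=PoissonArrivalModel(),
    fill_probability_model=ExponentialFillFunction(),
    num_trajectories=1000,  # vectorised: simulates many trajectories in parallel
)

obs = env.reset()
# Training then proceeds via the usual gym step loop, with actions and
# observations batched across the parallel trajectories.
```
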

Designing a simulator of a limit order book that uses historical data and combines it with orders from one or more trading agents (e.g. a market maker one is trying to train) brings its own challenges, which are very different from those addressed by mbt_gym. For that you can check out our other repo:

https://github.com/JJJerome/rl4mm

If that looks interesting then you could also look at our paper that uses it:

Market Making with Scaled Beta Policies. Joseph Jerome, Gregory Palmer, Rahul Savani. In Proc. of ICAIF 2022. https://arxiv.org/abs/2207.03352

That paper builds hand-crafted market-making strategies, but the repo already supports RL (and, as its name suggests, enabling RL for limit order book trading using historical data was always the design intention).