ctubio / tribeca

Self-hosted crypto trading bot (automated high frequency market making) in node.js, angular, typescript and c++

Quoting in Blocks #104

Closed beegmon closed 7 years ago

beegmon commented 7 years ago

I wanted to float an idea by everyone here.

Currently I use the AK-47 quoting mode. It works well a majority of the time (especially on the ask side of the book), but the bid side rarely, if ever, has more than 1 bullet open at any given time. I think this could be improved slightly by quoting in blocks based on the bullet number and bullet range. Maybe "shotgun" would be an appropriate name for this type of quoting style. In any case, here is what I am getting at.

Say the Fair Value is currently $2,500, with:
Max Number of bullets set to: 5
Bullet Range set to: 1.0
Bid Size/Ask Size set to: .1
Ping/Pong Width set to: .55

For each quote operation on the bid side do the following:
Compute bid quote 1 (Bq1): Bq1 = 2500 - .55 (Bq1 = 2499.45)
Compute bid quote 2 (Bq2): Bq2 = Bq1 - 1.0 (Bq2 = 2498.45)
Compute bid quote 3 (Bq3): Bq3 = Bq2 - 1.0 (Bq3 = 2497.45)
Compute bid quote 4 (Bq4): Bq4 = Bq3 - 1.0 (Bq4 = 2496.45)
Compute bid quote 5 (Bq5): Bq5 = Bq4 - 1.0 (Bq5 = 2495.45)

For each quote operation on the ask side do the following:
Compute ask quote 1 (Aq1): Aq1 = 2500 + .55 (Aq1 = 2500.55)
Compute ask quote 2 (Aq2): Aq2 = Aq1 + 1.0 (Aq2 = 2501.55)
Compute ask quote 3 (Aq3): Aq3 = Aq2 + 1.0 (Aq3 = 2502.55)
Compute ask quote 4 (Aq4): Aq4 = Aq3 + 1.0 (Aq4 = 2503.55)
Compute ask quote 5 (Aq5): Aq5 = Aq4 + 1.0 (Aq5 = 2504.55)

These quotes are then submitted to the market.
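For concreteness, the two ladders above can be sketched in TypeScript (a hypothetical helper for illustration, not tribeca's actual quoting code; `quoteLadder` and its parameter names are made up):

```typescript
// Hypothetical helper: build a ladder of bid/ask quotes around fair value,
// placing the first bullet at width and stepping each further bullet by the
// bullet range.
function quoteLadder(
  fairValue: number,   // current fair value (FV)
  width: number,       // ping/pong width applied to the first bullet
  bulletRange: number, // distance between consecutive bullets
  bullets: number      // max number of bullets per side
): { bids: number[]; asks: number[] } {
  const bids: number[] = [];
  const asks: number[] = [];
  for (let i = 0; i < bullets; i++) {
    bids.push(fairValue - width - i * bulletRange);
    asks.push(fairValue + width + i * bulletRange);
  }
  return { bids, asks };
}

// Using the numbers from the example above:
const { bids, asks } = quoteLadder(2500, 0.55, 1.0, 5);
// bids ≈ 2499.45, 2498.45, 2497.45, 2496.45, 2495.45
// asks ≈ 2500.55, 2501.55, 2502.55, 2503.55, 2504.55
```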

When the FV changes (say it moves to $2,505.00), new quotes are computed as in the example above and submitted to the market, while the old quotes are canceled.

This simple example also doesn't take into account STDEV quote protection (which would be added to Bq1 and Aq1 before computing the following quotes) or the BW (hollow) protection that you would want to apply to each of the quotes being submitted to the market.

The overall effect of this type of quoting style is that several orders are maintained on both sides of the book, at specific intervals (the bullet range), at all times. The goal is to set up "traps" on each side of the book that are tripped when a large order comes in on the ask/bid side. These larger orders match several orders on the side of the book they are acting on. If we can layer several orders on each side of the book, at specific intervals, we can hope to catch more of those big orders, which is generally a good thing.

ctubio commented 7 years ago

I'm sorry, I already tried this but ended up losing many orders while being ignored by the APIs because of too many requests. So AK-47 does not force X bullets at all times from millisecond 0; it just places orders as usual, but without canceling previous ones unless they are worse or the X bullet limit is already hit.

I'm sorry to close this, because I really like the idea; I just couldn't make it work after having attempted it several times in the past.

ctubio commented 7 years ago

If one side has fewer bullets than the limit, it may be because that side is not moving much, so the app does not decide to place better orders (that's the only explanation I have).

beegmon commented 7 years ago

Well, for example, once the FIX implementation for GDAX is in, we will have 30 requests per second, per order action type.

So that means we can open 30 orders per second, while closing 30 orders per second at the same time.

Given that, this block quoting strategy should be totally viable, as you will likely not run into request limits if the bullet number is reasonable (say 5 to 10).

For Websocket APIs this should work as well since the reqs/sec is very high on most websocket APIs.

So maybe it's still worth having this type of strategy for the user to choose, if they understand and know their exchange can support it (via websocket or FIX protocols, where the req/s limit is very high) and tribeca supports those faster/higher-throughput protocols for that specific exchange.

For extra protection you could increment a counter for each request. Then, every 5 seconds or so, compare that counter to the known max req/s * 5 supported by the API endpoint on the exchange (which is coded into the gateway for that exchange in tribeca).

pseudo code

if counter >= (max_req * 5)
  Update Web GUI with message "Adjust Number of Bullets, you are exceeding the Max Req/s allowed"

else
  Do nothing...all is normal
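A runnable sketch of that counter check (class and method names are hypothetical, not tribeca's gateway API):

```typescript
// Hypothetical request-rate check: count every request sent, and at the end of
// each interval compare the count against the exchange's known limit.
class RequestRateMonitor {
  private counter = 0;

  constructor(
    private maxReqPerSec: number, // known max req/s for this endpoint
    private intervalSec: number = 5
  ) {}

  recordRequest(): void {
    this.counter++;
  }

  // Returns a warning message when the observed count exceeds the allowed
  // budget for the interval, or null when all is normal. Resets the counter
  // so the next interval starts fresh.
  check(): string | null {
    const budget = this.maxReqPerSec * this.intervalSec;
    const warning =
      this.counter >= budget
        ? "Adjust Number of Bullets, you are exceeding the Max Req/s allowed"
        : null;
    this.counter = 0;
    return warning;
  }
}
```

The caller would invoke `check()` on a 5-second cadence and forward any warning to the web GUI.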

The other (and I think better) option is to batch requests so you never breach the maximum.

pseudo code

order_action_queue // a queue containing all the open and cancel order requests
                   // that the gateway needs to send to the exchange.
                   // This queue is First In First Out

warning_order_action_queue_length // the queue length at which we notify the user
                                  // that they have more requests than we can send
                                  // in a short period of time. We really want the
                                  // queue to be 0 after sending out a batch of
                                  // requests, but that likely won't happen, so we
                                  // set this to a reasonable number, maybe max_req * 2

max_req // the max number of reqs the gateway can send in 1 second

reqnum // counter for the number of requests sent

req // the actual request to send

while [ true ]
do
  if order_action_queue.length == 0 // the queue is empty, no order actions to take
    continue // immediately jump to the next iteration of the while [ true ] loop

  else
    reqnum = 0
    while [ reqnum < max_req && order_action_queue.length > 0 ]
    do
      req = order_action_queue.pop // take the next request off the front of the queue
      send_req_to_exchange (req) // send the request to the exchange
      reqnum++
    done

    if order_action_queue.length > warning_order_action_queue_length
      Send Message to Web GUI Stating "Queue is growing, reduce number of bullets"

    else
      do nothing...continue as normal

    sleep 1 // after waiting 1 second, start a new iteration of the while [ true ] loop

done
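One iteration of that batching loop could look like this in TypeScript (`drainBatch` and the `send` callback are invented names standing in for the gateway call; the real loop would run this once per second and sleep in between):

```typescript
type Request = { kind: "open" | "cancel"; payload: string };

// One iteration of the batching loop: send at most maxReq queued order actions
// (FIFO), then report whether the queue has grown past the warning threshold.
function drainBatch(
  queue: Request[],      // FIFO queue of pending open/cancel order actions
  maxReq: number,        // max requests the gateway may send per second
  warningLength: number, // queue length at which we warn the user
  send: (req: Request) => void // stand-in for the call that hits the exchange
): { sent: number; warn: boolean } {
  let sent = 0;
  while (sent < maxReq && queue.length > 0) {
    const req = queue.shift()!; // FIFO: take from the front of the queue
    send(req);
    sent++;
  }
  // A long queue after the batch means the user is generating more order
  // actions than this endpoint can absorb.
  return { sent, warn: queue.length > warningLength };
}
```

The caller sleeps one second between iterations and surfaces the warning in the web GUI, as in the pseudocode above.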

By batching requests you can ensure you never push more than the max_req per second allowed by an exchange for a specific API endpoint (in this case the order API). We also notify the user when they are trying to do too much, so they can reduce the number of orders they are attempting to manage given the throughput of the endpoint they are working with.

Batching works not only for high-speed/high-throughput endpoints like FIX or Websocket, but also for HTTP APIs. The only thing we need to know to make batching work is the request limits enforced by the exchange.

Most of the time request limits are made clear in documentation, especially the REST HTTP API limits. However, where no limit is specified, we do some testing to find a reasonable number: first setting max_req to, say, 3, then upping that limit with each test until we find a max, and finally backing off that max by 3-5 reqs per second to allow some breathing room. This final max_req per second is what is released into the master branch of tribeca for people to consume.

One final improvement you could make to the batching logic is to have it send orders as fast as it can (without tracking the number of requests, just while the queue length > 0), or until a request comes back with a status of "limit exceeded".

pseudo code of the success-status inner while loop

req_status = "success" // the status of the request that was just sent
                       // (seeded so the loop runs at least once)

while [ order_action_queue.length > 0 && req_status == "success" ]
do
  req = order_action_queue.inspect // get the item at the front of the queue, but don't pop it
  req_status = send_req_to_exchange (req) // send the request to the exchange
  if req_status == "success"
    order_action_queue.pop // remove the request we just sent, as it was successful
  // else: the request didn't succeed, so don't pop it off the queue;
  //       the loop condition then exits the loop
done

If the request fails because the limit is exceeded, it is not removed from the queue, so it is still the next one we process when we start sending requests again.
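A sketch of that retry-preserving inner loop (hypothetical names; the `send` callback stands in for the gateway call and returns "success" or "limit exceeded"):

```typescript
type Status = "success" | "limit exceeded";

// Retry-preserving drain: peek at the front of the queue, send, and only pop
// on success, so a rate-limited request stays first in line for the next pass.
function drainUntilRejected(
  queue: string[],                 // pending order actions (front = index 0)
  send: (req: string) => Status    // stand-in for the exchange call
): number {
  let sent = 0;
  let status: Status = "success";  // seeded so the loop runs at least once
  while (queue.length > 0 && status === "success") {
    const req = queue[0];          // inspect without popping
    status = send(req);
    if (status === "success") {
      queue.shift();               // safe to remove: the exchange accepted it
      sent++;
    }
    // on failure the request stays queued and the loop condition exits
  }
  return sent;
}
```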

The advantage of using the request status here is that it's self-limiting, needing no max_req variable to compare its request count against. Additionally, it avoids sending an order that isn't accepted and therefore getting it "lost".

However, it also means that we may be sending old order actions by the time the API finally starts accepting new requests from tribeca. There is a trade-off here, and it might be wiser to just send the request and not attempt to resend on failure, because the action will likely be stale once the API starts accepting new requests again.

pseudo code of the inner while loop for not attempting to resend old requests

req_status = "success" // the status of the request that was just sent
                       // (seeded so the loop runs at least once)

while [ order_action_queue.length > 0 && req_status == "success" ]
do
  req = order_action_queue.pop // pop the next request off the queue
  req_status = send_req_to_exchange (req) // send the request to the exchange
  // if the request succeeded the while loop goes to the next iteration;
  // if it failed the loop exits, and the request is lost
done
ctubio commented 7 years ago

Will try to do it again, but this time as something that can be enabled/disabled (not as a main behaviour of AK-47).

(Batching is cool, but not if it requires timers. Also, some exchanges support sending/canceling multiple orders (usually 3) in one call, but since it is not a widely supported feature, we stick to 1 single order in every call; we should also stick to the most widely supported request frequency [otherwise we open a new melon of variables xD], but maybe optionally having an aggressive one is not that bad, will try to do it again! Let's turn this thing electric - thank you brother, no problem, let's go :dancer: https://www.youtube.com/watch?v=UVrwzjtBHq0)

ctubio commented 7 years ago

(I would like to open the melon of variables for gateways once the app is stable; for now it is still under heavy development, and it helps to have all gateways equal where possible.)

beegmon commented 7 years ago

The batching based on request success status requires no timers. The only choice you have to make is whether you want to lose the old request, or keep it and retry the next time around.

Using the request status makes it auto-limiting as well, which means you don't need to worry about the request limit, or about having the most compatible request limit for all exchanges.

I agree that doing requests serially is probably best as well. Tracking multiple orders in a single cancel or send is hard and overcomplicated, in my opinion.

One last thought: you could do something like this as well, which combines the self-limiting nature of batching based on request status with not kicking the API while it's rejecting requests from tribeca.

pseudo code of the inner while loop using request status, with anti kicking-a-dead-horse and ignoring old actions

reqnum = 5 // the number of requests we sent during the last send loop. We seed this var with 5 so
           // on startup we run the inner while loop at least once, and 5 requests seems like a
           // good number to start with. reqnum is defined with a seed value on startup only
           // (before the while [ true ] loop from the previous examples)

while [ true ]
do
  reqcount = 0 // the number of requests we have sent so far in the current send loop iteration
  req_status = "success" // the status of the last request we sent (seeded so the loop runs at least once)
  req = '' // the actual request to send

  while [ order_action_queue.length > 0 && req_status == "success" && reqcount <= reqnum ]
  do
    req = order_action_queue.pop // pop the next request off the queue
    req_status = send_req_to_exchange (req) // send the request to the exchange
    if req_status == "success"
      reqcount++
    // else: the request failed; don't increment any counters, and the loop condition exits
  done

  reqnum = reqcount
  if reqnum == 0
    reqnum = 1 // if we didn't have any successful requests in the last loop, set reqnum to 1
               // so we always try to send at least 1 request
done
With the above loop we attain the following:

1) while there are still requests in the queue (length > 0) keep grabbing those requests and sending them

2) while each request is successful, keep grabbing new requests and sending them

3) while the total number of requests we have done in the current loop is less than or equal to the last number of requests we did previously, keep grabbing new requests and sending them

The inner logic of the loop grabs a request from the queue and attempts to send it.

If that request is successful, we increment reqcount by one.

When the loop ends we set the reqnum equal to the reqcount.

By doing this we can auto-adjust the number of times the loop runs, based on the number of successful requests over time. Initially the loop will send 5 + 1 requests. If all of those are successful, next time the loop will run 6 + 1 times, then 7 + 1, and so on.

As the number of requests increases, the likelihood of hitting the rate limit does as well. So when a request isn't successful we don't increment the counter; if we see only 4 requests go through in the last run of our send loop, the next time we will try doing 4 + 1, building back up to a higher rate.

No timers, and a sort of smart auto-limit that grows the number of requests sent over time up to the maximum allowed, while still being somewhat reactive when the API suddenly slows down or sets a different rate limit we don't know about.
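One pass of that adaptive send loop could be sketched as follows (again with hypothetical names; the caller keeps the returned count and feeds it back in as `lastSuccessCount` on the next pass):

```typescript
type ReqStatus = "success" | "limit exceeded";

// One pass of the self-tuning drain: send at most (lastSuccessCount + 1)
// requests, stop early on rejection, and return the new success count that
// seeds the next pass. A failed request is dropped, per the "ignore old
// actions" trade-off discussed above.
function adaptiveDrain(
  queue: string[],              // pending order actions
  lastSuccessCount: number,     // reqnum from the previous pass (seed with 5)
  send: (req: string) => ReqStatus // stand-in for the exchange call
): number {
  let reqcount = 0;
  while (queue.length > 0 && reqcount <= lastSuccessCount) {
    const req = queue.shift()!; // pop; a failed request is not retried
    if (send(req) === "success") {
      reqcount++;
    } else {
      break; // stop kicking a rejecting API
    }
  }
  return Math.max(reqcount, 1); // always try to send at least 1 next time
}
```

When every request succeeds, the pass sends lastSuccessCount + 1 requests and returns that larger number, so the rate grows one request per pass until the exchange pushes back.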

All gateways can use this method regardless of whether they use HTTP, websocket, FIX, or some other protocol.

Camille92 commented 7 years ago

Great discussion guys,

I think as well that quoting in blocks can be very interesting to get as much as we can from the market.

I'm not qualified enough to go into the technical difficulties, but I want to add that it could be judicious to link your bullet range to STDEV so that it adjusts for volatility in the market.

To take your example @beegmon: Say the Fair Value is currently $2,500, with:
Max Number of bullets set to: 5
Bullet Range set to: 0.3 STDEV
Bid Size/Ask Size set to: .1
Ping/Pong Width set to: .55
STDEV on factor 1

For each quote operation on the bid side do the following:
Compute bid quote 1 (Bq1): Bq1 = 2500 - STDEV*1.0
Compute bid quote 2 (Bq2): Bq2 = 2500 - STDEV*1.3
Compute bid quote 3 (Bq3): Bq3 = 2500 - STDEV*1.6
Compute bid quote 4 (Bq4): Bq4 = 2500 - STDEV*1.9
Compute bid quote 5 (Bq5): Bq5 = 2500 - STDEV*2.2
(each quote steps a further 0.3*STDEV away from the previous one)

and vice versa for the other side.
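That volatility-scaled ladder might look like this (a hypothetical helper; STDEV here stands for whatever standard-deviation figure tribeca already computes for quote protection):

```typescript
// Volatility-scaled bid ladder: the first bullet sits one STDEV below fair
// value, and each further bullet steps another bulletRangeFactor * STDEV down.
function stdevBidLadder(
  fairValue: number,
  stdev: number,             // current standard deviation of the fair value
  bulletRangeFactor: number, // e.g. 0.3 -> each bullet 0.3 * STDEV apart
  bullets: number
): number[] {
  const bids: number[] = [];
  for (let i = 0; i < bullets; i++) {
    bids.push(fairValue - stdev * (1 + i * bulletRangeFactor));
  }
  return bids;
}

// With FV = 2500, STDEV = 10, factor 0.3, and 5 bullets, the bids sit at
// multipliers 1.0, 1.3, 1.6, 1.9, 2.2 times STDEV below fair value.
```

The ask side mirrors this with additions instead of subtractions. As the market gets more volatile, STDEV grows and the bullets automatically spread further apart.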

PS: I also very much agree with @ctubio that it might be interesting to get the app stable and running well before moving on to the bunch of big projects we've been proposing. Maybe we can include them all in a version 3 once this version is stable AF! :D