Open bitsapien opened 7 years ago
I'm guessing the bottleneck is on the machine where the docker daemon is running and not in glot-run itself. I can think of 3 ways to scale the docker API:
Vertical. Add more/faster CPUs and faster disks on the machine where the docker daemon is running.
Horizontal. Set up a load balancer (haproxy/nginx/etc) in front of multiple machines running the docker daemon and configure DOCKER_API_URL to point at the load balancer.
Queue. Add a queue in front of the docker daemon. I don't have much experience with this, but it looks like nginx plus and haproxy have support for queueing requests when the max connection limit is reached. You would also need a high DOCKER_RUN_TIMEOUT configured in glot-run in this case. The following haproxy configuration options seem relevant: https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-maxconn https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.2-maxqueue https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-timeout%20queue
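For what it's worth, options 2 and 3 can be combined in a single haproxy config: balance across several docker daemons and queue excess requests instead of rejecting them. A hypothetical `haproxy.cfg` fragment (hostnames, ports, and limits are made up for illustration; tune `maxconn`/`maxqueue`/`timeout queue` to your workload and keep `timeout queue` well below DOCKER_RUN_TIMEOUT):

```
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s
    # How long a request may wait in the queue before haproxy
    # gives up and returns a 503.
    timeout queue   30s

frontend docker_api
    bind *:2375
    default_backend docker_daemons

backend docker_daemons
    balance leastconn
    # maxconn caps in-flight requests per daemon; further requests
    # wait in that server's queue, up to maxqueue entries.
    server docker1 10.0.0.1:2375 maxconn 10 maxqueue 100 check
    server docker2 10.0.0.2:2375 maxconn 10 maxqueue 100 check
```

With this in place, DOCKER_API_URL would point at the haproxy frontend (port 2375 above) rather than at any single daemon.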
Swarm should be the best way to go; it is an official proxy for the docker daemon with the extended capabilities of an orchestrator. You should point DOCKER_API_URL to the swarm endpoint and register all your docker daemons with swarm.
More info here: https://www.docker.com/products/docker-swarm
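As a rough sketch, setting up the standalone Swarm the link describes (using the hosted token discovery from the Swarm docs; addresses, ports, and the token are placeholders, and each daemon must already be listening on TCP):

```shell
# Create a cluster and note the printed <cluster-token>:
docker run --rm swarm create

# On each machine running a docker daemon, register it with the cluster:
docker run -d swarm join --addr=<node-ip>:2375 token://<cluster-token>

# On the manager machine, start the swarm manager, which proxies
# the docker API across all registered daemons:
docker run -d -p 3375:2375 swarm manage token://<cluster-token>
```

glot-run's DOCKER_API_URL would then point at the manager (port 3375 in this sketch) instead of a single daemon.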
Swarm definitely looks like the way to go 👍
Thanks, I'll try doing that.
@bitsapien any success with docker swarm? I am trying to do same
@rushi216 have not tried it.
@bitsapien how did you set this up locally? Can you guide me on this?
I have one instance of `glot-run` running. I did benchmarking using Apache ab with the following parameters:

These are the results -
That shows a 15% success rate. I'm planning to host a coding challenge for my college, and the load will be close to the above parameters. I've been thinking about various approaches, one of them being a setup where code-run requests are pushed to Redis while multiple `glot-run` nodes listen for requests and process them when available. I'm confused about how to go about scaling this setup and am looking for suggestions. Thanks.
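The Redis idea above is essentially a producer/consumer work queue. A minimal sketch of that pattern using Python's stdlib `queue` as a stand-in for Redis (`glot_run_execute` is a placeholder for the HTTP call a real worker would make to its local glot-run node; with Redis, each worker would be a separate host doing a blocking pop, e.g. BLPOP, on a shared list instead of `jobs.get()`):

```python
import queue
import threading

def glot_run_execute(job):
    # Placeholder: in the real setup this would POST the code-run
    # request to a glot-run node and return its response.
    return f"ran {job}"

def worker(jobs, results):
    # Each worker pulls jobs as it becomes free, so slow runs on one
    # node don't block the others.
    while True:
        job = jobs.get()
        if job is None:      # sentinel: shut this worker down
            break
        results.append(glot_run_execute(job))

jobs = queue.Queue()
results = []

# Three worker threads stand in for three glot-run nodes.
workers = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(3)]
for t in workers:
    t.start()

# Producer side: incoming HTTP requests would enqueue here and
# respond once their job is processed (or poll for the result).
for i in range(10):
    jobs.put(f"request-{i}")
for _ in workers:
    jobs.put(None)
for t in workers:
    t.join()

print(len(results))  # → 10
```

The upshot of this design is back-pressure: under burst load, requests wait in the queue instead of timing out against an overloaded daemon, which should raise the success rate at the cost of latency.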