This is primarily an internal refactoring to support multiple scale-out methods:
The first scale-out method is essentially what we do now: you run `conveyor server`, and all builds are performed in goroutines talking to a single docker daemon. Scaling out can be achieved using something like docker swarm to scale out the docker API.
The second scale-out method is via a BuildQueue. This will let you provide a queue implementation (SQS will probably be the first), allowing you to run a separate `conveyor worker` subcommand. The idea here is that `conveyor server` pushes build requests into a queue, and individual workers pull jobs off and run the build with their local docker daemon.
The first method is probably the easiest, but I think using an actual queue and multiple worker nodes will be more robust in the long run. This at least gives users a couple of options.