Simple load-balanced configuration for QGIS, Rancher and Docker.
This example hosts a service with a simple local map of the Cape Winelands District, Western Cape, South Africa, based on data from OpenStreetMap, population data published by CIESIN, and the district boundary provided by the South African Demarcation Board.
We have made this example public in the hopes that it will inspire others who are looking for ways to scale up QGIS Server in a production environment.
The process, step by step:

1. Adding a new stack.
1. Setting up the docker-compose and rancher-compose yaml files (see the CLI sketch after this list).
1. Waiting for the stack to spin up.
1. Stack spin-up completed.
1. QGIS nodes (currently two running).
1. Detailed view of the nodes.
1. Provisioning a new host.
1. Scaling up to three nodes.
1. Waiting while the new node spins up.
1. Third node integrated into the stack.
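Purely as a sketch of that workflow from the command line (the stack and service names below are placeholders, and this assumes the compose files are in the current directory), the spin-up and scale-out steps map roughly onto the rancher-compose CLI like this:

```sh
# Create the stack from docker-compose.yml and rancher-compose.yml in the
# current directory; the stack name is a placeholder.
rancher-compose --project-name qgis-stack up -d

# Later, scale the QGIS Server service out to three nodes; the service name
# must match the one used in your compose files.
rancher-compose --project-name qgis-stack scale qgis-server=3
```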
We use three containers:

* a btsync container that synchronises the map data into a shared file store (see the caveats below);
* a QGIS Server container that publishes the maps in the file store; and
* a load balancer that spreads incoming requests across the QGIS nodes.
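As a rough illustration of that layout only (the image names, paths and ports below are placeholders, not the actual values in this repository's compose files), the docker-compose.yml looks something like this:

```yaml
# Hypothetical sketch -- image names, paths and ports are placeholders.
btsync:
  image: your-org/btsync
  volumes:
    - /web:/web              # shared file store receiving the QGIS project and data

qgis-server:
  image: your-org/qgis-server
  volumes_from:
    - btsync                 # serve the synced projects from the shared store

web:
  image: rancher/load-balancer-service   # Rancher's managed load balancer
  ports:
    - "80:80"
  links:
    - qgis-server:qgis-server
```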
Here is a sample output for two nodes:
```
Concurrency Level:      2
Time taken for tests:   34.177 seconds
Complete requests:      3
Failed requests:        0
Total transferred:      30558 bytes
HTML transferred:       30090 bytes
Requests per second:    0.09 [#/sec] (mean)
Time per request:       22784.397 [ms] (mean)
Time per request:       11392.198 [ms] (mean, across all concurrent requests)
Transfer rate:          0.87 [Kbytes/sec] received

Connection Times (ms)
              min   mean[+/-sd] median    max
Connect:      215    222   13.0    226    237
Processing: 15198  17344 1861.9  18417  18527
Waiting:    15170  17319 1864.9  18394  18511
Total:      15435  17566 1848.9  18631  18742
```
And here is what happens if we scale up to three nodes and then rerun the test:
```
Concurrency Level:      2
Time taken for tests:   37.368 seconds
Complete requests:      5
Failed requests:        0
Total transferred:      50930 bytes
HTML transferred:       50150 bytes
Requests per second:    0.13 [#/sec] (mean)
Time per request:       14947.096 [ms] (mean)
Time per request:       7473.548 [ms] (mean, across all concurrent requests)
Transfer rate:          1.33 [Kbytes/sec] received

Connection Times (ms)
              min   mean[+/-sd] median    max
Connect:      202    253   45.5    273    300
Processing: 12124  13057 1470.0  12712  15594
Waiting:    12112  13050 1471.5  12711  15588
Total:      12343  13310 1491.5  12944  15893
```
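For reference, output in this format comes from Apache Bench (ab); a generic invocation along the following lines can be used to run a similar test. The URL and map parameters here are placeholders, not the values used in our test.sh:

```sh
# Hypothetical example -- replace the URL with your own load-balanced
# QGIS Server WMS endpoint, layer names and bounding box.
ab -c 2 -t 30 "http://your-load-balancer/cgi-bin/qgis_mapserv.fcgi?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=winelands&STYLES=&SRS=EPSG:4326&BBOX=18.5,-34.2,19.6,-33.2&WIDTH=1024&HEIGHT=768&FORMAT=image/png"
```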
You can see that, in particular, the mean time per request dropped from 11.4 seconds
to 7.5 seconds by adding one node. You can also see in Rancher that the load is being
spread nicely across the nodes:
![screen shot 2016-12-12 at 4 26 10 pm](https://cloud.githubusercontent.com/assets/178003/21102952/0f1a8c0e-c089-11e6-8076-77dcbd8f8159.png)
Here you can see the load balancer spreading the requests as I perform many sequential
map interactions in QGIS (using it as a WMS client). The screen capture below switches
between Rancher, showing CPU and I/O load, and the QGIS desktop.
![qgis](https://cloud.githubusercontent.com/assets/178003/21103728/5596a070-c08c-11e6-8b60-e102983ff130.gif)
## Caveats
1. Note that you will need to adapt this for your own needs since it contains
data that is specific to our company and the test.sh script points to our own
servers.
1. When spinning up a new node, it may return OGC service errors until the btsync
data synchronisation has completed. I don't have an elegant way to deal with this (yet),
so you should take that into consideration both when testing and trying to understand
whether things are working, and in production, since your load balancer may start
forwarding traffic to the new node before it is actually ready to respond to requests.
A possible mitigation is sketched after this list.
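The sketch below is not something this example currently implements; it leans on Rancher's health check support, and the service name, port and request path are placeholders. The idea is to declare a health check for the QGIS service in rancher-compose.yml so that a node is only marked healthy, and only receives traffic, once QGIS Server answers a simple WMS request:

```yaml
# Hypothetical sketch -- not part of this repository's current configuration.
# Adjust the service name, port and request path to your own setup.
qgis-server:
  scale: 2
  health_check:
    port: 80
    # Only consider the node healthy once a GetCapabilities request succeeds.
    request_line: GET /cgi-bin/qgis_mapserv.fcgi?SERVICE=WMS&REQUEST=GetCapabilities HTTP/1.0
    interval: 5000
    response_timeout: 2000
    healthy_threshold: 2
    unhealthy_threshold: 3
```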
Tim Sutton
December 2016