cshum / imagor

Fast, secure image processing server and Go library, using libvips
Apache License 2.0

Benchmarks #8

Closed · gingerlime closed this issue 2 years ago

gingerlime commented 2 years ago

Hi again :)

I'm trying to run some simple benchmarks to compare thumbor's performance with imagor's. For some reason the VIPS_CONCURRENCY_LEVEL env doesn't seem to have an effect? It looks like imagor always uses all available CPUs ... Also interesting: whilst imagor seems to run faster compared to a single-process thumbor, when I run it with a concurrency of 4 against thumbor (with the same concurrency), thumbor out-performs imagor (without caching, only keeping the original image in file storage) ...

Perhaps it's due to the way I've set up the benchmark? Maybe it doesn't keep the original image in storage and always fetches it? Or some other trivial misconfiguration? Because otherwise I would expect libvips to outperform thumbor ...

You can see the benchmark code I'm running at https://github.com/MinimalCompact/thumbor/tree/imagor-thumbor-benchmark/benchmarks/locust (it's based on an older benchmark I created for optimizing thumbor in a multi-process environment, so it's a bit messy unfortunately, but I hope it's still useful).

I tested it on a DigitalOcean droplet with 4 CPUs. Here's a sheet with the stats.

cshum commented 2 years ago

Thanks for the benchmarks! That's very interesting.

My first thought is that it's due to file IO, but I will take a closer look.

cshum commented 2 years ago

My first assumption is that it's due to file IO. The imagor file store was written mostly for testing at the time it was implemented, so its IO calls were pretty heavy.

I made some optimisations to the file IO part and created a new image, shumc/imagor:0.2.6

https://hub.docker.com/layers/shumc/imagor/0.2.6/images/sha256-865ec9b56b086e6b66023200854a32c9b1160de2981ebf7a6e6eddfe6920b163?context=explore

Would you kindly test out the new docker image? Meanwhile I am trying to set up your benchmark script.

gingerlime commented 2 years ago

Thanks @cshum! What about VIPS_CONCURRENCY_LEVEL seemingly not having an effect? E.g. when I set it to 1, I still see that it uses all available CPU cores ...

I will re-run the tests against 0.2.6 and let you know. If you want to run those yourself, you can update the docker-compose file and then run ./run_benchmark

gingerlime commented 2 years ago

I re-ran and updated the sheet with results ... I don't see much of a difference to be honest.

cshum commented 2 years ago

I am checking the docker-compose file. Seems the way it's set up, it actually always fetches over HTTP, haha.

I think VIPS_CONCURRENCY_LEVEL is more closely related to ENGINE_THREADPOOL_SIZE, which refers to the imaging engine itself. THUMBOR_NUM_PROCESSES would be the number of server processes, which is more related to GOMAXPROCS of the Go runtime, and that utilizes all CPU cores by default.
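For illustration only (this is not code from imagor), a minimal Go sketch of the GOMAXPROCS side of this: the Go runtime defaults to one OS thread per logical CPU, independently of whatever cap is put on the libvips worker pool.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// By default GOMAXPROCS equals the number of logical CPUs, so a Go
	// server spreads goroutines across every core even if the libvips
	// worker pool is capped separately.
	fmt.Println("logical CPUs:       ", runtime.NumCPU())
	fmt.Println("default GOMAXPROCS: ", runtime.GOMAXPROCS(0)) // 0 only reads the value

	// Lowering GOMAXPROCS (or setting the GOMAXPROCS env var) limits how
	// many OS threads execute Go code simultaneously.
	runtime.GOMAXPROCS(2)
	fmt.Println("capped GOMAXPROCS:  ", runtime.GOMAXPROCS(0))
}
```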

I am trying to change some of the variables, and will try running the benchmarks myself soon.

gingerlime commented 2 years ago

Thanks @cshum! Let me know if there's anything I can help with :)

jcupitt commented 2 years ago

Hi, libvips dev here, congrats on this nice project.

It's VIPS_CONCURRENCY, not VIPS_CONCURRENCY_LEVEL. Yes, it sets the size of the libvips worker pool.

gingerlime commented 2 years ago

Hey @jcupitt! Off-topic, but we've been in touch in the (long) past and you helped us improve support for colour-blind users using libvips! We didn't use it unfortunately, but I certainly remember and appreciate how helpful you were. Just wanted to say hi and thanks again!

gingerlime commented 2 years ago

Back on topic, to clarify: would VIPS_CONCURRENCY control the process pool size (and hence the number of CPU cores being used) or something else?

jcupitt commented 2 years ago

Yes, I remember the daltonize thing, that was interesting!

It's a little complicated (of course). libvips can have many image processing pipelines in flight at once; sharp says they peak at a few thousand for a heavily loaded server (!!!). Each pipeline has roughly three parts:

  1. The image load library. These are generally (like libjpeg) single-threaded per image. libvips has a mutex to lock access to this code.
  2. The actual processing, i.e. resize, watermark, rotate, filter, etc. This is computed by a flexible pool of worker threads which is shared between all current libvips pipelines. The VIPS_CONCURRENCY env var sets the maximum number of workers which will be committed per pipeline. By default, it's the number of hardware threads your CPU supports. Workers will queue up on the mutex in step 1 when they need pixels.
  3. The image save library. This runs asynchronously in a pair of background threads in a tick-tock manner as sets of scanlines are completed by the workers in step 2.

So VIPS_CONCURRENCY is approximately the number of 100% CPU threads libvips is aiming for if there is just one pipeline being executed. It'll start to overcommit if you run several pipelines at once.

cshum commented 2 years ago

Hi @jcupitt, thanks for the detailed walkthrough! I have read a lot of your comments across different libraries, and they are all very insightful. It's called VIPS_CONCURRENCY_LEVEL in this project, which is kinda confusing I admit. It was named after the govips setting https://github.com/davidbyttow/govips/blob/master/vips/govips.go#L111
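For reference, a minimal sketch of the govips setting that name comes from; this assumes the govips v2 API (vips.Startup with a Config struct and a ConcurrencyLevel field), so treat it as an approximation rather than imagor's actual wiring:

```go
package main

import (
	"log"

	"github.com/davidbyttow/govips/v2/vips"
)

func main() {
	// ConcurrencyLevel is govips' wrapper around the libvips concurrency
	// setting (what jcupitt describes as the per-pipeline worker pool size).
	// A value of 0 keeps the libvips default, i.e. the number of hardware threads.
	vips.Startup(&vips.Config{
		ConcurrencyLevel: 1, // cap the libvips worker pool
	})
	defer vips.Shutdown()

	log.Println("libvips started with a capped worker pool")
}
```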

Under production load, a higher number may not be better for a web service like this, because sustained performance over a long period of time is way more important than a short performance burst. If a setting gets the best CPU utilization but the service then gets OOM-killed, it's still not a good setting. Also, I tend to turn off all the caches and leave that to the CDN/reverse proxy, because caching at this level simply becomes memory overhead.

Back to the benchmark, @gingerlime: Thumbor's multi-process mode is actually faster for this particular test! I am still trying to dig deeper into the codebase to see if I can find some more concrete reasoning behind it.

jcupitt commented 2 years ago

Ah gotcha, sounds like I got mixed up.

Yes, it's probably sensible to limit it. Large worker pools are little help for image resizing anyway.

cshum commented 2 years ago

@gingerlime since the last discussion I have actually tried optimizing several different areas with a lot of code changes (https://github.com/cshum/imagor/compare/v0.3.0...master), including the thumbnail shrink-on-load trick @jcupitt suggested. However, none of these changes seem to affect this particular benchmark result.
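For context on the shrink-on-load idea, here is a rough sketch using govips v2 (the function name NewThumbnailFromFile is my assumption about the govips API, not a snippet from imagor): thumbnail-style loading tells the decoder the target size up front, so formats like JPEG can decode at a reduced resolution instead of decoding the full image and resizing it afterwards.

```go
package main

import (
	"log"

	"github.com/davidbyttow/govips/v2/vips"
)

func main() {
	vips.Startup(nil)
	defer vips.Shutdown()

	// Shrink-on-load: the loader is asked for a ~500x150 thumbnail, so the
	// decode itself happens at reduced resolution where the format allows it.
	img, err := vips.NewThumbnailFromFile("input.jpg", 500, 150, vips.InterestingNone)
	if err != nil {
		log.Fatal(err)
	}
	defer img.Close()

	log.Printf("loaded %dx%d", img.Width(), img.Height())
}
```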

However, when I check Thumbor's results on performance vs the number of processes, it seems very strange. Even when I set Docker to allow 2 CPUs at maximum, Thumbor still yields ~9x better performance with 9 processes, even with only 2 CPU cores! So my speculation is that some async tricks are applied within the Thumbor process, and not all requests are actually kicking the image processor. If that's the case: in imagor:0.5.4 I also applied concurrency tricks, and then the result looks very different...
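To illustrate what such a concurrency trick could look like (a generic pattern sketch, not a claim about what thumbor or imagor actually do internally), identical in-flight requests can be coalesced so that only one of them kicks the image processor, e.g. with golang.org/x/sync/singleflight:

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// process pretends to be an expensive resize; with coalescing it runs only
// once per key even if many identical requests arrive concurrently, and the
// duplicates all receive the same result.
func process(key string) ([]byte, error) {
	v, err, _ := group.Do(key, func() (interface{}, error) {
		time.Sleep(50 * time.Millisecond) // simulate libvips work
		return []byte("resized:" + key), nil
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			b, _ := process("/unsafe/500x150/i.imgur.com/Nfn80ck.png")
			fmt.Println(string(b))
		}()
	}
	wg.Wait()
}
```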

The docker-compose file: https://github.com/cshum/thumbor/blob/imagor-thumbor-benchmark/benchmarks/locust/docker-compose.yml

[+] Running 1/0
 ⠿ locust  Warning: No resource found to remove                                                                                                                                                                    0.0s

***************** BEGIN BENCHMARK FOR imagor:8000 WITH 1 Docker processes, 1 imagor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.8s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.6s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.4s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.8s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.5s
locust_1  | [2021-12-22 16:31:59,492] 28f2d0667d24/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2021-12-22 16:31:59,552] 28f2d0667d24/INFO/locust.runners: Client 'bb2a57a3a2e2_e6919e631ba84fcebd1ecd63fe3db3bb' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2021-12-22 16:31:59,555] 28f2d0667d24/INFO/locust.runners: Client '98e3f72cd1f7_e42a44b2456c4038b1b10212276e3352' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2021-12-22 16:31:59,598] 28f2d0667d24/INFO/locust.runners: Client '520125a32fcb_c1bcf998214545cbb177285a7fb720f9' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2021-12-22 16:32:00,498] 28f2d0667d24/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2021-12-22 16:32:00,499] 28f2d0667d24/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2021-12-22 16:32:00,500] 28f2d0667d24/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2021-12-22 16:32:49,617] 28f2d0667d24/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2021-12-22 16:34:00,360] 28f2d0667d24/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2021-12-22 16:34:00,470] 28f2d0667d24/INFO/locust.runners: Client 'bb2a57a3a2e2_e6919e631ba84fcebd1ecd63fe3db3bb' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:34:00,482] 28f2d0667d24/INFO/locust.runners: Client '520125a32fcb_c1bcf998214545cbb177285a7fb720f9' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:34:00,488] 28f2d0667d24/INFO/locust.runners: Client '98e3f72cd1f7_e42a44b2456c4038b1b10212276e3352' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:34:00,492] 28f2d0667d24/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2021-12-22 16:34:00,864] 28f2d0667d24/INFO/locust.main: Running teardowns...
locust_1  | [2021-12-22 16:34:01,366] 28f2d0667d24/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2021-12-22 16:34:01,366] 28f2d0667d24/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                        56907     0(0.00%)  |     173       5     480     170  |  474.50    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                 56916     0(0.00%)  |     147       4     406     150  |  474.57    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                        113823     0(0.00%)  |     160       4     480     160  |  949.07    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                               170    200    210    220    250    270    300    320    410    470    480  56907
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                        150    170    190    200    220    250    280    300    340    370    410  56916
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                            160    190    200    210    240    260    290    310    390    470    480 113823
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.4s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.5s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.2s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes, 1 thumbor processes and  threads ********************

***************** BEGIN BENCHMARK FOR thumbor WITH 1 Docker processes, 1 thumbor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.6s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.7s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        2.0s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.9s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        2.1s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.5s
locust_1  | [2021-12-22 16:34:06,648] 5daa7d19a76a/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2021-12-22 16:34:06,649] 5daa7d19a76a/INFO/locust.runners: Client 'b33f7d9802ed_0adfb6a6df12480a83fa5f983a4956f0' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2021-12-22 16:34:06,784] 5daa7d19a76a/INFO/locust.runners: Client '8f00009b87a4_01d1d316d95e4643bb4a74ac07d2f32e' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2021-12-22 16:34:06,892] 5daa7d19a76a/INFO/locust.runners: Client 'eac3a90c0a18_b5143d3cc43a4b0ba582b69559efb478' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2021-12-22 16:34:07,649] 5daa7d19a76a/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2021-12-22 16:34:07,649] 5daa7d19a76a/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2021-12-22 16:34:07,650] 5daa7d19a76a/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2021-12-22 16:34:56,765] 5daa7d19a76a/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2021-12-22 16:36:07,501] 5daa7d19a76a/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2021-12-22 16:36:07,526] 5daa7d19a76a/INFO/locust.runners: Client '8f00009b87a4_01d1d316d95e4643bb4a74ac07d2f32e' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:36:07,531] 5daa7d19a76a/INFO/locust.runners: Client 'b33f7d9802ed_0adfb6a6df12480a83fa5f983a4956f0' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:36:07,547] 5daa7d19a76a/INFO/locust.runners: Client 'eac3a90c0a18_b5143d3cc43a4b0ba582b69559efb478' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:36:07,548] 5daa7d19a76a/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2021-12-22 16:36:08,004] 5daa7d19a76a/INFO/locust.main: Running teardowns...
locust_1  | [2021-12-22 16:36:08,505] 5daa7d19a76a/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2021-12-22 16:36:08,505] 5daa7d19a76a/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                         2142     0(0.00%)  |    4316      49    5851    5400  |   17.87    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                  2138     0(0.00%)  |    4329      36    5847    5400  |   17.84    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                          4280     0(0.00%)  |    4322      36    5851    5400  |   35.71    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                              5400   5500   5600   5600   5600   5700   5800   5800   5800   5900   5900   2142
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                       5400   5500   5600   5600   5600   5700   5800   5800   5800   5800   5800   2138
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                           5400   5500   5600   5600   5600   5700   5800   5800   5800   5900   5900   4280
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.4s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        4.0s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.3s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes, 1 thumbor processes and  threads ********************

***************** BEGIN BENCHMARK FOR imagor:8000 WITH 1 Docker processes, 3 imagor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.7s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        2.2s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        2.0s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.6s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        2.1s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.8s
locust_1  | [2021-12-22 16:36:17,264] f02c2841fca4/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2021-12-22 16:36:17,268] f02c2841fca4/INFO/locust.runners: Client '7ef9c07f5956_3791a5bc5a5a42129980e46c30f344da' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2021-12-22 16:36:17,327] f02c2841fca4/INFO/locust.runners: Client 'c9b78090db00_3b4dd53fe4f2422e8bcde918b0579653' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2021-12-22 16:36:17,658] f02c2841fca4/INFO/locust.runners: Client '4a26e43cdfd2_ef438161bfad47a48a82e5f08ed5fbb4' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2021-12-22 16:36:18,271] f02c2841fca4/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2021-12-22 16:36:18,272] f02c2841fca4/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2021-12-22 16:36:18,272] f02c2841fca4/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2021-12-22 16:36:52,116] f02c2841fca4/WARNING/locust.runners: Worker c9b78090db00_3b4dd53fe4f2422e8bcde918b0579653 exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:36:52,155] f02c2841fca4/WARNING/locust.runners: Worker 7ef9c07f5956_3791a5bc5a5a42129980e46c30f344da exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:36:52,848] f02c2841fca4/WARNING/locust.runners: Worker 4a26e43cdfd2_ef438161bfad47a48a82e5f08ed5fbb4 exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:37:07,351] f02c2841fca4/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2021-12-22 16:38:18,133] f02c2841fca4/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2021-12-22 16:38:18,237] f02c2841fca4/INFO/locust.runners: Client '4a26e43cdfd2_ef438161bfad47a48a82e5f08ed5fbb4' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:38:18,254] f02c2841fca4/INFO/locust.runners: Client '7ef9c07f5956_3791a5bc5a5a42129980e46c30f344da' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:38:18,293] f02c2841fca4/INFO/locust.runners: Client 'c9b78090db00_3b4dd53fe4f2422e8bcde918b0579653' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:38:18,294] f02c2841fca4/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2021-12-22 16:38:18,636] f02c2841fca4/INFO/locust.main: Running teardowns...
locust_1  | [2021-12-22 16:38:19,138] f02c2841fca4/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2021-12-22 16:38:19,138] f02c2841fca4/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                        71700     0(0.00%)  |     139       3     549     140  |  597.75    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                 72107     0(0.00%)  |      99       2     518      98  |  601.14    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                        143807     0(0.00%)  |     119       2     549     110  | 1198.90    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                               140    160    170    180    210    230    260    290    400    460    550  71700
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                         98    110    120    130    150    170    200    230    310    500    520  72107
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                            110    130    150    160    190    210    240    270    380    490    550 143807
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.4s
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.4s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.2s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes, 3 thumbor processes and  threads ********************

***************** BEGIN BENCHMARK FOR thumbor WITH 1 Docker processes, 3 thumbor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.2s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.9s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.6s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.7s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.8s
locust_1  | [2021-12-22 16:38:23,907] 8b095bd6ba82/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2021-12-22 16:38:23,909] 8b095bd6ba82/INFO/locust.runners: Client 'd44264b3a7ff_1b47bc12906a48ab86bbf708973051f7' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2021-12-22 16:38:23,909] 8b095bd6ba82/INFO/locust.runners: Client 'cf3cf25d5d86_f665611baa454ced8b8daacdbd155b30' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2021-12-22 16:38:23,995] 8b095bd6ba82/INFO/locust.runners: Client '4c1528367281_cd06a396271842d097a1c6f852643a49' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2021-12-22 16:38:24,911] 8b095bd6ba82/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2021-12-22 16:38:24,911] 8b095bd6ba82/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2021-12-22 16:38:24,913] 8b095bd6ba82/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2021-12-22 16:39:13,989] 8b095bd6ba82/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2021-12-22 16:40:24,771] 8b095bd6ba82/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2021-12-22 16:40:24,812] 8b095bd6ba82/INFO/locust.runners: Client 'd44264b3a7ff_1b47bc12906a48ab86bbf708973051f7' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:40:24,822] 8b095bd6ba82/INFO/locust.runners: Client '4c1528367281_cd06a396271842d097a1c6f852643a49' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:40:24,854] 8b095bd6ba82/INFO/locust.runners: Client 'cf3cf25d5d86_f665611baa454ced8b8daacdbd155b30' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:40:24,854] 8b095bd6ba82/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2021-12-22 16:40:25,275] 8b095bd6ba82/INFO/locust.main: Running teardowns...
locust_1  | [2021-12-22 16:40:25,776] 8b095bd6ba82/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2021-12-22 16:40:25,776] 8b095bd6ba82/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                         6062     0(0.00%)  |    1563      29    2333    1900  |   50.57    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                  6030     0(0.00%)  |    1555      25    2328    1900  |   50.31    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                         12092     0(0.00%)  |    1559      25    2333    1900  |  100.88    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                              1900   1900   1900   2000   2000   2100   2200   2300   2300   2300   2300   6062
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                       1900   1900   1900   2000   2000   2100   2200   2300   2300   2300   2300   6030
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                           1900   1900   1900   2000   2000   2100   2200   2300   2300   2300   2300  12092
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.4s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.5s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes, 3 thumbor processes and  threads ********************

***************** BEGIN BENCHMARK FOR imagor:8000 WITH 1 Docker processes, 6 imagor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        2.0s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.7s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        2.0s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.4s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.3s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.8s
locust_1  | [2021-12-22 16:40:30,850] fbda4ce09794/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2021-12-22 16:40:30,884] fbda4ce09794/INFO/locust.runners: Client 'ddfac1fa95c4_2f9a23f51e124855a0cfccffc75b1fac' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2021-12-22 16:40:30,912] fbda4ce09794/INFO/locust.runners: Client 'eed109acb3c6_3db5463d21a4416b9e2b984d82ec7be1' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2021-12-22 16:40:30,983] fbda4ce09794/INFO/locust.runners: Client '4d3fa27bb458_a527961bc7e245a48bd5d02d340be671' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2021-12-22 16:40:31,854] fbda4ce09794/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2021-12-22 16:40:31,854] fbda4ce09794/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2021-12-22 16:40:31,855] fbda4ce09794/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2021-12-22 16:41:01,213] fbda4ce09794/WARNING/locust.runners: Worker 4d3fa27bb458_a527961bc7e245a48bd5d02d340be671 exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:41:05,692] fbda4ce09794/WARNING/locust.runners: Worker ddfac1fa95c4_2f9a23f51e124855a0cfccffc75b1fac exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:41:05,740] fbda4ce09794/WARNING/locust.runners: Worker eed109acb3c6_3db5463d21a4416b9e2b984d82ec7be1 exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:41:20,972] fbda4ce09794/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2021-12-22 16:42:31,715] fbda4ce09794/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2021-12-22 16:42:31,793] fbda4ce09794/INFO/locust.runners: Client '4d3fa27bb458_a527961bc7e245a48bd5d02d340be671' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:42:31,823] fbda4ce09794/INFO/locust.runners: Client 'ddfac1fa95c4_2f9a23f51e124855a0cfccffc75b1fac' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:42:31,846] fbda4ce09794/INFO/locust.runners: Client 'eed109acb3c6_3db5463d21a4416b9e2b984d82ec7be1' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:42:31,847] fbda4ce09794/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2021-12-22 16:42:32,217] fbda4ce09794/INFO/locust.main: Running teardowns...
locust_1  | [2021-12-22 16:42:32,718] fbda4ce09794/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2021-12-22 16:42:32,719] fbda4ce09794/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                        64661     0(0.00%)  |     151       5     817     140  |  539.25    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                 64963     0(0.00%)  |     109       4     711     100  |  541.77    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                        129624     0(0.00%)  |     130       4     817     120  | 1081.02    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                               140    170    180    190    230    260    320    360    620    770    820  64661
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                        100    120    130    140    170    200    250    290    600    670    710  64963
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                            120    140    160    170    210    240    290    340    610    720    820 129624
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.6s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.4s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes, 6 thumbor processes and  threads ********************

***************** BEGIN BENCHMARK FOR thumbor WITH 1 Docker processes, 6 thumbor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.9s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.7s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.2s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.9s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.3s
locust_1  | [2021-12-22 16:42:37,386] 5c9fcb8e1188/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2021-12-22 16:42:37,596] 5c9fcb8e1188/INFO/locust.runners: Client '709ec47be2ff_67c9c4a09ab346c793535ccd9a4e23d2' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2021-12-22 16:42:37,826] 5c9fcb8e1188/INFO/locust.runners: Client '88ded6d575d6_fb1fae5629334519a5e0a28c3fcc9a08' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2021-12-22 16:42:37,996] 5c9fcb8e1188/INFO/locust.runners: Client '48c2cb37e46f_68331a0832d048a88da57a729f479562' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2021-12-22 16:42:38,390] 5c9fcb8e1188/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2021-12-22 16:42:38,390] 5c9fcb8e1188/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2021-12-22 16:42:38,390] 5c9fcb8e1188/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2021-12-22 16:43:27,519] 5c9fcb8e1188/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2021-12-22 16:44:38,251] 5c9fcb8e1188/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2021-12-22 16:44:38,284] 5c9fcb8e1188/INFO/locust.runners: Client '48c2cb37e46f_68331a0832d048a88da57a729f479562' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:44:38,296] 5c9fcb8e1188/INFO/locust.runners: Client '709ec47be2ff_67c9c4a09ab346c793535ccd9a4e23d2' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:44:38,298] 5c9fcb8e1188/INFO/locust.runners: Client '88ded6d575d6_fb1fae5629334519a5e0a28c3fcc9a08' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:44:38,298] 5c9fcb8e1188/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2021-12-22 16:44:38,754] 5c9fcb8e1188/INFO/locust.main: Running teardowns...
locust_1  | [2021-12-22 16:44:39,255] 5c9fcb8e1188/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2021-12-22 16:44:39,255] 5c9fcb8e1188/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                         8065     0(0.00%)  |    1159      29    2699    1200  |   67.29    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                  8255     0(0.00%)  |    1157      25    2433    1200  |   68.87    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                         16320     0(0.00%)  |    1158      25    2699    1200  |  136.16    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                              1200   1400   1500   1500   1700   1800   1900   2000   2400   2700   2700   8065
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                       1200   1400   1400   1500   1600   1800   1900   2000   2400   2400   2400   8255
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                           1200   1400   1500   1500   1600   1800   1900   2000   2400   2500   2700  16320
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.5s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.4s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes, 6 thumbor processes and  threads ********************

***************** BEGIN BENCHMARK FOR imagor:8000 WITH 1 Docker processes, 9 imagor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        2.0s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.9s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.2s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        2.0s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.5s
locust_1  | [2021-12-22 16:44:44,349] f489868be5c6/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2021-12-22 16:44:44,433] f489868be5c6/INFO/locust.runners: Client 'fb40efa44b1f_908347dee9914b8c9c7bf3b888901cdf' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2021-12-22 16:44:44,502] f489868be5c6/INFO/locust.runners: Client 'dbbcbadacf51_6acd83d80bcc40dca912adfe1cd746ed' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2021-12-22 16:44:44,669] f489868be5c6/INFO/locust.runners: Client '5df048d66a27_f915284f70104dd0b4bdae2c26b8cb25' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2021-12-22 16:44:45,350] f489868be5c6/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2021-12-22 16:44:45,351] f489868be5c6/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2021-12-22 16:44:45,351] f489868be5c6/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2021-12-22 16:45:19,336] f489868be5c6/WARNING/locust.runners: Worker fb40efa44b1f_908347dee9914b8c9c7bf3b888901cdf exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:45:19,833] f489868be5c6/WARNING/locust.runners: Worker dbbcbadacf51_6acd83d80bcc40dca912adfe1cd746ed exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:45:20,515] f489868be5c6/WARNING/locust.runners: Worker 5df048d66a27_f915284f70104dd0b4bdae2c26b8cb25 exceeded cpu threshold (will only log this once per worker)
locust_1  | [2021-12-22 16:45:34,441] f489868be5c6/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2021-12-22 16:46:45,212] f489868be5c6/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2021-12-22 16:46:45,331] f489868be5c6/INFO/locust.runners: Client 'dbbcbadacf51_6acd83d80bcc40dca912adfe1cd746ed' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:46:45,341] f489868be5c6/INFO/locust.runners: Client '5df048d66a27_f915284f70104dd0b4bdae2c26b8cb25' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:46:45,356] f489868be5c6/INFO/locust.runners: Client 'fb40efa44b1f_908347dee9914b8c9c7bf3b888901cdf' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:46:45,357] f489868be5c6/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2021-12-22 16:46:45,714] f489868be5c6/INFO/locust.main: Running teardowns...
locust_1  | [2021-12-22 16:46:46,220] f489868be5c6/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2021-12-22 16:46:46,221] f489868be5c6/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                        67417     0(0.00%)  |     145       3     872     140  |  562.07    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                 67353     0(0.00%)  |     105       5     874     100  |  561.54    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                        134770     0(0.00%)  |     125       3     874     120  | 1123.61    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                               140    160    180    190    220    250    290    330    480    860    870  67417
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                        100    120    130    130    160    180    230    260    430    860    870  67353
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                            120    140    150    170    200    230    270    300    460    860    870 134770
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.4s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.6s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.1s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes, 9 thumbor processes and  threads ********************

***************** BEGIN BENCHMARK FOR thumbor WITH 1 Docker processes, 9 thumbor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.8s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.7s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.3s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.2s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.6s
locust_1  | [2021-12-22 16:46:51,095] 5d4d8874c429/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2021-12-22 16:46:51,128] 5d4d8874c429/INFO/locust.runners: Client '6c181d14bb34_3c582dad53ac476e94496f632c04c0f0' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2021-12-22 16:46:51,186] 5d4d8874c429/INFO/locust.runners: Client '6209c7fc9761_a09fac31b3dd40c3b07ce84be2dc8333' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2021-12-22 16:46:51,187] 5d4d8874c429/INFO/locust.runners: Client 'fc327a847f75_0f3a4e925ea0447bb8e4e65d04f5270a' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2021-12-22 16:46:52,098] 5d4d8874c429/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2021-12-22 16:46:52,099] 5d4d8874c429/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2021-12-22 16:46:52,099] 5d4d8874c429/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2021-12-22 16:47:41,186] 5d4d8874c429/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2021-12-22 16:48:51,959] 5d4d8874c429/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2021-12-22 16:48:52,011] 5d4d8874c429/INFO/locust.runners: Client '6c181d14bb34_3c582dad53ac476e94496f632c04c0f0' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:48:52,027] 5d4d8874c429/INFO/locust.runners: Client 'fc327a847f75_0f3a4e925ea0447bb8e4e65d04f5270a' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:48:52,046] 5d4d8874c429/INFO/locust.runners: Client '6209c7fc9761_a09fac31b3dd40c3b07ce84be2dc8333' quit. Currently 0 clients connected.
locust_1  | [2021-12-22 16:48:52,046] 5d4d8874c429/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2021-12-22 16:48:52,462] 5d4d8874c429/INFO/locust.main: Running teardowns...
locust_1  | [2021-12-22 16:48:52,966] 5d4d8874c429/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2021-12-22 16:48:52,966] 5d4d8874c429/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                         8545     0(0.00%)  |    1121      29    2744    1200  |   71.29    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                  8380     0(0.00%)  |    1109      25    2788    1200  |   69.92    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                         16925     0(0.00%)  |    1115      25    2788    1200  |  141.21    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                              1200   1400   1500   1500   1700   1900   2100   2300   2700   2700   2700   8545
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                       1200   1300   1500   1500   1700   1900   2100   2300   2600   2800   2800   8380
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                           1200   1400   1500   1500   1700   1900   2100   2300   2600   2700   2800  16925
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.5s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.4s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes, 9 thumbor processes and  threads ********************

jcupitt commented 2 years ago

libpng is very slow, so you're probably just timing library decode + encode, which is always single-threaded.

Maybe try JPG, or uncompressed TIF?
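
If it helps, a rough sketch of producing JPEG and uncompressed TIFF variants of the test image with the vips CLI (filenames are just placeholders):

$ vips copy Nfn80ck.png Nfn80ck.jpg
$ vips copy Nfn80ck.png Nfn80ck.tif[compression=none]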

gingerlime commented 2 years ago

@cshum that's awesome! Looks like a huge improvement. thumbor maxes out at around 70 req/s and imagor is at ~600 req/s! Amazing.

Obviously though, since VIPS_CONCURRENCY_LEVEL doesn't actually limit the number of processes whereas THUMBOR_NUM_PROCESSES does, I guess a fair comparison would be to set THUMBOR_NUM_PROCESSES to the number of CPU cores. But it looks like imagor leaves thumbor in the dust...
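
For example (a sketch; these are the variables the benchmark's docker-compose reads from the environment, judging by the warnings it prints when they're unset):

$ THUMBOR_NUM_PROCESSES=4 ./run_benchmark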

This was just an initial benchmark, but a great starting point. From here, as @jcupitt mentions, I think we can compare different image types and different operations, including multiple filters etc., to get a better idea of some kind of a "mix" of transformations. But I'm very optimistic that imagor will outperform thumbor in all of those!

Happy to continue working together on this :)

cshum commented 2 years ago

I think GOMAXPROCS would be the closest analogy to THUMBOR_NUM_PROCESSES, but of course there are differences. VIPS_CONCURRENCY (I renamed it back to avoid confusion) for some reason has an adverse effect on performance at higher values, so I just make it 1 by default.
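
For example, to pin both knobs explicitly on the imagor container (just a sketch, not the benchmark's actual compose config):

$ docker run -p 8000:8000 -e GOMAXPROCS=4 -e VIPS_CONCURRENCY=1 shumc/imagor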

Go + libvips and Python + Pillow have completely different architectures; imo it is not really possible to define equivalent environments using just a variable or two.

Nevertheless, these benchmarks and suggestions have been a great source of motivation and direction for the project. It is such a great pleasure to have you @gingerlime @jcupitt on this :)

jcupitt commented 2 years ago

The libvips threading system has a small, fixed overhead (doesn't scale with complexity of pipeline, does scale with number of threads). This means for very simple operations, or operations with little parallelism, it can slow things down.

For example:

$ vipsheader st-francis.jpg 
st-francis.jpg: 30000x26319 uchar, 3 bands, srgb, jpegload
$ /usr/bin/time -f %M:%e vipsthumbnail st-francis.jpg 
67500:2.50
$ VIPS_CONCURRENCY=1 /usr/bin/time -f %M:%e vipsthumbnail st-francis.jpg 
44296:2.47
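
(The %M:%e format from GNU time prints peak RSS in kB followed by elapsed seconds, so each result line reads as memory:time.)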

This is a big 32-thread machine, so the threading memory overhead is pretty significant too.

If you're doing something more complex (watermarking, sharpen, blur, etc.) then threads help a lot. For example:

$ VIPS_CONCURRENCY=1 /usr/bin/time -f %M:%e vips gaussblur st-francis.jpg x.jpg 10
102476:28.56
$ /usr/bin/time -f %M:%e vips gaussblur st-francis.jpg x.jpg 10
394036:6.29
jcupitt commented 2 years ago

On pillow vs. libvips, you're right, they are extremely different and hard to benchmark fairly.

pillow-simd has very fast operations (lots of hand-written AVX2 code from the very skilled homm) so it'll be faster if you are timing single operations with one thread. However, it does not have an automatic threading system, and it does not pipeline operations.

libvips currently has no hand-written SIMD (it relies on orc and gcc to generate portable SIMD), so it's slower if you time single operations with one core. But it has a fancy automatic threading system, and can do things like overlapping load and save.

tldr: pillow-simd will win on one-thread, single-operation benchmarks; libvips will win if you allow threading and if you time overall end-to-end latency. libvips should have lower memory use too.

Kleis has a branch of libvips where he's adapted the pillow-simd vector resize implementation to run inside libvips. This would potentially give us a best-of-both-worlds resizer. It might get merged next year, who knows.

gingerlime commented 2 years ago

Thanks @jcupitt yeah, this is another reason to try some kind of a mix of transformations when benchmarking... some transformations might be faster with one implementation, but then others can be the opposite. Having a fair mix of filters and image operations should at least give us a ballpark idea when comparing the two. Of course, if someone wants to optimize for their use-case they might need to create their own benchmarks :) and at the same time, one can argue that any mix is biased or unfair towards one library, so it's never going to be a perfect measurement.

btw, this just reminded me that the current benchmark doesn't use thumbor with SIMD, so it would be interesting to compare that as well.

gingerlime commented 2 years ago

Hi @cshum I improved the benchmarks a little to add more scenarios and another type of (jpeg) image. It's just another step. Results seem similar in terms of performance, but I noticed that there are some errors with one of the (more complex) image transformations ... it might be some kind of concurrency bug, because the transformation works, but it produces ~32% failures during the benchmark. I pushed the new benchmark to the same branch, so you can pull and run it yourself if you want...

Here's the benchmark raw output

for imagor

***************** BEGIN BENCHMARK FOR imagor:8000 WITH 1 Docker processes, 4 imagor processes ********************

Creating network "locust_default" with the default driver
Creating locust_locust_1       ... done
Creating locust_locust-slave_1 ... done
Creating locust_locust-slave_2 ... done
Creating locust_locust-slave_3 ... done
Creating locust_thumbor_1      ... done
Creating locust_imagor_1       ... done
Attaching to locust_locust_1
locust_1        | [2022-01-07 10:20:48,796] 52abb65fcd9e/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1        | [2022-01-07 10:20:48,817] 52abb65fcd9e/INFO/locust.runners: Client '6468953c4c18_eea4739d20304770bde24fbea53ce615' reported as ready. Currently 1 clients ready to swarm.
locust_1        | [2022-01-07 10:20:48,833] 52abb65fcd9e/INFO/locust.runners: Client '21eeefe80a1e_29a9a26d807a45518ebbebe435a2fbd7' reported as ready. Currently 2 clients ready to swarm.
locust_1        | [2022-01-07 10:20:48,882] 52abb65fcd9e/INFO/locust.runners: Client 'b68f68651817_6716036b4d8e44eda8d9f26b92cb4100' reported as ready. Currently 3 clients ready to swarm.
locust_1        | [2022-01-07 10:20:49,798] 52abb65fcd9e/INFO/locust.main: Run time limit set to 120 seconds
locust_1        | [2022-01-07 10:20:49,798] 52abb65fcd9e/INFO/locust.main: Starting Locust 2.5.1
locust_1        | [2022-01-07 10:20:49,799] 52abb65fcd9e/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1        | [2022-01-07 10:21:38,949] 52abb65fcd9e/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1        | [2022-01-07 10:22:49,798] 52abb65fcd9e/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1        | [2022-01-07 10:22:49,896] 52abb65fcd9e/INFO/locust.runners: Client '21eeefe80a1e_29a9a26d807a45518ebbebe435a2fbd7' quit. Currently 0 clients connected.
locust_1        | [2022-01-07 10:22:49,903] 52abb65fcd9e/INFO/locust.runners: Client 'b68f68651817_6716036b4d8e44eda8d9f26b92cb4100' quit. Currently 0 clients connected.
locust_1        | [2022-01-07 10:22:49,941] 52abb65fcd9e/INFO/locust.runners: Client '6468953c4c18_eea4739d20304770bde24fbea53ce615' quit. Currently 0 clients connected.
locust_1        | [2022-01-07 10:22:49,941] 52abb65fcd9e/INFO/locust.runners: The last worker quit, stopping test.
locust_1        | [2022-01-07 10:22:50,301] 52abb65fcd9e/INFO/locust.main: Running teardowns...
locust_1        | [2022-01-07 10:22:50,802] 52abb65fcd9e/INFO/locust.main: Shutting down (exit code 1), bye.
locust_1        | [2022-01-07 10:22:50,803] 52abb65fcd9e/INFO/locust.main: Cleaning up runner...
locust_1        |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1        | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1        |  GET /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                    11450     0(0.00%)  |     160       5     520     150  |   95.36    0.00
locust_1        |  GET /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   11449     0(0.00%)  |     188       4     548     180  |   95.35    0.00
locust_1        |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                        11348     0(0.00%)  |     202       4     665     200  |   94.51    0.00
locust_1        |  GET /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   11348     0(0.00%)  |     215       7     534     210  |   94.51    0.00
locust_1        |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                 11569     0(0.00%)  |     125       4     441     120  |   96.35    0.00
locust_1        |  GET /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   11524 3743(32.48%)  |     536       8    1455     500  |   95.97   31.17
locust_1        |  GET /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   11540     0(0.00%)  |     187       5     689     180  |   96.11    0.00
locust_1        | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1        |  Aggregated                                                                         80228  3743(4.67%)  |     230       4    1455     190  |  668.15   31.17
locust_1        |
locust_1        | Response time percentiles (approximated)
locust_1        |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1        | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1        |  GET      /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                           150    190    210    220    270    300    350    380    480    520    520  11450
locust_1        |  GET      /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      180    220    240    260    300    330    370    410    520    550    550  11449
locust_1        |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                               200    250    280    290    340    380    430    470    570    620    670  11348
locust_1        |  GET      /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      210    240    260    280    320    360    400    420    520    530    530  11348
locust_1        |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                        120    140    160    180    210    240    280    310    380    440    440  11569
locust_1        |  GET      /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      500    640    750    820    930   1000   1100   1200   1400   1400   1500  11524
locust_1        |  GET      /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      180    220    240    260    310    350    400    430    600    690    690  11540
locust_1        | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1        |  None     Aggregated                                                                            190    230    270    300    410    630    890    990   1300   1400   1500  80228
locust_1        |
locust_1        | Error report
locust_1        |  # occurrences      Error
locust_1        | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1        |  3743               GET /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg: "HTTPError('406 Client Error: Not Acceptable for url: http://imagor:8000/unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg')"
locust_1        | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1        |
locust_locust_1 exited with code 1

for thumbor

***************** BEGIN BENCHMARK FOR thumbor WITH 1 Docker processes, 4 thumbor processes ********************

Creating network "locust_default" with the default driver
Creating locust_locust_1       ... done
Creating locust_thumbor_1      ... done
Creating locust_locust-slave_1 ... done
Creating locust_locust-slave_2 ... done
Creating locust_locust-slave_3 ... done
Creating locust_imagor_1       ... done
Attaching to locust_locust_1
locust_1        | [2022-01-07 10:22:59,314] 1b0964933a2b/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1        | [2022-01-07 10:22:59,361] 1b0964933a2b/INFO/locust.runners: Client '3ae8e98c3bb0_36a0700a4fc44b8f8c57929c8b150e28' reported as ready. Currently 1 clients ready to swarm.
locust_1        | [2022-01-07 10:23:00,117] 1b0964933a2b/INFO/locust.runners: Client 'fa8f19b67dd9_2666ca0d11674f39bc01b734d05264d4' reported as ready. Currently 2 clients ready to swarm.
locust_1        | [2022-01-07 10:23:00,324] 1b0964933a2b/INFO/root: Waiting for workers to be ready, 2 of 3 connected
locust_1        | [2022-01-07 10:23:00,397] 1b0964933a2b/INFO/locust.runners: Client '3f5cab43e3c3_c36787f454d24c629d09e063224f8d72' reported as ready. Currently 3 clients ready to swarm.
locust_1        | [2022-01-07 10:23:01,326] 1b0964933a2b/INFO/locust.main: Run time limit set to 120 seconds
locust_1        | [2022-01-07 10:23:01,326] 1b0964933a2b/INFO/locust.main: Starting Locust 2.5.1
locust_1        | [2022-01-07 10:23:01,333] 1b0964933a2b/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1        | [2022-01-07 10:23:50,482] 1b0964933a2b/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1        | [2022-01-07 10:25:01,326] 1b0964933a2b/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1        | [2022-01-07 10:25:01,362] 1b0964933a2b/INFO/locust.runners: Client '3ae8e98c3bb0_36a0700a4fc44b8f8c57929c8b150e28' quit. Currently 0 clients connected.
locust_1        | [2022-01-07 10:25:01,380] 1b0964933a2b/INFO/locust.runners: Client 'fa8f19b67dd9_2666ca0d11674f39bc01b734d05264d4' quit. Currently 0 clients connected.
locust_1        | [2022-01-07 10:25:01,383] 1b0964933a2b/INFO/locust.runners: Client '3f5cab43e3c3_c36787f454d24c629d09e063224f8d72' quit. Currently 0 clients connected.
locust_1        | [2022-01-07 10:25:01,384] 1b0964933a2b/INFO/locust.runners: The last worker quit, stopping test.
locust_1        | [2022-01-07 10:25:01,830] 1b0964933a2b/INFO/locust.main: Running teardowns...
locust_1        | [2022-01-07 10:25:02,331] 1b0964933a2b/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1        | [2022-01-07 10:25:02,332] 1b0964933a2b/INFO/locust.main: Cleaning up runner...
locust_1        |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1        | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1        |  GET /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                     1574     0(0.00%)  |    1640      38    3277    1800  |   13.12    0.00
locust_1        |  GET /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg    1550     0(0.00%)  |    1712      85    3297    1900  |   12.92    0.00
locust_1        |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                         1674     0(0.00%)  |    1683      65    3280    1900  |   13.95    0.00
locust_1        |  GET /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg    1640     0(0.00%)  |    1738      86    3313    1900  |   13.67    0.00
locust_1        |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                  1603     0(0.00%)  |    1647      50    3224    1900  |   13.36    0.00
locust_1        |  GET /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg    1571     0(0.00%)  |    1668      49    3277    1900  |   13.09    0.00
locust_1        |  GET /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg    1568     0(0.00%)  |    1688      44    3287    1900  |   13.07    0.00
locust_1        | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1        |  Aggregated                                                                         11180     0(0.00%)  |    1682      38    3313    1900  |   93.18    0.00
locust_1        |
locust_1        | Response time percentiles (approximated)
locust_1        |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1        | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1        |  GET      /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                          1800   2000   2100   2200   2300   2500   2700   2900   3300   3300   3300   1574
locust_1        |  GET      /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     1900   2100   2200   2200   2400   2500   2800   2900   3300   3300   3300   1550
locust_1        |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                              1900   2100   2200   2200   2400   2500   2700   2900   3300   3300   3300   1674
locust_1        |  GET      /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     1900   2100   2200   2200   2400   2600   2800   2900   3300   3300   3300   1640
locust_1        |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                       1900   2000   2100   2200   2300   2500   2700   2900   3200   3200   3200   1603
locust_1        |  GET      /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     1900   2000   2100   2200   2400   2500   2700   2900   3300   3300   3300   1571
locust_1        |  GET      /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     1900   2100   2200   2200   2300   2400   2600   2900   3300   3300   3300   1568
locust_1        | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1        |  None     Aggregated                                                                           1900   2100   2100   2200   2400   2500   2700   2900   3300   3300   3300  11180
locust_1        |
locust_locust_1 exited with code 0
jcupitt commented 2 years ago

Huh, the fails are alarming. It's this URL:

/unsafe/fit-in/600x400/filters
    :fill(white)
    :watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10)
    :hue(290)
    :saturation(100)
    :fill(yellow)
    :format(jpeg)
    :quality(80)
    /upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg 

How easy would it be to run this case on its own to get some more info?
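
Even just hammering that one URL with curl in parallel might show something, e.g. (a rough sketch; assumes the imagor container is reachable on localhost:8000):

$ URL='http://localhost:8000/unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg'
$ seq 200 | xargs -P 20 -I{} curl -s -o /dev/null -w '%{http_code}\n' "$URL" | sort | uniq -c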

cshum commented 2 years ago

https://github.com/MinimalCompact/thumbor/blob/imagor-thumbor-benchmark/benchmarks/locust/docker-compose.yml#L28

A quick look at the docker-compose suggests it's missing the file loader config. The imagor loader/storage work slightly differently: a loader only loads images and a storage only saves images. Both the file loader and the file storage need to be set for the file to be reused.

Storage without a loader means imagor always falls back to the HTTP loader, which means the image is always fetched live and the stored file never gets reused.
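
Roughly what it needs to look like (a sketch; the exact variable names may differ, check imagor's README, and the paths are just examples):

$ docker run -p 8000:8000 \
    -e FILE_LOADER_BASE_DIR=/mnt/imagor \
    -e FILE_STORAGE_BASE_DIR=/mnt/imagor \
    -v "$(pwd)/data:/mnt/imagor" \
    shumc/imagor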

I will check out the new benchmark soon and investigate more in detail.

gingerlime commented 2 years ago

let me know how to configure the file loader :) happy to update it of course.

gingerlime commented 2 years ago

@jcupitt I'm not sure, because it does work on its own and ~68% of requests are successful; that's why I thought maybe it's some kind of a concurrency issue? We can certainly run a benchmark just with this URL, but besides that I'm not sure how to isolate the problem. Hopefully @cshum has a much better idea :)

gingerlime commented 2 years ago

@jcupitt besides this issue, do you have some suggestions for the types of transformations we should benchmark to get a "realistic" scenario ... also the types of images we might want to include in the benchmark (different formats, the object(s) in the image, etc). I think it can help improve the benchmark, and also potentially be useful in future as some kind of a test suite ...

gingerlime commented 2 years ago

another side-note: I also tried to run the benchmark using thumbor with SIMD, and the performance difference wasn't that big compared to imagor. Without SIMD, thumbor was at 93 req/s and with SIMD it went up to 107 req/s (compared to imagor at >600 req/s).

jcupitt commented 2 years ago

For scenarios, you could try some larger JPG images. Many smartphones now will make 6k x 4k JPGs and downsizing them to something sane must be very common.

Progressive JPG and PNG images are very difficult to handle efficiently, but maybe not very common in use.

There are the up and coming formats, like HEIC and JXL. Perhaps that's out of scope?
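
If you want a big "smartphone photo" sized input without hunting for one, you could synthesize a stand-in (a sketch; noise won't compress like a real photo, but it exercises the same decode/resize path):

$ vips gaussnoise big-test.jpg 6000 4000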

cshum commented 2 years ago

@gingerlime I slightly updated the docker-compose file, would you please try it out? https://github.com/cshum/thumbor/blob/imagor-thumbor-benchmark-2/benchmarks/locust/docker-compose.yml#L19

I did a quick run myself, and the watermark case does indeed show some errors. Watermark is probably the most complex filter in the pipeline, as it involves recursive image loading, image composition, etc.

As for what causes the error, whether it is concurrency related or image-composition related, I will need to take a closer look.

(base) ➜  locust git:(imagor-thumbor-benchmark-2) ✗ ./run_benchmark     
WARN[0000] The "TEST_HOST" variable is not set. Defaulting to a blank string. 
WARN[0000] The "TEST_HOST" variable is not set. Defaulting to a blank string. 
WARN[0000] The "THUMBOR_DOCKER_PROCS" variable is not set. Defaulting to a blank string. 
WARN[0000] The "THUMBOR_NUM_PROCESSES" variable is not set. Defaulting to a blank string. 
WARN[0000] The "TEST_HOST" variable is not set. Defaulting to a blank string. 
[+] Building 3.0s (9/9) FINISHED                                                                                                                                                                                        
 => [locust_locust internal] load build definition from Dockerfile                                                                                                                                                 0.0s
 => => transferring dockerfile: 31B                                                                                                                                                                                0.0s
 => [locust_locust-slave internal] load build definition from Dockerfile                                                                                                                                           0.0s
 => => transferring dockerfile: 31B                                                                                                                                                                                0.0s
 => [locust_locust internal] load .dockerignore                                                                                                                                                                    0.0s
 => => transferring context: 2B                                                                                                                                                                                    0.0s
 => [locust_locust-slave internal] load .dockerignore                                                                                                                                                              0.0s
 => => transferring context: 2B                                                                                                                                                                                    0.0s
 => [locust_locust internal] load metadata for docker.io/library/python:3.8                                                                                                                                        2.7s
 => [auth] library/python:pull token for registry-1.docker.io                                                                                                                                                      0.0s
 => [locust_locust-slave 1/2] FROM docker.io/library/python:3.8@sha256:4c4e6735f46e7727965d1523015874ab08f71377b3536b8789ee5742fc737059                                                                            0.0s
 => CACHED [locust_locust-slave 2/2] RUN python3.8 -m pip install locust                                                                                                                                           0.0s
 => [locust_locust] exporting to image                                                                                                                                                                             0.0s
 => => exporting layers                                                                                                                                                                                            0.0s
 => => writing image sha256:7c199897039b3eb8a9f0e1b654f2bef5b8688d610c26965a81d44cd3203f2288                                                                                                                       0.0s
 => => naming to docker.io/library/locust_locust-slave                                                                                                                                                             0.0s
 => => naming to docker.io/library/locust_locust                                                                                                                                                                   0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
WARN[0000] The "TEST_HOST" variable is not set. Defaulting to a blank string. 
WARN[0000] The "TEST_HOST" variable is not set. Defaulting to a blank string. 
WARN[0000] The "TEST_HOST" variable is not set. Defaulting to a blank string. 
WARN[0000] The "THUMBOR_DOCKER_PROCS" variable is not set. Defaulting to a blank string. 
WARN[0000] The "THUMBOR_NUM_PROCESSES" variable is not set. Defaulting to a blank string. 
[+] Running 1/0
 ⠿ locust  Warning: No resource found to remove                                                                                                                                                                    0.0s
./run_benchmark: line 7: nproc: command not found

***************** BEGIN BENCHMARK FOR imagor:8000 WITH 1 Docker processes,  imagor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.2s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.4s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.0s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.2s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.1s
locust_1  | [2022-01-07 12:20:08,805] bc636309b559/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2022-01-07 12:20:08,807] bc636309b559/INFO/locust.runners: Client '2d8d9079094b_8b0b11f963b34f289329d3660a80be2f' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2022-01-07 12:20:08,817] bc636309b559/INFO/locust.runners: Client 'bbfd37a44319_04c1de76f8c54d0b90022e39fa478002' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2022-01-07 12:20:08,843] bc636309b559/INFO/locust.runners: Client '1c61c51b04a2_21fae43e430c4f3085ece7d884dc32e7' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2022-01-07 12:20:09,809] bc636309b559/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2022-01-07 12:20:09,809] bc636309b559/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2022-01-07 12:20:09,810] bc636309b559/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2022-01-07 12:20:58,920] bc636309b559/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2022-01-07 12:22:09,674] bc636309b559/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2022-01-07 12:22:09,748] bc636309b559/INFO/locust.runners: Client '2d8d9079094b_8b0b11f963b34f289329d3660a80be2f' quit. Currently 0 clients connected.
locust_1  | [2022-01-07 12:22:09,771] bc636309b559/INFO/locust.runners: Client 'bbfd37a44319_04c1de76f8c54d0b90022e39fa478002' quit. Currently 0 clients connected.
locust_1  | [2022-01-07 12:22:09,786] bc636309b559/INFO/locust.runners: Client '1c61c51b04a2_21fae43e430c4f3085ece7d884dc32e7' quit. Currently 0 clients connected.
locust_1  | [2022-01-07 12:22:09,787] bc636309b559/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2022-01-07 12:22:10,176] bc636309b559/INFO/locust.main: Running teardowns...
locust_1  | [2022-01-07 12:22:10,678] bc636309b559/INFO/locust.main: Shutting down (exit code 1), bye.
locust_1  | [2022-01-07 12:22:10,678] bc636309b559/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                    13951     0(0.00%)  |     123       3     406     120  |  116.34    0.00
locust_1  |  GET /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   13946     0(0.00%)  |     138       3     483     130  |  116.30    0.00
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                        14343     0(0.00%)  |     166       3     526     160  |  119.61    0.00
locust_1  |  GET /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   14283     0(0.00%)  |     156       8     445     150  |  119.11    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                 14200     0(0.00%)  |      99       4     391      93  |  118.42    0.00
locust_1  |  GET /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   14075   427(3.03%)  |     486       5    1214     470  |  117.37    3.56
locust_1  |  GET /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   14200     0(0.00%)  |     138       2     401     130  |  118.42    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                         98998   427(0.43%)  |     186       2    1214     140  |  825.56    3.56
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                           120    140    160    170    200    230    260    290    370    410    410  13951
locust_1  |  GET      /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      130    160    180    190    230    260    300    320    410    480    480  13946
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                               160    190    220    230    270    310    360    390    500    520    530  14343
locust_1  |  GET      /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      150    170    190    200    240    270    310    330    390    440    450  14283
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                         93    110    130    140    160    190    230    260    300    360    390  14200
locust_1  |  GET      /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      470    600    670    710    810    890    970   1000   1100   1200   1200  14075
locust_1  |  GET      /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      130    160    180    190    220    250    280    310    390    400    400  14200
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                            140    180    210    230    360    590    770    850   1000   1100   1200  98998
locust_1  | 
locust_1  | Error report
locust_1  |  # occurrences      Error                                                                                               
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  427                GET /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg: "HTTPError('406 Client Error: Not Acceptable for url: http://imagor:8000/unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg')"
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  | 
locust_1 exited with code 1
[+] Running 7/7
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.3s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes,  imagor processes and  threads ********************

***************** BEGIN BENCHMARK FOR thumbor WITH 1 Docker processes,  thumbor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.3s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.2s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.4s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.2s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.0s
locust_1  | [2022-01-07 12:22:14,877] 8f912837a965/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2022-01-07 12:22:14,879] 8f912837a965/INFO/locust.runners: Client '0b4b47383568_5bf54bc5389347bd980db547a9e29a3d' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2022-01-07 12:22:14,880] 8f912837a965/INFO/locust.runners: Client 'c5223f4fe3b4_4801117cc1f54bd4a6afebf819e66231' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2022-01-07 12:22:15,027] 8f912837a965/INFO/locust.runners: Client '0f84debf5eaf_a78156f533b04b3d87e3aebb2804ea79' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2022-01-07 12:22:15,880] 8f912837a965/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2022-01-07 12:22:15,881] 8f912837a965/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2022-01-07 12:22:15,881] 8f912837a965/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2022-01-07 12:23:05,000] 8f912837a965/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2022-01-07 12:24:15,746] 8f912837a965/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2022-01-07 12:24:15,772] 8f912837a965/INFO/locust.runners: Client '0b4b47383568_5bf54bc5389347bd980db547a9e29a3d' quit. Currently 0 clients connected.
locust_1  | [2022-01-07 12:24:15,807] 8f912837a965/INFO/locust.runners: Client 'c5223f4fe3b4_4801117cc1f54bd4a6afebf819e66231' quit. Currently 0 clients connected.
locust_1  | [2022-01-07 12:24:15,822] 8f912837a965/INFO/locust.runners: Client '0f84debf5eaf_a78156f533b04b3d87e3aebb2804ea79' quit. Currently 0 clients connected.
locust_1  | [2022-01-07 12:24:15,822] 8f912837a965/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2022-01-07 12:24:16,249] 8f912837a965/INFO/locust.main: Running teardowns...
locust_1  | [2022-01-07 12:24:16,750] 8f912837a965/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2022-01-07 12:24:16,750] 8f912837a965/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                      559     0(0.00%)  |    4846      43    6601    6200  |    4.66    0.00
locust_1  |  GET /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     526     0(0.00%)  |    5093      47    6606    6200  |    4.39    0.00
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                          545     0(0.00%)  |    4690      80    6601    6200  |    4.55    0.00
locust_1  |  GET /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     521     0(0.00%)  |    4945     114    6624    6200  |    4.35    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                   560     0(0.00%)  |    4788      33    6575    6200  |    4.67    0.00
locust_1  |  GET /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     539     0(0.00%)  |    4792     117    6574    6200  |    4.50    0.00
locust_1  |  GET /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     527     0(0.00%)  |    5052     119    6629    6200  |    4.40    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                          3777     0(0.00%)  |    4884      33    6629    6200  |   31.51    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                          6200   6300   6300   6300   6400   6500   6500   6500   6600   6600   6600    559
locust_1  |  GET      /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     6200   6300   6400   6400   6400   6500   6500   6500   6600   6600   6600    526
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                              6200   6300   6300   6400   6400   6500   6500   6500   6600   6600   6600    545
locust_1  |  GET      /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     6200   6300   6400   6400   6400   6500   6500   6600   6600   6600   6600    521
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                       6200   6300   6300   6400   6400   6500   6500   6500   6600   6600   6600    560
locust_1  |  GET      /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     6200   6300   6300   6400   6400   6500   6500   6500   6600   6600   6600    539
locust_1  |  GET      /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     6200   6300   6300   6400   6400   6500   6500   6500   6600   6600   6600    527
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                           6200   6300   6300   6400   6400   6500   6500   6500   6600   6600   6600   3777
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        4.9s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.1s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes,  thumbor processes and  threads ********************

(base) ➜  locust git:(imagor-thumbor-benchmark-2) ✗ 
cshum commented 2 years ago

@jcupitt Imagor produces more verbose logs when the -debug argument is enabled, including the VIPS-related logs.

http://localhost:8000/unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg

e.g. the above image URL request will print the logs below. The "filter" log group shows how long each filter took to process; "watermark" is by far the heaviest, taking ~122ms, while the other filters take only a few milliseconds:

(base) ➜  ~ docker run -p 8000:8000 shumc/imagor -imagor-unsafe -debug
2022-01-07T12:28:15.031Z    DEBUG   imagor/imagor.go:380    imagor  {"unsafe": true, "request_timeout": "30s", "load_timeout": "20s", "save_timeout": "20s", "cache_header_ttl": "24h0m0s", "loaders": ["HTTPLoader"], "storages": [], "result_loaders": [], "result_storages": [], "processors": ["VipsProcessor"]}
2022-01-07T12:28:15.033Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "found /usr/local/lib/vips-modules-8.12"}
2022-01-07T12:28:15.033Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "using configure-time prefix"}
2022-01-07T12:28:15.033Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "VIPS_PREFIX = /usr/local"}
2022-01-07T12:28:15.033Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "VIPS_LIBDIR = /usr/local/lib"}
2022-01-07T12:28:15.033Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "prefix = /usr/local"}
2022-01-07T12:28:15.033Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "libdir = /usr/local/lib"}
2022-01-07T12:28:15.035Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "searching \"/usr/local/lib/vips-modules-8.12\""}
2022-01-07T12:28:15.035Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "loading \"/usr/local/lib/vips-modules-8.12/vips-heif.so\""}
2022-01-07T12:28:15.040Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "loading \"/usr/local/lib/vips-modules-8.12/vips-magick.so\""}
2022-01-07T12:28:15.042Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "loading \"/usr/local/lib/vips-modules-8.12/vips-poppler.so\""}
2022-01-07T12:28:15.058Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "loading \"/usr/local/lib/vips-modules-8.12/vips-openslide.so\""}
2022-01-07T12:28:15.062Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "searching \"/usr/local/lib/vips-plugins-8.12\""}
2022-01-07T12:28:15.064Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "vips 8.12.1 started with concurrency=1 cache_max_files=0 cache_max_mem=0 cache_max=0"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=svg"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=tiff"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=webp"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=heif"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=heif"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=pdf"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=png"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=magick"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=gif"}
2022-01-07T12:28:15.068Z    INFO    vipsprocessor/vips.go:82    govips  {"log": "registered image type loader type=jpeg"}
2022-01-07T12:28:15.068Z    INFO    server/server.go:82 listen  {"addr": ":8000"}
2022-01-07T12:28:17.317Z    DEBUG   imagor/imagor.go:339    acquire {"key": "res:fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg"}
2022-01-07T12:28:17.317Z    DEBUG   imagor/imagor.go:339    acquire {"key": "img:upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg"}
2022-01-07T12:28:17.646Z    DEBUG   imagor/imagor.go:287    loaded  {"key": "upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg"}
2022-01-07T12:28:17.647Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "thumbnailing 227801 bytes of data"}
2022-01-07T12:28:17.649Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "selected loader is VipsForeignLoadJpegBuffer"}
2022-01-07T12:28:17.649Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "input size is 1200 x 800"}
2022-01-07T12:28:17.649Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "loading with factor 1 pre-shrink"}
2022-01-07T12:28:17.651Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "pre-shrunk size is 1200 x 800"}
2022-01-07T12:28:17.651Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "converting to processing space srgb"}
2022-01-07T12:28:17.651Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "residual reducev by 0.5"}
2022-01-07T12:28:17.651Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "reducev: 13 point mask"}
2022-01-07T12:28:17.653Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "reducev: using vector path"}
2022-01-07T12:28:17.654Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "reducev sequential line cache"}
2022-01-07T12:28:17.654Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "residual reduceh by 0.5"}
2022-01-07T12:28:17.654Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "reduceh: 13 point mask"}
2022-01-07T12:28:17.655Z    DEBUG   vipsprocessor/vips.go:80    govips  {"log": "created imageref 0xc0000780f0"}
2022-01-07T12:28:17.656Z    DEBUG   vipsprocessor/process.go:119    filter  {"name": "fill", "args": "white", "took": "216.4µs"}
2022-01-07T12:28:17.656Z    DEBUG   imagor/imagor.go:339    acquire {"key": "img:raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png"}
2022-01-07T12:28:17.774Z    DEBUG   imagor/imagor.go:287    loaded  {"key": "raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png"}
2022-01-07T12:28:17.774Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "thumbnailing 6962 bytes of data"}
2022-01-07T12:28:17.775Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "selected loader is VipsForeignLoadPngBuffer"}
2022-01-07T12:28:17.775Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "input size is 202 x 259"}
2022-01-07T12:28:17.775Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "loading with factor 1 pre-shrink"}
2022-01-07T12:28:17.775Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "pre-shrunk size is 202 x 259"}
2022-01-07T12:28:17.775Z    INFO    vipsprocessor/vips.go:82    VIPS    {"log": "converting to processing space srgb"}
2022-01-07T12:28:17.775Z    DEBUG   vipsprocessor/vips.go:80    govips  {"log": "created imageref 0xc00048e050"}
2022-01-07T12:28:17.778Z    DEBUG   vipsprocessor/process.go:119    filter  {"name": "watermark", "args": "raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10", "took": "121.9689ms"}
2022-01-07T12:28:17.783Z    DEBUG   vipsprocessor/process.go:119    filter  {"name": "hue", "args": "290", "took": "4.9938ms"}
2022-01-07T12:28:17.788Z    DEBUG   vipsprocessor/process.go:119    filter  {"name": "saturation", "args": "100", "took": "4.7259ms"}
2022-01-07T12:28:17.790Z    DEBUG   vipsprocessor/process.go:119    filter  {"name": "fill", "args": "yellow", "took": "1.6883ms"}
2022-01-07T12:28:17.790Z    DEBUG   vipsprocessor/process.go:119    filter  {"name": "format", "args": "jpeg", "took": "11.9µs"}
2022-01-07T12:28:17.790Z    DEBUG   vipsprocessor/process.go:119    filter  {"name": "quality", "args": "80", "took": "10.7µs"}
2022-01-07T12:28:18.060Z    DEBUG   vipsprocessor/vips.go:80    govips  {"log": "closing image 0xc00048e050"}
2022-01-07T12:28:18.064Z    DEBUG   imagor/imagor.go:204    processed   {"params": {"path":"fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg","image":"upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg","unsafe":true,"fit_in":true,"width":600,"height":400,"filters":[{"name":"fill","args":"white"},{"name":"watermark","args":"raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10"},{"name":"hue","args":"290"},{"name":"saturation","args":"100"},{"name":"fill","args":"yellow"},{"name":"format","args":"jpeg"},{"name":"quality","args":"80"}]}, "meta": {"format":"jpeg","content_type":"image/jpeg","width":600,"height":400,"orientation":1}}
cshum commented 2 years ago

@jcupitt for a repeated watermark, imagor currently composites the overlay exhaustively in a loop, once per repeat: https://github.com/cshum/imagor/blob/master/processor/vipsprocessor/filter.go#L163

jcupitt commented 2 years ago

For larger images, that'll be pretty slow, I think, and there's a danger of stack overflow. I expect it would be faster to expand the image to a tile, then repeat the tile with replicate and composite it once.

eg. in Python:

#!/usr/bin/python3

import sys
import pyvips

im = pyvips.Image.new_from_file(sys.argv[1], access="sequential")

# make the watermark 
text = pyvips.Image.text(sys.argv[3], width = 500, dpi = 300).rotate(45)
blue = text.new_from_image([128, 128, 255]).copy(interpretation ="srgb")
# use the text as the alpha, scaled down to make it semi-transparent
text = blue.bandjoin(text * 0.3).cast("uchar")

# replicate many times to cover the image
overlay = text \
        .replicate(1 + im.width / text.width, 1 + im.height / text.height) \
        .crop(0, 0, im.width, im.height)

# composite on top of the image
im = im.composite(overlay, "over")

im.write_to_file(sys.argv[2])

Run with:

$ time ./watermark.py ~/pics/theo.jpg x.jpg "HELLO^MTHERE"
real    0m0.216s
user    0m1.880s
sys 0m0.086s

So ~200ms for a 6k x 4k jpg.


gingerlime commented 2 years ago

@cshum I updated it. Just curious, but wouldn't FILE_LOADER_BASE_DIR cache the results on the filesystem? At least that's the case with thumbor, which is why I specified no_storage so it's a true benchmark. I'm not sure how imagor uses the loader and storage, though...

cshum commented 2 years ago

@gingerlime
FILE_RESULT_LOADER_BASE_DIR + FILE_RESULT_STORAGE_BASE_DIR cache the processed image; FILE_LOADER_BASE_DIR + FILE_STORAGE_BASE_DIR cache the original image.

Without FILE_LOADER_BASE_DIR, Imagor falls back to the last resort, i.e. the HTTP loader.

In this case, we want the original image fetched from file (instead of making an HTTP request to the remote image every single time), but we still want it processed on every request. So we should enable "file loader + storage" (FILE_LOADER_BASE_DIR + FILE_STORAGE_BASE_DIR) but disable "file result loader + storage" (leave FILE_RESULT_LOADER_BASE_DIR and FILE_RESULT_STORAGE_BASE_DIR unset).

I do agree this needs to be better documented https://github.com/cshum/imagor/issues/11#issuecomment-1003992109 Will try to plan that out at some point...

gingerlime commented 2 years ago

Ok, cool. Just curious, but why do you need two different settings for the loader and the storage? Or in other words, what's the difference between LOADER and STORAGE?

cshum commented 2 years ago

Basically loader and storage are the source and destination of the original image; result loader and result storage are the source and destination of the processed image.

The Imagor library accepts an array of loader and storage adaptors. The image pipeline currently works this way:

File Result Loader -> S3 Result Loader -> File Loader -> S3 Loader -> HTTP Loader -> File Storage -> S3 Storage -> VIPS Processor -> File Result Storage -> S3 Result Storage

The reason for having separate loader and storage config is flexibility: for example, you could load originals from one location (say an S3 bucket) while storing them somewhere else (say the local filesystem).

Though in normal circumstances, yes, you would want the loader and storage pair to point at the same location.
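
To make that fallthrough concrete, here is a minimal Go sketch of such a chain. The Loader/Storage interfaces and loadOriginal below are hypothetical illustrations only, not imagor's actual adaptor API:

package main

import (
    "errors"
    "fmt"
)

// Hypothetical interfaces for illustration only; imagor's real adaptor
// interfaces differ from these.
type Loader interface {
    Load(key string) ([]byte, error)
}

type Storage interface {
    Save(key string, blob []byte) error
}

// memStore is an in-memory stand-in for a file or S3 loader/storage pair.
type memStore map[string][]byte

func (m memStore) Load(key string) ([]byte, error) {
    if b, ok := m[key]; ok {
        return b, nil
    }
    return nil, errors.New("miss")
}

func (m memStore) Save(key string, blob []byte) error {
    m[key] = blob
    return nil
}

// loadOriginal walks the loader chain in order; the first hit is written back
// to every storage so the next request is served locally instead of over HTTP.
func loadOriginal(key string, loaders []Loader, storages []Storage) ([]byte, error) {
    for _, l := range loaders {
        blob, err := l.Load(key)
        if err != nil {
            continue // miss: fall through to the next source in the chain
        }
        for _, s := range storages {
            _ = s.Save(key, blob) // best-effort write-back
        }
        return blob, nil
    }
    return nil, errors.New("image not found by any loader")
}

func main() {
    fileStorage := memStore{}                                // stands in for file loader + storage
    httpLoader := memStore{"a.jpg": []byte("original jpeg")} // stands in for the HTTP loader

    // First request: file storage misses, HTTP loader hits, result is written back.
    if _, err := loadOriginal("a.jpg", []Loader{fileStorage, httpLoader}, []Storage{fileStorage}); err != nil {
        panic(err)
    }
    // Second request: served from file storage without touching the HTTP loader.
    blob, _ := fileStorage.Load("a.jpg")
    fmt.Println(string(blob))
}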

gingerlime commented 2 years ago

Gotcha! Thank you. So for the benchmark, we actually don't need to specify the loader base dir, only the storage one, right? (we will always initially load images from HTTP). Although I guess it doesn't hurt.

gingerlime commented 2 years ago

> Though in normal circumstances, yes you would want the loader and storage pair to be on the same bucket.

I guess it depends, but I would probably store locally (as a local cache for better performance) rather than on S3, even if I might load from S3 ...

cshum commented 2 years ago

If we always fetch live over HTTP, there are lots of external factors that can skew the results, e.g. rate limiting by the remote image source, network speed, etc.

gingerlime commented 2 years ago

Yes, that's another reason to want to keep a local copy after the initial load. But since you mentioned using a bucket for both loader and storage, I would say that it's better for performance (and cost) to store locally rather than on an S3 bucket.

cshum commented 2 years ago

The above loader/storage options are all optional; you choose to enable the adaptors that fit your use case.

My production use case heavily relies on S3 though. Of course it is slower, but in a server cluster environment (e.g. k8s, ECS, etc.) it is relatively hassle-free compared with, say, EFS. And with the server/pod/instance being stateless (not depending on the local file system), deployment is simplified a lot, e.g. you can place it on a random EC2 spot instance without worrying too much.

cshum commented 2 years ago

@jcupitt The vips_replicate approach has been implemented. I also fixed a number of IO/concurrency related cases.

Having checked the remaining error messages,

png2vips: unable to read source source

seems to be VIPS specific. What would be the usual cause of it?

Attached are the verbose logs from the benchmark. However, they are highly concurrent, so it is not easy to reconstruct the sequence of a single request. vips.log

There are no errors from the benchmark if watermark is disabled, so watermark is the place to look.

jcupitt commented 2 years ago

There should be an error before that, but it's not in the log, unfortunately.

Could it be a GC issue? Perhaps the memory area is being freed and reused before the PNG reader gets to work? That would cause exactly this kind of intermittent read failure. Watermark is the slowest operation, so it's where you'd expect races on GC to occur.

It's a horrible suggestion, I know :( the worst kind of thing to try to debug.

jcupitt commented 2 years ago

Those error messages are being generated here:

https://github.com/libvips/libvips/blob/master/libvips/foreign/vipspng.c#L789-L822

It looks like govips is just showing the last error message in the vips error log, so the earlier ones (which might be more meaningful) are being hidden.

You could try deleting those two vips_error lines and see if you get something more useful.

(I've removed these two not-useful error messages from master)

cshum commented 2 years ago

@jcupitt interestingly, when I omit closing the overlay image at the end of the watermark function, https://github.com/cshum/imagor/blob/master/processor/vipsprocessor/filter.go#L108 the error from the benchmark test seems to be gone...

This is obviously a bad idea though, haha. But it does give clues that this is GC/memory related, or at least the right direction to look into.

govips's image.Close method calls the C function g_clear_object: https://github.com/davidbyttow/govips/blob/master/vips/image.go#L492 https://github.com/davidbyttow/govips/blob/3a8eb34fbba3ff92c8a7450dd8309a117ed5d447/vips/image.c

jcupitt commented 2 years ago

I'd think you should just unref overlay, but I don't know how govips handles this stuff.

cshum commented 2 years ago

@jcupitt Is it safe to unref overlay right after vips_composite? Or should that be done after the main image export?

jcupitt commented 2 years ago

The docs are here:

https://www.libvips.org/API/current/VipsObject.html#VipsObject.description

You can safely unref after last use, ie. when you pass an object as an argument, the thing you pass it to takes a ref if it needs one.

cshum commented 2 years ago

@jcupitt from my limited understanding (I could be wrong), the image ref in govips holds the Go buffer of the image, which shares the same memory block with the vips buffer https://www.libvips.org/API/current/VipsForeignSave.html#vips-pngload-buffer. And this memory block is not part of the VipsObject (again, I could be wrong), so it may be outside the VipsObject lifecycle and is handled differently.

So Go and C share this same memory block (again, I could be wrong), and the govips image.Close() method tells Go's runtime that the memory block can be GC-ed. If Go's GC kicks in prematurely for some reason, that could lead to the error. I made changes so image.Close() is called at the very end, after the vips save_buffer has completed. That particular error seems to be fixed now, though I am also watching it in production for memory usage, side effects etc.
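
As a toy illustration of that ordering change (placeholder types only; this is not govips, and only the idea of deferring image.Close() to the very end carries over):

package main

import "fmt"

// imageHandle is a toy stand-in for a govips image ref whose Close() releases
// the pixel buffer shared between Go and C. It exists only to illustrate the
// ordering; it is not govips.
type imageHandle struct{ buf []byte }

func (h *imageHandle) Close() {
    h.buf = nil
    fmt.Println("buffer released")
}

// export is a placeholder for the vips save_buffer step, which still reads the
// source buffer while encoding.
func export(h *imageHandle) []byte {
    return append([]byte("jpeg:"), h.buf...)
}

func process(src []byte) []byte {
    img := &imageHandle{buf: src}
    // Release the buffer only after the request is completely done with it:
    // closing earlier, or letting the GC reclaim the block while libvips may
    // still read it, is the kind of race that would produce intermittent
    // "unable to read source" errors under load.
    defer img.Close()

    // ... thumbnail, watermark and other filters would run here ...

    return export(img) // the buffer must still be alive at this point
}

func main() {
    fmt.Println(string(process([]byte("original bytes"))))
}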

@gingerlime I deployed docker image shumc/imagor:0.7.2, which attempts to fix these issues. I also updated the configuration so that "loader" becomes part of "storage", as the previous configuration was a bit confusing. Now only FILE_STORAGE_BASE_DIR needs to be set: https://github.com/cshum/thumbor/blob/imagor-thumbor-benchmark-2/benchmarks/locust/docker-compose.yml#L18

jcupitt commented 2 years ago

If Go is using memory blocks (one per request, perhaps?) then I'd leave all unrefs until the end of the request. Maybe have a to-be-unreffed list somewhere and just add refs to that?

cshum commented 2 years ago

@jcupitt yes I did exactly that. Basically a method that adds to the ref list, and a method to unref all. https://github.com/cshum/imagor/blob/master/processor/vipsprocessor/imagerefs.go
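
A minimal sketch of that to-be-unreffed list pattern, with hypothetical names (the actual implementation is the imagerefs.go linked above):

package main

import "fmt"

// closer is anything that holds a C-side resource (such as an image ref) and
// must be released exactly once, after the request has finished with it.
type closer interface {
    Close()
}

// refTracker collects every ref created while serving a single request so
// they can all be released together at the very end, after the final export.
type refTracker struct {
    refs []closer
}

func (t *refTracker) Add(c closer) {
    t.refs = append(t.refs, c)
}

func (t *refTracker) CloseAll() {
    for _, c := range t.refs {
        c.Close()
    }
    t.refs = nil
}

// fakeImage stands in for a govips image ref in this sketch.
type fakeImage struct{ name string }

func (f *fakeImage) Close() { fmt.Println("closed:", f.name) }

func main() {
    t := &refTracker{}
    defer t.CloseAll() // nothing is freed until the whole request is done

    t.Add(&fakeImage{name: "source image"})
    t.Add(&fakeImage{name: "watermark overlay"})

    // ... composite, filters and export happen here; every ref stays alive ...
    fmt.Println("exported result")
}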

I observed the memory usage in production for around a day, and it has been pretty stable. I also ran the benchmark test and it all passed, with the results below.

I would call it a wrap :)

***************** BEGIN BENCHMARK FOR imagor:8000 WITH 1 Docker processes,  imagor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.4s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.0s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.3s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.3s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.4s
locust_1  | [2022-01-10 16:11:34,435] eaded605f176/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2022-01-10 16:11:34,453] eaded605f176/INFO/locust.runners: Client '29a6d1127ed2_f82bafccb6e9465caf8846f51e9ae85e' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2022-01-10 16:11:34,552] eaded605f176/INFO/locust.runners: Client 'bd36831851da_30a7b38732f4437d8150237709b8ed28' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2022-01-10 16:11:34,566] eaded605f176/INFO/locust.runners: Client '1ebd49a128d6_ddc83eab2b7e4883b6f0626a1ec49215' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2022-01-10 16:11:35,439] eaded605f176/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2022-01-10 16:11:35,440] eaded605f176/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2022-01-10 16:11:35,440] eaded605f176/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2022-01-10 16:12:24,523] eaded605f176/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2022-01-10 16:13:35,298] eaded605f176/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2022-01-10 16:13:35,320] eaded605f176/INFO/locust.runners: Client '29a6d1127ed2_f82bafccb6e9465caf8846f51e9ae85e' quit. Currently 0 clients connected.
locust_1  | [2022-01-10 16:13:35,325] eaded605f176/INFO/locust.runners: Client 'bd36831851da_30a7b38732f4437d8150237709b8ed28' quit. Currently 0 clients connected.
locust_1  | [2022-01-10 16:13:35,328] eaded605f176/INFO/locust.runners: Client '1ebd49a128d6_ddc83eab2b7e4883b6f0626a1ec49215' quit. Currently 0 clients connected.
locust_1  | [2022-01-10 16:13:35,328] eaded605f176/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2022-01-10 16:13:35,801] eaded605f176/INFO/locust.main: Running teardowns...
locust_1  | [2022-01-10 16:13:36,303] eaded605f176/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2022-01-10 16:13:36,303] eaded605f176/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                    13636     0(0.00%)  |     125       2     875     120  |  113.80    0.00
locust_1  |  GET /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   13636     0(0.00%)  |     143       3     929     140  |  113.80    0.00
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                        13809     0(0.00%)  |     164       3    1153     150  |  115.24    0.00
locust_1  |  GET /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   13755     0(0.00%)  |     162       4     945     150  |  114.79    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                 13833     0(0.00%)  |     102       3     991      94  |  115.44    0.00
locust_1  |  GET /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   13488     0(0.00%)  |     515       7    1532     500  |  112.56    0.00
locust_1  |  GET /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg   13800     0(0.00%)  |     140       3     955     130  |  115.16    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                         95957     0(0.00%)  |     192       2    1532     140  |  800.78    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                           120    140    160    170    210    240    280    310    630    870    880  13636
locust_1  |  GET      /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      140    160    180    190    230    270    310    350    870    920    930  13636
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                               150    190    210    230    270    310    370    420   1000   1100   1200  13809
locust_1  |  GET      /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      150    180    200    210    250    280    330    390    610    920    950  13755
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                         94    110    130    140    160    190    250    290    620    910    990  13833
locust_1  |  GET      /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      500    640    710    750    860    950   1000   1100   1300   1500   1500  13488
locust_1  |  GET      /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg      130    160    180    190    230    260    310    350    920    950    960  13800
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                            140    180    210    230    380    630    820    910   1100   1300   1500  95957
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.4s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.1s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes,  imagor processes and  threads ********************

***************** BEGIN BENCHMARK FOR thumbor WITH 1 Docker processes,  thumbor processes ********************

[+] Running 7/7
 ⠿ Network locust_default           Created                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_3  Started                                                                                                                                                                        1.6s
 ⠿ Container locust_thumbor_1       Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust_1        Started                                                                                                                                                                        1.3s
 ⠿ Container locust_locust-slave_1  Started                                                                                                                                                                        1.5s
 ⠿ Container locust_locust-slave_2  Started                                                                                                                                                                        1.3s
 ⠿ Container locust_imagor_1        Started                                                                                                                                                                        1.2s
locust_1  | [2022-01-10 16:13:40,577] c84fd63ea22c/INFO/root: Waiting for workers to be ready, 0 of 3 connected
locust_1  | [2022-01-10 16:13:40,687] c84fd63ea22c/INFO/locust.runners: Client 'c85b8ef5e7fb_f01d352ed24e4d41a758c173124e0816' reported as ready. Currently 1 clients ready to swarm.
locust_1  | [2022-01-10 16:13:40,697] c84fd63ea22c/INFO/locust.runners: Client '7fca76661c9f_68856614dc0846d6aa3a05268623ffeb' reported as ready. Currently 2 clients ready to swarm.
locust_1  | [2022-01-10 16:13:40,789] c84fd63ea22c/INFO/locust.runners: Client '1c1835217843_2a1e4fd0c1564083b5a45da65026b8c0' reported as ready. Currently 3 clients ready to swarm.
locust_1  | [2022-01-10 16:13:41,544] c84fd63ea22c/INFO/locust.main: Run time limit set to 120 seconds
locust_1  | [2022-01-10 16:13:41,544] c84fd63ea22c/INFO/locust.main: Starting Locust 2.5.1
locust_1  | [2022-01-10 16:13:41,544] c84fd63ea22c/INFO/locust.runners: Sending spawn jobs of 200 users at 4.00 spawn rate to 3 ready clients
locust_1  | [2022-01-10 16:14:30,657] c84fd63ea22c/INFO/locust.runners: All users spawned: {"Benchmark": 196} (196 total users)
locust_1  | [2022-01-10 16:15:41,404] c84fd63ea22c/INFO/locust.main: --run-time limit reached. Stopping Locust
locust_1  | [2022-01-10 16:15:41,438] c84fd63ea22c/INFO/locust.runners: Client '7fca76661c9f_68856614dc0846d6aa3a05268623ffeb' quit. Currently 0 clients connected.
locust_1  | [2022-01-10 16:15:41,443] c84fd63ea22c/INFO/locust.runners: Client '1c1835217843_2a1e4fd0c1564083b5a45da65026b8c0' quit. Currently 0 clients connected.
locust_1  | [2022-01-10 16:15:41,466] c84fd63ea22c/INFO/locust.runners: Client 'c85b8ef5e7fb_f01d352ed24e4d41a758c173124e0816' quit. Currently 0 clients connected.
locust_1  | [2022-01-10 16:15:41,466] c84fd63ea22c/INFO/locust.runners: The last worker quit, stopping test.
locust_1  | [2022-01-10 16:15:41,907] c84fd63ea22c/INFO/locust.main: Running teardowns...
locust_1  | [2022-01-10 16:15:42,408] c84fd63ea22c/INFO/locust.main: Shutting down (exit code 0), bye.
locust_1  | [2022-01-10 16:15:42,408] c84fd63ea22c/INFO/locust.main: Cleaning up runner...
locust_1  |  Name                                                                              # reqs      # fails  |     Avg     Min     Max  Median  |   req/s failures/s
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  GET /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                      557     0(0.00%)  |    4870      92    6818    6200  |    4.65    0.00
locust_1  |  GET /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     524     0(0.00%)  |    5108     123    6850    6300  |    4.37    0.00
locust_1  |  GET /unsafe/500x150/i.imgur.com/Nfn80ck.png                                          516     0(0.00%)  |    4793      34    6831    6100  |    4.31    0.00
locust_1  |  GET /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     493     0(0.00%)  |    5124     129    6829    6200  |    4.11    0.00
locust_1  |  GET /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                   540     0(0.00%)  |    4826      94    6814    6200  |    4.51    0.00
locust_1  |  GET /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     559     0(0.00%)  |    4998      69    6830    6300  |    4.67    0.00
locust_1  |  GET /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     516     0(0.00%)  |    5072     130    6831    6200  |    4.31    0.00
locust_1  | ----------------------------------------------------------------------------------------------------------------------------------------------------------------
locust_1  |  Aggregated                                                                          3705     0(0.00%)  |    4968      34    6850    6200  |   30.92    0.00
locust_1  | 
locust_1  | Response time percentiles (approximated)
locust_1  |  Type     Name                                                                                  50%    66%    75%    80%    90%    95%    98%    99%  99.9% 99.99%   100% # reqs
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  GET      /unsafe/100x150/top/i.imgur.com/Nfn80ck.png                                          6200   6400   6400   6500   6600   6700   6800   6800   6800   6800   6800    557
locust_1  |  GET      /unsafe/400x400/top/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     6300   6400   6500   6500   6600   6700   6800   6800   6900   6900   6900    524
locust_1  |  GET      /unsafe/500x150/i.imgur.com/Nfn80ck.png                                              6100   6400   6400   6500   6500   6700   6700   6800   6800   6800   6800    516
locust_1  |  GET      /unsafe/500x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     6200   6400   6400   6500   6600   6700   6700   6800   6800   6800   6800    493
locust_1  |  GET      /unsafe/fit-in/100x150/i.imgur.com/Nfn80ck.png                                       6200   6400   6500   6500   6600   6700   6800   6800   6800   6800   6800    540
locust_1  |  GET      /unsafe/fit-in/600x400/filters:fill(white):watermark(raw.githubusercontent.com/cshum/imagor/master/testdata/gopher-front.png,repeat,bottom,10):hue(290):saturation(100):fill(yellow):format(jpeg):quality(80)/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     6300   6400   6500   6500   6600   6700   6700   6800   6800   6800   6800    559
locust_1  |  GET      /unsafe/fit-in/600x400/upload.wikimedia.org/wikipedia/commons/a/a9/St_Francis_Seminary.jpg     6200   6400   6400   6500   6600   6700   6800   6800   6800   6800   6800    516
locust_1  | --------|--------------------------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
locust_1  |  None     Aggregated                                                                           6200   6400   6400   6500   6600   6700   6800   6800   6800   6900   6900   3705
locust_1  | 
locust_1 exited with code 0
[+] Running 7/7
 ⠿ Container locust_imagor_1        Removed                                                                                                                                                                        0.3s
 ⠿ Container locust_thumbor_1       Removed                                                                                                                                                                        4.9s
 ⠿ Container locust_locust-slave_1  Removed                                                                                                                                                                        0.2s
 ⠿ Container locust_locust-slave_2  Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust_1        Removed                                                                                                                                                                        0.1s
 ⠿ Container locust_locust-slave_3  Removed                                                                                                                                                                        0.1s
 ⠿ Network locust_default           Removed                                                                                                                                                                        0.1s

***************** END BENCHMARK WITH 1 Docker processes,  thumbor processes and  threads ********************