benchmarks

Serving a 13 MB image over 100 concurrent connections, 1000 requests in total. Measured with ApacheBench 2.3.
Concurrency Level: 100
Time taken for tests: 2.357 seconds
Complete requests: 1000
Failed requests: 0
Keep-Alive requests: 0
Total transferred: 14488596000 bytes
HTML transferred: 14488498000 bytes
Requests per second: 424.24 [#/sec] (mean)
Time per request: 235.718 [ms] (mean)
Time per request: 2.357 [ms] (mean, across all concurrent requests)
Transfer rate: 6002519.76 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0    1.0      0      5
Processing:   178  233   12.7    235    280
Waiting:        0    6   10.0      3     64
Total:        178  234   12.6    235    280
Percentage of the requests served within a certain time (ms)
50% 235
66% 236
75% 237
80% 237
90% 243
95% 243
98% 266
99% 273
100% 280 (longest request)
About 6 GB/s (roughly 49 Gbit/s) on a single mobile CPU core. That's faster than nginx in both response time and throughput on my machine, although CPU usage remains higher.
motivation
Serving static files with Served is painfully slow and has a terrible memory footprint.
served::response::to_buffer() seems to copy a lot.

proposed user facing changes
Add a single method.
possible usage
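This section is empty in the issue; assuming the new method ends up on served::response under a name like set_body_from_file (name and signature hypothetical — only the surrounding multiplexer/server API is served's real interface), usage might look like:

```cpp
#include <served/served.hpp>

int main() {
    served::multiplexer mux;

    mux.handle("/image").get([](served::response &res, const served::request &) {
        res.set_header("Content-Type", "image/jpeg");
        // Hypothetical new method: hand the response a file path instead of
        // a pre-filled buffer, so the body can be sent without copying.
        res.set_body_from_file("image.jpg");
    });

    served::net::server server("127.0.0.1", "8080", mux);
    server.run(1);  // single thread, as in the benchmark
    return 0;
}
```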