Varnish is very flexible and as such can be used as a caching engine, a load balancer, a web application firewall, or an edge authentication and authorization mechanism. Other use cases include HTTP routing, hotlink protection, DDoS mitigation and a lot more. At its core, Varnish is a reverse-proxy HTTP accelerator designed for content-heavy dynamic web sites as well as heavily consumed API endpoints.
To compare its performance across architectures I am going to use it as an HTTP accelerator in front of a simple REST API — a GET endpoint written in Go that just returns “Hello World” without reading from or writing to disk or the network:
package main

// run with: env PORT=8081 go run http-server.go

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	port := os.Getenv("PORT")
	if port == "" {
		log.Fatal("Please specify the HTTP port as environment variable, e.g. env PORT=8081 go run http-server.go")
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Hello World")
	})

	log.Fatal(http.ListenAndServe(":"+port, nil))
}
The VMs I am going to use are the same as in my previous similar posts:
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: 0x48
Model: 0
Stepping: 0x1
CPU max MHz: 2400.0000
CPU min MHz: 2400.0000
BogoMIPS: 200.00
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 32 MiB
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
Note: the VMs are as close as possible in their hardware capabilities — same type and amount of RAM, same disks, network cards and bandwidth. Also the CPUs are as similar as possible but there are some differences:
the CPU frequency: 3000 MHz (x86_64) vs 2400 MHz (aarch64)
BogoMIPS: 6000 (x86_64) vs 200 (aarch64)
Level 1 caches: 128 KiB (x86_64) vs 512 KiB (aarch64)
Both VMs run Ubuntu 20.04 with the latest software updates.
Varnish Cache is built from source, from the master branch!
As the load testing client I will use Vegeta. The client application runs on a third VM on the same network as the two above!
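The build itself follows the usual autotools flow described in the Varnish developer documentation — roughly this, assuming the build dependencies (autoconf, automake, libtool, a C toolchain, etc.) are already installed:

```sh
git clone https://github.com/varnishcache/varnish-cache.git
cd varnish-cache
./autogen.sh
./configure
make
sudo make install
```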
Usually Vegeta is used to measure latency at a constant rate/throughput — by passing -rate N/s, where N is some positive number. As explained here, I am going to use -rate infinity -max-workers M instead, where M is an empirically found number that loads the load-client VM's CPU at 80–85%. By using -rate infinity I want to find the highest throughput the backend can serve.
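Put together, a typical invocation looks roughly like this (the target IP matches the wrk output below; the worker count and duration are example values, not recommendations):

```sh
# Attack as fast as possible with a fixed worker pool, then summarize.
echo "GET http://192.168.0.232:8080/" | \
  vegeta attack -rate=infinity -max-workers=104 -duration=30s | \
  vegeta report
```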
Note: In the first version of this test I used wrk as the HTTP load testing client, but as noticed by the Varnish community, there seems to be a bug in the calculation of its latency statistics — the standard deviation is bigger than the average, e.g.:
Running 30s test @ http://192.168.0.232:8080
8 threads and 96 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 655.40us 798.70us 28.43ms 90.52%
Req/Sec 20.95k 1.92k 28.68k 68.25%
5010594 requests in 30.07s, 611.64MB read
Requests/sec: 166625.40
Transfer/sec: 20.34MB
For a strictly non-negative quantity like latency, a standard deviation larger than the mean is only possible with a heavily skewed distribution — and these numbers looked suspicious enough to switch tools.
To set a baseline I will execute the load client directly against the Go-based service running on both VMs:
env PORT=8080 go run http-server.go (Fish shell syntax)
where XYZ is the IP of the other VM, i.e. the Varnish Cache running on aarch64 points to the Go HTTP server running on x86_64, and vice versa. This is not really important, because Varnish hits the backend server just once and from then on serves the response from its cache, so a backend server running on the same host wouldn't add extra load to the system in this specific test.
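The Varnish configuration itself is minimal — a sketch of the VCL, with XYZ again standing for the other VM's IP:

```vcl
vcl 4.1;

# Single backend: the Go HTTP server on the other VM.
backend default {
    .host = "XYZ";
    .port = "8080";
}
```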
The results from running Vegeta 5 times against Varnish are:
The throughput is almost the same for both architectures, and the aarch64 VM shows slightly better latency!
Here is the output of varnishstat -n /home/ubuntu/varnish/work -1 for both instances:
aarch64
MGT.uptime 6930 1.00 Management process uptime
MGT.child_start 1 0.00 Child process started
MGT.child_exit 0 0.00 Child process normal exit
MGT.child_stop 0 0.00 Child process unexpected exit
MGT.child_died 0 0.00 Child process died (signal)
MGT.child_dump 0 0.00 Child process core dumped
MGT.child_panic 0 0.00 Child process panic
MAIN.summs 471698 68.06 stat summ operations
MAIN.uptime 6931 1.00 Child process uptime
MAIN.sess_conn 714 0.10 Sessions accepted
MAIN.sess_fail 0 0.00 Session accept failures
MAIN.sess_fail_econnaborted 0 0.00 Session accept failures: connection aborted
MAIN.sess_fail_eintr 0 0.00 Session accept failures: interrupted system call
MAIN.sess_fail_emfile 0 0.00 Session accept failures: too many open files
MAIN.sess_fail_ebadf 0 0.00 Session accept failures: bad file descriptor
MAIN.sess_fail_enomem 0 0.00 Session accept failures: not enough memory
MAIN.sess_fail_other 0 0.00 Session accept failures: other
MAIN.client_req_400 0 0.00 Client requests received, subject to 400 errors
MAIN.client_req_417 0 0.00 Client requests received, subject to 417 errors
MAIN.client_req 47090895 6794.24 Good client requests received
MAIN.cache_hit 47090857 6794.24 Cache hits
MAIN.cache_hit_grace 0 0.00 Cache grace hits
MAIN.cache_hitpass 0 0.00 Cache hits for pass.
MAIN.cache_hitmiss 0 0.00 Cache hits for miss.
MAIN.cache_miss 2 0.00 Cache misses
MAIN.backend_conn 2 0.00 Backend conn. success
MAIN.backend_unhealthy 0 0.00 Backend conn. not attempted
MAIN.backend_busy 0 0.00 Backend conn. too many
MAIN.backend_fail 0 0.00 Backend conn. failures
MAIN.backend_reuse 0 0.00 Backend conn. reuses
MAIN.backend_recycle 2 0.00 Backend conn. recycles
MAIN.backend_retry 0 0.00 Backend conn. retry
MAIN.fetch_head 0 0.00 Fetch no body (HEAD)
MAIN.fetch_length 2 0.00 Fetch with Length
MAIN.fetch_chunked 0 0.00 Fetch chunked
MAIN.fetch_eof 0 0.00 Fetch EOF
MAIN.fetch_bad 0 0.00 Fetch bad T-E
MAIN.fetch_none 0 0.00 Fetch no body
MAIN.fetch_1xx 0 0.00 Fetch no body (1xx)
MAIN.fetch_204 0 0.00 Fetch no body (204)
MAIN.fetch_304 0 0.00 Fetch no body (304)
MAIN.fetch_failed 0 0.00 Fetch failed (all causes)
MAIN.fetch_no_thread 0 0.00 Fetch failed (no thread)
MAIN.pools 2 . Number of thread pools
MAIN.threads 200 . Total number of threads
MAIN.threads_limited 0 0.00 Threads hit max
MAIN.threads_created 200 0.03 Threads created
MAIN.threads_destroyed 0 0.00 Threads destroyed
MAIN.threads_failed 0 0.00 Thread creation failed
MAIN.thread_queue_len 0 . Length of session queue
MAIN.busy_sleep 36 0.01 Number of requests sent to sleep on busy objhdr
MAIN.busy_wakeup 36 0.01 Number of requests woken after sleep on busy objhdr
MAIN.busy_killed 0 0.00 Number of requests killed after sleep on busy objhdr
MAIN.sess_queued 0 0.00 Sessions queued for thread
MAIN.sess_dropped 0 0.00 Sessions dropped for thread
MAIN.req_dropped 0 0.00 Requests dropped
MAIN.n_object 2 . object structs made
MAIN.n_vampireobject 0 . unresurrected objects
MAIN.n_objectcore 158 . objectcore structs made
MAIN.n_objecthead 158 . objecthead structs made
MAIN.n_backend 1 . Number of backends
MAIN.n_expired 0 0.00 Number of expired objects
MAIN.n_lru_nuked 0 0.00 Number of LRU nuked objects
MAIN.n_lru_moved 757 0.11 Number of LRU moved objects
MAIN.n_lru_limited 0 0.00 Reached nuke_limit
MAIN.losthdr 0 0.00 HTTP header overflows
MAIN.s_sess 714 0.10 Total sessions seen
MAIN.n_pipe 0 . Number of ongoing pipe sessions
MAIN.pipe_limited 0 0.00 Pipes hit pipe_sess_max
MAIN.s_pipe 0 0.00 Total pipe sessions seen
MAIN.s_pass 0 0.00 Total pass-ed requests seen
MAIN.s_fetch 2 0.00 Total backend fetches initiated
MAIN.s_synth 0 0.00 Total synthetic responses made
MAIN.s_req_hdrbytes 6304619557 909626.25 Request header bytes
MAIN.s_req_bodybytes 0 0.00 Request body bytes
MAIN.s_resp_hdrbytes 10756809096 1551985.15 Response header bytes
MAIN.s_resp_bodybytes 517999449 74736.61 Response body bytes
MAIN.s_pipe_hdrbytes 0 0.00 Pipe request header bytes
MAIN.s_pipe_in 0 0.00 Piped bytes from client
MAIN.s_pipe_out 0 0.00 Piped bytes to client
MAIN.sess_closed 0 0.00 Session Closed
MAIN.sess_closed_err 0 0.00 Session Closed with error
MAIN.sess_readahead 0 0.00 Session Read Ahead
MAIN.sess_herd 235170 33.93 Session herd
MAIN.sc_rem_close 640 0.09 Session OK REM_CLOSE
MAIN.sc_req_close 0 0.00 Session OK REQ_CLOSE
MAIN.sc_req_http10 0 0.00 Session Err REQ_HTTP10
MAIN.sc_rx_bad 0 0.00 Session Err RX_BAD
MAIN.sc_rx_body 0 0.00 Session Err RX_BODY
MAIN.sc_rx_junk 0 0.00 Session Err RX_JUNK
MAIN.sc_rx_overflow 0 0.00 Session Err RX_OVERFLOW
MAIN.sc_rx_timeout 0 0.00 Session Err RX_TIMEOUT
MAIN.sc_rx_close_idle 73 0.01 Session Err RX_CLOSE_IDLE
MAIN.sc_tx_pipe 0 0.00 Session OK TX_PIPE
MAIN.sc_tx_error 0 0.00 Session Err TX_ERROR
MAIN.sc_tx_eof 0 0.00 Session OK TX_EOF
MAIN.sc_resp_close 0 0.00 Session OK RESP_CLOSE
MAIN.sc_overload 0 0.00 Session Err OVERLOAD
MAIN.sc_pipe_overflow 0 0.00 Session Err PIPE_OVERFLOW
MAIN.sc_range_short 0 0.00 Session Err RANGE_SHORT
MAIN.sc_req_http20 0 0.00 Session Err REQ_HTTP20
MAIN.sc_vcl_failure 0 0.00 Session Err VCL_FAILURE
MAIN.client_resp_500 0 0.00 Delivery failed due to insufficient workspace.
MAIN.ws_backend_overflow 0 0.00 workspace_backend overflows
MAIN.ws_client_overflow 0 0.00 workspace_client overflows
MAIN.ws_thread_overflow 0 0.00 workspace_thread overflows
MAIN.ws_session_overflow 0 0.00 workspace_session overflows
MAIN.shm_records 1930732811 278564.83 SHM records
MAIN.shm_writes 94189204 13589.55 SHM writes
MAIN.shm_flushes 0 0.00 SHM flushes due to overflow
MAIN.shm_cont 3055616 440.86 SHM MTX contention
MAIN.shm_cycles 659 0.10 SHM cycles through buffer
MAIN.backend_req 2 0.00 Backend requests made
MAIN.n_vcl 1 . Number of loaded VCLs in total
MAIN.n_vcl_avail 1 . Number of VCLs available
MAIN.n_vcl_discard 0 . Number of discarded VCLs
MAIN.vcl_fail 0 0.00 VCL failures
MAIN.bans 1 . Count of bans
MAIN.bans_completed 1 . Number of bans marked 'completed'
MAIN.bans_obj 0 . Number of bans using obj.*
MAIN.bans_req 0 . Number of bans using req.*
MAIN.bans_added 1 0.00 Bans added
MAIN.bans_deleted 0 0.00 Bans deleted
MAIN.bans_tested 0 0.00 Bans tested against objects (lookup)
MAIN.bans_obj_killed 0 0.00 Objects killed by bans (lookup)
MAIN.bans_lurker_tested 0 0.00 Bans tested against objects (lurker)
MAIN.bans_tests_tested 0 0.00 Ban tests tested against objects (lookup)
MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested against objects (lurker)
MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by bans (lurker)
MAIN.bans_lurker_obj_killed_cutoff 0 0.00 Objects killed by bans for cutoff (lurker)
MAIN.bans_dups 0 0.00 Bans superseded by other bans
MAIN.bans_lurker_contention 0 0.00 Lurker gave way for lookup
MAIN.bans_persisted_bytes 16 . Bytes used by the persisted ban lists
MAIN.bans_persisted_fragmentation 0 . Extra bytes in persisted ban lists due to fragmentation
MAIN.n_purges 0 0.00 Number of purge operations executed
MAIN.n_obj_purged 0 0.00 Number of purged objects
MAIN.exp_mailed 2 0.00 Number of objects mailed to expiry thread
MAIN.exp_received 2 0.00 Number of objects received by expiry thread
MAIN.hcb_nolock 47090859 6794.24 HCB Lookups without lock
MAIN.hcb_lock 2 0.00 HCB Lookups with lock
MAIN.hcb_insert 2 0.00 HCB Inserts
MAIN.esi_errors 0 0.00 ESI parse errors (unlock)
MAIN.esi_warnings 0 0.00 ESI parse warnings (unlock)
MAIN.vmods 0 . Loaded VMODs
MAIN.n_gzip 0 0.00 Gzip operations
MAIN.n_gunzip 0 0.00 Gunzip operations
MAIN.n_test_gunzip 0 0.00 Test gunzip operations
LCK.backend.creat 1 0.00 Created locks
LCK.backend.destroy 0 0.00 Destroyed locks
LCK.backend.locks 4 0.00 Lock Operations
LCK.backend.dbg_busy 0 0.00 Contended lock operations
LCK.backend.dbg_try_fail 0 0.00 Contended trylock operations
LCK.ban.creat 1 0.00 Created locks
LCK.ban.destroy 0 0.00 Destroyed locks
LCK.ban.locks 288 0.04 Lock Operations
LCK.ban.dbg_busy 0 0.00 Contended lock operations
LCK.ban.dbg_try_fail 0 0.00 Contended trylock operations
LCK.busyobj.creat 158 0.02 Created locks
LCK.busyobj.destroy 2 0.00 Destroyed locks
LCK.busyobj.locks 17 0.00 Lock Operations
LCK.busyobj.dbg_busy 0 0.00 Contended lock operations
LCK.busyobj.dbg_try_fail 0 0.00 Contended trylock operations
LCK.cli.creat 1 0.00 Created locks
LCK.cli.destroy 0 0.00 Destroyed locks
LCK.cli.locks 2323 0.34 Lock Operations
LCK.cli.dbg_busy 0 0.00 Contended lock operations
LCK.cli.dbg_try_fail 0 0.00 Contended trylock operations
LCK.exp.creat 1 0.00 Created locks
LCK.exp.destroy 0 0.00 Destroyed locks
LCK.exp.locks 25 0.00 Lock Operations
LCK.exp.dbg_busy 0 0.00 Contended lock operations
LCK.exp.dbg_try_fail 0 0.00 Contended trylock operations
LCK.hcb.creat 1 0.00 Created locks
LCK.hcb.destroy 0 0.00 Destroyed locks
LCK.hcb.locks 41 0.01 Lock Operations
LCK.hcb.dbg_busy 0 0.00 Contended lock operations
LCK.hcb.dbg_try_fail 0 0.00 Contended trylock operations
LCK.lru.creat 2 0.00 Created locks
LCK.lru.destroy 0 0.00 Destroyed locks
LCK.lru.locks 759 0.11 Lock Operations
LCK.lru.dbg_busy 0 0.00 Contended lock operations
LCK.lru.dbg_try_fail 0 0.00 Contended trylock operations
LCK.mempool.creat 5 0.00 Created locks
LCK.mempool.destroy 0 0.00 Destroyed locks
LCK.mempool.locks 507808 73.27 Lock Operations
LCK.mempool.dbg_busy 0 0.00 Contended lock operations
LCK.mempool.dbg_try_fail 0 0.00 Contended trylock operations
LCK.objhdr.creat 159 0.02 Created locks
LCK.objhdr.destroy 0 0.00 Destroyed locks
LCK.objhdr.locks 94181817 13588.49 Lock Operations
LCK.objhdr.dbg_busy 0 0.00 Contended lock operations
LCK.objhdr.dbg_try_fail 0 0.00 Contended trylock operations
LCK.perpool.creat 2 0.00 Created locks
LCK.perpool.destroy 0 0.00 Destroyed locks
LCK.perpool.locks 706869 101.99 Lock Operations
LCK.perpool.dbg_busy 0 0.00 Contended lock operations
LCK.perpool.dbg_try_fail 0 0.00 Contended trylock operations
LCK.pipestat.creat 1 0.00 Created locks
LCK.pipestat.destroy 0 0.00 Destroyed locks
LCK.pipestat.locks 0 0.00 Lock Operations
LCK.pipestat.dbg_busy 0 0.00 Contended lock operations
LCK.pipestat.dbg_try_fail 0 0.00 Contended trylock operations
LCK.probe.creat 1 0.00 Created locks
LCK.probe.destroy 0 0.00 Destroyed locks
LCK.probe.locks 1 0.00 Lock Operations
LCK.probe.dbg_busy 0 0.00 Contended lock operations
LCK.probe.dbg_try_fail 0 0.00 Contended trylock operations
LCK.sess.creat 690 0.10 Created locks
LCK.sess.destroy 712 0.10 Destroyed locks
LCK.sess.locks 677 0.10 Lock Operations
LCK.sess.dbg_busy 0 0.00 Contended lock operations
LCK.sess.dbg_try_fail 0 0.00 Contended trylock operations
LCK.tcp_pool.creat 2 0.00 Created locks
LCK.tcp_pool.destroy 0 0.00 Destroyed locks
LCK.tcp_pool.locks 8 0.00 Lock Operations
LCK.tcp_pool.dbg_busy 0 0.00 Contended lock operations
LCK.tcp_pool.dbg_try_fail 0 0.00 Contended trylock operations
LCK.vbe.creat 1 0.00 Created locks
LCK.vbe.destroy 0 0.00 Destroyed locks
LCK.vbe.locks 2314 0.33 Lock Operations
LCK.vbe.dbg_busy 0 0.00 Contended lock operations
LCK.vbe.dbg_try_fail 0 0.00 Contended trylock operations
LCK.vcapace.creat 1 0.00 Created locks
LCK.vcapace.destroy 0 0.00 Destroyed locks
LCK.vcapace.locks 0 0.00 Lock Operations
LCK.vcapace.dbg_busy 0 0.00 Contended lock operations
LCK.vcapace.dbg_try_fail 0 0.00 Contended trylock operations
LCK.vcl.creat 1 0.00 Created locks
LCK.vcl.destroy 0 0.00 Destroyed locks
LCK.vcl.locks 1452 0.21 Lock Operations
LCK.vcl.dbg_busy 0 0.00 Contended lock operations
LCK.vcl.dbg_try_fail 0 0.00 Contended trylock operations
LCK.vxid.creat 1 0.00 Created locks
LCK.vxid.destroy 0 0.00 Destroyed locks
LCK.vxid.locks 1522 0.22 Lock Operations
LCK.vxid.dbg_busy 0 0.00 Contended lock operations
LCK.vxid.dbg_try_fail 0 0.00 Contended trylock operations
LCK.waiter.creat 2 0.00 Created locks
LCK.waiter.destroy 0 0.00 Destroyed locks
LCK.waiter.locks 680043 98.12 Lock Operations
LCK.waiter.dbg_busy 0 0.00 Contended lock operations
LCK.waiter.dbg_try_fail 0 0.00 Contended trylock operations
LCK.wq.creat 1 0.00 Created locks
LCK.wq.destroy 0 0.00 Destroyed locks
LCK.wq.locks 7133 1.03 Lock Operations
LCK.wq.dbg_busy 0 0.00 Contended lock operations
LCK.wq.dbg_try_fail 0 0.00 Contended trylock operations
LCK.wstat.creat 1 0.00 Created locks
LCK.wstat.destroy 0 0.00 Destroyed locks
LCK.wstat.locks 235852 34.03 Lock Operations
LCK.wstat.dbg_busy 0 0.00 Contended lock operations
LCK.wstat.dbg_try_fail 0 0.00 Contended trylock operations
MEMPOOL.busyobj.live 0 . In use
MEMPOOL.busyobj.pool 10 . In Pool
MEMPOOL.busyobj.sz_wanted 65536 . Size requested
MEMPOOL.busyobj.sz_actual 65504 . Size allocated
MEMPOOL.busyobj.allocs 2 0.00 Allocations
MEMPOOL.busyobj.frees 2 0.00 Frees
MEMPOOL.busyobj.recycle 2 0.00 Recycled from pool
MEMPOOL.busyobj.timeout 0 0.00 Timed out from pool
MEMPOOL.busyobj.toosmall 0 0.00 Too small to recycle
MEMPOOL.busyobj.surplus 0 0.00 Too many for pool
MEMPOOL.busyobj.randry 0 0.00 Pool ran dry
MEMPOOL.req0.live 0 . In use
MEMPOOL.req0.pool 10 . In Pool
MEMPOOL.req0.sz_wanted 65536 . Size requested
MEMPOOL.req0.sz_actual 65504 . Size allocated
MEMPOOL.req0.allocs 120405 17.37 Allocations
MEMPOOL.req0.frees 120405 17.37 Frees
MEMPOOL.req0.recycle 120094 17.33 Recycled from pool
MEMPOOL.req0.timeout 1391 0.20 Timed out from pool
MEMPOOL.req0.toosmall 0 0.00 Too small to recycle
MEMPOOL.req0.surplus 0 0.00 Too many for pool
MEMPOOL.req0.randry 311 0.04 Pool ran dry
MEMPOOL.sess0.live 0 . In use
MEMPOOL.sess0.pool 10 . In Pool
MEMPOOL.sess0.sz_wanted 768 . Size requested
MEMPOOL.sess0.sz_actual 736 . Size allocated
MEMPOOL.sess0.allocs 365 0.05 Allocations
MEMPOOL.sess0.frees 365 0.05 Frees
MEMPOOL.sess0.recycle 51 0.01 Recycled from pool
MEMPOOL.sess0.timeout 365 0.05 Timed out from pool
MEMPOOL.sess0.toosmall 0 0.00 Too small to recycle
MEMPOOL.sess0.surplus 0 0.00 Too many for pool
MEMPOOL.sess0.randry 314 0.05 Pool ran dry
LCK.sma.creat 2 0.00 Created locks
LCK.sma.destroy 0 0.00 Destroyed locks
LCK.sma.locks 4 0.00 Lock Operations
LCK.sma.dbg_busy 0 0.00 Contended lock operations
LCK.sma.dbg_try_fail 0 0.00 Contended trylock operations
SMA.s0.c_req 4 0.00 Allocator requests
SMA.s0.c_fail 0 0.00 Allocator failures
SMA.s0.c_bytes 502 0.07 Bytes allocated
SMA.s0.c_freed 0 0.00 Bytes freed
SMA.s0.g_alloc 4 . Allocations outstanding
SMA.s0.g_bytes 502 . Bytes outstanding
SMA.s0.g_space 268434954 . Bytes available
SMA.Transient.c_req 0 0.00 Allocator requests
SMA.Transient.c_fail 0 0.00 Allocator failures
SMA.Transient.c_bytes 0 0.00 Bytes allocated
SMA.Transient.c_freed 0 0.00 Bytes freed
SMA.Transient.g_alloc 0 . Allocations outstanding
SMA.Transient.g_bytes 0 . Bytes outstanding
SMA.Transient.g_space 0 . Bytes available
MEMPOOL.req1.live 0 . In use
MEMPOOL.req1.pool 10 . In Pool
MEMPOOL.req1.sz_wanted 65536 . Size requested
MEMPOOL.req1.sz_actual 65504 . Size allocated
MEMPOOL.req1.allocs 115406 16.65 Allocations
MEMPOOL.req1.frees 115406 16.65 Frees
MEMPOOL.req1.recycle 115108 16.61 Recycled from pool
MEMPOOL.req1.timeout 1346 0.19 Timed out from pool
MEMPOOL.req1.toosmall 0 0.00 Too small to recycle
MEMPOOL.req1.surplus 0 0.00 Too many for pool
MEMPOOL.req1.randry 298 0.04 Pool ran dry
VBE.boot.default.happy 0 . Happy health probes
VBE.boot.default.bereq_hdrbytes 383 0.06 Request header bytes
VBE.boot.default.bereq_bodybytes 0 0.00 Request body bytes
VBE.boot.default.beresp_hdrbytes 234 0.03 Response header bytes
VBE.boot.default.beresp_bodybytes 22 0.00 Response body bytes
VBE.boot.default.pipe_hdrbytes 0 0.00 Pipe request header bytes
VBE.boot.default.pipe_out 0 0.00 Piped bytes to backend
VBE.boot.default.pipe_in 0 0.00 Piped bytes from backend
VBE.boot.default.conn 0 . Concurrent connections used
VBE.boot.default.req 2 0.00 Backend requests sent
VBE.boot.default.unhealthy 0 0.00 Fetches not attempted due to backend being unhealthy
VBE.boot.default.busy 0 0.00 Fetches not attempted due to backend being busy
VBE.boot.default.fail 0 0.00 Connections failed
VBE.boot.default.fail_eacces 0 0.00 Connections failed with EACCES or EPERM
VBE.boot.default.fail_eaddrnotavail 0 0.00 Connections failed with EADDRNOTAVAIL
VBE.boot.default.fail_econnrefused 0 0.00 Connections failed with ECONNREFUSED
VBE.boot.default.fail_enetunreach 0 0.00 Connections failed with ENETUNREACH
VBE.boot.default.fail_etimedout 0 0.00 Connections failed ETIMEDOUT
VBE.boot.default.fail_other 0 0.00 Connections failed for other reason
VBE.boot.default.helddown 0 0.00 Connection opens not attempted
MEMPOOL.sess1.live 0 . In use
MEMPOOL.sess1.pool 10 . In Pool
MEMPOOL.sess1.sz_wanted 768 . Size requested
MEMPOOL.sess1.sz_actual 736 . Size allocated
MEMPOOL.sess1.allocs 349 0.05 Allocations
MEMPOOL.sess1.frees 349 0.05 Frees
MEMPOOL.sess1.recycle 55 0.01 Recycled from pool
MEMPOOL.sess1.timeout 349 0.05 Timed out from pool
MEMPOOL.sess1.toosmall 0 0.00 Too small to recycle
MEMPOOL.sess1.surplus 0 0.00 Too many for pool
MEMPOOL.sess1.randry 294 0.04 Pool ran dry
x86_64
MGT.uptime 6907 1.00 Management process uptime
MGT.child_start 1 0.00 Child process started
MGT.child_exit 0 0.00 Child process normal exit
MGT.child_stop 0 0.00 Child process unexpected exit
MGT.child_died 0 0.00 Child process died (signal)
MGT.child_dump 0 0.00 Child process core dumped
MGT.child_panic 0 0.00 Child process panic
MAIN.summs 269074 38.95 stat summ operations
MAIN.uptime 6909 1.00 Child process uptime
MAIN.sess_conn 682 0.10 Sessions accepted
MAIN.sess_fail 0 0.00 Session accept failures
MAIN.sess_fail_econnaborted 0 0.00 Session accept failures: connection aborted
MAIN.sess_fail_eintr 0 0.00 Session accept failures: interrupted system call
MAIN.sess_fail_emfile 0 0.00 Session accept failures: too many open files
MAIN.sess_fail_ebadf 0 0.00 Session accept failures: bad file descriptor
MAIN.sess_fail_enomem 0 0.00 Session accept failures: not enough memory
MAIN.sess_fail_other 0 0.00 Session accept failures: other
MAIN.client_req_400 0 0.00 Client requests received, subject to 400 errors
MAIN.client_req_417 0 0.00 Client requests received, subject to 417 errors
MAIN.client_req 47352390 6853.73 Good client requests received
MAIN.cache_hit 47352352 6853.72 Cache hits
MAIN.cache_hit_grace 0 0.00 Cache grace hits
MAIN.cache_hitpass 0 0.00 Cache hits for pass.
MAIN.cache_hitmiss 0 0.00 Cache hits for miss.
MAIN.cache_miss 1 0.00 Cache misses
MAIN.backend_conn 1 0.00 Backend conn. success
MAIN.backend_unhealthy 0 0.00 Backend conn. not attempted
MAIN.backend_busy 0 0.00 Backend conn. too many
MAIN.backend_fail 0 0.00 Backend conn. failures
MAIN.backend_reuse 0 0.00 Backend conn. reuses
MAIN.backend_recycle 1 0.00 Backend conn. recycles
MAIN.backend_retry 0 0.00 Backend conn. retry
MAIN.fetch_head 0 0.00 Fetch no body (HEAD)
MAIN.fetch_length 1 0.00 Fetch with Length
MAIN.fetch_chunked 0 0.00 Fetch chunked
MAIN.fetch_eof 0 0.00 Fetch EOF
MAIN.fetch_bad 0 0.00 Fetch bad T-E
MAIN.fetch_none 0 0.00 Fetch no body
MAIN.fetch_1xx 0 0.00 Fetch no body (1xx)
MAIN.fetch_204 0 0.00 Fetch no body (204)
MAIN.fetch_304 0 0.00 Fetch no body (304)
MAIN.fetch_failed 0 0.00 Fetch failed (all causes)
MAIN.fetch_no_thread 0 0.00 Fetch failed (no thread)
MAIN.pools 2 . Number of thread pools
MAIN.threads 200 . Total number of threads
MAIN.threads_limited 0 0.00 Threads hit max
MAIN.threads_created 200 0.03 Threads created
MAIN.threads_destroyed 0 0.00 Threads destroyed
MAIN.threads_failed 0 0.00 Thread creation failed
MAIN.thread_queue_len 0 . Length of session queue
MAIN.busy_sleep 36 0.01 Number of requests sent to sleep on busy objhdr
MAIN.busy_wakeup 36 0.01 Number of requests woken after sleep on busy objhdr
MAIN.busy_killed 0 0.00 Number of requests killed after sleep on busy objhdr
MAIN.sess_queued 0 0.00 Sessions queued for thread
MAIN.sess_dropped 0 0.00 Sessions dropped for thread
MAIN.req_dropped 0 0.00 Requests dropped
MAIN.n_object 1 . object structs made
MAIN.n_vampireobject 0 . unresurrected objects
MAIN.n_objectcore 149 . objectcore structs made
MAIN.n_objecthead 149 . objecthead structs made
MAIN.n_backend 1 . Number of backends
MAIN.n_expired 0 0.00 Number of expired objects
MAIN.n_lru_nuked 0 0.00 Number of LRU nuked objects
MAIN.n_lru_moved 761 0.11 Number of LRU moved objects
MAIN.n_lru_limited 0 0.00 Reached nuke_limit
MAIN.losthdr 0 0.00 HTTP header overflows
MAIN.s_sess 682 0.10 Total sessions seen
MAIN.n_pipe 0 . Number of ongoing pipe sessions
MAIN.pipe_limited 0 0.00 Pipes hit pipe_sess_max
MAIN.s_pipe 0 0.00 Total pipe sessions seen
MAIN.s_pass 0 0.00 Total pass-ed requests seen
MAIN.s_fetch 1 0.00 Total backend fetches initiated
MAIN.s_synth 1 0.00 Total synthetic responses made
MAIN.s_req_hdrbytes 6339659791 917594.41 Request header bytes
MAIN.s_req_bodybytes 0 0.00 Request body bytes
MAIN.s_resp_hdrbytes 11006221204 1593026.66 Response header bytes
MAIN.s_resp_bodybytes 520876140 75390.96 Response body bytes
MAIN.s_pipe_hdrbytes 0 0.00 Pipe request header bytes
MAIN.s_pipe_in 0 0.00 Piped bytes from client
MAIN.s_pipe_out 0 0.00 Piped bytes to client
MAIN.sess_closed 0 0.00 Session Closed
MAIN.sess_closed_err 0 0.00 Session Closed with error
MAIN.sess_readahead 0 0.00 Session Read Ahead
MAIN.sess_herd 133859 19.37 Session herd
MAIN.sc_rem_close 635 0.09 Session OK REM_CLOSE
MAIN.sc_req_close 0 0.00 Session OK REQ_CLOSE
MAIN.sc_req_http10 0 0.00 Session Err REQ_HTTP10
MAIN.sc_rx_bad 0 0.00 Session Err RX_BAD
MAIN.sc_rx_body 0 0.00 Session Err RX_BODY
MAIN.sc_rx_junk 0 0.00 Session Err RX_JUNK
MAIN.sc_rx_overflow 0 0.00 Session Err RX_OVERFLOW
MAIN.sc_rx_timeout 0 0.00 Session Err RX_TIMEOUT
MAIN.sc_rx_close_idle 41 0.01 Session Err RX_CLOSE_IDLE
MAIN.sc_tx_pipe 0 0.00 Session OK TX_PIPE
MAIN.sc_tx_error 0 0.00 Session Err TX_ERROR
MAIN.sc_tx_eof 0 0.00 Session OK TX_EOF
MAIN.sc_resp_close 0 0.00 Session OK RESP_CLOSE
MAIN.sc_overload 0 0.00 Session Err OVERLOAD
MAIN.sc_pipe_overflow 0 0.00 Session Err PIPE_OVERFLOW
MAIN.sc_range_short 0 0.00 Session Err RANGE_SHORT
MAIN.sc_req_http20 0 0.00 Session Err REQ_HTTP20
MAIN.sc_vcl_failure 0 0.00 Session Err VCL_FAILURE
MAIN.client_resp_500 0 0.00 Delivery failed due to insufficient workspace.
MAIN.ws_backend_overflow 0 0.00 workspace_backend overflows
MAIN.ws_client_overflow 0 0.00 workspace_client overflows
MAIN.ws_thread_overflow 0 0.00 workspace_thread overflows
MAIN.ws_session_overflow 0 0.00 workspace_session overflows
MAIN.shm_records 1941453917 281003.61 SHM records
MAIN.shm_writes 94712048 13708.50 SHM writes
MAIN.shm_flushes 0 0.00 SHM flushes due to overflow
MAIN.shm_cont 2241203 324.39 SHM MTX contention
MAIN.shm_cycles 668 0.10 SHM cycles through buffer
MAIN.backend_req 1 0.00 Backend requests made
MAIN.n_vcl 1 . Number of loaded VCLs in total
MAIN.n_vcl_avail 1 . Number of VCLs available
MAIN.n_vcl_discard 0 . Number of discarded VCLs
MAIN.vcl_fail 0 0.00 VCL failures
MAIN.bans 1 . Count of bans
MAIN.bans_completed 1 . Number of bans marked 'completed'
MAIN.bans_obj 0 . Number of bans using obj.*
MAIN.bans_req 0 . Number of bans using req.*
MAIN.bans_added 1 0.00 Bans added
MAIN.bans_deleted 0 0.00 Bans deleted
MAIN.bans_tested 0 0.00 Bans tested against objects (lookup)
MAIN.bans_obj_killed 0 0.00 Objects killed by bans (lookup)
MAIN.bans_lurker_tested 0 0.00 Bans tested against objects (lurker)
MAIN.bans_tests_tested 0 0.00 Ban tests tested against objects (lookup)
MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested against objects (lurker)
MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by bans (lurker)
MAIN.bans_lurker_obj_killed_cutoff 0 0.00 Objects killed by bans for cutoff (lurker)
MAIN.bans_dups 0 0.00 Bans superseded by other bans
MAIN.bans_lurker_contention 0 0.00 Lurker gave way for lookup
MAIN.bans_persisted_bytes 16 . Bytes used by the persisted ban lists
MAIN.bans_persisted_fragmentation 0 . Extra bytes in persisted ban lists due to fragmentation
MAIN.n_purges 0 0.00 Number of purge operations executed
MAIN.n_obj_purged 0 0.00 Number of purged objects
MAIN.exp_mailed 1 0.00 Number of objects mailed to expiry thread
MAIN.exp_received 1 0.00 Number of objects received by expiry thread
MAIN.hcb_nolock 47352353 6853.72 HCB Lookups without lock
MAIN.hcb_lock 1 0.00 HCB Lookups with lock
MAIN.hcb_insert 1 0.00 HCB Inserts
MAIN.esi_errors 0 0.00 ESI parse errors (unlock)
MAIN.esi_warnings 0 0.00 ESI parse warnings (unlock)
MAIN.vmods 0 . Loaded VMODs
MAIN.n_gzip 0 0.00 Gzip operations
MAIN.n_gunzip 0 0.00 Gunzip operations
MAIN.n_test_gunzip 0 0.00 Test gunzip operations
LCK.backend.creat 1 0.00 Created locks
LCK.backend.destroy 0 0.00 Destroyed locks
LCK.backend.locks 2 0.00 Lock Operations
LCK.backend.dbg_busy 0 0.00 Contended lock operations
LCK.backend.dbg_try_fail 0 0.00 Contended trylock operations
LCK.ban.creat 1 0.00 Created locks
LCK.ban.destroy 0 0.00 Destroyed locks
LCK.ban.locks 287 0.04 Lock Operations
LCK.ban.dbg_busy 0 0.00 Contended lock operations
LCK.ban.dbg_try_fail 0 0.00 Contended trylock operations
LCK.busyobj.creat 148 0.02 Created locks
LCK.busyobj.destroy 2 0.00 Destroyed locks
LCK.busyobj.locks 9 0.00 Lock Operations
LCK.busyobj.dbg_busy 0 0.00 Contended lock operations
LCK.busyobj.dbg_try_fail 0 0.00 Contended trylock operations
LCK.cli.creat 1 0.00 Created locks
LCK.cli.destroy 0 0.00 Destroyed locks
LCK.cli.locks 2315 0.34 Lock Operations
LCK.cli.dbg_busy 0 0.00 Contended lock operations
LCK.cli.dbg_try_fail 0 0.00 Contended trylock operations
LCK.exp.creat 1 0.00 Created locks
LCK.exp.destroy 0 0.00 Destroyed locks
LCK.exp.locks 162 0.02 Lock Operations
LCK.exp.dbg_busy 0 0.00 Contended lock operations
LCK.exp.dbg_try_fail 0 0.00 Contended trylock operations
LCK.hcb.creat 1 0.00 Created locks
LCK.hcb.destroy 0 0.00 Destroyed locks
LCK.hcb.locks 40 0.01 Lock Operations
LCK.hcb.dbg_busy 0 0.00 Contended lock operations
LCK.hcb.dbg_try_fail 0 0.00 Contended trylock operations
LCK.lru.creat 2 0.00 Created locks
LCK.lru.destroy 0 0.00 Destroyed locks
LCK.lru.locks 762 0.11 Lock Operations
LCK.lru.dbg_busy 0 0.00 Contended lock operations
LCK.lru.dbg_try_fail 0 0.00 Contended trylock operations
LCK.mempool.creat 5 0.00 Created locks
LCK.mempool.destroy 0 0.00 Destroyed locks
LCK.mempool.locks 304660 44.10 Lock Operations
LCK.mempool.dbg_busy 0 0.00 Contended lock operations
LCK.mempool.dbg_try_fail 0 0.00 Contended trylock operations
LCK.objhdr.creat 149 0.02 Created locks
LCK.objhdr.destroy 0 0.00 Destroyed locks
LCK.objhdr.locks 94704768 13707.45 Lock Operations
LCK.objhdr.dbg_busy 0 0.00 Contended lock operations
LCK.objhdr.dbg_try_fail 0 0.00 Contended trylock operations
LCK.perpool.creat 2 0.00 Created locks
LCK.perpool.destroy 0 0.00 Destroyed locks
LCK.perpool.locks 403610 58.42 Lock Operations
LCK.perpool.dbg_busy 0 0.00 Contended lock operations
LCK.perpool.dbg_try_fail 0 0.00 Contended trylock operations
LCK.pipestat.creat 1 0.00 Created locks
LCK.pipestat.destroy 0 0.00 Destroyed locks
LCK.pipestat.locks 0 0.00 Lock Operations
LCK.pipestat.dbg_busy 0 0.00 Contended lock operations
LCK.pipestat.dbg_try_fail 0 0.00 Contended trylock operations
LCK.probe.creat 1 0.00 Created locks
LCK.probe.destroy 0 0.00 Destroyed locks
LCK.probe.locks 1 0.00 Lock Operations
LCK.probe.dbg_busy 0 0.00 Contended lock operations
LCK.probe.dbg_try_fail 0 0.00 Contended trylock operations
LCK.sess.creat 639 0.09 Created locks
LCK.sess.destroy 677 0.10 Destroyed locks
LCK.sess.locks 665 0.10 Lock Operations
LCK.sess.dbg_busy 0 0.00 Contended lock operations
LCK.sess.dbg_try_fail 0 0.00 Contended trylock operations
LCK.tcp_pool.creat 2 0.00 Created locks
LCK.tcp_pool.destroy 0 0.00 Destroyed locks
LCK.tcp_pool.locks 5 0.00 Lock Operations
LCK.tcp_pool.dbg_busy 0 0.00 Contended lock operations
LCK.tcp_pool.dbg_try_fail 0 0.00 Contended trylock operations
LCK.vbe.creat 1 0.00 Created locks
LCK.vbe.destroy 0 0.00 Destroyed locks
LCK.vbe.locks 2306 0.33 Lock Operations
LCK.vbe.dbg_busy 0 0.00 Contended lock operations
LCK.vbe.dbg_try_fail 0 0.00 Contended trylock operations
LCK.vcapace.creat 1 0.00 Created locks
LCK.vcapace.destroy 0 0.00 Destroyed locks
LCK.vcapace.locks 0 0.00 Lock Operations
LCK.vcapace.dbg_busy 0 0.00 Contended lock operations
LCK.vcapace.dbg_try_fail 0 0.00 Contended trylock operations
LCK.vcl.creat 1 0.00 Created locks
LCK.vcl.destroy 0 0.00 Destroyed locks
LCK.vcl.locks 1336 0.19 Lock Operations
LCK.vcl.dbg_busy 0 0.00 Contended lock operations
LCK.vcl.dbg_try_fail 0 0.00 Contended trylock operations
LCK.vxid.creat 1 0.00 Created locks
LCK.vxid.destroy 0 0.00 Destroyed locks
LCK.vxid.locks 1545 0.22 Lock Operations
LCK.vxid.dbg_busy 0 0.00 Contended lock operations
LCK.vxid.dbg_try_fail 0 0.00 Contended trylock operations
LCK.waiter.creat 2 0.00 Created locks
LCK.waiter.destroy 0 0.00 Destroyed locks
LCK.waiter.locks 410736 59.45 Lock Operations
LCK.waiter.dbg_busy 0 0.00 Contended lock operations
LCK.waiter.dbg_try_fail 0 0.00 Contended trylock operations
LCK.wq.creat 1 0.00 Created locks
LCK.wq.destroy 0 0.00 Destroyed locks
LCK.wq.locks 7111 1.03 Lock Operations
LCK.wq.dbg_busy 0 0.00 Contended lock operations
LCK.wq.dbg_try_fail 0 0.00 Contended trylock operations
LCK.wstat.creat 1 0.00 Created locks
LCK.wstat.destroy 0 0.00 Destroyed locks
LCK.wstat.locks 134839 19.52 Lock Operations
LCK.wstat.dbg_busy 0 0.00 Contended lock operations
LCK.wstat.dbg_try_fail 0 0.00 Contended trylock operations
MEMPOOL.busyobj.live 0 . In use
MEMPOOL.busyobj.pool 10 . In Pool
MEMPOOL.busyobj.sz_wanted 65536 . Size requested
MEMPOOL.busyobj.sz_actual 65504 . Size allocated
MEMPOOL.busyobj.allocs 1 0.00 Allocations
MEMPOOL.busyobj.frees 1 0.00 Frees
MEMPOOL.busyobj.recycle 1 0.00 Recycled from pool
MEMPOOL.busyobj.timeout 0 0.00 Timed out from pool
MEMPOOL.busyobj.toosmall 0 0.00 Too small to recycle
MEMPOOL.busyobj.surplus 0 0.00 Too many for pool
MEMPOOL.busyobj.randry 0 0.00 Pool ran dry
MEMPOOL.req0.live 0 . In use
MEMPOOL.req0.pool 10 . In Pool
MEMPOOL.req0.sz_wanted 65536 . Size requested
MEMPOOL.req0.sz_actual 65504 . Size allocated
MEMPOOL.req0.allocs 67406 9.76 Allocations
MEMPOOL.req0.frees 67406 9.76 Frees
MEMPOOL.req0.recycle 67117 9.71 Recycled from pool
MEMPOOL.req0.timeout 1190 0.17 Timed out from pool
MEMPOOL.req0.toosmall 0 0.00 Too small to recycle
MEMPOOL.req0.surplus 0 0.00 Too many for pool
MEMPOOL.req0.randry 289 0.04 Pool ran dry
MEMPOOL.sess0.live 0 . In use
MEMPOOL.sess0.pool 10 . In Pool
MEMPOOL.sess0.sz_wanted 768 . Size requested
MEMPOOL.sess0.sz_actual 736 . Size allocated
MEMPOOL.sess0.allocs 340 0.05 Allocations
MEMPOOL.sess0.frees 340 0.05 Frees
MEMPOOL.sess0.recycle 54 0.01 Recycled from pool
MEMPOOL.sess0.timeout 339 0.05 Timed out from pool
MEMPOOL.sess0.toosmall 0 0.00 Too small to recycle
MEMPOOL.sess0.surplus 0 0.00 Too many for pool
MEMPOOL.sess0.randry 286 0.04 Pool ran dry
LCK.sma.creat 2 0.00 Created locks
LCK.sma.destroy 0 0.00 Destroyed locks
LCK.sma.locks 6 0.00 Lock Operations
LCK.sma.dbg_busy 0 0.00 Contended lock operations
LCK.sma.dbg_try_fail 0 0.00 Contended trylock operations
SMA.s0.c_req 2 0.00 Allocator requests
SMA.s0.c_fail 0 0.00 Allocator failures
SMA.s0.c_bytes 251 0.04 Bytes allocated
SMA.s0.c_freed 0 0.00 Bytes freed
SMA.s0.g_alloc 2 . Allocations outstanding
SMA.s0.g_bytes 251 . Bytes outstanding
SMA.s0.g_space 268435205 . Bytes available
SMA.Transient.c_req 2 0.00 Allocator requests
SMA.Transient.c_fail 0 0.00 Allocator failures
SMA.Transient.c_bytes 1401 0.20 Bytes allocated
SMA.Transient.c_freed 1401 0.20 Bytes freed
SMA.Transient.g_alloc 0 . Allocations outstanding
SMA.Transient.g_bytes 0 . Bytes outstanding
SMA.Transient.g_space 0 . Bytes available
MEMPOOL.req1.live 0 . In use
MEMPOOL.req1.pool 10 . In Pool
MEMPOOL.req1.sz_wanted 65536 . Size requested
MEMPOOL.req1.sz_actual 65504 . Size allocated
MEMPOOL.req1.allocs 67094 9.71 Allocations
MEMPOOL.req1.frees 67094 9.71 Frees
MEMPOOL.req1.recycle 66802 9.67 Recycled from pool
MEMPOOL.req1.timeout 1187 0.17 Timed out from pool
MEMPOOL.req1.toosmall 0 0.00 Too small to recycle
MEMPOOL.req1.surplus 0 0.00 Too many for pool
MEMPOOL.req1.randry 292 0.04 Pool ran dry
MEMPOOL.sess1.live 0 . In use
MEMPOOL.sess1.pool 10 . In Pool
MEMPOOL.sess1.sz_wanted 768 . Size requested
MEMPOOL.sess1.sz_actual 736 . Size allocated
MEMPOOL.sess1.allocs 342 0.05 Allocations
MEMPOOL.sess1.frees 342 0.05 Frees
MEMPOOL.sess1.recycle 55 0.01 Recycled from pool
MEMPOOL.sess1.timeout 342 0.05 Timed out from pool
MEMPOOL.sess1.toosmall 0 0.00 Too small to recycle
MEMPOOL.sess1.surplus 0 0.00 Too many for pool
MEMPOOL.sess1.randry 287 0.04 Pool ran dry
VBE.boot.default.happy 0 . Happy health probes
VBE.boot.default.bereq_hdrbytes 178 0.03 Request header bytes
VBE.boot.default.bereq_bodybytes 0 0.00 Request body bytes
VBE.boot.default.beresp_hdrbytes 117 0.02 Response header bytes
VBE.boot.default.beresp_bodybytes 11 0.00 Response body bytes
VBE.boot.default.pipe_hdrbytes 0 0.00 Pipe request header bytes
VBE.boot.default.pipe_out 0 0.00 Piped bytes to backend
VBE.boot.default.pipe_in 0 0.00 Piped bytes from backend
VBE.boot.default.conn 0 . Concurrent connections used
VBE.boot.default.req 1 0.00 Backend requests sent
VBE.boot.default.unhealthy 0 0.00 Fetches not attempted due to backend being unhealthy
VBE.boot.default.busy 0 0.00 Fetches not attempted due to backend being busy
VBE.boot.default.fail 0 0.00 Connections failed
VBE.boot.default.fail_eacces 0 0.00 Connections failed with EACCES or EPERM
VBE.boot.default.fail_eaddrnotavail 0 0.00 Connections failed with EADDRNOTAVAIL
VBE.boot.default.fail_econnrefused 0 0.00 Connections failed with ECONNREFUSED
VBE.boot.default.fail_enetunreach 0 0.00 Connections failed with ENETUNREACH
VBE.boot.default.fail_etimedout 0 0.00 Connections failed ETIMEDOUT
VBE.boot.default.fail_other 0 0.00 Connections failed for other reason
VBE.boot.default.helddown 0 0.00 Connection opens not attempted
If the backend service were a more realistic one, e.g. doing some calculations or I/O operations, then Varnish Cache would definitely help! But it is interesting to find out whether something could be improved even in this specific case of serving static content!
What is Varnish Cache?
From the project’s Wiki page:
From Wikipedia:
To compare its performance I am going to use it as an HTTP accelerator in front of a simple REST API: a GET endpoint written in Golang that just returns “Hello World” without reading from or writing to disk or the network:
The VMs I am going to use are the same as in my previous similar posts:
Note: the VMs are as close as possible in their hardware capabilities: same type and amount of RAM, same disks, network cards and bandwidth. The CPUs are also as similar as possible, but there are some differences:
Both VMs run Ubuntu 20.04 with latest software updates.
Varnish Cache is built from source, from the master branch!
As the load-testing client I will use Vegeta. The client application runs on a third VM in the same network as the two above!
The command I use to run Vegeta is:
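The command itself did not survive extraction on this page; a typical invocation of the shape described below (infinite rate, capped number of workers) would look roughly like this sketch, where the target URL, duration and worker count `M` are illustrative placeholders, not values from the original post:

```shell
# Hedged sketch -- not the exact command from the post.
# -rate=0 tells Vegeta to send requests as fast as possible
# (the "infinite" rate discussed below); M is the empirically
# chosen worker count.
echo "GET http://<target-vm>:8080/" | \
  vegeta attack -rate=0 -max-workers=M -duration=30s | \
  vegeta report
```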
Usually Vegeta is used to measure latency at a constant rate/throughput, by using `-rate N/s`, where N is some positive number. As explained here, I am going to use `-rate infinity -max-workers M`, where `M` is an empirically found positive number that loads the load-client VM's CPU at 80–85%. By using `-rate infinity` I want to find the highest throughput the backend can serve.

Note: in the first version of this test I used WRK as the HTTP load-testing client, but as noticed by the Varnish community, there seems to be a bug in the calculation of its latency statistics: the standard deviation is bigger than the average, e.g.:
For request latencies this would require either negative values or an extremely heavy tail. Highly unlikely!
To set a baseline I will execute the load client directly against the Golang-based service running on both VMs:

env PORT=8080 go run http-server.go

(Fish shell syntax)

So far the aarch64 VM gives slightly lower throughput (37015.18 vs 37078.60) but also slightly better mean latency (1.695ms vs 2.019ms)!
Let’s bring Varnish Cache into the game!
I will stop the Golang-based HTTP server running on port 8080, start a new one on port 8081 and start Varnish:
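Neither the exact varnishd invocation nor the VCL listing survived extraction on this page; the following is a hedged reconstruction, not the original commands. The 256 MB malloc storage is inferred from the `SMA.s0.g_space` counter above (roughly 256 MiB), the working directory from the `varnishstat` command used later in the post, and `XYZ` stands for the other VM's IP, as in the original text:

```shell
# Hedged reconstruction -- the exact commands were not captured here.
cat > /home/ubuntu/varnish.vcl <<'EOF'
vcl 4.0;

backend default {
    .host = "XYZ";    # Golang HTTP server on the other VM
    .port = "8081";
}
EOF

varnishd -f /home/ubuntu/varnish.vcl \
         -a :8080 \
         -s malloc,256m \
         -n /home/ubuntu/varnish/work
```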
And `varnish.vcl` looks like:

where `XYZ` is the IP of the other VM, i.e. Varnish Cache running on aarch64 points to the Golang HTTP server running on x86_64, and vice versa. This is not really important, because Varnish hits the backend server just once and from then on serves the response from its cache, so a backend server running on the same host won't put extra load on the system in this specific test.

The results from running Vegeta 5 times against Varnish are:
The throughput is almost the same for both architectures and the aarch64 VM gives slightly better latency!
Here is the output of `varnishstat -n /home/ubuntu/varnish/work -1` for both instances:
Happy hacking and stay safe!