codenoble / cache-crispies

Speedy Rails JSON serialization with built-in caching
MIT License

Direct JSON rendering with Oj::StringWriter #58

Closed: nametoolong closed this 1 year ago

nametoolong commented 1 year ago

This PR adds the ability to create a JSON string directly with Oj::StringWriter, without building any intermediate hashes. Caching is feasible but not implemented.

Performance comparison (ApacheBench output for the two endpoints):

Document Path:          /courses/cache_crispies
Document Length:        901053 bytes

Concurrency Level:      1
Time taken for tests:   4.073 seconds
Complete requests:      20
Failed requests:        0
Total transferred:      18030040 bytes
HTML transferred:       18021060 bytes
Requests per second:    4.91 [#/sec] (mean)
Time per request:       203.636 [ms] (mean)
Time per request:       203.636 [ms] (mean, across all concurrent requests)
Transfer rate:          4323.27 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       0
Processing:   162  203  42.6    193     330
Waiting:      161  202  42.0    192     325
Total:        162  204  42.6    193     330

Percentage of the requests served within a certain time (ms)
  50%    193
  66%    196
  75%    207
  80%    208
  90%    303
  95%    330
  98%    330
  99%    330
 100%    330 (longest request)

Document Path:          /courses/cache_crispies_oj
Document Length:        901042 bytes

Concurrency Level:      1
Time taken for tests:   3.436 seconds
Complete requests:      20
Failed requests:        0
Total transferred:      18029820 bytes
HTML transferred:       18020840 bytes
Requests per second:    5.82 [#/sec] (mean)
Time per request:       171.787 [ms] (mean)
Time per request:       171.787 [ms] (mean, across all concurrent requests)
Transfer rate:          5124.72 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   139  172  40.0    157     291
Waiting:      137  170  40.2    156     290
Total:        139  172  40.0    157     291

Percentage of the requests served within a certain time (ms)
  50%    157
  66%    172
  75%    201
  80%    211
  90%    239
  95%    291
  98%    291
  99%    291
 100%    291 (longest request)

All records are cached in Redis to avoid disk I/O overhead. The latency reduction is over 15% (mean time per request drops from 203.6 ms to 171.8 ms). Here is a brief list of changes (they were not easy to break into their own commits):
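As a sanity check on that figure, the reduction computed from the two ApacheBench means above:

```ruby
# Mean time per request from the two benchmark runs above, in ms.
baseline = 203.636 # /courses/cache_crispies     (intermediate-hash path)
direct   = 171.787 # /courses/cache_crispies_oj  (Oj::StringWriter path)

reduction_pct = (baseline - direct) / baseline * 100
puts reduction_pct.round(1) # => 15.6
```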

Limitation:

adamcrown commented 1 year ago

NOTE: I'm rebasing this branch to pull in the latest CI runs.