rocicorp / replicache

Realtime Sync for Any Backend Stack
https://doc.replicache.dev

feat: Disable a bunch of assertions in prod #891

Closed. arv closed this 2 years ago

arv commented 2 years ago

When process.env.NODE_ENV === 'production' we skip validating the shape of the chunks (is it a Commit? is it a B+Tree?), as well as validating that the JSONValue is really a JSONValue.

Fixes #876
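A minimal sketch of the gating described above, assuming a hypothetical `assertJSONValue` helper (not Replicache's actual internal API): the check runs in development but returns immediately when NODE_ENV is 'production', and bundlers that replace `process.env.NODE_ENV` with a literal can eliminate the whole branch as dead code.

```typescript
// Hypothetical sketch of a NODE_ENV-gated assertion; names are illustrative.
function skipAssertions(): boolean {
  // Bundlers replace `process.env.NODE_ENV` with a string literal,
  // making this statically true in prod builds and removable by a minifier.
  return process.env.NODE_ENV === 'production';
}

function assertJSONValue(v: unknown): void {
  if (skipAssertions()) return;
  const seen = new Set<object>();
  const check = (x: unknown): boolean => {
    if (x === null) return true;
    const t = typeof x;
    if (t === 'string' || t === 'number' || t === 'boolean') return true;
    if (t !== 'object') return false; // undefined, function, symbol, bigint
    if (seen.has(x as object)) return false; // cycles are not JSON
    seen.add(x as object);
    return Array.isArray(x)
      ? x.every(check)
      : Object.values(x as object).every(check);
  };
  if (!check(v)) {
    throw new Error('Not a valid JSONValue');
  }
}
```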


grgbkr commented 2 years ago

Are these asserts only used on reads? We probably don't want to remove asserts on write paths in prod.

arv commented 2 years ago

We would probably want to build 2 versions of replicache after this: replicache.dev.js and replicache.prod.js. Or we could try to keep process.env.NODE_ENV in the generated code.

arv commented 2 years ago

> Are these asserts only used on reads? We probably don't want to remove asserts on write paths in prod.

We don't have these asserts on the write path since we control what we write to the chunks...

...but we do a deepClone in put, which means that the value we end up with is a JSONValue for sure. We have talked about removing some of the clones as well since they show up in profiles.
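A sketch of why a deep clone gives that guarantee (the `deepClone` below is illustrative, not Replicache's implementation): cloning by structural recursion can only reproduce JSON shapes, so prototypes, functions, and `undefined` cannot survive into the result.

```typescript
// Illustrative: a structural deep clone over JSON shapes. Because the
// recursion only handles null, primitives, arrays, and plain objects,
// the output is a JSONValue by construction.
type JSONValue =
  | null
  | string
  | number
  | boolean
  | JSONValue[]
  | {[key: string]: JSONValue};

function deepClone(v: JSONValue): JSONValue {
  if (v === null || typeof v !== 'object') return v;
  if (Array.isArray(v)) return v.map(deepClone);
  const out: {[key: string]: JSONValue} = {};
  for (const [k, val] of Object.entries(v)) {
    out[k] = deepClone(val);
  }
  return out;
}
```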

arv commented 2 years ago

I ran the perf tests twice and picked the best run of each test.

arv/skip-asserts-in-prod

writeSubRead 1MB total, 64 subs total, 5 subs dirty, 16kb read per sub x 1428.57 ops/sec ±1.5% (19 runs sampled)
writeSubRead 4MB total, 128 subs total, 5 subs dirty, 16kb read per sub x 1428.57 ops/sec ±2.2% (17 runs sampled)
writeSubRead 16MB total, 128 subs total, 5 subs dirty, 16kb read per sub x 909.09 ops/sec ±0.8% (7 runs sampled)
writeSubRead 64MB total, 128 subs total, 5 subs dirty, 16kb read per sub x 769.23 ops/sec ±10.1% (7 runs sampled)
populate 1024x1000 (clean, indexes: 0) x 9.33 MB/s ±22.8% (7 runs sampled)
populate 1024x1000 (clean, indexes: 1) x 37.13 MB/s ±6.5% (17 runs sampled)
populate 1024x1000 (clean, indexes: 2) x 26.76 MB/s ±8.3% (12 runs sampled)
populate 1024x10000 (clean, indexes: 0) x 48.73 MB/s ±37.2% (7 runs sampled)
populate 1024x10000 (clean, indexes: 1) x 41.38 MB/s ±38.2% (7 runs sampled)
populate 1024x10000 (clean, indexes: 2) x 28.36 MB/s ±30.7% (7 runs sampled)
scan 1024x1000 x 1085.07 MB/s ±1.1% (19 runs sampled)
scan 1024x10000 x 1236.16 MB/s ±3.5% (19 runs sampled)
create index 1024x5000 x 23.31 ops/sec ±14.8% (10 runs sampled)
startup read 1024x100 from 1024x100000 stored x 1.95 MB/s ±22.3% (10 runs sampled)
startup scan 1024x100 from 1024x100000 stored x 6.42 MB/s ±51.8% (19 runs sampled)

main

writeSubRead 1MB total, 64 subs total, 5 subs dirty, 16kb read per sub x 1250.00 ops/sec ±1.5% (19 runs sampled)
writeSubRead 4MB total, 128 subs total, 5 subs dirty, 16kb read per sub x 1111.11 ops/sec ±2.3% (17 runs sampled)
writeSubRead 16MB total, 128 subs total, 5 subs dirty, 16kb read per sub x 769.23 ops/sec ±0.7% (7 runs sampled)
writeSubRead 64MB total, 128 subs total, 5 subs dirty, 16kb read per sub x 666.67 ops/sec ±10.2% (7 runs sampled)
populate 1024x1000 (clean, indexes: 0) x 9.42 MB/s ±10.2% (7 runs sampled)
populate 1024x1000 (clean, indexes: 1) x 37.42 MB/s ±10.9% (17 runs sampled)
populate 1024x1000 (clean, indexes: 2) x 26.25 MB/s ±9.0% (12 runs sampled)
populate 1024x10000 (clean, indexes: 0) x 48.61 MB/s ±27.8% (7 runs sampled)
populate 1024x10000 (clean, indexes: 1) x 40.11 MB/s ±31.6% (7 runs sampled)
populate 1024x10000 (clean, indexes: 2) x 28.05 MB/s ±45.4% (7 runs sampled)
scan 1024x1000 x 887.78 MB/s ±1.8% (19 runs sampled)
scan 1024x10000 x 879.79 MB/s ±11.0% (19 runs sampled)
create index 1024x5000 x 22.17 ops/sec ±16.5% (9 runs sampled)
startup read 1024x100 from 1024x100000 stored x 2.09 MB/s ±36.0% (9 runs sampled)
startup scan 1024x100 from 1024x100000 stored x 9.57 MB/s ±49.7% (19 runs sampled)
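Back-of-envelope arithmetic on the two runs above, branch vs. main (note that the rows with small margins of error, like writeSubRead and scan, are the most meaningful; several populate/startup rows have ±30-50% variance):

```typescript
// Relative change of the branch run vs. main, in percent, using the
// numbers reported above.
const speedup = (branch: number, main: number): number =>
  Math.round(((branch - main) / main) * 100);

// writeSubRead 1MB: 1428.57 vs 1250.00 ops/sec → about +14%
const writeSubRead1MB = speedup(1428.57, 1250.0);

// scan 1024x10000: 1236.16 vs 879.79 MB/s → about +41%
const scan10000 = speedup(1236.16, 879.79);
```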
aboodman commented 2 years ago

YAS


aboodman commented 2 years ago

Inject it straight into my veins


aboodman commented 2 years ago

> Or we could try to keep process.env.NODE_ENV in the generated code.

Sounds good to me if possible.