Following up on https://github.com/supabase/benchmarks/pull/29#pullrequestreview-958406334.
Obtained results for the different `pgbench` built-in scripts: `select-only`, `simple-update` and `tpcb-like`. Ran with 10, 50 and 100 clients.

The above can be done programmatically with:
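A minimal sketch of that loop, assuming a 60-second duration and placeholder connection details (`PGHOST`, the `postgres` user); it prints each `pgbench` invocation, so drop the `echo` to actually execute them:

```shell
#!/bin/sh
# Sketch of the benchmark matrix: 3 built-in scripts x 3 client counts x 3 repeats.
# PGHOST/user/-T are placeholders; adjust to the actual setup.
run_benchmarks() {
  for script in select-only simple-update tpcb-like; do
    for clients in 10 50 100; do
      for run in 1 2 3; do  # repeat 3 times for stability
        echo pgbench -h "${PGHOST:-localhost}" -U postgres \
          -j 16 -c "$clients" -T 60 -b "$script"
      done
    done
  done
}

run_benchmarks
```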
Repeating each run 3 times is done because I was unable to get reproducible results with longer pgbench runs (went up to 5 minutes, `-T 300`). One run usually gives a different result, but the other two agree and make sense.

Changed the client instance to m5a.4xlarge and changed pgbench to use 16 cores. (This should probably be `m5a.8xlarge` for more precise results against the `m5a.8xlarge` pg instance; will follow up with a script that adjusts the client based on the pg instance.)

Also:
- delete old pgbench results
- update readme
The conclusion is still the same as https://github.com/supabase/benchmarks/pull/29: TPS numbers do scale with a bigger instance, but here we have more detail in the results, in a more parseable format. I intend to follow up with a Python script that parses the results and converts them to CSV or a Markdown table.
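A rough sketch of what that follow-up parser could look like, under the assumption that each result file contains a standard `tps = ...` summary line from pgbench (the function and column names here are hypothetical, not the actual script):

```python
import csv
import io
import re

# Matches the TPS summary line pgbench prints, e.g.
#   tps = 8104.308721 (without initial connection time)
TPS_RE = re.compile(r"^tps = ([0-9.]+)", re.MULTILINE)


def parse_tps(raw: str) -> float:
    """Return the first TPS figure found in a pgbench output blob."""
    match = TPS_RE.search(raw)
    if match is None:
        raise ValueError("no 'tps = ...' line found")
    return float(match.group(1))


def to_csv(rows) -> str:
    """rows: iterable of (script, clients, run, tps) tuples -> CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["script", "clients", "run", "tps"])
    writer.writerows(rows)
    return buf.getvalue()


# Example pgbench output fragment (made up numbers for illustration):
sample = (
    "latency average = 1.234 ms\n"
    "tps = 8104.308721 (without initial connection time)\n"
)
```

Converting the CSV to a Markdown table from there is mostly a formatting step over the same rows.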