Cloudberry Database version
I've tested this in 1.5.4, 1.6.0, and the latest main branch all with the same results
What happened
Cloudberry does not appear to actually enforce the max_statement_mem GUC. This seems to be due to a missing return statement in the code path that checks whether statement_mem > max_statement_mem: the violation is detected, but the out-of-range value is still accepted.
What you think should happen instead
max_statement_mem should prevent users from setting their own session-local statement_mem higher than it. Without that enforcement, users can over-allocate memory and operate outside the bounds of their workload management, which can cause all kinds of issues.
How to reproduce
You can run this on any fresh install (or any install where max_statement_mem is <= 2000MB):
CREATE USER testuser;
SET ROLE testuser;
SHOW statement_mem;
SHOW max_statement_mem;
SET statement_mem = '5000MB';
SHOW statement_mem;
EXPLAIN ANALYZE SELECT * FROM gp_segment_configuration;
The output is as follows for me on any of the versions mentioned earlier:

gpadmin=# CREATE USER testuser;
SET ROLE testuser;
SHOW statement_mem;
SHOW max_statement_mem;
SET statement_mem = '5000MB';
SHOW statement_mem;
EXPLAIN ANALYZE SELECT * FROM gp_segment_configuration;
NOTICE:  resource queue required -- using default resource queue "pg_default"
CREATE ROLE
SET
 statement_mem
---------------
 125MB
(1 row)

 max_statement_mem
-------------------
 2500MB
(1 row)

SET
 statement_mem
---------------
 5000MB
(1 row)

 Seq Scan on gp_segment_configuration  (cost=0.00..1.01 rows=1 width=112) (actual time=0.015..0.017 rows=10 loops=1)
 Planning Time: 13.007 ms
 (slice0)    Executor memory: 109K bytes.
 Memory used:  5120000kB
 Optimizer: Postgres query optimizer
 Execution Time: 0.231 ms
(6 rows)
Operating System
Rocky Linux 9 (though this should be an OS-agnostic bug)
Anything else
I have the fix locally staged and tested on my machine (it's just a single "return false;" statement). I can easily write a regression test for this using the .sql and expected-output format I see in the test directory; just point me to where this test should live (I don't see an obvious place for it).
Also, shoutout to Louis Mugnano for helping me track this down.
Are you willing to submit PR?
Code of Conduct