@malin1993ml reports that the DBMS's RSS (resident set size) blows up even when loading only a small amount of data. As far as I know, we are not checking memory usage in any of our experiments or tests right now.
We can start simple here. We should implement a basic test scenario that loads a batch of data and measures the memory footprint of the process. The result should be reported as part of the output of the nightly microbenchmark runs:
https://github.com/cmu-db/terrier/blob/master/script/micro_bench/run_micro_bench.py
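A minimal sketch of what such a measurement could look like in the Python harness. It uses the stdlib `resource` module to read the process's peak RSS; the function and key names (`peak_rss_kb`, `report_load_memory`, `rss_delta_kb`) are hypothetical, not part of the existing script:

```python
import resource
import sys

def peak_rss_kb():
    """Peak resident set size of the current process, in kilobytes.

    getrusage reports ru_maxrss in kB on Linux but in bytes on macOS,
    so normalize the macOS value.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss // 1024 if sys.platform == "darwin" else rss

def report_load_memory(load_data):
    """Run a data-loading callable and report the peak-RSS delta it caused."""
    before = peak_rss_kb()
    load_data()
    after = peak_rss_kb()
    return {
        "rss_before_kb": before,
        "rss_after_kb": after,
        "rss_delta_kb": after - before,
    }
```

For the real scenario, `load_data` would be a callable that drives the DBMS load; the returned dict could then be emitted alongside the existing throughput numbers in the nightly output.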
This should then automatically get picked up by the dashboard in the lab. One potential problem: the current benchmark infrastructure assumes it is measuring throughput, so it throws an error whenever a measurement decreases. If we reduce memory consumption (which is a good thing), it will incorrectly flag an error.
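One way around that problem is to make the regression check direction-aware, so memory metrics regress on increases while throughput metrics still regress on decreases. A sketch under that assumption (the metric names and `check_regression` helper are hypothetical, not existing code):

```python
def check_regression(metric_name, old_value, new_value, tolerance=0.10,
                     lower_is_better=("rss_kb", "rss_delta_kb")):
    """Return True if the new measurement is a regression.

    Throughput-style metrics regress when they drop; memory-style
    metrics (those listed in lower_is_better) regress when they grow.
    A relative tolerance avoids flagging normal run-to-run noise.
    """
    if metric_name in lower_is_better:
        # Memory: an increase beyond the tolerance is a regression.
        return new_value > old_value * (1 + tolerance)
    # Throughput: a decrease beyond the tolerance is a regression.
    return new_value < old_value * (1 - tolerance)
```

With this shape, a drop in RSS simply passes the check instead of tripping the "measurement decreased" error.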