Open krlmlr opened 2 days ago
I think that shouldn't be in the output, as it would only be needed in rare cases (certainly nobody has asked for it before). One option is to run the container interactively and explore there; if the check finishes in the container on some machine of yours, that might be the way to go (unless you want to run several experiments with different limits).

If you want to reduce memory use, say in a custom build of the container, you can lower the maximum number of states in the Makefile. Functions that cannot be checked within that limit will then be skipped. This may not reduce the number of functions skipped for you, though; the ones already being skipped may simply be skipped sooner. In my experience, when a function could not be checked in the given number of states, increasing that number often did not help.
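The "several experiments with different limits" idea can be sketched in the shell without rebuilding anything. As a minimal illustration (the value here is an arbitrary example, not an rchk default), `ulimit -v` caps the virtual memory of a subshell and whatever runs inside it:

```shell
# Cap virtual memory (in KiB) for one experiment inside a subshell;
# the limit disappears when the subshell exits, so each run can use a
# different value.
( ulimit -v $((16 * 1024 * 1024)); echo "limit: $(ulimit -v) KiB" )
```

A process that exceeds the cap will fail its allocations rather than being OOM-killed, which makes it easier to see which limit is actually sufficient.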
I can't run duckdb on GitHub Actions; the process is being killed. I'm now running in the cloud, and I'd like to understand how much memory I need.
Monitoring peak memory usage for process trees, especially when running Docker, is challenging. How do you feel about running the tools (like `bcheck`) with `/usr/bin/time -v` to display peak memory usage and other stats? Would that interfere with the logic?

https://github.com/kalibera/rchk/blob/8fe3f2769e834cf183577d5ddfa97f9a224fe612/scripts/check_package.sh#L104