cingmanwu opened 4 months ago
Hi,
unfortunately it is not uncommon for implementations of static analysis algorithms to use a lot of memory. This is true for any tool whose analyses maintain a complex state. For most of our analyses, memory consumption should correlate approximately linearly with program size (though program characteristics can also play a role). We frequently encounter cases where we use more than 20 GiB of RAM on "large" programs.
There is not much that can be done about that, as we often need to trade resource consumption for precision. I'd generally recommend a system with at least 64 GiB of RAM. If you run many analyses in parallel, or want to avoid rendering the system unresponsive due to swapping, I'd recommend limiting the memory usage of the containers.
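For reference, a minimal sketch of such a limit, assuming the official ghcr.io/fkie-cad/cwe_checker image; the 48g figure and the input path are placeholders to tune for your setup:

```sh
# Cap the container at 48 GiB of RAM and set the swap limit to the same
# value, so a runaway analysis gets OOM-killed (exit code 137) instead of
# swapping the whole host to a halt. Limit and paths are placeholders.
docker run --rm \
  --memory=48g --memory-swap=48g \
  -v /path/to/binary:/input:ro \
  ghcr.io/fkie-cad/cwe_checker /input
```

Setting --memory-swap equal to --memory disables extra swap for the container, which keeps the failure mode a clean kill rather than hours of thrashing.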
Thanks very much. I will try limiting the memory usage of the Docker container and observe the difference in time usage.
OS: Ubuntu 24 Server
CPU: 8 cores
Mem: 64 GB
Even with these specs I face memory exhaustion. I understand that I may be overdoing it a bit by queuing a decent number of programs (each larger than 1 MB, nearing 500 MB collectively; most of them taken from Ubuntu's own /bin directory).
But I have some questions about how the cwe_checker Docker container handles data in memory.
1) When multiple files are queued for analysis, does cwe_checker hold the static analysis data for each binary in memory, or is the data stored somewhere temporarily until the analysis of all files is completed?
2) Also, I believe that limiting the number of threads will only delay the memory issue, not actually work around it. Right?
3) Lastly, is cwe_checker (the plugin, or the tool in its entirety) thread-pooled? When monitoring my system, I usually see that Ghidra is thread-pooled and consumes all of my cores (since I haven't placed any limits), but the cwe_checker stage that follows disassembly runs as a single process and utilizes only one core.
My knowledge of thread pooling and static analysis is a bit weak, so please help me out here with any suggestions for optimizing performance, the best configuration to use, and avoiding exit code 137.
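In the meantime, my working plan is to run the binaries one at a time under a hard cap and log the kills; exit code 137 is 128 + SIGKILL, i.e. the kernel's OOM killer terminating the container. The image name, the 12g limit, and the paths below are placeholders:

```sh
#!/bin/sh
# Analyze one binary at a time under a hard memory cap, so a single
# OOM kill (exit code 137 = 128 + SIGKILL) does not take down the queue.
mkdir -p results
for bin in /bin/*; do
  [ -f "$bin" ] || continue
  name=$(basename "$bin")
  docker run --rm --memory=12g --memory-swap=12g \
    -v "$bin":/input:ro ghcr.io/fkie-cad/cwe_checker /input \
    > "results/$name.log" 2>&1
  if [ "$?" -eq 137 ]; then
    echo "OOM-killed: $bin" >> results/oom.list
  fi
done
```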
Thank you!
EDIT: updated figures after testing with a smaller binary
Facing the same issue here, on a beefy machine. Running cwe-checker as a Docker container, latest image as of today.
OS: Ubuntu 24.04, on WSL
CPU: 32 cores for WSL (out of 36 on hardware)
RAM: 192 GB for WSL (out of 256 GB on hardware)
Swap: 48 GB
Docker: 27.3.1 in WSL only (i.e. not Docker Desktop on the Windows host)
I'm testing cwe-checker on lz4 version 1.9.4, x64 ELF format. The file is 200 kB. This is not a "large" binary by any means; I would expect cwe-checker to go through it pretty easily.
Timeline is roughly (by watching running processes), in mm:ss; peak memory was read with `cat /proc/<pid>/status | grep VmHWM` on the cwe-checker process.
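A rough way to capture that peak over time; the process name matched by pgrep is an assumption about how the binary shows up on the host:

```sh
# Poll the peak resident set size (VmHWM) of the oldest process whose
# name matches "cwe_checker", once per second, until it exits.
while pid=$(pgrep -o cwe_checker); do
  grep VmHWM "/proc/$pid/status"
  sleep 1
done
```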
Also, despite the --verbose option (I also tried -o output_file), not a single line of output is printed. Docker logs are empty as well.
Systems with more RAM are not easy to come by, and I doubt more RAM would solve the problem anyway.
Could someone give me some suggestions?
OS: Ubuntu 20.04.1
CPU: 4 cores
Mem: 16 GB
I run the cwe_checker:v0.8 Docker container to scan a binary file, then use the `docker stats` command to monitor memory usage. The memory usage of the cwe_checker container reaches 10.19 GB (see the picture below).
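For a single non-streaming reading, something like this works (the container name cwe_checker is a placeholder for whatever `docker ps` shows):

```sh
# One-shot snapshot of the container's memory usage against its limit;
# replace "cwe_checker" with the actual container name from `docker ps`.
docker stats --no-stream --format "{{.Name}}: {{.MemUsage}}" cwe_checker
```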
Could someone tell me how I can reduce the memory usage?
(The binary file contains some sensitive data, so I can't upload it here, sorry.)