Closed: nstarke closed this issue 5 years ago
These kinds of problems are exactly why I use celery 😁.
Three things you can try:
```
$ celery -A proj worker --loglevel=INFO --concurrency=<SOME NUMBER HERE>
```
By default it will spawn one worker per local CPU core. If you're resource-starved, sometimes you just need fewer workers.
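If you want a starting point for that number, here's a rough heuristic (this helper is purely illustrative, not part of Firmware_Slap): cap concurrency by how much RAM you can afford per worker as well as by CPU count, since each analysis worker can be memory-hungry.

```python
import os

def pick_concurrency(ram_gb: int, per_worker_gb: int = 4) -> int:
    """Hypothetical heuristic: choose a --concurrency value bounded by
    both CPU count and available memory per worker."""
    by_memory = max(1, ram_gb // per_worker_gb)
    by_cpu = os.cpu_count() or 1
    return min(by_cpu, by_memory)
```

For example, with 8 GB of RAM and a 4 GB-per-worker budget you'd pass `--concurrency=2` at most, no matter how many cores you have.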
An old suggestion from the angr guys is swapping the Python interpreter to PyPy. They report up to a 10x speed increase, but I've found that analysis accuracy tends to drop.
You could also swap to using radare2 for the backend. Toggling `use_ghidra` to `false` in Discover_And_Dump.py will do that. The `ghidra_handler` `print_function` might not like this, and I'll probably need to fix that. radare2 should run faster and it generally consumes less memory, but you won't get nearly as nice function prototypes.
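To make the trade-off concrete, here's a sketch of what that toggle amounts to. The function and module names on the Ghidra branch are assumptions, not Firmware_Slap's actual API; the radare2 branch uses the real `r2pipe` library and its standard `aaa`/`aflj` commands.

```python
def list_functions(binary_path: str, use_ghidra: bool = False):
    """Illustrative backend switch; the real flag lives in
    Firmware_Slap's scripts and the names below are assumed."""
    if use_ghidra:
        # Heavier and slower, but yields much nicer function prototypes.
        from firmware_slap import ghidra_handler  # assumed module path
        return ghidra_handler.get_function_information(binary_path)
    # radare2 path: faster and lighter on memory.
    import r2pipe
    r2 = r2pipe.open(binary_path)
    r2.cmd("aaa")            # run radare2's full auto-analysis
    funcs = r2.cmdj("aflj")  # list analyzed functions as JSON
    r2.quit()
    return funcs
```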
One final piece I forgot to mention: if Ghidra takes over an hour, the celery task will die. You can remove this 1-hour limit here: https://github.com/ChrisTheCoolHut/Firmware_Slap/blob/abc2a9e0e8e16d8fd9992c8a0ff97c48aeafdf35/bin/Discover_And_Dump.py#L190
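For anyone curious what a hard task time limit looks like under the hood, here's a Unix-only sketch using `SIGALRM`. This is just the general idea: celery itself enforces `time_limit` by killing the worker process (via billiard), not with an alarm signal like this.

```python
import signal

class TimeLimitExceeded(Exception):
    """Raised when the wrapped call outlives its time budget."""

def run_with_time_limit(fn, seconds: int):
    # Sketch of a hard time limit: interrupt fn() if it runs longer
    # than `seconds`. Only works in the main thread on Unix.
    def on_alarm(signum, frame):
        raise TimeLimitExceeded(f"task exceeded {seconds}s")
    old_handler = signal.signal(signal.SIGALRM, on_alarm)
    signal.alarm(seconds)
    try:
        return fn()
    finally:
        signal.alarm(0)                           # cancel pending alarm
        signal.signal(signal.SIGALRM, old_handler)  # restore old handler
```

Raising or removing the limit in Discover_And_Dump.py is the practical fix; the sketch just shows why a long `analyzeHeadless` run gets cut off at the one-hour mark.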
Thanks @ChrisTheCoolHut - we'll give that a shot. Closing now.
Hello @ChrisTheCoolHut
@toobus and I are attempting to run Slap on a rather large ELF binary (~5 MB in size), and we're experiencing an issue where the celery task doesn't complete and the whole `Discover_and_Dump.py` process just hangs. Is there any tunable parameter we can adjust to help Slap deal with such a large binary?

For reference, I am running Ghidra's `analyzeHeadless` against the binary, and it's using something like 72 GB of RAM to process, which probably means it's swapping out to disk on @toobus's workstation.

Any thoughts you have on the matter would be greatly appreciated. Thanks!