luis261 opened 2 weeks ago
I think I'll start off by establishing a test directory that provides a simple comparison script (unixoids only for now): it checks out master, runs a sample, then checks out the target branch, runs the same sample again and diffs the two runs in terms of stdout and stderr. So stay tuned for that.
This simple, heuristic approach should prove quite effective at catching the more obvious regressions, considering how much this program writes to the console.
@CYB3RMX if you could go ahead and assign this to me (:
> I think I'll just start off by establishing a test directory which simply provides a simple comparison script

btw, here's what such a script could look like (a preliminary draft), `base_cmp_stdout.sh`:
```bash
#!/bin/bash
# TODO add informational echo messages
# [ARRANGE]
# TODO parameterize the actual invocation that gets compared, either via config or cmd line args
# remember the branch we started on so we can restore it during cleanup
start_branch="$(git rev-parse --abbrev-ref HEAD)"
# [ACT]
# TODO need to ignore the ASCII art header; it might
# make sense to introduce a --no-header option for that purpose?
python3 ./qu1cksc0pe.py --file /usr/bin/ls --analyze > ./current.stdout.txt
git checkout master
python3 ./qu1cksc0pe.py --file /usr/bin/ls --analyze > ./baseline.stdout.txt
# [ASSERT]
diff --text --ignore-all-space ./baseline.stdout.txt ./current.stdout.txt
result=$?
# TODO also diff any report files that might be present?
# [CLEANUP]
git checkout "$start_branch"
rm -f ./baseline.stdout.txt ./current.stdout.txt
exit "$result"
```
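Regarding the ASCII art header TODO: until something like a `--no-header` option exists, one workaround might be to do the comparison in Python instead and just skip a fixed number of leading lines. This is only a sketch; `HEADER_LINES = 10` is a placeholder guess, not the actual size of Qu1cksc0pe's banner:

```python
import difflib

# hypothetical value; would need to match the actual banner height
HEADER_LINES = 10

def diff_ignoring_header(baseline_path, current_path, header_lines=HEADER_LINES):
    """Return a unified diff of two capture files, skipping the leading
    banner and ignoring whitespace-only differences (mirroring the
    --ignore-all-space behavior of the diff call above)."""
    def normalized(path):
        with open(path, encoding="utf-8", errors="replace") as f:
            lines = f.readlines()[header_lines:]
        # collapse all whitespace so purely cosmetic changes don't count
        return ["".join(line.split()) + "\n" for line in lines]

    return list(difflib.unified_diff(
        normalized(baseline_path), normalized(current_path),
        fromfile=baseline_path, tofile=current_path,
    ))
```

An empty result would mean no observable difference; a non-empty one could be printed and mapped to a non-zero exit code, just like the diff invocation in the script.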
This project could benefit from a suite of automated tests at different levels: mainly system-level/integration tests, but also unit tests that cover the behavior of the analyzer modules in depth.
I have quite a bit of unit testing experience, specifically in Python with unittest/pytest. I've also got some CI pipeline tinkering under my belt (mainly Azure DevOps/GitLab, but GH Actions should be manageable too).
I think I might start with some very high-level, simple integration testing via a bash script, then add unit test coverage for the load-bearing constructs/fiddly analysis logic of the modules I'll be working on.
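On the unit-test side, here's roughly the pytest shape I have in mind. Note that `extract_strings` below is a made-up stand-in for the kind of fiddly helper an analyzer module might contain, not an actual Qu1cksc0pe function; the real tests would import from the actual modules instead:

```python
import re

import pytest

# hypothetical helper standing in for real analyzer logic
def extract_strings(data: bytes, min_len: int = 4):
    """Return printable ASCII runs of at least min_len bytes."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

@pytest.mark.parametrize("payload,expected", [
    (b"\x00\x01abcd\xff", ["abcd"]),  # a single printable run
    (b"ab\x00cd", []),                # runs below min_len are dropped
    (b"", []),                        # empty input yields nothing
])
def test_extract_strings(payload, expected):
    assert extract_strings(payload) == expected
```

Parameterized tests like this make it cheap to pin down the edge cases of the analysis logic (binary garbage, empty inputs, boundary lengths) without duplicating test code.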
I think the difficult part will be handling actual malware samples, if and when I ever get to that point. Getting the project installed properly on the CI runners might also not be that simple, considering its dependencies. Until then, the tests will stay local, albeit (partially) automated.