I would check the logs that Driller produces in fuzzer-err.log and driller-err.log. Chances are that AFL needs some extra environment options set. You can also check AFL's logs in your FUZZER_WORK_DIR.
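For example, to track down AFL's status files and those error logs in one go (the path is an assumption, taken from the FUZZER_WORK_DIR value shown later in this thread):
find /media/fuzz-ramdisk -name fuzzer_stats -o -name '*-err.log'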
Further on this topic, it seems like run.py runs AFL as a fuzzer, and test_driller.py runs angr as a symbolic analysis tool, but none of the supplied scripts does both? Or am I missing something?
Can you provide any guidance on how the fuzzer and angr would be integrated?
None of the scripts does both. The way we ran Driller was with the drilling jobs on one machine and the AFL jobs on another. I think we used Redis.
It isn't hard to write a script that runs both on the same machine: check whether AFL has run out of pending favorites (the pending_favs counter in its fuzzer_stats file), and if so, start a Driller job with any inputs that haven't yet been passed to Driller, as sketched below.
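A minimal sketch of that loop, assuming AFL's standard fuzzer_stats/queue layout under a sync directory; the paths are placeholders, and kick_off_driller is hypothetical — wire it to however you submit Driller jobs (a direct driller.Driller call, a Celery task, etc.):

import os
import time

# Assumed layout: adjust to your AFL sync dir (e.g. under FUZZER_WORK_DIR).
SYNC_DIR = "/media/fuzz-ramdisk/mybinary/sync"
FUZZER = "fuzzer-master"
drilled = set()  # queue entries already handed to Driller

def pending_favs():
    """Read the pending_favs counter out of AFL's fuzzer_stats file."""
    stats = {}
    with open(os.path.join(SYNC_DIR, FUZZER, "fuzzer_stats")) as f:
        for line in f:
            if ":" in line:
                key, val = line.split(":", 1)
                stats[key.strip()] = val.strip()
    return int(stats["pending_favs"])

def kick_off_driller(input_path):
    # Hypothetical placeholder: submit the input to Driller however your
    # setup does it; this just records which inputs would be drilled.
    print("would drill:", input_path)

while True:
    if pending_favs() == 0:
        queue_dir = os.path.join(SYNC_DIR, FUZZER, "queue")
        for name in sorted(os.listdir(queue_dir)):
            path = os.path.join(queue_dir, name)
            if os.path.isfile(path) and path not in drilled:
                drilled.add(path)
                kick_off_driller(path)
    time.sleep(30)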
If you can get Redis and Celery running on the same machine, then you can execute ./run.py <bin_dir> in one terminal and ./node.py in another. You'll be dividing your machine's resources between fuzzing and symbolic execution, though.
My config.py also looks something like this:
### Redis Options
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB = 1
### Celery Options
BROKER_URL = 'pyamqp://myuser:mypasswd@localhost:5672/myvhost'
CELERY_ROUTES = {'driller.tasks.fuzz': {'queue': 'fuzzer'}, 'driller.tasks.drill': {'queue': 'driller'}}
### Environment Options
# directory containing driller-qemu versions, relative to the directory node.py is invoked in
QEMU_DIR = None
# directory containing the binaries, used by the driller node to find binaries
BINARY_DIR = '/drill-bins'
# directory containing the pcap corpus
PCAP_DIR = '/pcaps'
### Driller options
# how long to drill before giving up in seconds
DRILL_TIMEOUT = 60 * 60 * 2
# 16 GB
MEM_LIMIT = 16*1024*1024*1024
# where to write a debug file that contains useful debugging information like
# AFL's fuzzing bitmap, input used, binary path, time started.
# Uses the following naming convention:
# <binary_basename>_<input_str_md5>.py
DEBUG_DIR = '/drill-logs'
### Fuzzer options
# how often to check for crashes in seconds
CRASH_CHECK_INTERVAL = 60
# how long to fuzz before giving up in seconds
FUZZ_TIMEOUT = 60 * 60 * 24 * 5
# how long before we kill a dictionary creation process
DICTIONARY_TIMEOUT = 60 * 60
# how many fuzzers should be spun up when a fuzzing job is received
FUZZER_INSTANCES = 4
# where the fuzzer should place its results on the filesystem
FUZZER_WORK_DIR = '/media/fuzz-ramdisk'
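For what it's worth, the DEBUG_DIR naming convention in the comment above works out to something like this (a sketch based only on that comment, not on Driller's actual code):

import hashlib
import os

def debug_file_name(binary_path, input_bytes):
    # <binary_basename>_<input_str_md5>.py, per the DEBUG_DIR comment above
    digest = hashlib.md5(input_bytes).hexdigest()
    return os.path.basename(binary_path) + "_" + digest + ".py"

# e.g. debug_file_name("/drill-bins/CADET_00001", b"AAAA")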
I'm using RabbitMQ with Celery, and I don't believe DRILL_TIMEOUT or FUZZ_TIMEOUT works with run.py or node.py (although I haven't tested this extensively). I would also look at https://github.com/mechaphish/worker and https://github.com/mechaphish/meister if you really want to get into it.
When restarting the jobs, I kill everything with pkill python; pkill celery; pkill afl-fuzz. Use that at your own risk if you have other Python processes running that aren't related to angr.
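If you want something a bit more targeted, pkill -f matches against the full command line rather than just the process name, so something along these lines (patterns are illustrative) spares unrelated Python processes:
pkill -f run.py; pkill -f node.py; pkill -f afl-fuzz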
I also clear the Celery queues with
rabbitmqadmin -u myuser -p mypasswd -V myvhost purge queue name=driller
and
rabbitmqadmin -u myuser -p mypasswd -V myvhost purge queue name=fuzzer
You can view the number of jobs in each queue with
rabbitmqadmin -u myuser -p mypasswd -V myvhost list queues vhost name node messages message_stats.publish_details.rate
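If you want that to refresh continuously, one option is to wrap it in watch, e.g.:
watch -n 10 rabbitmqadmin -u myuser -p mypasswd -V myvhost list queues name messages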
I just uploaded the following script: https://github.com/shellphish/fuzzer/blob/master/shellphuzz
It facilitates drilling on a single machine, and is definitely easier than the whole Redis/Celery or Kubernetes setup.
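If you try it, pointing it at a binary should be enough to get started; check shellphuzz --help for the worker-count and work-directory options, since I won't vouch for exact flag names here:
./shellphuzz /path/to/binary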
Hello, could someone explain to me how to run Driller? When I execute the run.py script, Driller listens for crashes, but the fuzzer doesn't seem to start fuzzing. Thank you in advance.