jiliguluss opened this issue 3 weeks ago
Can you please write which version you are using and on which platform?
I built AFL++ from the dev branch at commit 477063e9.
The OS is:

```
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.6 LTS
Release:        20.04
Codename:       focal
```
window 1:

```
AFL_FUZZER_STATS_UPDATE_INTERVAL=10 afl-fuzz -i in -o out -Q -- ./te
```

window 2:

```
while : ; do grep run_time out/default/fuzzer_stats; sleep 10; done
run_time : 0
run_time : 10
run_time : 20
run_time : 30
```
My guess is your tooling is keeping the file descriptor open and re-reads the same information over and over. You must open the file again for every new read. (There would be a race condition otherwise if afl-fuzz rewrote into the same file directly.)

Another possibility is that there is a very time-consuming event during which the UI would also not be updated, and the stats can lag behind (and that is OK; fuzzing performance is more important than stat updates). This is a rare occurrence though, and only applies to cmplog and syncing with slow + large targets.
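To rule out the stale-descriptor case, the polling reader can open the file fresh on every iteration; a minimal sketch (the stats path simply mirrors the `-o out` directory used above, and the function name is mine):

```python
import time

STATS_PATH = "out/default/fuzzer_stats"  # matches -o out, default instance

def read_run_time(path):
    # Open the file anew on every call and never cache the file object,
    # so each poll sees whatever afl-fuzz most recently wrote.
    with open(path) as f:
        for line in f:
            key, sep, value = line.partition(":")
            if sep and key.strip() == "run_time":
                return int(value.strip())
    return None

# Polling every 10 seconds, like the shell loop above:
# while True:
#     print("run_time :", read_run_time(STATS_PATH))
#     time.sleep(10)
```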
> my guess is your tooling is keeping the file descriptor open and re-reads the same information over and over. you must open the file again for every new read. (there would be a race condition otherwise if afl-fuzz would rewrite into the same file directly.)
I use Python's `with` statement to read the fuzzer_stats file. After reading the content, the file handle is closed and not held open.

In addition, I read fuzzer_stats every 5 minutes. Even after the file handle is released, afl++ still does not update fuzzer_stats within 5 minutes, which seems a little strange.
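One way to tell a stale reader apart from a stats file that is genuinely never rewritten is to record the file's modification time on each poll; a hypothetical check (the function name and return shape are mine, not an AFL++ API):

```python
import os

def stats_snapshot(path):
    # Return (mtime, run_time) for one poll. If mtime advances while
    # run_time stays constant, the file is being rewritten with the
    # same content; if mtime itself never changes, afl-fuzz is simply
    # not updating the file.
    mtime = os.stat(path).st_mtime
    run_time = None
    with open(path) as f:
        for line in f:
            key, sep, value = line.partition(":")
            if sep and key.strip() == "run_time":
                run_time = int(value.strip())
    return mtime, run_time
```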
How about you use your python script and, in parallel, the `while : ; do grep ...` command I did, and see if this produces different results?
@bancsorin10 honestly, I have never used dumb mode in my life :) It makes zero sense to use it. If that does not work then it is a very different reason to the original poster's.
@bancsorin10 there was a bug in non-instrumented mode and no-forkserver mode (both are pointless to use) when trying to exit fuzzing, fixed that in dev.
> how about you use your python script and in parallel the `while : ; do grep ...` command I did and see if this produces different results?
I ran an experiment with a python script:

```python
import subprocess as sp

fuzz_cmd = 'afl-fuzz -i in -o res -Q -- ./test/cares @@'
grep_cmd = 'while : ; do grep run_time res/default/fuzzer_stats; sleep 10; done'

# Note: afl-fuzz's stdout/stderr are piped but never drained below,
# so its output pipe buffer can fill up.
fuzz_proc = sp.Popen(fuzz_cmd, shell=True, stdout=sp.PIPE, stderr=sp.PIPE)
grep_proc = sp.Popen(grep_cmd, shell=True, stdout=sp.PIPE, stderr=sp.PIPE)

try:
    while True:
        line = grep_proc.stdout.readline()
        if not line:
            break
        print(line.decode().strip())
except KeyboardInterrupt:
    print("Interrupted by user")
```
The run time is always 0, but the queue is not empty:
**Describe the bug**
Hi, I am using afl++'s qemu mode to test the cares target. During the test, I want to check the progress of fuzzing regularly. I check the fuzzer_stats file every 5 minutes, but I always find that the run time and last find are not updated. I used the dev branch's AFL_FUZZER_STATS_UPDATE_INTERVAL environment variable and set it to update fuzzer_stats every 30 seconds, but I found that this didn't work either.
**To Reproduce**
Check `run_time` and `last_find - start_time` in fuzzer_stats.

**Expected behavior**
I expect `run_time` and `last_find` to change over time, especially `run_time`. If I understand correctly, run_time should be the fuzzer's running time; it should not remain constant.
**Screen output/Screenshots**
But I found that as time goes by, run time and last find are never updated.
![image](https://github.com/AFLplusplus/AFLplusplus/assets/38446864/bac1a9b9-5e77-42b9-8470-5da08c84c25a)

I also checked the plot_data file, and I found that its updates lag even more than fuzzer_stats. At the time I took the screenshot, plot_data did not have any data.
![image](https://github.com/AFLplusplus/AFLplusplus/assets/38446864/fa0f87ef-0fce-409b-9e14-3b53378a8861)
I originally thought this was a random occurrence, but I tested it many times and with other targets as well. It turns out that this is not a low-probability event.
**Additional context**
Is there any way to get reliable fuzzing progress data? I hope to use this data to determine whether fuzzing has entered a stagnant stage and then decide whether intervention is needed.
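For the stagnation question, one possible sketch: fuzzer_stats stores `last_find` as a Unix timestamp, so comparing it to the current time gives a rough stall detector (the 30-minute threshold and both function names are my own arbitrary choices, not AFL++ conventions):

```python
import time

STALL_SECS = 30 * 60  # example threshold: 30 minutes with no new find

def parse_stats(path):
    # fuzzer_stats is "key : value" per line; keep raw string values.
    stats = {}
    with open(path) as f:
        for line in f:
            key, sep, value = line.partition(":")
            if sep:
                stats[key.strip()] = value.strip()
    return stats

def is_stalled(path, now=None):
    stats = parse_stats(path)
    last_find = int(stats.get("last_find", "0"))
    if last_find == 0:
        return False  # nothing found yet; cannot judge stagnation
    now = time.time() if now is None else now
    return now - last_find > STALL_SECS
```

A threshold like this only makes sense relative to the target's speed; a slow QEMU-mode target may legitimately go much longer between finds.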