marcellomaugeri opened 3 years ago
Try afl-qemu-trace with your target, maybe it’s aarch64
afl-qemu-trace works
thanks @vanhauser-thc !
afl-qemu-trace works, but the problem still remains, I can't fuzz it
What's the error you are seeing?
afl-fuzz gives me the error above
Hello,
maybe I can chime in with a more complete bug description (I stumbled across the same bug just now):
Minimal example, adapted from this stackoverflow answer so that we can exclude libraries in the process:
File notmain.c, exhibiting a straightforward buffer overrun:
void write ( unsigned int, char *, unsigned int );
int read ( unsigned int, char *, unsigned int );
int notmain ( void )
{
char buf[40];
int len = read(0, buf, 100); /* reads up to 100 bytes into a 40-byte buffer */
write(1,buf,len);
return(0);
}
File start.s
.thumb
.thumb_func
.global _start
_start:
@mov r0,=0x10000
@mov sp,r0
bl notmain
mov r7,#0x1
mov r0,#0
swi #0
.word 0xFFFFFFFF
b .
.thumb_func
.globl write
write:
push {r7,lr}
mov r7,#0x04
swi 0
pop {r7,pc}
b .
.thumb_func
.globl read
read:
push {r7,lr}
mov r7,#0x03
swi 0
pop {r7,pc}
b .
.end
File hello.ld
ENTRY(_start)
MEMORY
{
ram : ORIGIN = 0x00010000, LENGTH = 0x1000
}
SECTIONS
{
.text : { *(.text*) } > ram
.rodata : { *(.rodata*) } > ram
.bss : { *(.bss*) } > ram
}
Building:
arm-none-eabi-as --warn --fatal-warnings start.s -o start.o
arm-none-eabi-gcc -Wall -Werror -O2 -nostdlib -nostartfiles -ffreestanding -mthumb -c notmain.c -o notmain.o
arm-none-eabi-ld -o notmain.elf -T hello.ld start.o notmain.o
arm-none-eabi-objdump -D notmain.elf > notmain.list
arm-none-eabi-objcopy notmain.elf notmain.bin -O binary
Running with qemu:
qemu-arm-static -d in_asm,cpu,cpu_reset -D hello -cpu cortex-m4 ./notmain.elf
Spawns an instance of the ELF that does echo (e.g. type asdf<enter>; it echoes the input back and exits).
Building AFL's QEMU mode support:
make
cd ./qemu-mode
CPU_TARGET=arm ./build_qemu_support.sh
Works as expected:
$ ./afl-qemu-trace ./notmain.elf
asdf
asdf
mkdir sampleinput findings
echo -n "asdf" > ./sampleinput/asdf
$ AFL_SKIP_CPUFREQ=1 ./afl-fuzz -Q -m 500MB -i ./sampleinput -o findings/ -- ./notmain.elf
afl-fuzz 2.57b by <lcamtuf@google.com>
[+] You have 80 CPU cores and 2 runnable tasks (utilization: 2%).
[+] Try parallel jobs - see docs/parallel_fuzzing.txt.
[*] Checking CPU core loadout...
[+] Found a free CPU core, binding to #0.
[*] Checking core_pattern...
[*] Setting up output directories...
[+] Output directory exists but deemed OK to reuse.
[*] Deleting old session data...
[+] Output dir cleanup successful.
[*] Scanning './sampleinput'...
[+] No auto-generated dictionary tokens to reuse.
[*] Creating hard links for all input files...
[*] Validating target binary...
[*] Attempting dry run with 'id:000000,orig:asdf'...
[*] Spinning up the fork server...
[-] Hmm, looks like the target binary terminated before we could complete a
handshake with the injected code. There are two probable explanations:
- The current memory limit (500 MB) is too restrictive, causing an OOM
fault in the dynamic linker. This can be fixed with the -m option. A
simple way to confirm the diagnosis may be:
( ulimit -Sv $[499 << 10]; /path/to/fuzzed_app )
Tip: you can use http://jwilk.net/software/recidivm to quickly
estimate the required amount of virtual memory for the binary.
- Less likely, there is a horrible bug in the fuzzer. If other options
fail, poke <lcamtuf@coredump.cx> for troubleshooting tips.
[-] PROGRAM ABORT : Fork server handshake failed
Location : init_forkserver(), afl-fuzz.c:2282
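As an aside, the $[499 << 10] in that hint is just the -m value (minus 1 MB) converted from MB to KB for ulimit -Sv; $[ ] is old Bash syntax for $(( )):

```shell
# 499 MB expressed in KB for ulimit -Sv (<< 10 is the same as * 1024)
echo $((499 << 10))   # prints 510976
```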
I'm not 100% sure how to apply those debugging hints to the ARM binary, but setting the ulimit and then running afl-qemu-trace actually does produce a failure:
$ ulimit -Sv $[499 << 10];
$ ./afl-qemu-trace ./notmain.elf
Unable to reserve 0xf7000000 bytes of virtual address space for use as guest address space (check your virtual memory ulimit setting or reserve less using -R option)
But as the binary itself has really modest memory requirements, I think something else is going wrong somewhere, because no matter how much memory one hands to afl-fuzz, it breaks down anyway:
$ AFL_SKIP_CPUFREQ=1 ./afl-fuzz -Q -m 100000MB -i ./sampleinput -o findings/ -- ./notmain.elf
afl-fuzz 2.57b by <lcamtuf@google.com>
[+] You have 80 CPU cores and 1 runnable tasks (utilization: 1%).
[+] Try parallel jobs - see docs/parallel_fuzzing.txt.
[*] Checking CPU core loadout...
[+] Found a free CPU core, binding to #0.
[*] Checking core_pattern...
[*] Setting up output directories...
[+] Output directory exists but deemed OK to reuse.
[*] Deleting old session data...
[+] Output dir cleanup successful.
[*] Scanning './sampleinput'...
[+] No auto-generated dictionary tokens to reuse.
[*] Creating hard links for all input files...
[*] Validating target binary...
[*] Attempting dry run with 'id:000000,orig:asdf'...
[*] Spinning up the fork server...
[-] Hmm, looks like the target binary terminated before we could complete a
handshake with the injected code. There are two probable explanations:
- The current memory limit (97.7 GB) is too restrictive, causing an OOM
fault in the dynamic linker. This can be fixed with the -m option. A
simple way to confirm the diagnosis may be:
( ulimit -Sv $[99999 << 10]; /path/to/fuzzed_app )
Tip: you can use http://jwilk.net/software/recidivm to quickly
estimate the required amount of virtual memory for the binary.
- Less likely, there is a horrible bug in the fuzzer. If other options
fail, poke <lcamtuf@coredump.cx> for troubleshooting tips.
[-] PROGRAM ABORT : Fork server handshake failed
Location : init_forkserver(), afl-fuzz.c:2282
I've tested various git versions, but it seems to be broken for all of them:
Hopefully this bug report helps clear things up; I'd be happy to provide more information if you need it, just let me know.
Thank you for AFL (and bearing with me that far), Simon
I'm trying to fuzz an ARM binary. I've built qemu 5.1.0 with CPU_TARGET=arm and afl-qemu-trace works correctly; in order to make it work I disabled all the patch files. However, when I do
AFL_SKIP_CPUFREQ=1 afl-fuzz -i ../testcases/text/ -o ../output -Q ../arm-bin
it gives me:
I'm almost sure that the memory limit is not being exceeded.
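For what it's worth, one workaround consistent with the symptoms above (an assumption on my side, based on AFL's qemu_mode notes rather than anything verified in this thread): the -m limit applies to the entire QEMU process, which reserves the whole 32-bit guest address space up front, so for -Q runs the memory limit is usually lifted entirely:

```shell
# Hypothetical invocation: disable afl-fuzz's memory limit for QEMU mode
AFL_SKIP_CPUFREQ=1 afl-fuzz -Q -m none -i ../testcases/text/ -o ../output -- ../arm-bin
```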