**Closed** — pallas closed this issue 4 years ago.
OK, at least one issue is in `find_nal_unit`, which can over-read its buffer. Patch coming soon, although I need to take a break at the moment.
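To illustrate the over-read class described above, here is a small, self-contained sketch of bounds-checked Annex-B start-code scanning. The function name `find_start_code` and its signature are invented for illustration; this is not the library's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: scan for an Annex-B start code (00 00 01) without
 * ever reading past buf[size-1]. The overrun class typically comes from
 * testing buf[i+2] before verifying that i+2 is still inside the buffer. */
static int find_start_code(const uint8_t *buf, size_t size, size_t *pos)
{
    for (size_t i = 0; i + 2 < size; i++) { /* bound covers all three reads */
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
            *pos = i;
            return 1;
        }
    }
    return 0;
}
```

The key point is that the loop bound accounts for the widest lookahead (`i + 2`), so a truncated input simply yields "not found" instead of a read past the end.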
Also, the `NAL_UNIT_TYPE_SUBSET_SPS` case in `read_nal_unit` shallow-copies `sps_subset` into `sps_subset_table`, causing a double free.
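The shallow-copy double free can be shown with a minimal hypothetical sketch; the `sps_t` struct and helper names below are invented for illustration and are not the library's actual types:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical struct standing in for sps_subset: it owns heap memory. */
typedef struct { int id; int *payload; } sps_t;

/* Shallow copy: dst->payload aliases src->payload, so freeing both
 * structs later frees the same pointer twice (the double-free pattern). */
static void shallow_copy(sps_t *dst, const sps_t *src) { *dst = *src; }

/* Deep copy: dst gets its own allocation of n ints, so each struct can
 * be freed independently. Returns 0 on success, -1 on allocation failure. */
static int deep_copy(sps_t *dst, const sps_t *src, size_t n)
{
    dst->id = src->id;
    dst->payload = malloc(n * sizeof *dst->payload);
    if (!dst->payload) return -1;
    memcpy(dst->payload, src->payload, n * sizeof *dst->payload);
    return 0;
}
```

Either deep-copying the owned pointer, or clearing the source's pointer after a shallow copy (ownership transfer), avoids the double free.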
@pallas That is exceptional work - thank you!
If you find any input sequences which cause a crash/buffer overrun, I should be able to tell how (and fix it).
Note: this has been fuzzed before, but a very long time ago, circa 2012. The `bs_read*` functions can safely (try to) read past the end of the existing buffer; they just return an infinite string of 0s. This is one of the main mechanisms protecting against overruns. I suspect that there could be issues in code which bypasses `bs_read` (mainly for performance reasons).
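A minimal sketch of that zero-fill mechanism, assuming a simple MSB-first bit reader (this is not the library's actual implementation; the struct and function names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical bit reader: buf/size describe the input, bit_pos is the
 * current read position in bits. */
typedef struct { const uint8_t *buf; size_t size; size_t bit_pos; } bs_t;

/* Read n bits MSB-first. Once bit_pos passes the end of the buffer,
 * every further bit reads as 0 instead of overrunning the allocation. */
static uint32_t bs_read_u(bs_t *bs, int n)
{
    uint32_t v = 0;
    for (int i = 0; i < n; i++) {
        size_t byte = bs->bit_pos >> 3;
        int bit = 0;                      /* past-the-end reads as 0 */
        if (byte < bs->size)
            bit = (bs->buf[byte] >> (7 - (bs->bit_pos & 7))) & 1;
        v = (v << 1) | bit;
        bs->bit_pos++;
    }
    return v;
}
```

With this design, a parser that consumes more bits than the input contains degrades gracefully rather than reading out of bounds, which is why the remaining crashes point at code paths that bypass the reader.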
Thanks, I've got some patches coming. One question: it looks like the .in.c files are not in sync with the .c files. Should I update the .in.c files and regenerate, or just update the .c files directly?
OK, I found a few different issues, and the fuzzer has now been running for half an hour with no new error paths. I'm going to let it run overnight, refine my patches, and likely open a PR tomorrow.
I'm seeing similar issues in `read_pred_weight_table` and `read_slice_header` (at least in the debug versions), and at least one more double free somewhere (a later assert failure inside malloc, so no backtrace).
OK, I've lost my momentum to keep working on this. The following functions still have array overruns and/or double frees even after the PR I made a few weeks ago:
- `read_debug_dec_ref_pic_marking`
- `read_debug_pred_weight_table`
- `read_debug_ref_pic_list_reordering`
- `read_debug_slice_header`
- `read_debug_slice_header_in_scalable_extension`
- `read_debug_slice_layer_rbsp`
You can run the fuzzer with:

```sh
CC=/usr/bin/afl-clang-fast CXX=/usr/bin/afl-clang-fast++ ./configure --disable-debug --disable-shared
env AFL_HARDEN=1 make
env AFL_SKIP_CPUFREQ=1 afl-fuzz -i samples/ -o results/ ./h264_analyze -p @@
```
I'm analyzing the results with:

```sh
mkdir -p traces/
for i in results/crashes/id* ; do echo $i && valgrind --track-origins=yes ./h264_analyze -p $i 2>&1 | tee traces/$(basename $i).out ; done
ag -H 'of size' traces/ -C 10| grep \\.c: | sed s,^.\\\+0x,0x, | sed s,^.\[^:\]\\\+:,, | sort | uniq -c
```
Hey, it seems pretty obvious that I gave up on this. Basically, every time a dynamically sized array is created, used, or destroyed, its bounds are never checked. I'm going to go ahead and close this, since I don't intend to do any more work here.
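For anyone picking this up, the failure class looks roughly like the following hypothetical sketch (names and sizes invented for illustration): an index parsed from the bitstream is used directly, so a malformed input can write out of bounds; the fix is to validate against the table's capacity first.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ENTRIES 32  /* hypothetical fixed capacity */

typedef struct { int entries[MAX_ENTRIES]; size_t count; } table_t;

/* Bounds-checked write: returns 0 on success, -1 if idx is outside the
 * table. The buggy code paths skip this check on parsed indices. */
static int table_set(table_t *t, size_t idx, int value)
{
    if (idx >= MAX_ENTRIES)   /* the missing validation */
        return -1;
    t->entries[idx] = value;
    if (idx >= t->count)
        t->count = idx + 1;
    return 0;
}
```

The same check-then-use pattern applies on read and on teardown: every parsed index or count needs validating against the allocated size before it touches the array.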
I'm using AFL++ to fuzz using the samples. So far there are a bunch of crashes. I'm making this note to let folks know I'm doing this and intend to update with collated results.