Resolved Issues:
Frames are now flagged as corrupted if an invalid block size is detected, preventing memory access beyond allocated bounds.
Frames are marked as corrupted when an unsupported bit depth is parsed (currently restricted to 16-bit).
Correctly handle Rice partitions when an escape code is encountered and when the escaped bits-per-sample value is zero.
Ensure that failed frames are not retried for decoding if the buffer hasn’t changed, avoiding potential infinite loops.
Discard processed buffer data after detecting a corrupted frame to force new data reading and initiate a fresh sync-code search.
Reject frames marked as potentially failed when they are the last frame of a stream or file, preventing deadlocks.
Residuals and predicted samples are now written directly to the frame_buffer instead of an intermediate vector, eliminating dynamic memory allocation.
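The Rice-partition fix can be illustrated with a minimal sketch. This is not the project's actual code (BitReader and decode_rice_partition are hypothetical names), but the bit layout follows the FLAC specification: a 4-bit Rice parameter, where 0b1111 is an escape code followed by a 5-bit verbatim bit width, and a width of zero means every residual in the partition is zero.

```cpp
#include <cstddef>
#include <cstdint>

// Minimal MSB-first bit reader (hypothetical; the real decoder reads from its
// processing buffer instead).
struct BitReader {
    const uint8_t* data;
    size_t bit_pos = 0;

    uint32_t read_bits(unsigned n) {
        uint32_t v = 0;
        for (unsigned i = 0; i < n; ++i, ++bit_pos)
            v = (v << 1) | ((data[bit_pos / 8] >> (7 - bit_pos % 8)) & 1u);
        return v;
    }
    uint32_t read_unary() {  // count 0-bits up to the terminating 1-bit
        uint32_t q = 0;
        while (read_bits(1) == 0) ++q;
        return q;
    }
};

// Interpret an n-bit value as two's complement.
static int32_t sign_extend(uint32_t v, unsigned bits) {
    uint32_t m = 1u << (bits - 1);
    return static_cast<int32_t>((v ^ m) - m);
}

// Decode one Rice partition into out[0..count). A 4-bit parameter of 0b1111
// is the escape code: the next 5 bits give the verbatim bit width, and a
// width of 0 means every residual in the partition is zero.
void decode_rice_partition(BitReader& br, int32_t* out, size_t count) {
    uint32_t param = br.read_bits(4);
    if (param == 0xF) {                      // escape: unencoded residuals
        uint32_t raw_bits = br.read_bits(5);
        for (size_t i = 0; i < count; ++i)
            out[i] = (raw_bits == 0)
                         ? 0
                         : sign_extend(br.read_bits(raw_bits), raw_bits);
        return;
    }
    for (size_t i = 0; i < count; ++i) {     // regular Rice code
        uint32_t u = (br.read_unary() << param) | br.read_bits(param);
        out[i] = static_cast<int32_t>(u >> 1) ^ -static_cast<int32_t>(u & 1u);
    }
}
```

The key point is the early return in the escape branch: without it, a zero bit width would previously fall through into the regular Rice path and read past the partition.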
None of the files in the FLAC bitstream test set (https://github.com/ietf-wg-cellar/flac-test-files) cause the decoder to crash anymore, but not all playable files are decoded correctly with this implementation. In addition to unsupported formats, such as 24-bit and multi-channel files, the following issues remain:
Remaining Issues:
There are some limitations because the current implementation of the decode function requires the entire frame to be stored in the processing buffer and cannot request additional data on its own:
The amount of data that can be processed is limited by the buffer size. For example, if the file or stream header contains large meta blocks (such as images) that exceed the buffer capacity, the file cannot be decoded. While the meta block could be discarded, the current implementation lacks a mechanism to resume reading from the last position when the decode function is called with new data.
This also impacts performance. Whenever a frame cannot be fully decoded due to insufficient bits in the buffer, all previously decoded samples must be decoded again when new data becomes available.
FLAC supports block sizes as small as 16 samples per block. However, the overhead of calling the decode function separately for each block and waiting for new data between blocks decreases throughput, making it difficult to decode small block sizes in real-time.
The decoder doesn’t support sample rate or bit depth changes within a file/stream.
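One way the missing resume mechanism could work is to persist a skip counter across decode calls, so an oversized metadata block (e.g. an embedded picture) is discarded incrementally instead of failing. This is only a sketch under assumed names (DecoderState, consume); the current implementation does not do this.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Sketch of resumable skipping: when a metadata block is larger than the
// processing buffer, remember how much of it is still unread so the next
// call can continue discarding instead of starting over.
struct DecoderState {
    size_t bytes_to_skip = 0;  // unread remainder of an oversized metadata block
};

// Consume input, discarding skipped bytes first. Returns the number of bytes
// consumed so the caller knows how much of the buffer to refill.
size_t consume(DecoderState& st, const uint8_t* buf, size_t len) {
    if (st.bytes_to_skip > 0) {
        size_t n = std::min(st.bytes_to_skip, len);
        st.bytes_to_skip -= n;
        return n;  // caller refills and calls again until the skip completes
    }
    (void)buf;
    // ...normal frame-header parsing would continue here...
    return 0;
}
```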