Closed jwd83 closed 1 year ago
Imagine there's a continuously running radio stream. With this info present in every frame header you can just "tune in" at any moment and start decoding.
Now, granted, this isn't necessary for static files. But having a different file header and different frame headers for files and streams would complicate the format. So these values are always present to keep it consistent.
If QOA were multiplexed or streamed, it would need a way to carry timestamps: the radio emitter's clock would not run at exactly the same rate as the receiver's sound card, so the two clocks would diverge.
Often those timestamps live in the MP4/TS layer rather than in the codec itself.
AFAIK QOA is already "muxable", apart from having no way to (non-statistically) resynchronize after missed data (see also: H.264 NAL units). It's been a long time since I worked on this, so I don't know how necessary that really is; often the mux layer can add this framing.
Typical broadcast/streaming scenarios also include codec changes mid-stream, so it's nice that QOA carries type information in every frame like it does; otherwise the codec must include its own type-change messaging (like H.264 SEI), which in broadcast is typically sent every 2 to 10 seconds. </unhelpful blabber>
Thank you for taking the time to respond - I appreciate hearing your perspectives.
https://github.com/phoboslab/qoa/blob/c545d4e93f2078da619c44af17db64f53a22e9fc/qoa.h#L24
I was reading through some of the code here after seeing this project mentioned in the new raylib release. I am curious why num channels and sample rate are present in every frame when they are constant (and therefore redundant?) in a valid file. Why not move these into the file header and save ~4 bytes per frame?
Is this based on wanting 64-bit aligned reads specifically? Sorry, just curious. I love how straightforward this is to implement and wish you success!