This library has been in need of a major refactor for some time; it may still take a while to get to. Rough plan:
APIs and internals:
- provide an AVAssetReader- and AVAssetWriter-like API for decode and encode; the current OGVDecoder API is too confusing to use, and when anything goes wrong it's unclear what happened. Add a seek method on top of the track readers.
- always copy into sample buffers for both video and audio decodes, instead of playing locking/lifetime games with the decoder's internal buffers
- use AVSampleBufferDisplayLayer and drop OGVFrameView entirely (the upstream blockers were fixed in iOS 11)
- decouple codecs from the demuxer on the decode side as well as the encode side
- allow each codec and de/muxer module to register itself with the core at init time
- drop the old AVAssetReader-based MP4 reader (limited to local files: no URLs, no support for Opus/VP9/etc.) and replace it with a proper demuxer. Use VideoToolbox, or whatever is current, for H.264 decoding.
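The reader-style API could take roughly the shape sketched below. This is a minimal C sketch under stated assumptions, not OGVKit's actual interface: `TrackReader`, `SampleBuffer`, and the toy in-memory track are all hypothetical names. It shows the three points from the plan: explicit AVAssetReader-style status reporting, decode output always copied into an owned buffer, and seek on the track reader.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical owned sample buffer: decoded data is always copied here,
 * so callers never share lifetimes with the decoder's internal buffers. */
typedef struct {
    double timestamp;   /* presentation time in seconds */
    size_t length;
    uint8_t bytes[256];
} SampleBuffer;

/* AVAssetReader-style status: when something goes wrong, the caller can
 * see it instead of guessing. */
typedef enum {
    READER_READING,
    READER_COMPLETED,
    READER_FAILED
} ReaderStatus;

typedef struct TrackReader TrackReader;
struct TrackReader {
    ReaderStatus status;
    /* Copies the next decoded sample into *out; false at end of stream. */
    bool (*copy_next_sample)(TrackReader *self, SampleBuffer *out);
    /* Repositions the reader; subsequent reads start at `seconds`. */
    bool (*seek)(TrackReader *self, double seconds);
    void *context; /* demuxer/codec state behind the interface */
};

/* Toy in-memory "track" so the interface can be exercised: one sample
 * per whole second, ten seconds long. */
typedef struct {
    double position;
} MemoryTrack;

static bool memory_copy_next(TrackReader *self, SampleBuffer *out) {
    MemoryTrack *t = self->context;
    if (t->position >= 10.0) {
        self->status = READER_COMPLETED;
        return false;
    }
    out->timestamp = t->position;
    out->length = 4;
    memcpy(out->bytes, "data", 4);
    t->position += 1.0;
    return true;
}

static bool memory_seek(TrackReader *self, double seconds) {
    MemoryTrack *t = self->context;
    if (seconds < 0.0 || seconds > 10.0) {
        self->status = READER_FAILED;
        return false;
    }
    t->position = seconds;
    self->status = READER_READING;
    return true;
}

static TrackReader make_memory_reader(MemoryTrack *track) {
    TrackReader r = { READER_READING, memory_copy_next, memory_seek, track };
    return r;
}
```

A real implementation would back `context` with the demuxer and codec state, but the calling convention for consumers stays this small.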
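Init-time registration might look like the following sketch; all the names here (`ModuleKind`, `ModuleInfo`, `ogv_register_module`, `ogv_find_module`) are hypothetical, not existing OGVKit API. Each codec or de/muxer module describes itself once at startup, and the core looks modules up by kind and name instead of hard-coding them.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef enum { MODULE_DEMUXER, MODULE_MUXER, MODULE_CODEC } ModuleKind;

/* Hypothetical descriptor each codec or de/muxer module registers with
 * the core at init time (e.g. from a constructor or +load). */
typedef struct {
    const char *name;      /* e.g. "webm", "theora" */
    ModuleKind kind;
    void *(*create)(void); /* factory for a module instance */
} ModuleInfo;

#define MAX_MODULES 32
static ModuleInfo registry[MAX_MODULES];
static size_t registry_count = 0;

/* Called once per module; the core never needs a hard-coded list. */
static bool ogv_register_module(const ModuleInfo *info) {
    if (registry_count >= MAX_MODULES)
        return false;
    registry[registry_count++] = *info;
    return true;
}

/* Lookup by kind and name, so decode can pair any registered demuxer
 * with any registered codec. */
static const ModuleInfo *ogv_find_module(ModuleKind kind, const char *name) {
    for (size_t i = 0; i < registry_count; i++) {
        if (registry[i].kind == kind && strcmp(registry[i].name, name) == 0)
            return &registry[i];
    }
    return NULL;
}
```

This is also what makes the decoupling above practical: the demuxer reports a codec name from the container, and the core resolves it through the registry rather than through compile-time wiring.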
Build system:
- drop the CocoaPods requirement (if we end up publishing as a pod, do it with the framework output)
- have a single Xcode project that generates multiple frameworks