For subtitles there is just a decode method instead of the usual send_packet and receive_frame pair. For that pair to work, I believe there would need to be a Frame(Subtitle) type in the frames folder.
Is this API design intentional?
I don't have much prior experience working with FFmpeg; I just want to show subtitles using the egui-video crate in my own app. Please and Thank you.
This is intentional. This Rust library is a relatively thin wrapper around the FFmpeg API, and FFmpeg itself uses a dedicated function, avcodec_decode_subtitle2, for subtitles instead of its usual packet-based avcodec_send_packet/avcodec_receive_frame API. That call fills an AVSubtitle struct rather than an AVFrame, which is why there is no subtitle variant among the frame types.
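For a concrete picture, here is a minimal sketch of what subtitle decoding looks like under that design. It assumes the wrapper in question is the ffmpeg-next (rust-ffmpeg) crate that egui-video builds on; "movie.mkv" is a placeholder path, and the exact method names are worth double-checking against the crate version you use.

```rust
// Minimal sketch: decoding a subtitle stream with the ffmpeg-next crate.
// "movie.mkv" is a placeholder; error handling is kept deliberately simple.
use ffmpeg_next as ffmpeg;
use ffmpeg::codec::subtitle::{Rect, Subtitle};
use ffmpeg::media::Type;

fn main() -> Result<(), ffmpeg::Error> {
    ffmpeg::init()?;

    let mut input = ffmpeg::format::input(&"movie.mkv")?;

    // Pick the best subtitle stream and remember its index.
    let best = input
        .streams()
        .best(Type::Subtitle)
        .ok_or(ffmpeg::Error::StreamNotFound)?;
    let stream_index = best.index();

    // Build a subtitle decoder from the stream's codec parameters.
    let context = ffmpeg::codec::context::Context::from_parameters(best.parameters())?;
    let mut decoder = context.decoder().subtitle()?;

    for (stream, packet) in input.packets() {
        if stream.index() != stream_index {
            continue;
        }
        // No send_packet/receive_frame here: a single decode() call wraps
        // avcodec_decode_subtitle2 and fills a Subtitle (AVSubtitle) value.
        let mut subtitle = Subtitle::new();
        if decoder.decode(&packet, &mut subtitle)? {
            for rect in subtitle.rects() {
                match rect {
                    Rect::Text(text) => println!("text: {}", text.get()),
                    Rect::Ass(ass) => println!("ass: {}", ass.get()),
                    // Bitmap rects carry image data you would render yourself.
                    _ => {}
                }
            }
        }
    }
    Ok(())
}
```

The key difference from video and audio decoding is the single decode call that fills a Subtitle value in place, mirroring avcodec_decode_subtitle2 at the C level. Text and ASS rects carry strings you can hand straight to your UI; bitmap subtitles carry image data you would have to composite yourself.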