sylvaneau closed this pull request 4 years ago.
Thanks for your contributions!
Here is an overview of what got changed by this pull request:
Complexity increasing per file
==============================
- Unosquare.FFME/Constants.cs 2
- Unosquare.FFME/Container/DataComponent.cs 5
- Unosquare.FFME.Windows/Common/RenderingDataEventArgs.cs 2
- Unosquare.FFME/Container/MediaContainer.cs 1
- Unosquare.FFME/Container/MediaComponent.cs 2
- Unosquare.FFME.Windows/Rendering/DataRenderer.cs 3
- Unosquare.FFME/Engine/MediaEngineState.cs 5
- Unosquare.FFME.Windows/Platform/MediaConnector.cs 1
- Unosquare.FFME/Container/DataFrame.cs 3
- Unosquare.FFME/Container/DataBlock.cs 1
- Unosquare.FFME/Engine/TimingController.cs 1
See the complete overview on Codacy
After careful consideration and testing, I came to the conclusion that data packets have to be processed differently from audio, video, or subtitle (multimedia) packets, for the following reasons:
While I have taken the main ideas presented in this PR, the approach has changed significantly. You can handle packets manually by subscribing to the DataFrameReceived event, which carries the DataFrame. The start time is guessed if no packet timing info is available. The raw data is also available in the PacketData property.
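For reference, here is a minimal sketch of what handling that event from application code could look like. Only the DataFrameReceived event and the DataFrame/PacketData names come from the comment above; the event-args shape (e.Frame), the StartTime property, and the external decode call are hypothetical placeholders, not the library's confirmed API.

```csharp
// Minimal sketch of handling raw data packets from application code.
// DataFrameReceived, DataFrame, and PacketData are named in the comment above;
// the event-args shape (e.Frame), the StartTime property, and the decode call
// are hypothetical placeholders.
public partial class MainWindow
{
    private void AttachDataHandler(Unosquare.FFME.MediaElement media)
    {
        media.DataFrameReceived += (sender, e) =>
        {
            var frame = e.Frame;               // assumed: event args expose the DataFrame
            byte[] payload = frame.PacketData; // raw bytes of the data packet
            var startTime = frame.StartTime;   // guessed when no packet timing info exists

            // Hand the payload to an external decoder, e.g. a KLV parser
            // (see the sketch further below).
            // DecodeKlv(payload, startTime);
        };
    }
}
```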
Please check it out and open a new issue if you believe I have missed something. Thanks so much, @sylvaneau!
FFmpeg is able to decode Data streams as well as Video, Audio or Subtitle streams.
One "common" example of data stream is provided by MPEG2-TS containers vidéos produced by "drones". These videos include a data stream with data encoded using the KLV (Key Length Value) format. This stream contains the drone position and attitude.
We use your player to display drone videos, and we have implemented the data-stream decoding feature.
Because very few codecs are available for data-stream decoding/encoding, we decided to bypass the codec stage and to materialize Frames/Blocks directly with the byte array containing the actual data; the decoding itself is done externally.
Since few KLV decoding libraries exist, we developed our own, and we are currently discussing making it open source.
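As a rough illustration of the format (not the library mentioned above), the sketch below walks a raw packet payload as SMPTE 336M-style KLV triplets: a 16-byte universal key, a BER-encoded length, then the value bytes. It assumes well-formed input and is only meant to show why a plain byte array is enough to hand off to an external decoder.

```csharp
using System;
using System.Collections.Generic;

// Minimal, illustrative KLV (Key-Length-Value) triplet reader over a raw
// packet payload, assuming SMPTE 336M-style 16-byte universal keys and
// BER-encoded lengths. Truncated or malformed payloads are not validated;
// a real parser must bounds-check every read.
public static class KlvReader
{
    public static IEnumerable<(byte[] Key, byte[] Value)> Read(byte[] payload)
    {
        var offset = 0;
        while (offset + 17 <= payload.Length)
        {
            // 16-byte universal key.
            var key = new byte[16];
            Array.Copy(payload, offset, key, 0, 16);
            offset += 16;

            // BER length: short form (< 0x80) or long form (0x80 | byte count).
            int length = payload[offset++];
            if ((length & 0x80) != 0)
            {
                var byteCount = length & 0x7F;
                length = 0;
                for (var i = 0; i < byteCount; i++)
                    length = (length << 8) | payload[offset++];
            }

            // Value bytes, handed to the domain-specific decoder.
            var value = new byte[length];
            Array.Copy(payload, offset, value, 0, length);
            offset += length;

            yield return (key, value);
        }
    }
}
```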
Here is an example of a video with a data stream: http://samples.ffmpeg.org/MPEG2/mpegts-klv/Day%20Flight.mpg
A quick note on how the PTS (presentation timestamp) is computed for data packets. There are two ways to encode the PTS for packets in data streams:
This way, the events linked to the data stream are raised as close as possible to their actual position in the source stream.
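As a hedged illustration of that timing behavior, the sketch below derives a start time for a data packet: when the packet carries a PTS, convert it with the stream time base; when it does not, fall back to the last known position, which matches the "guessed" start time mentioned earlier. Names and types follow FFmpeg.AutoGen and the helper is an assumption for illustration, not the project's actual implementation.

```csharp
using System;
using FFmpeg.AutoGen;

// Illustrative sketch: derive a start time for a data packet.
// With a PTS, convert it using the stream time base; without one
// (AV_NOPTS_VALUE), fall back to the last known position so the event
// is raised as close as possible to its actual place in the stream.
public static unsafe class DataPacketTiming
{
    public static TimeSpan GuessStartTime(AVPacket* packet, AVRational timeBase, TimeSpan lastKnownPosition)
    {
        if (packet->pts == ffmpeg.AV_NOPTS_VALUE)
            return lastKnownPosition; // no timing info: guess from the current position

        var seconds = packet->pts * ffmpeg.av_q2d(timeBase);
        return TimeSpan.FromSeconds(seconds);
    }
}
```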
I hope everything is clear, and I would be glad to see our contribution merged into this great project ;-)
Regards