Allows developers to install and customize their connected camera and other devices to securely stream video, audio, and time-encoded data to Kinesis Video Streams
I'm learning Kinesis Video Streams and have some questions; could you help me? #185
How are the fragments carried by the PutMedia API sent: sequentially one by one, multiple fragments together in a single PutMedia call, or multiple PutMedia calls made in parallel? I can't find the implementation in the code at the moment.
Does decreasing or increasing the fragment length (1s -> 2s -> 0.5s), or the message size, help fragment upload?
What are the conditions for generating FRAGMENT_METADATA? Must both video frames and audio frames be present, and does a fragment fail to be generated if one type of frame is missing?
Can fragments not be concurrent, i.e. must a later fragment come entirely after the previous one? The documentation says:
The earliest frame timestamp in a fragment must be after the latest frame timestamp in the previous fragment.
What is the source of FragmentTimeCode? Is it the timestamp from the fragment ACK received from the KVS service? The documentation says:
FragmentTimeCode - Fragment timecode for which acknowledgement is sent
Where should I check server_timestamp? The documentation says:
server_timestamp - Timestamp when Kinesis Video Streams started receiving the fragment
In general a shorter fragment means more key frames, which means a higher bit rate, but it also means lower end-to-end latency. So if you are very sensitive to latency, you may choose a fragment length (key frame interval) of, for example, 1s. For the encoder to do its job, it is not advisable to use a key frame interval lower than that; 1s or 2s should be fine. It does not affect upload reliability; fragment length matters only for end-to-end latency.
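For illustration, here is a minimal sketch of how a producer might mark key frames so that fragment boundaries fall roughly every 1s at 30 fps. The struct is a simplified stand-in for the SDK's `Frame` type, not the SDK's actual API; the 100 ns time unit follows the SDK's convention:

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for the SDK's Frame struct (illustrative, not the real type).
struct SketchFrame {
    uint64_t presentationTs; // 100 ns units, matching the SDK's convention
    uint64_t duration;       // 100 ns units
    bool     isKeyFrame;     // would map to FRAME_FLAG_KEY_FRAME in the SDK
};

constexpr uint64_t HUNDREDS_OF_NANOS_IN_A_SECOND = 10'000'000ULL;

int main() {
    const uint32_t fps = 30;
    const uint32_t keyFrameInterval = fps; // one key frame per second => ~1 s fragments
    const uint64_t frameDuration = HUNDREDS_OF_NANOS_IN_A_SECOND / fps;

    std::vector<SketchFrame> frames;
    uint64_t ts = 0;
    for (uint32_t i = 0; i < 90; ++i) { // 3 s of video => roughly 3 fragments
        frames.push_back({ts, frameDuration, i % keyFrameInterval == 0});
        ts += frameDuration; // the next frame starts exactly where this one ends
    }
    return 0;
}
```

With the real SDK you would set the key frame flag on the frame's flags field and submit each frame with `putFrame`; the fragment length then follows directly from how often that flag is set.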
If you have multiple tracks, you must be sending frames for all of them. If one track suddenly stops producing frames, the SDK will not proceed until it receives new frames from the track that stopped. If you need to remove a track, you must terminate the streaming session and start a new one with the new number of active tracks.
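As a sketch of that multi-track constraint (again with illustrative stand-in types, not the SDK's API), the loop below interleaves video and audio frames by timestamp and stalls as soon as either source stops producing. Track IDs 1 for video and 2 for audio follow the convention used in the SDK samples:

```cpp
#include <cstdint>
#include <deque>
#include <iostream>

struct SketchFrame {
    uint64_t trackId;        // 1 = video, 2 = audio (convention from the SDK samples)
    uint64_t presentationTs; // 100 ns units
};

// Illustrative stand-in for submitting a frame to the stream.
void putFrameSketch(const SketchFrame& f) {
    std::cout << "track " << f.trackId << " ts " << f.presentationTs << "\n";
}

int main() {
    std::deque<SketchFrame> video = {{1, 0}, {1, 333'333}, {1, 666'666}};
    std::deque<SketchFrame> audio = {{2, 0}, {2, 208'333}}; // audio stops early

    // Interleave by timestamp; once either track runs dry, streaming cannot
    // proceed -- matching the SDK behavior described above.
    while (!video.empty() && !audio.empty()) {
        std::deque<SketchFrame>& next =
            (video.front().presentationTs <= audio.front().presentationTs) ? video : audio;
        putFrameSketch(next.front());
        next.pop_front();
    }
    if (video.empty() != audio.empty()) {
        std::cout << "one track stalled: tear down and restart the session\n";
    }
    return 0;
}
```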
This is correct. For each frame you supply a timestamp and a duration, so the next frame must not overlap this frame, and it definitely cannot be earlier; its timestamp must be greater.
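A minimal sketch of that non-overlap rule: the earliest legal timestamp for the next frame is the previous frame's timestamp plus its duration. The helper below is illustrative, not part of the SDK:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative check: a frame may not overlap or precede its predecessor.
bool framesOrdered(uint64_t prevTs, uint64_t prevDuration, uint64_t nextTs) {
    return nextTs >= prevTs + prevDuration;
}

int main() {
    const uint64_t frameDuration = 333'333; // ~33 ms in 100 ns units (30 fps)
    const uint64_t prevTs = 0;
    assert(framesOrdered(prevTs, frameDuration, prevTs + frameDuration)); // legal
    assert(!framesOrdered(prevTs, frameDuration, prevTs));                // overlap: rejected
    return 0;
}
```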