Which container format does the stream have?
I expect the first connected client to receive the header (with the multimedia format and codec parameters) and start with a keyframe, and hence to play (relatively) well.
But subsequent clients start receiving the video from the middle, without any header or keyframes, and may fail to play.
Switching (and remuxing) to a streaming-oriented format (such as mpegts) may make the streams reattachable.
For example, this server:
ffmpeg -re -v warning -i some_video.mkv -c copy -f mpegts - | websocat -b -s 127.0.0.1:5002
supports multiple clients such as this:
websocat -b ws://127.0.0.1:5002/ | mpv -
So for your use case, it may be beneficial to transcode or transmux your video to a more streamable format prior to feeding it to Websocat.
The modified command line may look like:
nc 10.200.200.2 5001 | ffmpeg -v warning -i - -c copy -f mpegts - | websocat -b -s 10.200.200.1:5002
Note that Websocat's broadcast mode (which is the default for websocat -s) causes it to skip some content for clients that are reading too slowly.
This would cause the video to be broken if you pause it in the player (and you'll see warnings from the Websocat server). The mpegts format would allow it to resynchronize back into a playable state, though.
Thanks for the answer,
The transmitted video codec is H264.
Is it possible to send a header when a new client connects or when a connection is lost? I'm afraid the transcoding will lead to long delays, but I'll try it anyway. Thanks for the advice.
The transmitted video codec is H264
Just a raw AnnexB stream or inside some container like mp4, mkv or mpegts?
transcoding will lead to long delays
Transcoding is resource-intensive and can indeed lead to long delays (unless you use something like -tune zerolatency).
But transmuxing (which is what FFmpeg's -c copy does here) is expected to be fast and should not consume many resources or add much latency.
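For comparison, a transcoding variant (just a sketch; the libx264 encoder settings here are assumptions, not something verified against your stream) could look like:
nc 10.200.200.2 5001 | ffmpeg -v warning -i - -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts - | websocat -b -s 10.200.200.1:5002
whereas the -c copy variant above only rewraps the existing H.264 data without re-encoding it.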
Is it possible to send a header when a new client connects or when a connection is lost?
Not with the current version of Websocat. And even with a future version of Websocat it would be tricky.
Better to just duplicate the headers every N seconds, which is typically what the mpegts container format does.
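If the in-band SPS/PPS headers ever turn out to be missing for late joiners, FFmpeg's dump_extra bitstream filter can reinsert the codec extradata on keyframes (a hedged suggestion, not verified against your particular stream):
nc 10.200.200.2 5001 | ffmpeg -v warning -i - -c copy -bsf:v dump_extra -f mpegts - | websocat -b -s 10.200.200.1:5002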
Note that, based on your questions, you do not seem to be experienced in video broadcasting. A makeshift video server based on Websocat may work for small and unimportant use cases, but don't expect it to scale to many clients or to offer high quality or high reliability. For that you'll need a proper multimedia server and proper protocols (and websockets may not be one of them).
Yes, it is just a raw AnnexB stream.
I tried the approach you described and it works fine, with no more than one second of delay. Thanks a lot for the advice.
Yes, this is my first experience with broadcasting video; now the task is to make a broadcast with sound. I tried VLC with RTSP and other methods, but they are not very stable.
now the task is to make a broadcast with sound
FFmpeg can take both the AnnexB H.264 stream and one or more audio tracks as input and produce a merged mpegts stream for Websocat as output.
Synchronising audio and video may be tricky, though.
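As a rough sketch (the ALSA input, the device name hw:1 and the AAC bitrate are assumptions about your setup; substitute your actual audio source):
nc 10.200.200.2 5001 | ffmpeg -v warning -i - -f alsa -i hw:1 -c:v copy -c:a aac -b:a 128k -f mpegts - | websocat -b -s 10.200.200.1:5002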
Hi,
I stream video from a Raspberry Pi via netcat.
Command on the Pi:
raspivid -a 12 -t 0 -o -n -w 800 -h 400 -fps 15 | nc -l 5001
I receive this stream on my server:
nc 10.200.200.2 5001 | websocat -b -s 10.200.200.1:5002
After passing this stream to Websocat through the pipe, I receive it on my Windows computer with:
websocat_nossl_win64.exe -b ws://10.200.200.1:5002/ | mplayer\mplayer.exe -vo direct3d -cache 512 -fps 24 -
This works.
But if I try to start a second stream on the PC, Mplayer cannot read it. And if I close Mplayer and restart it, the stream is not readable either. I need the server to broadcast one stream to several users via Websocat. How can I do this?
Thank you in advance for your help.