Ready-to-use SRT / WebRTC / RTSP / RTMP / LL-HLS media server and media proxy that allows you to read, publish, proxy, record, and play back video and audio streams.
Describe the feature
It would be nice to have stream info available in runOnReady, so we can construct the proper command based on the input streams. For example, if the source video codec is already H264 but the audio codec is AAC, we can construct an ffmpeg command like:
ffmpeg -i "srt://localhost:8890?streamid=read:$RTSP_PATH" -c:v copy -c:a libopus -b:a 256k -f mpegts "srt://localhost:8890?streamid=publish:webrtc_$RTSP_PATH"
If the source video codec is already H264 and the audio codec is already Opus, we can simply copy both streams:
ffmpeg -i "srt://localhost:8890?streamid=read:$RTSP_PATH" -c copy -f mpegts "srt://localhost:8890?streamid=publish:webrtc_$RTSP_PATH"
Or we could dynamically construct a GStreamer pipeline instead.
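To illustrate the branching, here is a minimal runOnReady sketch. This is not an existing MediaMTX feature: it assumes the server would export the detected codec names to the command via environment variables such as MTX_AUDIO_CODEC (the variable name is invented for illustration).

```shell
#!/bin/sh
# Hypothetical sketch: assumes MediaMTX would export the detected audio codec
# to runOnReady as MTX_AUDIO_CODEC (name invented for illustration).

# Map the source audio codec to the ffmpeg audio options.
pick_audio_args() {
  case "$1" in
    opus|Opus) echo "-c:a copy" ;;               # already Opus: passthrough
    *)         echo "-c:a libopus -b:a 256k" ;;  # otherwise: transcode to Opus
  esac
}

AUDIO_ARGS=$(pick_audio_args "${MTX_AUDIO_CODEC:-AAC}")

# Build (rather than run) the resulting command, so the decision is visible.
echo ffmpeg -i "srt://localhost:8890?streamid=read:\$RTSP_PATH" \
  -c:v copy $AUDIO_ARGS \
  -f mpegts "srt://localhost:8890?streamid=publish:webrtc_\$RTSP_PATH"
```

With the stream info available, the same script could also decide whether the video needs transcoding, or whether a plain `-c copy` suffices.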
Here is a little idea: expose the codec per format, and use RTPMap/FMTPMap for additional info like SampleRate, ChannelCount, and so on.
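To make that concrete, one possible shape for the info handed to runOnReady, as a hypothetical sketch: the field names and layout are invented here, loosely following the format/RTPMap/FMTPMap terminology above.

```json
{
  "tracks": [
    {
      "type": "video",
      "codec": "H264",
      "rtpmap": "H264/90000",
      "fmtp": "packetization-mode=1;profile-level-id=42e01f"
    },
    {
      "type": "audio",
      "codec": "Opus",
      "rtpmap": "opus/48000/2",
      "sampleRate": 48000,
      "channelCount": 2
    }
  ]
}
```

A runOnReady script could then branch on the audio codec and skip transcoding whenever the track is already WebRTC-compatible.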