Open harlanc opened 1 year ago
need GB28181, go go
Thanks for this project. I haven't used it yet, so here are a few requirements written off the top of my head:
Push local file or local data stream to rtmp server, just like the requirement in #35.
Not sure whether this is achievable: when an RTMP stream has had no viewers for a long time, temporarily close the stream, and reopen it when someone starts watching again.
Dynamic forward would be nice. SRS v5 supports this, and it communicates with a backend for that purpose. I modified the example Go backend they share; it now monitors a JSON config file and, whenever that file changes, sends the new targets to SRS.
A conf.d folder where we can place different configuration files for different vhosts could be also useful.
It seems we can publish audio/video streams from the browser using Media over QUIC. After some research, pushing H.264/AAC streams is supported, so maybe we could implement stream reception and transmuxing (to RTMP/HLS/HTTP-FLV) on the server side.
Is it possible to implement WebRTC over UDP?
"GB28181" and "Media Over QUIC"
Twitch supports this and has added a not-yet-published feature named "multitrack".
I don't even know if it is possible, but how about generating WebVTT-based thumbnails for HLS streams:
https://developer.bitmovin.com/playback/docs/webvtt-based-thumbnails
Let's say you have an HLS live stream with one hour of DVR; combining this with WebVTT thumbnails would be awesome. That would almost be a unique selling point.
Perhaps it would be good if we could push RTMP to a particular server depending on the application name or domain of the input stream. So for example:
```toml
[[rtmp.push]]
domain = "example.com"
address = "stream-relay.example.com"

[[rtmp.push]]
app_name = "live"
address = "localhost:1936"
```
This way you would be able to use the server as a sort of RTMP reverse proxy.
RTMP, RTMPS, HLS, DASH, FLV, SRT, WebRTC, RTSP
Need an API to manage:
- server resources
- single-stream stats / all-stream stats
- single-client stats / all-client stats
- kicking out a client or stream
- counts of total streams, total clients, total input bandwidth, and total output bandwidth
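The server-wide counts above could be computed as an aggregate over per-stream stats. A minimal sketch in Rust, where all type and field names are hypothetical illustrations, not xiu's actual API:

```rust
// Hypothetical per-stream stats; field names are illustrative only.
#[derive(Debug)]
struct StreamStats {
    app_name: String,
    stream_name: String,
    subscriber_count: u32,
    recv_bitrate_kbps: u64, // inbound bandwidth from the publisher
    send_bitrate_kbps: u64, // outbound bandwidth across all subscribers
}

// Server-wide totals the management API could report.
#[derive(Debug, PartialEq)]
struct ServerStats {
    total_streams: u32,
    total_clients: u32,
    total_in_kbps: u64,
    total_out_kbps: u64,
}

fn aggregate(streams: &[StreamStats]) -> ServerStats {
    ServerStats {
        total_streams: streams.len() as u32,
        // one publisher per stream, plus its subscribers
        total_clients: streams.iter().map(|s| 1 + s.subscriber_count).sum(),
        total_in_kbps: streams.iter().map(|s| s.recv_bitrate_kbps).sum(),
        total_out_kbps: streams.iter().map(|s| s.send_bitrate_kbps).sum(),
    }
}

fn main() {
    let streams = vec![
        StreamStats { app_name: "live".into(), stream_name: "a".into(),
                      subscriber_count: 3, recv_bitrate_kbps: 3000, send_bitrate_kbps: 9000 },
        StreamStats { app_name: "live".into(), stream_name: "b".into(),
                      subscriber_count: 1, recv_bitrate_kbps: 2000, send_bitrate_kbps: 2000 },
    ];
    let totals = aggregate(&streams);
    println!("{totals:?}");
    assert_eq!(totals.total_streams, 2);
    assert_eq!(totals.total_clients, 6);     // 2 publishers + 4 subscribers
    assert_eq!(totals.total_out_kbps, 11000);
}
```

Serializing these structs to JSON would then give the response bodies for the stats endpoints.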
S3-compatible object storage support for recording VODs. I am currently working on it. On that note, any particular reason why the filesystem I/O is using blocking operations, other than for simplicity's sake?
Sorry for the late reply, I haven't given this much thought; it was just for simplicity, as you said. Will writing files synchronously affect functionality (e.g., cause lag) or impact performance? @abhemanyus
Yes, writing files synchronously will block the entire thread on disk I/O. If said thread is also performing the video transcoding, that will be pretty noticeable. Multiple streams, high-definition videos, codec conversion, etc, are all cases where this will become an issue.
Also, I tried out the POC. It worked, sort of. There was the overhead of saving the same file twice, once to disk and once to S3. It also looked extremely ugly, for which I am solely to blame. The larger application had no async support, so I had to jury-rig that. I have since abandoned that project and moved on to Nginx + RTMP module + GStreamer with an S3 sink.
So it's better to place time-consuming tasks, such as disk writes and S3 uploads, on separate threads to avoid blocking the tokio worker threads. And since the file has already been uploaded to S3, why do we need to save it locally (write it to disk)? Feel free to open a new issue so this doesn't disturb others.
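The suggestion above, moving blocking disk writes off the hot path, can be sketched with a dedicated writer thread fed over a channel, so the thread doing muxing/transcoding only enqueues and never waits on the disk. This is a minimal standard-library illustration, not xiu's actual recording code; `Segment` and `spawn_writer` are hypothetical names:

```rust
use std::env;
use std::sync::mpsc;
use std::thread;

// Hypothetical unit of work: a finished media segment to persist.
struct Segment {
    path: std::path::PathBuf,
    data: Vec<u8>,
}

// Spawn a dedicated writer thread; blocking I/O happens only there.
fn spawn_writer() -> (mpsc::Sender<Segment>, thread::JoinHandle<()>) {
    let (tx, rx) = mpsc::channel::<Segment>();
    let handle = thread::spawn(move || {
        // The loop ends when every Sender has been dropped.
        for seg in rx {
            // std::fs::write blocks, but only this thread waits on the disk.
            if let Err(e) = std::fs::write(&seg.path, &seg.data) {
                eprintln!("failed to write {:?}: {e}", seg.path);
            }
        }
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_writer();
    let path = env::temp_dir().join("segment0.ts");
    // The producer (e.g. the muxer) just enqueues and moves on.
    tx.send(Segment { path: path.clone(), data: vec![0u8; 188] }).unwrap();
    drop(tx); // close the channel so the writer loop can exit
    handle.join().unwrap(); // in a real server, join only on shutdown
    assert_eq!(std::fs::read(&path).unwrap().len(), 188);
}
```

In a tokio-based server the equivalent would be `tokio::task::spawn_blocking` (or `tokio::fs`), which hands the blocking call to a pool that is separate from the async worker threads.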
Add user_agent for publisher and subscriber! Also send_bitrate (kbit/s): the total bitrate being sent out across all subscribers.
Everyone can list their important requirements here; they may be implemented in the future.