Open mrpackethead opened 6 years ago
I'm sure more active participants in the group will give you a fuller answer, but the ffmpeg workflow is set up for HLS, so you would have to completely rework it for RTMP, and the Caddy HTTP server is really the wrong server for handling RTMP.
There are RTMP plugins for nginx, but again, that would be a complete rework of streamline.
Ultimately (from my POV) streamline is not the right workflow for RTMP.
You could use GStreamer to receive the RTMP and then pipe it to whatever you want, which this project could then consume — e.g. encode it and package it to HLS, even without using FFmpeg at all. But you can combine both as well: for example, you could probably push it into FFmpeg as TS via a local socket or a Unix pipe.
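As a rough sketch of that GStreamer-to-FFmpeg idea (the RTMP URL, output path, and segment settings are placeholders, not anything from this project), you could pull the RTMP feed, remux it to MPEG-TS on stdout, and let FFmpeg package it as HLS over a pipe:

```shell
# Hypothetical sketch: GStreamer pulls an RTMP stream, demuxes the FLV,
# remuxes the h264/aac elementary streams into MPEG-TS on stdout, and
# FFmpeg repackages the TS into HLS without re-encoding (-c copy).
gst-launch-1.0 -q rtmpsrc location="rtmp://example.com/live/stream" \
    ! flvdemux name=demux \
    mpegtsmux name=mux ! fdsink fd=1 \
    demux.video ! h264parse ! queue ! mux. \
    demux.audio ! aacparse ! queue ! mux. \
  | ffmpeg -i - -c copy -f hls -hls_time 6 /var/www/hls/stream.m3u8
```

The queues are needed because the demuxer feeds two branches of one muxer; without them the pipeline can stall.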
@mrpackethead You could use much of the work from this project — the hardware, the encoder build script, the player creation, etc. However, you would want to rewrite the scripts to package into FLV and transmit via RTMP (which is pretty simple). You would then have an RTMP server like Wowza or NGINX + the RTMP module receive the stream and package it up. As mentioned before, Caddy would not be the right server for doing this. It's actually also possible to make FFmpeg "listen" as an RTMP server if you really wanted, but I'm not sure that's any better than NGINX RTMP.
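For the FFmpeg-listening-as-an-RTMP-server idea, a minimal sketch (the bind address, application/stream names, and output path are all placeholders) would be:

```shell
# Hypothetical: FFmpeg accepts one incoming RTMP publish (-listen 1) —
# e.g. from OBS pointed at rtmp://<host>:1935/live/stream — and
# repackages it straight into HLS without re-encoding.
ffmpeg -listen 1 -i rtmp://0.0.0.0:1935/live/stream \
  -c copy \
  -f hls -hls_time 6 -hls_list_size 10 -hls_flags delete_segments \
  /var/www/hls/stream.m3u8
```

Note this only handles a single publisher per FFmpeg process, which is one reason a real RTMP server like nginx-rtmp or Wowza is usually the better fit.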
RTMP would be pretty straightforward to do, but architecturally very different. If I were to build a simple RTMP contribution-based system, I would likely build it on top of a sub-$200 Android box or a basic Intel NUC and do server-side transcoding instead of how I'm doing it now.
What is your use case?
Great comments everyone.
The part of this project that really appeals to me is how AWS CloudFront is used for distribution. Right now, I'm using Livestream Studio to mix and stream to YouTube, Facebook, and Livestream.
It lets you set up an RTMP stream, but not HLS.
I know that on the client side (when people watch the Livestream service) it's using HLS. I guess somewhere 'in the cloud' there is something doing the conversion.
I could just build another encoder and drop my program into it, but that's another box.
If you want to pay Amazon for the processing as well, have a look at Elemental MediaLive. I just did a small corporate event on it, and apart from a 30 s delay between RTMP ingest and the HLS stream, everything went smoothly. But then you wouldn't need streamline at all — just OBS or ffmpeg sending a single beefy stream that a NUC or a Compute Stick would be able to encode even on the CPU (though you could use QuickSync too).
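A single-rendition push of that sort is a one-liner with ffmpeg (the MediaLive ingest endpoint and stream name below are placeholders — you get the real ones when you create the MediaLive RTMP input):

```shell
# Hypothetical: push one high-bitrate "contribution" stream to a
# MediaLive RTMP input; MediaLive then builds the ABR ladder and
# does the HLS packaging server-side.
ffmpeg -re -i input.mp4 \
  -c:v libx264 -preset veryfast -b:v 6000k -g 50 \
  -c:a aac -b:a 128k -ar 44100 \
  -f flv rtmp://<medialive-input-endpoint>/live/stream1
```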
That's a good idea. It's hard work keeping up with all the Amazon offerings some days! It looks like you can do a lot of trickery with it.
Are you trying to send one RTMP stream to multiple social media live streaming platforms? Or are you trying to send RTMP to 'your own' streaming service (in AWS, for example)? (Sorry — I'm a little confused :)
@dom-robinson , both.
For DIY, it would be pretty straightforward to remove the SDI / HDMI ingest and use an RTMP ingest. I've done it. It runs fine on AWS GPU instances.
You can, as other people mentioned, use Elemental MediaLive.
I personally would recommend StreamShark if you want a turnkey, white-label service that does everything you listed.
You could also look at using NGINX RTMP module.
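For reference, a minimal nginx-rtmp-module configuration that ingests an RTMP publish and repackages it to HLS might look like this (the application name and paths are illustrative, not from this project):

```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # Package the incoming RTMP publish into HLS segments
            hls on;
            hls_path /var/www/hls;
            hls_fragment 6s;
            hls_playlist_length 60s;
        }
    }
}
```

You'd publish to rtmp://\<host\>/live/\<stream-key\> and serve /var/www/hls via any HTTP server (even Caddy, since at that point it's just static files).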
Not sure if this article might help:
@mrpackethead did you get what you need?
I got much more than I originally wanted. You guys provided me with a whole lot of useful ideas. The AWS services definitely are useful! Hopefully I'll get some more time to look at this soon.
Something I'd like to be able to do is use a reliable transport to 'pseudo' stream video material to multiple locations, accepting that my playback could be quite latent (ranging from a few seconds to potentially a few hours). When it's a few seconds, you are starting playback before the event is finished; but if it's a 'reliable' transport, there are (in theory) not going to be any glitches. When it's days or weeks, it's easy to do at a file level, of course.
TCP is 'reliable' by definition. All the new funky protocols around at the moment ultimately help with window-size management over long fat network links, but they essentially emulate TCP — just in a less 'good citizen' way, not 'backing off' as aggressively as TCP does. But if 'timeliness' ('live') is not an issue, then TCP will, by definition, reliably deliver your video. It has been doing reliable delivery wonderfully since 1973 :)
Sorry for the off-topic nature of this. I've successfully used MediaLive to ingest a live stream, but there's no way to take a file in (it takes a variety of RTMP streams and can re-output them). On the flip side, MediaConvert happily takes files in, but won't create an RTMP stream. Looks like I might have to put a server in AWS to create an RTMP stream from files. Does anyone have experience with that?
Try this ffmpeg script (it publishes a file to RTMP with live encoding). In my case I had to fake the Flash user-agent header via flashver. The script outputs stream1 (h264 + aac) and stream2 (h264 only) to a Wowza RTMP packaging server. Feel free to edit the ffmpeg encoding parameters.
rem Source file and the two RTMP publishing points (host and credentials are placeholders)
set input=C:\video\test1080p.mp4
set output1=rtmp://11.22.33.44:1935/app1/stream1_360p flashver=FMLE/3.0\20(compatible;\20FMSc/1.0) live=true pubUser=myuser pubPasswd=mypwd
set output2=rtmp://11.22.33.44:1935/app1/stream1_180p flashver=FMLE/3.0\20(compatible;\20FMSc/1.0) live=true pubUser=myuser pubPasswd=mypwd
rem 25 fps with a 3-second GOP (75 frames)
set FPS=25
set GOP=75
rem Read the file at its native rate (-re), loop it forever (-stream_loop -1),
rem and publish two renditions from the one ffmpeg process:
rem   output1: 640x360 h264 @ 512k with 128k aac audio
rem   output2: 320x180 h264 @ 320k, no audio (-an)
ffmpeg -loglevel verbose -re -fflags +genpts -stream_loop -1 -i "%input%" ^
-preset fast -c:v libx264 -pix_fmt yuv420p -profile:v main -level 3.1 -b:v 512k ^
-aspect 16:9 -s:v 640x360 ^
-r %FPS% -g %GOP% -keyint_min %FPS% -b_strategy 1 -flags +cgop -sc_threshold 0 ^
-c:a aac -strict experimental -b:a 128k -af aresample=44100 -ar 44100 -ac 2 ^
-f flv "%output1%" ^
-preset fast -c:v libx264 -pix_fmt yuv420p -profile:v main -level 3.1 -b:v 320k ^
-aspect 16:9 -s:v 320x180 ^
-r %FPS% -g %GOP% -keyint_min %FPS% -b_strategy 1 -flags +cgop -sc_threshold 0 ^
-an ^
-f flv "%output2%"
Honestly, I have no idea what you just said.
What's your native language?
I'm very open to feedback. Maybe you can write it in your native language and I can get someone to translate?
On Mon, Dec 3, 2018, 1:10 AM Emre Karataşoğlu <notifications@github.com wrote:
Clent side usage is harder than traditional usage. . if the project had Rtmp Handler, so in client side we could publish rtmp with obs xsplit or vmix like software program with any kind of source. Then the rtmp handler, handle this package and convert to hls and convert it anywhere else
— You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/streamlinevideo/streamline/issues/10#issuecomment-443638633, or mute the thread https://github.com/notifications/unsubscribe-auth/ACdZaXu0V2U-MpquQe1JvQXp2xXKTLI9ks5u1OqOgaJpZM4Uh7BZ .
Are you saying it would be more helpful if there was a mode of this project where RTMP is the input and HLS / DASH the output?
If I wanted to send an RTMP stream up, how does this work?