m1k1o / go-transcode

On-demand transcoding origin server for live inputs and static files in Go using ffmpeg. Also with NVIDIA GPU hardware acceleration.

simple routes #13

Open klahaha opened 2 years ago

klahaha commented 2 years ago

test is a reserved stream name. We can use "/.route" if we need more static routes (no suggestion here); it will not match the regex for streams.

klahaha commented 2 years ago

About checking whether a stream exists twice: if the URL starts with the stream name, the HTTP server can check that the stream exists before dispatching.

It can also dispatch to a VOD handler with different logic if the stream type is not a livestream; for this point let's talk in #12.

What do you think about the new routes?

m1k1o commented 2 years ago

/id/720p.m3u8

So instead of <profile>/<stream>/index.m3u8, would you prefer <stream>/<profile>.m3u8? But how would we identify chunks? Right now, every chunk is named live_%d.ts. That could be modified in a profile, but it would mean the user must take care of it when creating a new playlist. And on the server side, we would need to match the chunk string in the URL, which is more overhead than just serving the whole directory.

Or maybe just replace it? To have it correctly scoped (one source -> multiple profiles) as <stream>/<profile>/index.m3u8.
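
For illustration, a minimal sketch of the directory-serving approach under that <stream>/<profile>/index.m3u8 scoping, assuming a chi-style router and a per-stream/per-profile output directory (the /data/transcode path and parameter names are made up, not the project's actual layout):

```go
package main

import (
	"net/http"
	"path/filepath"

	"github.com/go-chi/chi/v5"
)

func main() {
	r := chi.NewRouter()

	// Playlist and segments live in the same directory, so live_%d.ts chunks
	// never need to be matched individually by the router.
	r.Get("/{stream}/{profile}/*", func(w http.ResponseWriter, req *http.Request) {
		stream := chi.URLParam(req, "stream")
		profile := chi.URLParam(req, "profile")
		file := chi.URLParam(req, "*") // "index.m3u8", "live_42.ts", ...

		// Hypothetical on-disk layout: /data/transcode/<stream>/<profile>/<file>
		dir := filepath.Join("/data/transcode", stream, profile)
		http.ServeFile(w, req, filepath.Join(dir, filepath.Clean("/"+file)))
	})

	http.ListenAndServe(":8080", r)
}
```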

/id/720p.mp4

For HTTP it could work without problems.

/id.m3u8 <-- master playlist for quality list,

I think that with adaptive bitrate streaming, having each quality start separately would not be sustainable. Consider you are watching something and your quality drops. Your player instantly switches to a different quality, but that quality takes some time to start, so you might see an interruption while watching. That's why, I think, for ABR we should start all available qualities.

More information from my different project (that does not start on demand): https://github.com/m1k1o/hls-restream/blob/82c9bea924b17787caacb2265241b198febefc77/profiles/abr_transcoding_hd.sh
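
As a rough sketch of what starting all available qualities could look like: one ffmpeg process per quality, each writing into its own directory. The profile map, scaling arguments, and paths below are illustrative placeholders, not the project's actual profiles:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"sync"
)

// Illustrative ABR ladder; real settings would come from profile files.
var profiles = map[string][]string{
	"360p":  {"-vf", "scale=-2:360", "-b:v", "800k"},
	"720p":  {"-vf", "scale=-2:720", "-b:v", "2500k"},
	"1080p": {"-vf", "scale=-2:1080", "-b:v", "5000k"},
}

// startAll launches every quality at once, so a player that switches bitrates
// never waits for a cold start of the target quality.
func startAll(input, outDir string) {
	var wg sync.WaitGroup
	for name, args := range profiles {
		wg.Add(1)
		go func(name string, args []string) {
			defer wg.Done()
			dir := filepath.Join(outDir, name)
			os.MkdirAll(dir, 0o755)
			full := append([]string{"-i", input}, args...)
			full = append(full,
				"-f", "hls",
				"-hls_segment_filename", filepath.Join(dir, "live_%d.ts"),
				filepath.Join(dir, "index.m3u8"),
			)
			if err := exec.Command("ffmpeg", full...).Run(); err != nil {
				log.Println(name, "exited:", err)
			}
		}(name, args)
	}
	wg.Wait()
}

func main() {
	// Hypothetical live input and output root.
	startAll("rtmp://example.local/live/stream1", "/data/transcode/stream1")
}
```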

in the same way we can add .mpeg-dash and .webrtc URL "extensions" in the future

With DASH I am fine with a .dash extension, but for the same reasons as with HLS, I would prefer having a custom folder for that.

Maybe <profile>/index.mpd for Media Presentation Description and <profile>/live_%d.dash for chunks.

WebRTC would need to have an HTML test page. Signaling could be done via WebSockets.

Maybe <profile>/webrtc.html for a test page, and a single WebSocket connection at <profile>/ws for signaling (in the future).


I like the idea of having the transcoding suffix act as a router for any media path we choose. That way we can implement /any/media/identifier/in/the/future, and adding /<transcoding-profile>/<output-type> to the URL does the transcoding.

Where <output-type> is one of the output types mentioned above (HLS, MP4, DASH, WebRTC).
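
A minimal sketch of that suffix-based routing, splitting the URL into a free-form media path plus the last two segments; the dispatch targets and example paths are illustrative only:

```go
package main

import (
	"net/http"
	"strings"
)

// handleTranscode treats the URL as /<any media path>/<profile>/<output>,
// e.g. /movies/2021/film.mkv/720p/index.m3u8.
func handleTranscode(w http.ResponseWriter, r *http.Request) {
	parts := strings.Split(strings.Trim(r.URL.Path, "/"), "/")
	if len(parts) < 3 {
		http.NotFound(w, r)
		return
	}
	media := strings.Join(parts[:len(parts)-2], "/") // any media identifier
	profile := parts[len(parts)-2]                   // e.g. "720p"
	output := parts[len(parts)-1]                    // e.g. "index.m3u8", "index.mpd", "stream.mp4"

	// Illustrative dispatch; a real handler would first verify the media and profile exist.
	switch {
	case strings.HasSuffix(output, ".m3u8") || strings.HasSuffix(output, ".ts"):
		serveHLS(w, r, media, profile, output)
	case strings.HasSuffix(output, ".mpd") || strings.HasSuffix(output, ".dash"):
		serveDASH(w, r, media, profile, output)
	case strings.HasSuffix(output, ".mp4"):
		serveMP4(w, r, media, profile)
	default:
		http.NotFound(w, r)
	}
}

// Stubs standing in for the real output handlers.
func serveHLS(w http.ResponseWriter, r *http.Request, media, profile, file string)  {}
func serveDASH(w http.ResponseWriter, r *http.Request, media, profile, file string) {}
func serveMP4(w http.ResponseWriter, r *http.Request, media, profile string)        {}

func main() {
	http.HandleFunc("/", handleTranscode)
	http.ListenAndServe(":8080", nil)
}
```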

klahaha commented 2 years ago

And on the server side, we would need to match the chunk string in the URL, which is more overhead than just serving the whole directory.

It's the same, no? The file can be named stream_hls720_chunknumber.ts. In both situations, either a reverse proxy or go-transcode serves the file directly. I don't have a preference personally.

Consider you are watching something and your quality drops. Your player instantly switches to a different quality, but that quality takes some time to start

Good argument. But it is a lot of processing power for all the qualities. I don't know how Jellyfin or hls-vod-too do it; I will check.

That way we can implement /any/media/identifier/in/the/future, and adding /<transcoding-profile>/<output-type> to the URL does the transcoding.

Yes, that is the idea. But now that I think about it, why not use GET parameters: ?profile=some&output=some. Or maybe /profile?setting1=some&setting2=some. That enables more settings without worrying about what order to put them in the URL. So the URL is /stream/profile for routing, and every profile can have its own settings without changing the route.
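
A small sketch of that query-parameter variant, where /stream/profile does the routing and everything else rides along as settings; the parameter names and the two-segment path pattern are only examples:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// Handles URLs like /stream1/720p?output=hls&audio=eng (names are hypothetical).
func handleStream(w http.ResponseWriter, r *http.Request) {
	parts := strings.Split(strings.Trim(r.URL.Path, "/"), "/")
	if len(parts) != 2 {
		http.NotFound(w, r)
		return
	}
	stream, profile := parts[0], parts[1]

	// Every query parameter becomes a profile setting; the route never changes.
	settings := map[string]string{}
	for key, values := range r.URL.Query() {
		if len(values) > 0 {
			settings[key] = values[0]
		}
	}
	fmt.Fprintf(w, "stream=%s profile=%s settings=%v\n", stream, profile, settings)
}

func main() {
	http.HandleFunc("/", handleStream)
	http.ListenAndServe(":8080", nil)
}
```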

m1k1o commented 2 years ago

It's the same, no? The file can be named stream_hls720_chunknumber.ts. In both situations, either a reverse proxy or go-transcode serves the file directly. I don't have a preference personally.

Right now we are not in control of segment names; they are decided by a profile. When we do it internally, we can use our own names and match segments to the playlist properly.

Yes, that is the idea. But now that I think about it, why not use GET parameters: ?profile=some&output=some. Or maybe /profile?setting1=some&setting2=some. That enables more settings without worrying about what order to put them in the URL. So the URL is /stream/profile for routing, and every profile can have its own settings without changing the route.

This also boils down to control of the manifest and segments. If you append those parameters to the manifest (for HLS), then they need to be passed down to each segment. Because we could have multiple active configurations, and they must not collide.

Another solution would be to return sessions. Meaning, we can have any URLs we want; they just return a 302 to a different URL with a generated token for the exact chosen profile configuration and output type, and it will be served from there. What do you think about this approach?
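
A sketch of that session idea, where the chosen configuration is stored against a random token and the client is redirected to a token-scoped URL; the /session and /watch paths and the in-memory map are assumptions for illustration:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"net/http"
	"sync"
)

type session struct {
	media, profile, output string
}

var (
	mu       sync.Mutex
	sessions = map[string]session{}
)

// newSession stores the resolved configuration and answers with a 302, so the
// manifest and all its segments are then served under one unambiguous token prefix.
func newSession(w http.ResponseWriter, r *http.Request) {
	buf := make([]byte, 16)
	rand.Read(buf)
	token := hex.EncodeToString(buf)

	mu.Lock()
	sessions[token] = session{
		media:   r.URL.Query().Get("media"),
		profile: r.URL.Query().Get("profile"),
		output:  r.URL.Query().Get("output"),
	}
	mu.Unlock()

	http.Redirect(w, r, "/watch/"+token+"/index.m3u8", http.StatusFound)
}

func main() {
	http.HandleFunc("/session", newSession)
	// "/watch/<token>/..." would look up the session and serve the chosen output.
	http.ListenAndServe(":8080", nil)
}
```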


On the other hand, maybe it would be meaningful to completely decouple transcoding from packaging. So we would have only one set of profiles that take a bunch of ENV values as input (or maybe video on stdin) and then pass just the transcoded video to stdout. After that, a different ffmpeg would do the packaging to HLS or DASH.

That would mean one additional process per stream, but it would give us more freedom and portability. And it would remove duplicated code (profiles for HLS and HTTP).
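
A sketch of that split with two piped ffmpeg processes, one transcoding to stdout and one packaging from stdin into HLS; the input URL, codecs, and output paths are illustrative, and a real profile would decide them:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// Step 1: transcode only. The profile would normally be driven by ENV values.
	transcode := exec.Command("ffmpeg",
		"-i", "rtmp://example.local/live/stream1", // illustrative input
		"-c:v", "libx264", "-c:a", "aac",
		"-f", "mpegts", "pipe:1", // transcoded stream goes to stdout
	)

	// Step 2: package only, reading the transcoded stream from stdin.
	pack := exec.Command("ffmpeg",
		"-i", "pipe:0",
		"-c", "copy", // no second transcode, just remux into HLS
		"-f", "hls",
		"-hls_segment_filename", "/data/out/live_%d.ts",
		"/data/out/index.m3u8",
	)

	pipe, err := transcode.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	pack.Stdin = pipe

	if err := pack.Start(); err != nil {
		log.Fatal(err)
	}
	if err := transcode.Run(); err != nil {
		log.Fatal(err)
	}
	if err := pack.Wait(); err != nil {
		log.Fatal(err)
	}
}
```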

klahaha commented 2 years ago

Right now we are not in control of segment names; they are decided by a profile.

We can use a convention, for example always output to quality-number.ts, or another extension for another profile. For transcoding that does not start from chunk 0: -segment_start_number N.
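
For example, a tiny sketch of that convention with ffmpeg's segment muxer; the input name, quality label, and start number 42 are arbitrary:

```go
package main

import "os/exec"

func main() {
	// Hypothetical convention: <quality>-<number>.ts, with numbering resumed
	// from chunk 42 via the segment muxer's -segment_start_number option.
	cmd := exec.Command("ffmpeg",
		"-i", "input.mkv",
		"-f", "segment",
		"-segment_format", "mpegts",
		"-segment_start_number", "42",
		"720p-%d.ts",
	)
	_ = cmd.Run()
}
```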

Because we could have multiple active configurations, and they must not collide.

HLS uses a master playlist for alternative video/audio/subtitle streams. I think the manifest (master playlist) is global; all video manifests have the same keyframes but different paths, like 720p_24.ts. Building a media playlist is fast when the keyframes are cached. Like this there is no collision, no?
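
For reference, a minimal master playlist of the kind described here, one global manifest pointing at per-quality media playlists; the bandwidth values and paths are only examples:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/index.m3u8
```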

we can have any URLs we want; they just return a 302 to a different URL with a generated token for the exact chosen profile configuration and output type, and it will be served from there

For internal use, yes; we can use hash(settings) rather than the quality for the chunk name. But I don't think it is useful to expose that to the user.
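
A tiny sketch of deriving such an internal chunk-name prefix from the resolved settings; the hash choice and the 12-character truncation are arbitrary:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// chunkPrefix hashes the resolved settings so that two different active
// configurations can never write or serve each other's chunks.
func chunkPrefix(settings string) string {
	sum := sha256.Sum256([]byte(settings))
	return hex.EncodeToString(sum[:])[:12]
}

func main() {
	p := chunkPrefix("profile=720p&audio=eng")
	fmt.Printf("%s_%%d.ts\n", p) // e.g. "ab12cd34ef56_%d.ts"
}
```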

After that, a different ffmpeg would do the packaging to HLS or DASH.

I think it is good to have different logic for packaging and transcoding, yes. The playlist cache can be warmed before go-transcode runs (or when it starts), like I said in #12. If a "warm cache" option is used, find all media files and start a sequential packager profile for them, with the go-transcode settings in env variables. We can provide a default packager profile (extract keyframes, propose the qualities that the default transcode profiles support, propose alternative audio or subs with file.[locale].vtt or file.[locale].mp3 etc., and generate master/media playlists for all).

(If we use keyframes directly from the source, the packager profile doesn't need to know the settings, because it gets the keyframes from the source for the chunk count in the playlist, and audio can be cut anywhere, unlike video. But if, like hls-vod, we force keyframes to min 2.25 s / max 4.50 s (for example), then the packager profile needs this setting.)

Of course, for live streams the playlist is dynamic, so for a live stream one ffmpeg process (transcode profile) is fine. One limitation is support for alternate audio sources, for example a translation during a conference; in that situation you also need separate processes for packaging and transcoding.
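
A rough sketch of the warm-cache pass described above, walking the media directory and running a packager profile sequentially for each file, with settings passed through the environment; the script name, directories, extension filter, and variable names are all placeholders:

```go
package main

import (
	"io/fs"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	mediaDir := "/data/media" // illustrative media root

	err := filepath.WalkDir(mediaDir, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".mkv") {
			return err
		}
		// Sequential on purpose: warm the cache one file at a time.
		cmd := exec.Command("/etc/transcode/profiles/packager.sh") // hypothetical packager profile
		cmd.Env = append(os.Environ(),
			"INPUT="+path,
			"CACHE_DIR=/data/cache", // keyframes, playlists, extracted audio/subs
		)
		return cmd.Run()
	})
	if err != nil {
		log.Fatal(err)
	}
}
```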

And would remove duplicated code (profiles for HLS and HTTP).

But not for other formats, yes?

Edit: if the packager profile caches more than just keyframes, like also audio/video/sub tracks, then the transcoder profile can receive settings. So if the selected audio/sub is from the video, it can be extracted during the transcode, but if it's outside the video (an external .mp3/.srt), the packager profile can transcode it for future use (or not). But I'm not sure audio/sub extraction will work in real time (maybe the .vtt/.aac file is generated at the end of the transcode, like the .m3u8 playlist).