torchbox / wagtailmedia

A Wagtail module for managing video and audio files within the admin
https://pypi.org/project/wagtailmedia/
BSD 3-Clause "New" or "Revised" License

Feature Request: Adaptive Streaming #223

Open andre-fuchs opened 1 year ago

andre-fuchs commented 1 year ago

It bugs me that the native HTML `<video>` element does not support adaptive streaming, so I end up using Vimeo all the time. But the wagtailmedia package might be the place to prepare a video for adaptive streaming, in combination with HLS.js on the frontend. Here are the required steps:

Backend

  1. Upload the video file via the existing React-based upload form
  2. Convert the uploaded video file into multiple variants with different resolutions and/or bit rates
  3. Split each variant of the video file into small segments
  4. Create a master playlist file that references the different variants of the video, and create individual playlist files that reference the segments of each variant
  5. Store the video segment files and playlist files
  6. Store information about the video file in the database, such as the hierarchy and locations of all segments and playlist files
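Steps 2–4 map fairly directly onto ffmpeg's built-in HLS muxer. A minimal sketch of building one such invocation per variant — the rendition ladder, paths, and function names here are illustrative assumptions, not wagtailmedia API:

```python
from pathlib import Path

# Hypothetical rendition ladder; real values would come from settings.
VARIANTS = [
    {"name": "720p", "height": 720, "bitrate": "2500k"},
    {"name": "480p", "height": 480, "bitrate": "1000k"},
]

def build_hls_command(source: str, out_dir: str, variant: dict) -> list:
    """Build one ffmpeg invocation that transcodes `source` into a single
    HLS variant: a variant playlist plus ~6-second MPEG-TS segments."""
    out = Path(out_dir) / variant["name"]
    return [
        "ffmpeg", "-i", source,
        "-vf", "scale=-2:%d" % variant["height"],  # keep aspect ratio
        "-c:v", "h264", "-b:v", variant["bitrate"],
        "-c:a", "aac",
        "-f", "hls",
        "-hls_time", "6",                          # target segment length (s)
        "-hls_playlist_type", "vod",
        "-hls_segment_filename", str(out / "seg_%04d.ts"),
        str(out / "index.m3u8"),
    ]

cmd = build_hls_command("input.mp4", "/media/hls/42", VARIANTS[0])
```

Running one command per entry in the ladder would produce steps 2 and 3 in a single pass each; step 4's variant playlists fall out of the muxer automatically.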

Frontend

  1. Serve the video via a template tag that returns the master playlist
  2. Not sure whether these playlists have to be files or could be generated on the fly via template tags as well
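On the files-versus-on-the-fly question: the master playlist is just a small text document, so a Django view or template tag could plausibly render it per request. A sketch of generating one in memory (the bandwidth and path values are made up; the line layout follows RFC 8216, the HLS spec):

```python
def master_playlist(variants):
    """Render an HLS master playlist referencing each variant playlist.

    `variants` is an iterable of (bandwidth_bps, width, height, url) tuples.
    """
    lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
    for bandwidth, width, height, url in variants:
        # One STREAM-INF tag per variant, followed by its playlist URL.
        lines.append(
            f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={width}x{height}"
        )
        lines.append(url)
    return "\n".join(lines) + "\n"

playlist = master_playlist([
    (2500000, 1280, 720, "720p/index.m3u8"),
    (1000000, 854, 480, "480p/index.m3u8"),
])
```

The segment (media) playlists are heavier, but the same approach would work if segment metadata were stored in the database as described in the backend steps.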

Encoder

An encoder might be the bottleneck here, as FFmpeg is not available on some hosting providers (or at least not on my preferred one, to be honest). Are there any Python packages that could natively replace HLS encoding services like Zencoder, Coconut, Mux, or Amazon Elastic Transcoder?

Laziness

The conversion and splitting of the video file and the generation of the playlists could be done on demand, like Wagtail image renditions, following Django's laziness philosophy. For longer video files on weaker machines, though, that might take a long time.

A client-side JavaScript encoder as part of the React upload form might solve both problems here: the encoder dependency and the processing time.

Overall this could be a killer feature of the Wagtail CMS. I offer my help with anything apart from React, if there is a reliable encoder that can be integrated.

thibaudcolas commented 9 months ago

👋 just noting that RFC 72: Background workers feels very relevant to the processing the video files would need.

andre-fuchs commented 9 months ago

> 👋 just noting that RFC 72: Background workers feels very relevant to the processing the video files would need.

This would solve the conversion part, of course! Amazing how Wagtail is evolving. I am game to help develop this adaptive streaming feature, though I would need some guidance. Would you implement adaptive streaming via the wagtailmedia package? It uses a single Media model for both video and audio, and HTTP Live Streaming could be interesting for both: ffmpeg can convert both video and audio for HLS, I think. I have never done this before, to be honest.

evilmonkey19 commented 9 months ago

If you need any help, I am willing to help as well! Different platforms use different strategies: twitch.tv converts to HLS, while others like YouTube convert video to MPEG-DASH. It is important to note that, because serving video to many viewers consumes a lot of bandwidth, the chunks are usually uploaded to CDNs such as Akamai (the largest), Cloudflare, or Amazon CloudFront.

By the way, there is a project that is a wrapper around FFmpeg: https://pypi.org/project/pyffmpeg/. Perhaps it is not the best solution, but I don't think a native Python solution would be better. The main problem is that video encoding is always one of the heaviest computations, so the lower-level the implementation, the better. FFmpeg usually tries to use hardware accelerators when available, such as graphics cards: https://developer.nvidia.com/ffmpeg.
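Since host support for FFmpeg varies, the package could detect the binary at startup and disable (or queue) HLS conversion when it is missing, rather than failing mid-upload. A small sketch of that check (the names are illustrative):

```python
import shutil
from typing import Optional

def find_ffmpeg() -> Optional[str]:
    """Return the path to an ffmpeg binary on PATH, or None if absent."""
    return shutil.which("ffmpeg")

# Hypothetical feature flag a package could consult before offering HLS.
HLS_AVAILABLE = find_ffmpeg() is not None
```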

Stormheg commented 8 months ago

This sounds like a fun sprint topic for Wagtail Space! https://www.wagtail.space/