Joystream / atlas

Whitelabel consumer and publisher experience for Joystream
https://www.joystream.org
GNU General Public License v3.0

Research and POC for MPEG-DASH support in Atlas #2886

Open kdembler opened 2 years ago

kdembler commented 2 years ago

Research and do small proof of concept on how to support multiple resolutions (files) for a single video using MPEG-DASH.

Questions

- What could be updated in the Video GraphQL schema to represent different files? How do we identify them?

Let's try using this video.js plugin since moving to a new library would be costly. There is also https://github.com/shaka-project/shaka-player, which looks interesting but may be overkill at this point.
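For reference, a minimal sketch of how a DASH manifest could be wired into video.js, assuming the videojs-contrib-dash plugin (one candidate for the plugin mentioned above; the player id and manifest URL below are placeholders):

```typescript
import videojs from 'video.js'
// Side-effect import: videojs-contrib-dash registers a DASH source handler
// (backed by dash.js) on video.js players.
import 'videojs-contrib-dash'

// 'atlas-player' is a placeholder element id.
const player = videojs('atlas-player', { controls: true, fluid: true })

// A single DASH manifest (.mpd) describes every encoded resolution; the player
// then switches between them adaptively based on bandwidth and viewport.
player.src({
  src: 'https://example.com/video/manifest.mpd', // placeholder URL
  type: 'application/dash+xml',
})
```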


bedeho commented 2 years ago

What could be updated in the Video GraphQL schema to represent different files? How do we identify them?

Do we really need to put this in the API? I don't see a use case for querying on this across videos.

kdembler commented 1 year ago

@bedeho I don't understand your comment. How would Atlas be aware of different resolutions if this is not captured in our GraphQL schema?

bedeho commented 1 year ago

@bedeho I don't understand your comment. How would Atlas be aware of different resolutions if this is not captured in our GraphQL schema?

It can just be data processed client side. The only reason something needs to exist in the API is if you want server-side filtering or processing of some kind on that data. As long as you don't, you can just run preprocessing logic client side to decode whatever data you want. I don't see us needing to filter it, but perhaps I am wrong.
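As a rough illustration of the client-side approach, assuming Atlas had already decoded a list of per-resolution files from the video's metadata (the type and function below are hypothetical, not part of any existing code):

```typescript
// Hypothetical shape of per-resolution entries decoded client side from
// the video's metadata; not part of any existing schema.
type ResolutionEntry = {
  height: number // encoded height in pixels, e.g. 1080
  url: string // resolved storage URL for that encoding
}

// Pick the smallest encoding that still covers the player's display height,
// falling back to the largest one available.
function pickResolution(entries: ResolutionEntry[], displayHeight: number): ResolutionEntry | undefined {
  const sorted = [...entries].sort((a, b) => a.height - b.height)
  return sorted.find((e) => e.height >= displayHeight) ?? sorted[sorted.length - 1]
}
```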

kdembler commented 1 year ago

The only reason something needs to exist in the API is if you want server-side filtering or processing of some kind on that data. As long as you don't, you can just run preprocessing logic client side to decode whatever data you want.

I don't agree with that; we keep a bunch of data in the API that isn't needed for server-side filtering, just for ease of querying and visibility of the data. If a single video can have multiple resolutions associated with it, that information needs to be kept with the video in its VideoMetadata, which is currently decoded by the QN, so it ends up available via our GraphQL API.
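To make that concrete, a hypothetical query shape if per-resolution files were exposed on the video entity through the QN; the `mediaFiles` field and its subfields do not exist in the current schema and are shown only to illustrate the proposal:

```typescript
import { gql } from '@apollo/client'

// Hypothetical query: `mediaFiles` and its subfields are illustrative only
// and are not part of the current query node schema.
const GET_VIDEO_MEDIA_FILES = gql`
  query GetVideoMediaFiles($id: ID!) {
    videoByUniqueInput(where: { id: $id }) {
      id
      mediaFiles {
        resolution
        dataObjectId
      }
    }
  }
`
```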

bedeho commented 1 year ago

Ok, I'm cool with that also.