eka-foundation / home


Discoverable, decentralized, cached live video streaming #31

Open m-anish opened 5 years ago

m-anish commented 5 years ago

As a subset of #18, one of the requests from Zanskar is to focus on the live video streaming aspect of the mesh network.

The current implementation involves running VLC on a Windows laptop, using its web camera and microphone to create a live video stream, and then asking users in different places to open that live stream in their own VLC instances. This works well, but it suffers from a few drawbacks:

What if there could be some kind of caching implemented on the mesh nodes?

I came across this project, whose readme also lists other potentially similar projects.

We can break this down into two parts:

m-anish commented 5 years ago

Actually, the primary case we want to "optimize" for is live streaming, and the server load during it. Whether a concurrent class watching a video is really a pain point is something we'll have to see from usage data; it was just a guess at something that might happen in the future.

But the primary focus remains to make live streaming more decentralized and efficient.

Apologies if my earlier comment was confusing.

mitra42 commented 5 years ago

Understood. I guess I've not seen livestreaming (i.e. a video camera capturing images and showing them elsewhere) as a use case in disconnected networks, so it's good to hear that it's really an issue for your use cases.

mikkokotila commented 5 years ago

Hmm, I haven't yet tried hlsjs-p2p-engine fully offline. It needs a signaling server and a few js files. We ran the signaling server locally, but the js files were still being served online. I think we should make it a priority to test it fully offline (@pdey), but I am optimistic that it should work.

There is absolutely no issue serving js offline via static files. If we like, we can also do it all on node.
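To make that concrete, serving the engine's js bundles from the mesh itself might look something like the sketch below. The directory layout, port, and file names are assumptions, not the project's actual setup:

```shell
# Hypothetical sketch: host the hlsjs-p2p-engine js bundles on a mesh node
# so clients never need a CDN. Directory and port are placeholder choices.
STATIC_DIR="${STATIC_DIR:-$HOME/static/js}"
mkdir -p "$STATIC_DIR"
# Copy the engine's dist files (e.g. hls.min.js) into $STATIC_DIR, then serve
# the parent directory with any static file server, for example:
#   cd "$STATIC_DIR/.." && python3 -m http.server 8000
# and point the page's <script> tags at http://<mesh-node>:8000/js/...
```

Any static file server (nginx, node, python) works here; the only requirement is that the pages reference the local copies instead of CDN URLs.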

pdey commented 5 years ago

Update on hlsjs-p2p-engine: I tried to get it to work fully offline.

What works

What does not work

Based on these findings, I have been thinking about a slightly different idea. Following is an outline (I will share the details soon as I work them out):

  1. Use https://github.com/videojs/http-streaming as the video player. This replaces the deprecated library (https://github.com/videojs/videojs-contrib-hls) used in hls-js-engine. I have tried this in the mesh network, using ffmpeg for HLS video production, and it works fine.

  2. Work on simple, configurable reverse-proxy-based caching using a Raspberry Pi with good-quality SD cards and an external SSD/HDD to cache and serve HLS chunks. Need to think about the topology (one POP in every village connected to the mesh?). This approach will partially take the load off the origin streaming server, though it's not a p2p solution. It may take some time to design and build a proper p2p sharing system, preferably using WebRTC and WebTorrent, which we can do later.

  3. A very simple, lightweight static file server running on an RPi + camera setup for producing HLS video. @m-anish has already shared some libraries that I will be testing soon.
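The reverse-proxy caching in point 2 could be sketched with nginx's built-in proxy cache. The upstream address, cache sizes, and validity times below are assumptions for illustration, not a tested config:

```nginx
# Hypothetical caching POP for HLS chunks (point 2 above).
proxy_cache_path /var/cache/hls levels=1:2 keys_zone=hls_cache:10m
                 max_size=10g inactive=60m;

server {
    listen 80;

    # playlists change every few seconds: cache only briefly
    location ~ \.m3u8$ {
        proxy_pass http://origin.mesh:8081;   # origin streaming server (assumed name)
        proxy_cache hls_cache;
        proxy_cache_valid 200 2s;
    }

    # segments are immutable once written: cache aggressively
    location ~ \.ts$ {
        proxy_pass http://origin.mesh:8081;
        proxy_cache hls_cache;
        proxy_cache_valid 200 60m;
    }
}
```

The short validity on playlists keeps live viewers close to the edge of the stream, while long-lived segment caching is what actually takes load off the origin.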

m-anish commented 5 years ago

Prasenjit, I also had some conversations and thoughts around this. Let's talk soon!

pdey commented 5 years ago

bundle.zip

@m-anish Writing down the details of the OBS Studio + NGINX + HLS setup:

Install and setup NGINX

Download

sudo apt-get install build-essential libpcre3 libpcre3-dev libssl-dev
wget http://nginx.org/download/nginx-1.7.5.tar.gz
wget https://github.com/arut/nginx-rtmp-module/archive/master.zip

Extract

tar -zxvf nginx-1.7.5.tar.gz
unzip master.zip

Install nginx

cd nginx-1.7.5
./configure --add-module=../nginx-rtmp-module-master
make
sudo make install

(Note: we are not configuring nginx with SSL; not necessary for us, I suppose.)

Download and copy NGINX init scripts

sudo wget https://raw.githubusercontent.com/JasonGiedymin/nginx-init-ubuntu/master/nginx -O /etc/init.d/nginx

sudo chmod +x /etc/init.d/nginx
sudo update-rc.d nginx defaults

Test upstart script

sudo service nginx status # poll for current status
sudo service nginx stop   # stop any running servers
sudo service nginx start  # start the server

NGINX service configuration

sudo nano /usr/local/nginx/conf/nginx.conf

Remove all lines and paste in the content of the attached file. Change the root and port in the http module, and hls_path in the rtmp module, to appropriate values.
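For readers without the attachment, a minimal config of this shape (closely following the nginx-rtmp-module README example; the paths and ports are placeholders to be changed as described above) looks roughly like:

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

rtmp {
    server {
        listen 1935;
        application hls {
            live on;
            hls on;
            hls_path /tmp/hls;    # set to your hls_path
            hls_fragment 3s;
        }
    }
}

http {
    server {
        listen 8081;              # set to your port
        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;            # parent directory of hls_path
            add_header Cache-Control no-cache;
        }
    }
}
```

The actual nginx-conf.txt attached to this comment is authoritative; this is only a sketch of its likely structure.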

Test config file

sudo /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf -t

Cross-domain config

sudo nano /usr/local/nginx/html/crossdomain.xml

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-access-from domain="*"/>
</cross-domain-policy>

Restart

sudo service nginx restart
sudo service nginx status

Link stream url to OBS studio

stream server: rtmp://localhost/hls
stream url: http://localhost:8081/hls/.m3u8
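The playlist URL is just the http block's /hls location plus the stream key set in OBS. A tiny illustrative helper (the key `mystream` is hypothetical, and the host/port follow the values used above):

```shell
# Hypothetical helper: build the HLS playlist URL for a given stream key.
# Host, port, and the /hls path follow the nginx setup described above;
# the key itself is whatever stream key you configure in OBS.
hls_url() {
  printf 'http://%s:8081/hls/%s.m3u8\n' "$1" "$2"
}

hls_url localhost mystream   # prints http://localhost:8081/hls/mystream.m3u8
```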

nginx-conf.txt (attached)

m-anish commented 5 years ago

Work on this lives in two places:

  1. In an IIAB playbook
  2. In a simple html webpage
m-anish commented 5 years ago

FWIW, something for the future: https://openvidu.io/

m-anish commented 5 years ago

This might be interesting too: https://github.com/arslancb/clipbucket

Looks like for Zanskar 2019 we are going with cham.

mitra42 commented 4 years ago

@m-anish There aren't any docs on that cham page.

m-anish commented 4 years ago

Hi @mitra42, apologies for such a late reply.

For Zanskar, we went ahead with Cham, but there were a lot of changes that were made to it while in Zanskar. @pdey and @so-ale probably have those in their personal repos/storage.

Also, as it turns out, there seems to be some issue with the Intel NUC (we don't know whether it is hardware or software at this point), and the network hasn't been operational for the past three weeks or so.

I shall be getting the schoolserver shortly; it is in transit from Zanskar and will also have the latest cham code commits. If @pdey or @so-ale can produce them, that'd be fine too, but I should be able to respond to you soon (hopefully within a week to ten days).

Cham was quite simple and straightforward, and actually there may be lots of room for adding complexity :)

It can be used for live streaming, and the live streams are also archived in various quality settings once the session is done.

I don't understand the 'authoring podcasts' question. The way it works is very simple.

Some user-end software like OBS is used to compose a stream. It can be a live camera feed, audio, or pre-recorded content, as long as OBS can handle it. OBS streams it to an RTMP endpoint, where cham, running on nginx, produces three different quality levels of live streams while also recording them to disk. The live stream you get as a consumer depends on your connection bandwidth to the server.
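The three quality levels described here can be produced the way the nginx-rtmp-module README suggests, with an exec'd ffmpeg transcoding the incoming stream into variants. Cham's actual config may well differ; the bitrates, resolutions, and application names below are assumptions:

```nginx
# Hypothetical sketch of multi-quality live streaming with nginx-rtmp,
# following the README's hls_variant example. Not cham's actual config.
application live {
    live on;
    # transcode the incoming OBS stream into three variants
    exec ffmpeg -i rtmp://localhost/live/$name
        -c:v libx264 -b:v 256k  -s 426x240  -c:a aac -f flv rtmp://localhost/hls/$name_low
        -c:v libx264 -b:v 768k  -s 854x480  -c:a aac -f flv rtmp://localhost/hls/$name_mid
        -c:v libx264 -b:v 1500k -s 1280x720 -c:a aac -f flv rtmp://localhost/hls/$name_hi;
}

application hls {
    live on;
    hls on;
    hls_path /tmp/hls;
    hls_nested on;
    # one variant playlist per quality; players pick by available bandwidth
    hls_variant _low BANDWIDTH=288000;
    hls_variant _mid BANDWIDTH=896000;
    hls_variant _hi  BANDWIDTH=1700000;
}
```

The hls_variant directives are what let a single playlist URL serve whichever quality the consumer's connection can sustain, matching the behavior described above.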

All this works well, but everything is centralized. For the future, we'd like to look into making this more decentralized, or at least move it beyond a single point of failure.

mitra42 commented 4 years ago

Thanks @m-anish, that sounds replicable, which would be great. I'm hoping our first deployment will be in a limited-internet school in Indonesia, and it looks likely they will want to integrate local content.

I look forward to a bit more detail on that setup. There are a bunch of new things there (OBS, RTMP, cham), which I can look up, but piecing them all together would be much quicker with some more details. I'm then interested in figuring out how to hook up the output for selective automated upload to the Internet Archive, either when the box sees the internet connecting or via sneakernet.

pdey commented 4 years ago

@m-anish I think you will get the code base soon. If you need it any earlier, I have a copy on one of my hard drives, but I am not sure if it has the latest snapshot.

m-anish commented 4 years ago

Hi @mitra42

So, I updated the cham repo to the latest codebase. But there is a catch. The installation of cham is really in two parts:

  1. The html frontend that displays the live video stream.
  2. The backend with rtmp, nginx etc.

I had written a cham playbook for IIAB that does everything needed for a working setup, but there have been many changes to IIAB since, so I no longer know if my playbook will still work there.

My question to you: do you want this as part of IIAB or as a standalone setup? If it's the former, I'll work on updating the playbook, which I guess I need to do anyway. If it's the latter, I will pass you specific instructions to get everything working; in that case I'll need to know what hardware you're running on and your OS version.

Looking forward to your reply.

mitra42 commented 4 years ago

Thanks - I could see using it in either context (standalone or IIAB), so doing the IIAB one is probably best, especially since I think that is what you are using? With a working setup on IIAB, I can always look at how to adapt it to a different environment if required.

m-anish commented 4 years ago

hmm... okay. Let me try an IIAB install on a VM and see how it goes. Give me a couple of days to iron out the proper instructions for you and make any necessary changes in the PR.

m-anish commented 4 years ago

@mitra42 I updated the PR that adds cham to IIAB. https://github.com/iiab/iiab/pull/1743

You can try a fresh IIAB install. I tried it on a VM with Ubuntu 19.10 and it seemed to work.

mitra42 commented 4 years ago

Will do. @holta wants me to do a test with the current version anyway, so I can do both at the same time.

holta commented 4 years ago

@m-anish if you can in the coming days, please help @mitra42 with his Cham installation question at iiab/iiab#2209?