Closed brettwilcox closed 3 years ago
I'm looking to use this as an "on-demand" platform for provisioning live video streams between students and teachers. It would be nice if we could get a Terraform deployment together as well.
I agree, a Docker Compose stack would greatly simplify setup for the project. If this was converted into a monorepo (taking all three projects (Rust, Go & React) and putting them into one repository), you could run a command as simple as the one below to get started:
git clone https://github.com/GRVYDEV/Project-Lightspeed.git && \
cd Project-Lightspeed && \
docker-compose up -d
The docker-compose file can then just be placed in the project root and link to the Dockerfiles located in each project's directory, e.g.:
version: '3'
services:
  ingest:
    build:
      context: ./Lightspeed-ingest
      dockerfile: Dockerfile
    image: grvydev/lightspeed_ingest:latest
    container_name: lightspeed_ingest
    ports:
      - 8084:8084
  webrtc:
    build:
      context: ./Lightspeed-webrtc
      dockerfile: Dockerfile
    image: grvydev/lightspeed_webrtc:latest
    container_name: lightspeed_webrtc
    ports:
      - 8080:8080
    command: --addr=XXX.XXX.XXX.XXX # Or an env variable
  react:
    build:
      context: ./Lightspeed-react
      dockerfile: Dockerfile
    image: grvydev/lightspeed_react:latest
    container_name: lightspeed_react
    ports:
      - 80:80 # Will rely on adding a `serve` command
@NuroDev Yup, we can use this as the "builder repo" or we could merge the projects into a mono repo.
If we keep them separate like they are now, all we would need to do is compile the binaries and pull the Docker Hub images. I opened GRVYDEV/Lightspeed-ingest/issues/15 to address this.
Here are the ports per GRVYDEV/Lightspeed-ingest/issues/15:
Ingest: 8084:8084
WebRTC: 65535:65535 and 8080:8080
React: 80:80
As I put in the example docker-compose.yml file above, a lot of the stuff in it is ready to go; we just need to create the individual Dockerfiles for each project.
But it should be noted we will need a basic web server for the React frontend UI. Also, I think it would be better to set the WebRTC address via an environment variable rather than a command-line argument.
The Go service is not particularly Docker friendly. You have to specify a listen IP for it, but it expects the external IP. If you pass the internal IP, that IP is then sent to the client (the web browser), and obviously the browser can't reach some random internal Docker IP over the internet.
This is a bit rough, but it launches and accepts streams; delivering them to the browser is still problematic, though that should be an easy fix.
docker-compose.yml:
version: '3.3'
services:
  lightspeed:
    build: .
    command: /lightspeed/run.sh
    restart: always
    environment:
      - EXTERNAL_HOSTNAME=example.com:8080
    ports:
      - "80:80"
      - "8084:8084"
      - "8080:8080"
      - "65535:65535/udp"
Dockerfile:
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=UTC
RUN apt-get update && \
    apt-get install -y \
        golang \
        rustc \
        cargo \
        nodejs \
        npm \
        git
# cargo is needed alongside rustc for `cargo build` below
RUN mkdir -p /lightspeed && \
    cd /lightspeed && \
    git clone https://github.com/GRVYDEV/Lightspeed-ingest.git && \
    cd Lightspeed-ingest && \
    cargo build --release
RUN mkdir -p /lightspeed && \
    cd /lightspeed && \
    git clone https://github.com/GRVYDEV/Lightspeed-webrtc.git && \
    cd Lightspeed-webrtc && \
    GO111MODULE=on go build
RUN mkdir -p /lightspeed && \
    cd /lightspeed && \
    git clone https://github.com/GRVYDEV/Lightspeed-react.git && \
    cd Lightspeed-react && \
    npm install && \
    npm install -g serve && \
    sed -i "s|export default|export|g" src/wsUrl.js && \
    sed -i -e '$a export default url;' src/wsUrl.js && \
    npm run-script build
COPY run.sh /lightspeed/run.sh
run.sh:
#!/bin/bash
find /lightspeed/Lightspeed-react/build/ -name "main*.js" -exec sed -i "s|stream.gud.software:8080|$EXTERNAL_HOSTNAME|g" {} \;
/lightspeed/Lightspeed-ingest/target/release/lightspeed-ingest &
/lightspeed/Lightspeed-webrtc/lightspeed-webrtc --addr=$(awk 'END{print $1}' /etc/hosts) &
cd /lightspeed/Lightspeed-react/ && serve -s build -l 80
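For anyone puzzling over the `awk` call in run.sh: it prints the first field of the last line of /etc/hosts, which inside a container is the entry Docker appends for the container's own hostname, i.e. its internal IP. A quick stand-alone check against a sample file (the IP and hostname below are made up):

```shell
# Simulate a container's /etc/hosts and show what the awk one-liner extracts
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
172.17.0.2	f3a1b2c3d4e5
EOF

awk 'END{print $1}' /tmp/hosts.sample   # prints 172.17.0.2
```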
You can use it for some inspiration maybe. :smile:
This is what I meant:
Here are the ports per GRVYDEV/Lightspeed-ingest/issues/15:
Ingest: 8084:8084
WebRTC: 65535:8080
Lightspeed / React: 80:80
If this was converted into a mono repo...
You could also use submodules if you prefer to keep issues, commits, versions, etc separate.
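A sketch of the submodule route (repo URLs from this thread; the directory names mirror the repos):

```shell
# In the root of Project-Lightspeed, add each service repo as a submodule
git submodule add https://github.com/GRVYDEV/Lightspeed-ingest.git Lightspeed-ingest
git submodule add https://github.com/GRVYDEV/Lightspeed-webrtc.git Lightspeed-webrtc
git submodule add https://github.com/GRVYDEV/Lightspeed-react.git Lightspeed-react
git commit -m "Add service repos as submodules"

# Consumers then clone everything in one go:
git clone --recurse-submodules https://github.com/GRVYDEV/Project-Lightspeed.git
```

Each service keeps its own issues and history while the compose file in this repo can reference the submodule directories as build contexts.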
I thought that a quick workaround of replacing the *addr in the ListenAndServe and ListenUDP calls with a hard-coded "0.0.0.0" would solve it, but then I noticed it still sent internal addresses, so I guess it obtains them dynamically somewhere. As long as those don't get rewritten to the external address, it won't work in Docker or behind any other NAT.
Yeah so if we are behind we will need to use a TURN server for WebRTC
Something like https://github.com/coturn/coturn?
I thought that a quick workaround of replacing the *addr in the ListenAndServe and ListenUDP calls with a hard-coded "0.0.0.0" would solve it, but then I noticed it still sent internal addresses, so I guess it obtains them dynamically somewhere. As long as those don't get rewritten to the external address, it won't work in Docker or behind any other NAT.
What about something like this?
That would work actually. I was playing around with that mode for Consul and Nomad. The networking parts are complicated enough that I am just going to install the binary on the servers. But for an app like this it would be perfect.
I am running each service in its own container, using docker-compose for them. Ingest seems to accept input from OBS, and OBS is happy. I can access the React site on localhost, but the browser is not able to connect to WebRTC; I assume it uses internal Docker IPs instead of the host's.
EDIT: I changed it to use network_mode: host in the compose file, and that makes everything work on localhost, although it is not ideal for a deployment scenario. My branch: https://github.com/Crowdedlight/Project-Lightspeed/tree/feature/docker
I might try to deploy the containers I have made on my own VPS when I have time again. Ideally, I want to incorporate the scripts to add custom stream keys etc. But the Dockerfiles and compose file can be used as inspiration. (Ideally they should all also just use the binaries or multi-stage builds; only React uses a multi-stage build at the moment.)
I do not know how easily this carries over to public-facing IPs, but at least it all works on localhost with Docker and compose.
I agree that adding the three service repos as submodules in this top repo would probably make it easier for CI/CD, as you can use the submodules directly in the Dockerfiles without having to git clone them.
Great idea, Docker would help launch adoption dramatically I think.
Adding my grain of salt, you can use buildx to build it for multiple CPU architectures (:eyes: arm) using a Github workflow similar to this
Due to the nature of WebRTC, I think if we are going to use Docker it will have to be --net=host; otherwise we would have to deploy a TURN server, which would greatly complicate the deployment process and thus render the point of Docker kind of moot.
In regards to stream keys I will implement a better system for them. Basically the ingest will make a cfg file that houses the stream key, If you want to reset it you will just run a command. Could you add some deployment instructions to your docker repo? I want to give it a try on my VPS and if it works would like to get it to master!
Due to the nature of WebRTC, I think if we are going to use Docker it will have to be --net=host; otherwise we would have to deploy a TURN server, which would greatly complicate the deployment process and thus render the point of Docker kind of moot.
I think that is a fair assumption. I don't know the inner workings of WebRTC well enough to judge how and when you would need a TURN server. :)
Probably best just to go with --net=host for the time being. That will, however, make it more important that the individual ports of all network services can be configured, since many servers already have web servers or other services running on port 80 or 8080. Configuring those would be a requirement to avoid collisions, now that Docker can't map them to custom ports.
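Those configurable ports could be sketched as a small launcher that reads environment variables with sensible defaults (the variable names here are made up for illustration, not anything the project defines):

```shell
#!/bin/sh
# Hypothetical launcher under host networking: every port comes from an env
# variable with a default, so operators can dodge collisions on 80/8080.
HTTP_PORT="${LIGHTSPEED_HTTP_PORT:-80}"
WS_PORT="${LIGHTSPEED_WS_PORT:-8080}"
INGEST_PORT="${LIGHTSPEED_INGEST_PORT:-8084}"

# e.g. the React container would then run: serve -s build -l "$HTTP_PORT"
echo "react on $HTTP_PORT, websocket on $WS_PORT, ingest on $INGEST_PORT"
```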
In regards to stream keys I will implement a better system for them. Basically the ingest will make a cfg file that houses the stream key, If you want to reset it you will just run a command. Could you add some deployment instructions to your docker repo? I want to give it a try on my VPS and if it works would like to get it to master!
That makes sense. I will add some information in the readme and do a push in a bit.
For those that are already running web servers, something like NGINX could be used to route a certain subdomain to a different web port. For example, gud.software could be my main site, while stream.gud.software routes to the React port.
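A minimal sketch of that NGINX idea (the server names come from the example above; the upstream port is a placeholder for whatever the React container serves on):

```nginx
server {
    listen 80;
    server_name stream.gud.software;

    location / {
        # forward to the port the React container serves on (placeholder)
        proxy_pass http://127.0.0.1:3000;
    }
}
```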
I might be wrong here, but I saw the host bind ports 0.0.0.0:80 and 0.0.0.0:8080, which means you bind on all addresses and can't share the same port between different services?
I added a readme to my branch. I hope it clears some stuff up. It also showcases the pitfalls of the current docker setup in terms of setting the right IPs etc. ;-)
https://github.com/Crowdedlight/Project-Lightspeed/blob/feature/docker/docker_README.md
You can run TURN server in Docker as well (for example coturn). I don't really know the WebRTC protocol and all the signaling bits but I thought that it is just a matter of adding turn in the mix.
Going to give it a try now!
Absolutely fantastic work! I was able to get it up and running really easily! I would like to merge this into master and then turn the respective folders into submodules.
I haven't played with submodules and Docker that much, but it should be possible to add the Dockerfile to the root of each repo, add each service repo as a submodule in this repo, and then have the compose file in the root of this repo reference the submodules. There might be methodologies that work better if you release binaries from each service instead; I believe some examples were given further up in this thread.
And a better method of setting wsUrl.js that doesn't require modifying the Dockerfile itself. :)
I am working on the submodules right now on feature/submodules. As far as React goes, we could just grab an env var and default to localhost?
It would be great if you updated the value during startup, so you don't have to recompile every time you want to adjust a parameter.
Could you all review https://github.com/GRVYDEV/Project-Lightspeed/tree/feature/submodules and let me know what you think?
We decided in the last few weeks at American Airlines not to use submodules. After an internal review, it caused more confusion and commit issues than it was worth.
Edit to say that the Dockerfiles and docker-compose look good. The submodules add some management overhead, though. I also have a concern about how this can get deployed with something like K8s / Nomad if host networking mode is enabled. We need an MVP, so that is more of a 0.2.0 concern.
Yeah we can look into a TURN implementation if we want to get away from host networking
If each repo has its own Dockerfile and builds + deploys the images somewhere (Docker Hub?), then a compose file in this repo can just reference the latest images, no? Rather than using submodules.
Yes it could! Ideally I would like to keep the submodules in here just for the sake of having them however if we switched it to that system then you wouldn't need to worry about downloading them!
It would definitely be my preference to have the Docker images built and pushed per repo. You would only have to download this repo and run docker-compose up -d in the root.
It would definitely be my preference to have the docker images built and pushed per repo.
Yeah I definitely would like to do this
It would definitely be my preference to have the docker images built and pushed per repo. You would only have to download this repo and run docker-compose up -d in the root.
My question to this would be: how will we handle the URL and address configuration?
I'm thinking you have an entrypoint.sh file in the root and have Docker bootstrap on startup: delay until ready, then print the connection IP/port in the deployment. Mine is not ready, but in practice it should work fine. You would need to proxy the traffic to the container with NGINX or Caddy, I think?
It's something I would have to think/play with.
I would also suggest serving the static files from the same Go or Rust service. Why have an additional background process? And ideally the Go and Rust services could be merged into one. I suppose you chose two different ecosystems because of library availability?
So I like the idea of decoupling the ingest from the broadcast since it makes it more modular but yeah this whole project could be one codebase.
I would prefer this be focused into one Rust project personally. It would make maintenance a lot easier, and I'll be able to throw resources at Rust projects eventually where I would not with Elixir and Go. Those are just not in our wheelhouse of developer skills. Go may be in the future though...
I am advocating within AA a push to learn Rust.
Here is how Apollo does their Rust development for the Stargate package-
Definitely a good approach.
I would prefer this be focused into one Rust project personally. It would make maintenance a lot easier, and I'll be able to throw resources at Rust projects eventually where I would not with Elixir and Go. Those are just not in our wheelhouse of developer skills. Go may be in the future though...
I am advocating within AA a push to learn Rust.
The issue with this is that WebRTC support isn't great in Rust at the moment (however, someone is rewriting Pion in Rust). I like the overall modular structure of this project though, and the goal will be to flesh out the Rust ingest to relay the packets to the loopback interface so the WebRTC service does not need to listen on an external port. Modularity in this case is good IMO because you could easily write another ingest that accepts, let's say, RTMP, and as long as it relays RTP packets on localhost it will be plug and play with the WebRTC service. Traditional streaming architecture is broken up into three main parts: ingest, broadcast, and the website, and I plan to always have Lightspeed mirror that architecture.
I think a monorepo and an official Docker Image would both be great
So I think the route I am going to take is auto-generation of Docker images on tagged commits to each respective repository, and then I would like to include the other repos as submodules in this one. The only issue with that is I need to figure out how to keep the submodules updated to the heads of the other repos. Check out the branch feature/submodules to see where I am at on this!
GitHub Actions can be initiated via an HTTP request, so you could have a GitHub Action on each of the other repos that triggers a build step here.
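For example, the end of each service repo's workflow could fire a repository_dispatch event at this repo (the event name "service-updated" is made up for illustration; TOKEN would be a personal access token with repo scope):

```shell
# Notify Project-Lightspeed that a service image was rebuilt, triggering
# any workflow here that listens for the repository_dispatch event.
PAYLOAD='{"event_type":"service-updated","client_payload":{"repo":"Lightspeed-ingest"}}'
curl -s -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $TOKEN" \
  -d "$PAYLOAD" \
  "https://api.github.com/repos/GRVYDEV/Project-Lightspeed/dispatches"
```

A workflow in this repo would then declare `on: repository_dispatch: types: [service-updated]` to run the compose build.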
So I think the route I am going to take is auto-generation of docker images on tagged commits to each respective repository and then I would like to include the other repos as submodules in this one. Only issue with that is i need to figure out how to keep the submodules updated to the heads of the other repos. Checkout the branch feature/submodules to see where I am at on this!
I would not use submodules and instead just use this repo as the monorepo. I would prefer the code be in one repo, but I really just need a way to have the docker compose pull the compiled docker images. Do you have a location on docker hub to push docker images? There is a pull limit if you have not gone through the OSS org steps.
I would not use submodules and instead just use this repo as the monorepo. I would prefer the code be in one repo, but I really just need a way to have the docker compose pull the compiled docker images. Do you have a location on docker hub to push docker images? There is a pull limit if you have not gone through the OSS org steps.
I have not used Docker Hub before, so I need to go through and do all of that.
It takes a few weeks - https://www.docker.com/community/open-source
You are welcome to use the scorpion namespace if desired, or we can just build the images until finalized. I will be building and publishing my own scorpion versions anyways. :)
That would be great! I will kickstart that process right now and then I am going to work on the documentation per Paul Irish's issue first then move to docker images afterwards!
I would not use submodules and instead just use this repo as the monorepo. I would prefer the code be in one repo, but I really just need a way to have the docker compose pull the compiled docker images. Do you have a location on docker hub to push docker images? There is a pull limit if you have not gone through the OSS org steps.
Another note: the reason I do not want a monorepo is that I would like to keep issue tracking separate, since it makes it easier to manage.
Docker + Docker Compose
The project deployment needs to be simplified with a Docker and docker-compose structure. We need a pull request adding this as a first-class support model.