farazirfan47 closed this issue 5 years ago
Hey @farazirfan47, I am working on a similar project, Virtual Room, as my GSoC project, and I think @dpallot did a wonderful job here. I was planning to use this as the signalling server, so I can send simple messages carrying the RemoteDescription and LocalDescription needed to connect two or more peers. The key step is connecting the peers, usually via a STUN server, which lets each peer discover its public IP address behind NAT. WebRTC relies on a direct browser-to-browser connection; the signalling server is only used to exchange connection details between the peers. Once they are connected, you can fragment the video and send the segments. Running OpenCV on the server side could be a heavy task, so I would suggest using it on the client side. See https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/
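The role described above — a server that merely relays session descriptions between peers without inspecting them — can be sketched in a few lines of plain Python. Everything here (the class name, the message shape, the peer names) is illustrative and not part of any particular library:

```python
import json

class SignalingRelay:
    """Routes signalling messages (SDP offers/answers, ICE candidates)
    between connected peers by name. The server never parses the SDP;
    it only forwards it."""

    def __init__(self):
        self.peers = {}  # peer name -> outbound message queue (a list here)

    def register(self, name):
        self.peers[name] = []

    def send(self, sender, target, payload):
        message = json.dumps({"from": sender, "payload": payload})
        self.peers[target].append(message)

    def receive(self, name):
        return [json.loads(m) for m in self.peers[name]]

relay = SignalingRelay()
relay.register("alice")
relay.register("bob")
relay.send("alice", "bob", {"type": "offer", "sdp": "v=0 ..."})
print(relay.receive("bob")[0]["payload"]["type"])  # -> offer
```

In a real deployment the in-memory lists would be replaced by the websocket connections themselves, but the routing logic stays this simple: look up the target peer and forward the message unchanged.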
@farazirfan47 I am trying to achieve a similar stack. Were you able to stream the getUserMedia data over to a Python websocket server?
Hi @SeanAvery, I did a similar project. I used a Python websocket server for signaling. You can have a look at the project here. Hope it helps :smile:
@SeanAvery I tried, but couldn't find a reliable solution; it was too slow on mobile browsers, so I decided to use a full JS stack on both the front end and back end.
If you want to send live video to a Python backend for image processing, you might consider using aiortc, which is a Python implementation of WebRTC. It comes with a demo server that does what you describe:
https://github.com/aiortc/aiortc/tree/master/examples/server
I want to use JS on the front end and send a live video stream from the getUserMedia API to opencv-python on the back end for analysis, then send the results back to the client. How can I do this using your websocket-server?