microsoft / MixedReality-WebRTC

MixedReality-WebRTC is a collection of components that help mixed reality app developers integrate real-time audio and video communication into their applications and improve their collaborative experience.
https://microsoft.github.io/MixedReality-WebRTC/
MIT License

Mixed Reality WebRTC without Signalling Server #764

Closed: SilverLive closed this issue 3 years ago

SilverLive commented 3 years ago

I am trying to find a way to use MixedReality-WebRTC (this repository) without a signaling server. In detail, I want to create an SDP file from my ffmpeg video sender and use this SDP description in my Unity project to bypass the signaling process and receive the ffmpeg video stream. Is there a way to do this with MixedReality-WebRTC? I have already searched for the line of code where the SDP is created within MR-WebRTC, but I didn't find it.

I am relatively new to this topic and not sure whether this works at all, but since ffmpeg is not directly compatible with WebRTC, I figured this might be the most promising approach.

spacecheeserocks commented 3 years ago

Yes, this is possible, but you will have to write your own signaler code.

You do not need a server, but you will want to write your own signaler script that derives from Signaler in your Unity project.

You can then use this custom signaler to "pretend" to be a server: respond with the SDP from your file and discard everything else, or whatever suits your needs. Below is a rough sketch of the idea.
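Something like this (a minimal, untested sketch: it assumes the Unity integration's Signaler base class with its two abstract SendMessageAsync overloads, as used by the bundled NodeDssSignaler; the class name, field name, and timing are illustrative):

```csharp
using System.Threading.Tasks;
using UnityEngine;
using Microsoft.MixedReality.WebRTC;
using Microsoft.MixedReality.WebRTC.Unity;

// Hypothetical "serverless" signaler: instead of forwarding messages to a
// server, it injects a pre-made SDP (e.g. saved from ffmpeg) into the local
// peer connection and discards everything outgoing.
public class HardcodedSignaler : Signaler
{
    [TextArea]
    public string RemoteOfferSdp; // paste the pre-generated SDP text here

    private void Start()
    {
        // Treat the canned SDP as the remote peer's offer. Applying an offer
        // makes the local peer generate an answer, which the overrides below
        // simply drop. In a real script, wait until the peer connection is
        // initialized before doing this.
        var offer = new SdpMessage
        {
            Type = SdpMessageType.Offer,
            Content = RemoteOfferSdp,
        };
        _ = PeerConnection.HandleConnectionMessageAsync(offer);
    }

    // No server to talk to: swallow outgoing SDP messages...
    public override Task SendMessageAsync(SdpMessage message)
        => Task.CompletedTask;

    // ...and locally gathered ICE candidates.
    public override Task SendMessageAsync(IceCandidate candidate)
        => Task.CompletedTask;
}
```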

SilverLive commented 3 years ago

OK, so you mean that in total the only thing I need to do is write my own signaler deriving from Signaler and use my SDP from ffmpeg, and afterwards I will be able to exchange video streams between both sides, right? And there are no compatibility problems if ffmpeg communicates with MR-WebRTC?

zxubian commented 3 years ago

@SilverLive

In fact, the exchange of SDP messages is just one of the things the signaling channel must handle. The other important part is the exchange of ICE candidates. Even if your SDP is the same each time (I'm not sure whether SDP messages can be reused between sessions), your two clients still need to exchange ICE candidates to establish a connection.

That having been said, the signaling channel doesn't need to be a "server" in the strict sense. If ffmpeg and your Unity app are running on the same machine, you can use any other means of passing the SDP/ICE messages; it doesn't have to go over a network.
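For illustration, here is a minimal sketch using the plain C# library: two peers in one process, "signaled" by direct method calls, with no server and no sockets (event and method names are assumed from the 2.x PeerConnection API, e.g. LocalSdpReadytoSend and IceCandidateReadytoSend):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.MixedReality.WebRTC;

public static class LoopbackDemo
{
    public static async Task RunAsync()
    {
        using var pc1 = new PeerConnection();
        using var pc2 = new PeerConnection();

        // No STUN/TURN configured; host candidates suffice on a LAN.
        var config = new PeerConnectionConfiguration();
        await pc1.InitializeAsync(config);
        await pc2.InitializeAsync(config);

        // SDP exchange: hand each peer's description straight to the other.
        pc1.LocalSdpReadytoSend += async (SdpMessage msg) =>
        {
            await pc2.SetRemoteDescriptionAsync(msg);
            if (msg.Type == SdpMessageType.Offer)
            {
                pc2.CreateAnswer();
            }
        };
        pc2.LocalSdpReadytoSend += async (SdpMessage msg) =>
            await pc1.SetRemoteDescriptionAsync(msg);

        // ICE exchange: again just a method call, not a network hop.
        pc1.IceCandidateReadytoSend += (IceCandidate c) => pc2.AddIceCandidate(c);
        pc2.IceCandidateReadytoSend += (IceCandidate c) => pc1.AddIceCandidate(c);

        pc1.Connected += () => Console.WriteLine("pc1 connected");

        // A data channel gives the offer something to negotiate.
        await pc1.AddDataChannelAsync("demo", ordered: true, reliable: true);
        pc1.CreateOffer();

        // Sketch only: give negotiation a moment before disposing the peers.
        await Task.Delay(2000);
    }
}
```

The same pattern works across processes or machines: replace the direct calls with files, pipes, shared memory, or anything else that can move a few strings back and forth.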

To learn more about the WebRTC connection establishment process, check out these resources (they are written for the web, but MixedReality-WebRTC follows the same model, just adapted to HoloLens/UWP land):

https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Signaling_and_video_calling
https://webrtc.org/getting-started/peer-connections

SilverLive commented 3 years ago

My goal is to use WebRTC within my local network and to receive the ffmpeg stream in my HoloLens 2 Unity project with as low latency as possible. As far as I have found, ffmpeg supports neither exchanging ICE candidates nor the DTLS encryption that is also part of WebRTC, but please correct me if I am wrong.
Does this mean there is no way to communicate directly between ffmpeg and WebRTC?

spacecheeserocks commented 3 years ago

Possibly not, but I'm not 100% sure.

Bear in mind that the SDP format is not actually part of WebRTC; it's a general specification for negotiating media sessions between devices. Historically it has been used by SIP phones and the like, too.

You might find that although ffmpeg can produce an SDP, that doesn't necessarily mean it supports WebRTC.

It's also worth reading the SDP produced by ffmpeg: if it does support WebRTC, the SDP might actually contain ICE candidates already. (It's not commonly done by browsers, but it is possible to include ICE candidates within an SDP offer.)
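For illustration, non-trickled candidates appear as a=candidate attribute lines in the media section of the SDP (all values below are made up):

```
m=video 49203 UDP/TLS/RTP/SAVPF 96
c=IN IP4 192.168.1.100
a=ice-ufrag:EsAw
a=ice-pwd:P2uYro0UCOQ4zxjKXaWCBui1
a=candidate:1 1 UDP 2122252543 192.168.1.100 49203 typ host
a=rtpmap:96 H264/90000
```

If the ffmpeg-generated SDP has no a=ice-ufrag, a=ice-pwd, or a=candidate lines at all, that's a strong hint it describes a plain RTP session rather than an ICE/DTLS (WebRTC-compatible) one.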

It will be more complicated to set up, but you may find that an "SFU" server such as Janus helps you solve your problem.
Janus supports quite a few different scenarios, but one that might be possible would be to use Janus as a "bridge" between WebRTC and other media services. E.g., you could look at setting up Janus on your local network, then use WebRTC to connect HoloLens and Janus, and then use something like RTP/RTSP streams between Janus and ffmpeg (VLC might also help here).
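For example (an untested sketch: the port, and the assumption that a Janus streaming-plugin mountpoint is listening on it, must match your Janus configuration), ffmpeg can push plain RTP to Janus, which then re-serves it to WebRTC clients:

```
ffmpeg -re -i input.mp4 -an \
  -c:v libx264 -profile:v baseline -pix_fmt yuv420p \
  -f rtp rtp://127.0.0.1:8004
```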

Not sure if there are any examples of this, but it might be something worth investigating.

SilverLive commented 3 years ago

For now I am closing this issue. After a lot of further research I came to the conclusion that there is no clean way of combining ffmpeg with WebRTC. I will try to tackle the problem with a shared-memory approach instead.

mosaviaDC commented 2 years ago

> For now I am closing this issue. After a lot of further research I came to the conclusion that there is no clean way of combining ffmpeg with WebRTC. I will try to tackle the problem with a shared-memory approach instead.

Hello, I am working on the same project. Did you find any solution?