Open thibauts opened 9 years ago
Very nice Write-Up!
I guess that Google uses the tabCapture API ( https://developer.chrome.com/extensions/tabCapture ) after the initial offering has finished and the PRESENTATION message was sent.
Wondering what we would be able to build with this :)
You're probably right but I'd like to know how this could be used outside of a browser. I assumed modules existed to play with WebRTC server-side but I can't find them on npm.
Yeah, all I can find are signaling modules.
There is a server-side module https://github.com/js-platform/node-webrtc. It seems to be based on this code and relies on native libs.
I'd like to understand how the OFFER/ANSWER exchange the Chromecast performs relates to its SDP counterpart. Can't make sense of the data yet.
I once played around with WebRTC, so I'm a bit familiar with the OFFER/ANSWER part. In general one client creates an OFFER. The OFFER contains information about the offerer, for example what media types it supports. That OFFER blob is then sent to the other client, normally through a signaling server. In the Chromecast scenario no signaling server is needed, since the sender can send the OFFER directly to the Chromecast. The other client (the Chromecast) then creates an ANSWER, which gets sent back to the offerer. The ANSWER blob contains the UDP ports that are open, so the offerer knows where to connect. Often STUN and/or TURN servers are used to find out which UDP ports can be used. After that signaling process the regular communication begins.
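To make the flow above concrete, here is a minimal sketch of the offer/answer exchange with the Chromecast standing in for the second peer, no signaling server in between. All field names and values are illustrative, not the actual Cast payloads.

```javascript
// Sender builds an OFFER describing what it can send.
function buildOffer(supportedCodecs) {
  return { type: 'OFFER', codecs: supportedCodecs };
}

// Receiver (the Chromecast) picks a codec it supports and answers
// with the UDP port the sender should stream to.
function buildAnswer(offer, localCodecs, udpPort) {
  const codec = offer.codecs.find((c) => localCodecs.includes(c));
  if (!codec) return { type: 'ANSWER', error: 'no common codec' };
  return { type: 'ANSWER', codec, udpPort };
}

const offer = buildOffer(['vp8', 'h264']);
const answer = buildAnswer(offer, ['h264'], 2346);
console.log(answer);
// → { type: 'ANSWER', codec: 'h264', udpPort: 2346 }
```

After this exchange the sender knows both the codec the receiver accepted and where to send the media packets; that is the whole job of the signaling phase.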
Yes but the offer/answer structure here doesn't look like their SDP counterparts. The translation from one to the other is not clear.
I'm very much interested in researching this particular subject further but am still trying to figure out how the whole cast
protocol works. I'm aware of the message protocol as described in the node-castv2 docs but still need some more pointers on how to go about inspecting the different messages exchanged on port 8009, given it's TLS encrypted.
My initial attempt was Wireshark, but I haven't yet figured out how to decrypt the traffic (and then deserialize the protobuf messages) before I can get to the JSON payloads. Any pointers as to how to go about sniffing the channel contents?
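Once the TLS layer is dealt with, the framing on port 8009 is straightforward: per the node-castv2 docs, each CastMessage protobuf is prefixed with a 4-byte big-endian length. A minimal sketch of splitting a raw byte stream into individual protobuf payloads (decoding the protobuf itself would still need the CastMessage schema):

```javascript
// Split a buffer of length-prefixed frames into individual payloads.
// Each frame is: 4-byte big-endian length, then that many bytes.
function splitCastFrames(buf) {
  const frames = [];
  let off = 0;
  while (off + 4 <= buf.length) {
    const len = buf.readUInt32BE(off);
    if (off + 4 + len > buf.length) break; // incomplete frame, wait for more data
    frames.push(buf.subarray(off + 4, off + 4 + len));
    off += 4 + len;
  }
  return frames;
}

// Two fake frames packed back to back, standing in for captured traffic.
const payload1 = Buffer.from('hello');
const payload2 = Buffer.from('cast');
const stream = Buffer.concat([
  Buffer.from([0, 0, 0, payload1.length]), payload1,
  Buffer.from([0, 0, 0, payload2.length]), payload2,
]);
console.log(splitCastFrames(stream).map((f) => f.toString()));
// → [ 'hello', 'cast' ]
```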
I'm particularly interested in comparing the Chrome Tab cast to the cast performed in Android when mirroring the screen either from the Chromecast app or the OS itself (Kitkat+).
I have sent an OFFER to the Chromecast with the necessary parameters without the AES key and I have received a response from the Chromecast with the SSRCs and the port to be used.
With that answer I have tried to create a candidate and pass it to my WebRTC application that created the offer.
At this point, I couldn't get an agreement closed between both devices to send the frames through the UDP port.
I have various doubts:
The response SDP does not contain a fingerprint, and I can't get the Chromecast's fingerprint any other way, so I don't know if that is the reason an agreement between the two isn't being closed.
I don't know if I should send a message to the Chromecast beforehand indicating whether or not it is a PRESENTATION.
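The symptom described above, an answer carrying SSRCs and a port but no fingerprint, can be checked mechanically. This is an illustrative sketch only; the field names (udpPort, ssrcs, fingerprint) are guesses, not the actual Cast mirroring schema.

```javascript
// Check which pieces a received ANSWER provides. Without a fingerprint
// there is nothing to pin a DTLS handshake against, which matches the
// problem described in the comment above.
function validateAnswer(answer) {
  return {
    hasPort: Number.isInteger(answer.udpPort),
    hasSsrcs: Array.isArray(answer.ssrcs) && answer.ssrcs.length > 0,
    hasFingerprint: typeof answer.fingerprint === 'string',
  };
}

// Hypothetical answer, shaped like the response described above.
const deviceAnswer = { udpPort: 2346, ssrcs: [16, 17] };
console.log(validateAnswer(deviceAnswer));
// → { hasPort: true, hasSsrcs: true, hasFingerprint: false }
```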
Interesting thread, it's quite old. Anyone figured something out about it in the meantime? My current workaround uses a custom app with WebRTC. It actually works quite fine, but I'm still interested in a "simpler" / built-in approach.
@simllll Could you clarify? Are you saying that you've managed to get a WebRTC video stream working with a custom receiver? If so I'd be interested on how that was possible!
It's up and running, with some limitations though (e.g. no full HD possible). What I do is stream browser tabs (with WebRTC) directly to a custom Chromecast app. I haven't found time yet to clean up the code, but I can try to publish the latest status, which consists of:
I took a look at your stream-tabs app (assuming that's the app you're referring to); it looks like it uses FFmpeg to encode to H.264 and then casts it as if it were standard media content, not via WebRTC.
But it sounds like the updated version you have actually communicates directly over WebRTC to the Chromecast? If so I'd be interested to check it out! I'm currently serving locally encoded HLS streams from an HTTP server to a Chromecast, but latency is at best 5 seconds; a full WebRTC implementation would get it down to <1s.
I published the current source code. I'm not 100% sure it's identical to the code I have running, but I guess so :p Feel free to test it out, bring in fixes, and explore it.
https://github.com/simllll/stream-webrtc-tab
regarding latency I can tell you that <1s is actually possible with this approach :)
Awesome, thanks for your efforts! It's definitely a really interesting workaround
The app used by Chrome is "Chrome Mirroring", and has application ID 0F5096E8. The app is launched as usual and advertises the protocols urn:x-cast:com.google.cast.webrtc and urn:x-cast:com.google.cast.media.

Once the app is launched, Chrome issues a CONNECT message to the transportId with two additional fields: origin, whose value is {}, and userAgent, which contains the browser UA string.

Then an OFFER is sent over the urn:x-cast:com.google.cast.webrtc channel. The Chromecast responds with an ANSWER. Chrome follows up with a PRESENTATION message on urn:x-cast:com.google.cast.webrtc (here I was browsing www.google.com), and then sends a GET_STATUS on the media channel, which returns the playing state (PLAYING) and a few other details.

Then UDP traffic kicks in and the tab appears on the TV screen. I guess the OFFER/ANSWER cycle is somehow translated to its SDP counterpart, but not knowing the WebRTC internals I cannot say much more.