Raj123456788 closed this issue 5 years ago
I do not believe we added system audio support to Google's webrtc code.
Is there any documentation on how desktop audio is sent to the remote peer?
In the current code there are many paths, depending on how you interact with the UI. Can you describe what you do (how you interact with the UI) to get desktop audio?
Actually, we are not able to capture desktop audio with webrtc, since Google did not add a desktop audio module. In OBS Studio I set the Desktop Audio property to my speaker device, and at the remote end I do hear the audio.
Basically, I am more curious about how you set the media constraints to capture desktop audio and send it as an RTP stream.
In OBS-Studio-webrtc there are no constraints of any sort, since we use the OBS code for the capturers and only send the raw frames (audio or video) to the webrtc stack. Does that answer your question?
If this is a question about webrtc or the browsers implementation in general, you should ask on the public discuss-webrtc mailing list.
OK, got it. You only send raw frames to the webrtc stack. I will try to understand the code. Is there any documentation?
Can you give us a demo application that shows the signalling and how to initiate a desktop-sharing connection with audio?
This project is an entire application, and the code is open source, free for all to read and understand.
Documentation for the OBS-studio code can be found at https://obsproject.com
Documentation for the webrtc code can be found at: https://webrtc.org
Requests for unrelated features or demos should go either to the public discuss-webrtc mailing list, or to cosmosoftware.io for professional services requests.
@agouaillard-cosmo: Sir, can you please provide documentation on how you added desktop audio support to Google's webrtc? It would be really helpful! Alternatively, could you point us to the files that need to be added or edited in Google's webrtc repo?