devopvoid / webrtc-java

WebRTC for desktop platforms running Java
Apache License 2.0
248 stars 60 forks

Two requests - not sure if we are talking about a bug or a feature #48

Closed RaniRaven closed 2 years ago

RaniRaven commented 2 years ago

[1] About the AudioSource: for some reason, and it seems weird, the audio and video sources do not behave the same way. VideoSource has a start() option, while AudioSource does not. How is the streaming of audio supposed to work?

[2] Is there any example of adding a sink to the track on the receiving peer's side? There seem to be some asymmetric functions there. Why does the receiving track also take a VideoSource in its constructor? Is there any sample or test to work with?

[3] I am having a lot of difficulty with the ICE protocol. I found there is no sample test of the ICE protocol for the cases where it matters, like two different IPs. Moreover, it seems like there should be more options for shaping the created SDP offer/answer, and some configurations, which are not available. Is it aligned with the official WebRTC protocol?

Regards, Rani

devopvoid commented 2 years ago
  1. The native implementation of WebRTC does indeed take two different approaches for audio and video. I don't know why they did it this way. I've written JNI wrappers around the native code, so webrtc-java ends up with the same split between audio and video.
    The entry point to audio streaming is the AudioDeviceModule.
    If you want to stream audio from any connected input device, e.g. a microphone, you can simply set the device name in the AudioDeviceModule. The same goes for playback.
    To stream your custom audio input, e.g. from a file, you can provide a custom source to the AudioDeviceModule which will pull the audio frames from the source when a peer-connection has been established.
  2. Adding a sink to a VideoTrack: https://github.com/devopvoid/webrtc-java/blob/9e31c27b30904257eeab1e7c87c85a28cf04fdd0/webrtc-demo/webrtc-demo-api/src/main/java/dev/onvoid/webrtc/demo/net/PeerConnectionClient.java#L225
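The pattern in the linked demo can be sketched roughly as follows. The type and method names (`VideoTrackSink`, `onVideoFrame`, `retain`/`release`) are taken from the webrtc-java API as I understand it; treat the exact signatures as assumptions and verify against the sources:

```java
import dev.onvoid.webrtc.media.video.VideoFrame;
import dev.onvoid.webrtc.media.video.VideoTrack;
import dev.onvoid.webrtc.media.video.VideoTrackSink;

class RemoteVideoHandler {

    // Call this from your onTrack/onAddTrack handling once the remote
    // VideoTrack is available - no extra source is needed on this side.
    void attachSink(VideoTrack remoteVideoTrack) {
        VideoTrackSink sink = (VideoFrame frame) -> {
            // Frames come from the native layer; retain while in use and
            // release when done, so native buffers can be recycled.
            frame.retain();
            try {
                // render / encode / inspect the frame here
            }
            finally {
                frame.release();
            }
        };
        remoteVideoTrack.addSink(sink);
        // Later, to stop receiving frames: remoteVideoTrack.removeSink(sink);
    }
}
```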

> Why does the receiving track also take a VideoSource in its constructor?

What do you mean?

  3. As I mentioned, this library mainly consists of JNI wrappers for the native WebRTC code. The tests have only one purpose: to check that there are no null pointers and that the Java objects are correctly converted to the native implementation and vice versa.
    You can control the creation of offers/answers with RTCOfferOptions and RTCAnswerOptions.
    SDP munging is not part of the official WebRTC API; it's up to the users/developers to modify the session descriptions themselves. You can do that with webrtc-java as well.
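As a sketch of both points: the munging itself is plain string editing of the SDP text, so the helper below is pure Java; the commented webrtc-java calls around it (`RTCOfferOptions`, `RTCSessionDescription(RTCSdpType, String)`, `setLocalDescription`) are my assumptions about the wrapper API, so check them against the actual classes:

```java
public class SdpMunging {

    // Drop every "a=<attribute>..." line from an SDP blob.
    // SDP lines are CRLF-separated per the spec.
    public static String removeAttribute(String sdp, String attribute) {
        StringBuilder out = new StringBuilder();
        for (String line : sdp.split("\r\n")) {
            if (!line.startsWith("a=" + attribute)) {
                out.append(line).append("\r\n");
            }
        }
        return out.toString();
    }

    // With webrtc-java this would plug in roughly like (names assumed):
    //   RTCOfferOptions options = new RTCOfferOptions();
    //   options.iceRestart = true;
    //   peerConnection.createOffer(options, createObserver);
    // and in the observer, munge before applying the description:
    //   String munged = removeAttribute(desc.sdp, "extmap-allow-mixed");
    //   peerConnection.setLocalDescription(
    //       new RTCSessionDescription(desc.sdpType, munged), setObserver);
}
```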

This library does not (yet) implement a higher-level abstraction around the native WebRTC implementation; it wraps the WebRTC code as it is implemented.
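For completeness, the audio entry point described in point 1 might be wired up like this. The class names (`AudioDeviceModule`, `MediaDevices`, `AudioDevice`) and the factory constructor are taken from webrtc-java as I remember it; treat the exact calls as assumptions:

```java
import java.util.List;

import dev.onvoid.webrtc.PeerConnectionFactory;
import dev.onvoid.webrtc.media.MediaDevices;
import dev.onvoid.webrtc.media.audio.AudioDevice;
import dev.onvoid.webrtc.media.audio.AudioDeviceModule;

class AudioSetup {

    // Pick a capture device and hand the module to the factory. Audio
    // then flows once a peer connection is established - there is no
    // start() call as there is on VideoSource.
    static PeerConnectionFactory createFactory() {
        AudioDeviceModule audioModule = new AudioDeviceModule();

        List<AudioDevice> captureDevices = MediaDevices.getAudioCaptureDevices();
        if (!captureDevices.isEmpty()) {
            audioModule.setRecordingDevice(captureDevices.get(0));
        }

        return new PeerConnectionFactory(audioModule);
    }
}
```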

RaniRaven commented 2 years ago

Thanks for the answers. Just to clarify the question about the Track: something that is not clear to me is that the only way to create a track is through the factory, by passing a description and a source. But what if the track is for the remote peer and not the local one? I think you answered that with the sample: it works as long as the transceiver is not empty when the streaming starts. It is just that something looks incomplete within those methods, as the track "abstraction" should support a source track as well as a destination track, the way I see it. There should be the ability to read a track from disk and send it, and to "write" it on the remote side as well. I'll look at the native implementation, because something is weird there, as you noted about the difference between audio and video.

devopvoid commented 2 years ago

Closing this. Please use the discussions section.