Horcrux7 closed this issue 1 year ago
Media streams in ice4j are not associated with a specific media type, they just have a generic name. What do you mean by "the media type is changed" if you use "video1" and "video2" as names?
The method org.ice4j.ice.sdp.IceSdpUtils.initSessionDescription(SessionDescription, Agent), at line 283, uses the name of the stream as the media type when creating the MediaDescription.
I see, that's the old code for generating SDP.
Why do you need multiple streams anyway? Why multiple video streams? Can't you multiplex everything like webrtc does with rtcpmux+bundle? I'm trying to judge if adding a separate "media type" field is an addition worth having in the project.
Thank you for your answer. I can't answer your question yet; I am new to WebRTC. Based on the project's samples I had thought that every track in the browser maps to a stream, but if I understand you correctly, that is wrong. Using only one socket already seems like a large improvement. I will evaluate it.
> I see, that's the old code for generating SDP.

Is there newer code for it?
These samples are outdated and not suitable for WebRTC. A PeerConnection in the browser has a single ICE session with a single ICE stream. This would correspond to an Agent with a single IceMediaStream (and a single Component, since rtp and rtcp are muxed).
We don't have modern samples, but you can see how Agent is used in jitsi-videobridge to connect to a browser on the other end. We translate the SDP to Jingle early on in javascript, so the backend doesn't see any of it. You'll also need to handle DTLS/SRTP which ice4j itself doesn't do.
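As a side note on the rtcp-mux point above: with a single muxed socket (one Component), the receiver has to tell RTP and RTCP packets apart itself, since ice4j leaves that to the application. A minimal sketch, assuming the RFC 5761 rule that a datagram whose second byte falls in the range 192..223 is RTCP (the helper below is illustrative, not ice4j API):

```java
// Sketch: distinguishing RTCP from RTP on a single muxed socket (RFC 5761).
// With rtcp-mux, one socket carries both packet types; the receiver
// classifies each datagram by its second byte.
public class RtcpMuxDemux {

    // RFC 5761 reserves RTP payload types so that RTCP packet types
    // (e.g. 200 = Sender Report) never collide with a valid RTP second
    // byte; values 192..223 therefore indicate RTCP.
    static boolean isRtcp(byte[] packet) {
        if (packet.length < 2) {
            return false;
        }
        int secondByte = packet[1] & 0xff;
        return secondByte >= 192 && secondByte <= 223;
    }

    public static void main(String[] args) {
        byte[] rtcpSr = {(byte) 0x80, (byte) 200, 0, 6}; // RTCP Sender Report (PT=200)
        byte[] rtp = {(byte) 0x80, 111, 0, 1};           // RTP, payload type 111
        System.out.println(isRtcp(rtcpSr));
        System.out.println(isRtcp(rtp));
    }
}
```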
Thanks for the information. I will start over. We can close this ticket.
If I want to create multiple video streams with:
Then the second call overrides the first call. If I use unique names:
then the media type is changed, which is wrong.
It would require a method:
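The code snippets from the original report are missing above. The following is a hypothetical reconstruction of the behavior being described, using a plain map in place of ice4j's Agent; the class, its fields, and the suggested two-argument overload are all assumptions for illustration, not actual ice4j code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of the reported problem (not ice4j itself): media streams are
// keyed only by their name, and the SDP media type is later derived from
// that same name.
public class StreamNameClash {
    static final Map<String, String> streams = new LinkedHashMap<>();

    // Models Agent.createMediaStream(String): the name is the only key,
    // so a repeated name replaces the earlier stream.
    static void createMediaStream(String name) {
        streams.put(name, name); // media type == stream name, per the report
    }

    public static void main(String[] args) {
        createMediaStream("video");
        createMediaStream("video");              // overrides the first stream
        System.out.println(streams.size());

        streams.clear();
        createMediaStream("video1");             // unique names keep both streams,
        createMediaStream("video2");             // but the derived media type is wrong
        System.out.println(streams.size());
        System.out.println(streams.get("video1")); // "video1" is not a valid media type
    }
}
```

A fix along the lines the report asks for would separate the two concerns, e.g. a hypothetical overload `createMediaStream(String name, String mediaType)` so the stream name can stay unique while the SDP media type remains "video".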