ypedegroot opened 11 months ago
I would also like to implement that.
Thanks for adding the enhancement label. It is relatively easy to output audio to ALSA using ffmpeg with the -f alsa
output format. It works like this:
ffmpeg -i input_file.wav -f alsa hw:0,0
The question is: what should we use as input for the "returning WebRTC audio stream" from the browser WebRTC session? Is there (just as for camera streaming) an RTSP endpoint where the returning audio is received? If that is the case, then it would be possible to play the returning audio on the local speaker via ALSA.
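If go2rtc did expose the returning browser audio on an RTSP endpoint, the playback side could be sketched like this (note: both the RTSP URL and the ALSA device name below are hypothetical placeholders; the thread later confirms such an endpoint does not exist yet):

```
# Hypothetical: pull return audio from an RTSP endpoint and play it on the local ALSA device
ffmpeg -rtsp_transport tcp -i rtsp://127.0.0.1:8554/mic_back -f alsa plughw:0,0
```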
I have tried to visualise this schematically. @chip131001, do you think this is a good representation of the problem?
Note that streaming the Raspberry Pi's microphone to the WebBrowser is already successful (see my first post).
This should be a new type of source, like ISAPI, which is the only source type that currently supports two-way audio. https://github.com/AlexxIT/go2rtc#source-isapi
If I read the go2rtc documentation correctly, the ISAPI protocol is for Hikvision cameras, and I do not understand how I could use it in my scenario. I would expect a second RTSP "source" (?) for the returning voice from the web browser, so that ffmpeg could stream the incoming RTSP audio to the ALSA speaker.
You can't use it. What you need is not developed yet.
Okay, that clarifies your answer. Do you think this "bi-directional" audio feature will be implemented (and when)?
Sorry to ask a question that may have been answered, but I'm finding this confusing. The docs say that two-way audio is implemented with the browser and WebRTC, but does that just mean the browser can act as the microphone, while the output has to come from a security camera's speaker? I just want the audio to play back through the browser/system audio of the device running go2rtc. If this is not possible, the only thing I've found is something like this: https://github.com/codewithmichael/webrtc-intercom
But since I'm using go2rtc as the way to remotely monitor the local webcam (using this for film production), it would be great if I could make the whole thing work in go2rtc. Thanks for any clarification that anyone can give here. As is usually the case, I've either stumbled upon The Solution To All of The Things or I'm still at square one, cobbling together all kinds of hacks, like an animal, to get something that does what basic WebRTC already does (but which I need to do at scale, e.g. pull up a dashboard with the live feeds from 100 studios around the world with talkback buttons that bring me into the studio as the voice of god).
Currently, go2rtc does not support playback through the local speaker of the server it is running on.
Hi Alex,
I am currently experimenting with using your (awesome) project go2rtc as an audio intercom on my Raspberry Pi 4B. So far, I have managed to define a stream (I call it "mic") using exec, like this:
mic: exec:ffmpeg -re -f alsa -channels 1 -sample_rate 48000 -i plughw:CARD=DA70,DEV=0 -c:a libopus -b:a 48K -rtsp_transport tcp -f rtsp {output}
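For context, this exec source line sits under the streams section of go2rtc.yaml, roughly like this (the device name is copied from the command above; the surrounding layout is the standard go2rtc streams config):

```
streams:
  mic: exec:ffmpeg -re -f alsa -channels 1 -sample_rate 48000 -i plughw:CARD=DA70,DEV=0 -c:a libopus -b:a 48K -rtsp_transport tcp -f rtsp {output}
```

The {output} placeholder is filled in by go2rtc with the RTSP URL that the exec source should publish to.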
If I test the following URL in my browser (after adding it to the trusted sites list, so it asks microphone permissions)
http://my-local-raspberry-pi:1984/webrtc.html?src=mic&media=audio+microphone
, I can successfully hear the audio that is being streamed live from my Raspberry Pi microphone to my browser. So far so good. Since I have granted my browser access to my microphone, I expect the browser to capture audio from my headset (connected to my browser) and send it back over WebRTC to the Raspberry Pi where go2rtc runs.
I see the following if I check the stream details, when the stream is running in the browser:
Notice the yellow marked byte counters: it seems as if the browser indeed is streaming audio back to go2rtc, as expected.
The question: how can I configure go2rtc so that it plays the audio streamed back from the browser on the local speaker of my Raspberry Pi, for example using ffmpeg with an ALSA sink? It is unclear to me how to configure this as a go2rtc stream. Am I missing something?