bluenviron / mediamtx

Ready-to-use SRT / WebRTC / RTSP / RTMP / LL-HLS media server and media proxy that lets you read, publish, proxy, record and play back video and audio streams.
MIT License
11.55k stars 1.46k forks source link

Support ALSA Microphones when Pi Camera is video source. #1620

Closed · mo-g closed this issue 1 year ago

mo-g commented 1 year ago

Describe the feature

Everything in my stream is a silent movie. I've ordered an i2s MEMS mic module to connect to the Pi, but does MediaMTX support recording audio when using the PiCam source? As far as I've seen, the only way is to move the camera sourcing outside of the application, using runOnDemand and GStreamer.

If I'm not mistaken, that workaround means MediaMTX can no longer control the Pi camera's autofocus over RTSP. Consequently, it would be better to support ALSA microphones directly when the Pi Camera is the video source. This would allow the use of any microphone supported by Linux, whether USB, i2s, SPI, i2c, PCI...
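For reference, a rough sketch of what that runOnDemand + GStreamer workaround might look like (element names, caps and the ALSA device are assumptions and would likely need tuning for a specific Pi setup):

```yml
# Hypothetical mediamtx.yml: camera sourcing is moved out of MediaMTX into a
# GStreamer process started on demand, which publishes video + audio back to
# the server. MediaMTX no longer drives the camera itself, so it loses control
# of things like autofocus.
paths:
  cam:
    runOnDemand: >
      gst-launch-1.0 rtspclientsink name=s location=rtsp://localhost:$RTSP_PORT/$MTX_PATH
      libcamerasrc ! video/x-raw,width=1920,height=1080,framerate=30/1
      ! v4l2h264enc ! 'video/x-h264,level=(string)4' ! h264parse ! s.
      alsasrc device=plughw:1 ! audioconvert ! audioresample ! opusenc ! s.
    runOnDemandRestart: yes
```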

aler9 commented 1 year ago

Hello, the main task behind this feature is integrating an audio encoder into the server.

There's an Opus decoder here, but not an encoder yet.

mo-g commented 1 year ago

How about https://github.com/izern/go-fdkaac? We're encoding h264 anyway, right? If we're not using av1/vp8/etc, it seems unnecessary to worry about "free formats" for the audio as well. Golame is also an option?

aler9 commented 1 year ago

@mo-g at the moment we're not encoding anything internally. If you mean the Raspberry Pi camera, its video is encoded in H264 by the Raspberry Pi itself, in hardware.

The issue with integrating a non-native Golang library like go-fdkaac is portability: the server becomes dependent on the external library and on libc, whose versions depend on the OS, so I would have to distribute separate binaries for each architecture and each OS.

The Opus library I linked, on the contrary, is written in native Go and can be fully embedded into the server.
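To make the portability point concrete, here is an illustrative comparison (the commands are hypothetical, not taken from the project's build scripts):

```sh
# Pure-Go dependencies: cross-compiling is one command per target,
# with no C toolchain or target libc involved.
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o mediamtx_linux_arm64 .

# A cgo binding such as go-fdkaac ties the build to C: each OS/arch pair
# needs a matching cross compiler plus the target's libc and libfdk-aac.
CGO_ENABLED=1 CC=aarch64-linux-gnu-gcc GOOS=linux GOARCH=arm64 go build -o mediamtx_linux_arm64 .
```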

mo-g commented 1 year ago

That's fair. I can't Go* to save my life or I'd throw up a PR with https://github.com/mewkiz/flac - but I get that most won't suffer the ~700kbps for audio, and it's too niche a feature at that bit rate for someone else to work on. I'll keep half an eye out for other codecs.

* And have enough projects without learning another language to port an audio encoder...

aler9 commented 1 year ago

The README has been updated (#1975) with instructions on how to add audio to an RPi Camera video stream.

The approach described in the README keeps the benefits of reading the camera with source: rpiCamera while delegating the audio encoding process to GStreamer, which addresses all the questions raised in this issue.

Adding native audio support to the server would bring no particular benefit over the described approach, therefore the feature won't be developed.
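For anyone landing here later, the combined setup looks roughly like this (a sketch of the idea, not a copy of the README; path names, ports and the ALSA device are assumptions):

```yml
paths:
  cam:
    # video only, encoded in hardware by the Pi itself
    source: rpiCamera

  cam_with_audio:
    # republish the camera stream and add Opus-encoded audio from an ALSA device
    runOnInit: >
      gst-launch-1.0 rtspclientsink name=s location=rtsp://localhost:8554/cam_with_audio
      rtspsrc location=rtsp://localhost:8554/cam latency=0 ! rtph264depay ! s.
      alsasrc device=plughw:1 ! audioconvert ! audioresample ! opusenc ! s.
    runOnInitRestart: yes
```

Clients then read the cam_with_audio path, while MediaMTX keeps full control of the camera (autofocus included) through the native rpiCamera source.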

mo-g commented 1 year ago

Great, looking forward to testing this!

github-actions[bot] commented 8 months ago

This issue is being locked automatically because it has been closed for more than 6 months. Please open a new issue in case you encounter a similar problem.