hughrawlinson closed this issue 6 months ago
Previously discussed in #34546. Summary: very unlikely to happen.
There has been no activity on this feature request for 5 months. To help maintain relevant open issues, please add the `never-stale` label (https://github.com/nodejs/node/labels/never-stale), or close this issue if it should be closed. Otherwise, the issue will be automatically closed 6 months after the last non-automated comment. For more information on how the project manages feature requests, please consult the feature request management document.
Unfortunately, this issue seems extremely unlikely to be added to Node.js.
I'm closing this issue, but if you disagree, you can always request that it be reopened.
What is the problem this feature will solve?
At the moment, access to audio (microphone/input, desktop capture, and output) requires a native dependency, or a dependency that shells out to a utility such as sox or aplay. For audio developers targeting multiple platforms, this means ensuring either that the native dependency is compiled for each target platform, or that the secondary utilities are available at runtime. That is significant additional complexity for developers focused on audio, with implications for CI/CD (and its cost), and it can also introduce code-signing issues.
What is the feature you are proposing to solve the problem?
I think an audio API in Node directly would greatly improve the developer experience for those working on audio projects. But what features should a realtime audio API provide? At a minimum, it should address the tasks that are currently impossible in Node without native dependencies: a way to capture input audio from audio devices (and possibly desktop capture), and a way to output audio to any audio device or the default device. These should work across all of Node's target platforms where appropriate. JavaScript executes more than fast enough for realtime audio processing, so these two fundamental building blocks would let library authors implement processing frameworks and shim existing audio APIs such as Web Audio (an implementation based on a native dependency was started), MediaStream, and MediaRecorder.
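To illustrate the claim that JavaScript is fast enough for block-based realtime processing, here is a dependency-free sketch. The `processBlock` callback, block size, and sample rate are illustrative assumptions only, not a proposed API:

```javascript
// Sketch: apply a simple gain to one block of audio samples, then
// compare processing time against the block's realtime duration.
// A 128-sample block at 48 kHz represents ~2.67 ms of audio.
const SAMPLE_RATE = 48000;
const BLOCK_SIZE = 128;

// Hypothetical per-block callback, as a realtime API might invoke it.
function processBlock(input, output) {
  for (let i = 0; i < input.length; i++) {
    output[i] = input[i] * 0.5; // simple gain stage
  }
}

// Fill a block with a 440 Hz sine wave as test input.
const input = new Float32Array(BLOCK_SIZE);
for (let i = 0; i < BLOCK_SIZE; i++) {
  input[i] = Math.sin((2 * Math.PI * 440 * i) / SAMPLE_RATE);
}
const output = new Float32Array(BLOCK_SIZE);

const start = process.hrtime.bigint();
processBlock(input, output);
const elapsedNs = Number(process.hrtime.bigint() - start);

// The block "plays back" in ~2.67e6 ns; processing it takes far less.
const blockDurationNs = (BLOCK_SIZE / SAMPLE_RATE) * 1e9;
console.log(`processed in ${elapsedNs} ns; block duration ${blockDurationNs.toFixed(0)} ns`);
```

Everything here already works in userland; what's missing is a built-in way to get `input` from a device and deliver `output` to one.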
This feature would be useful in a variety of domains, including live music, music recording, podcasting, audio-based IoT sensors, and applying machine learning models to realtime audio.
What alternatives have you considered?
One alternative is to follow the web's lead and implement the Web Audio API (and potentially the Media Capture and Streams API) directly. This has the advantage of extending support to codebases that already use those APIs, at the cost of increased complexity within Node.
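To make the shim idea from the proposal concrete: much of a Web Audio-style processing graph can already be approximated in pure JavaScript for offline use, with device I/O being the missing piece. The class below only mimics the shape of the real Web Audio `GainNode`, not its actual interface:

```javascript
// Minimal offline sketch of a Web Audio-style node graph in userland JS.
// Real device input/output is what this feature request asks Node to add;
// the graph and processing logic need no native code.
class GainNode {
  constructor(gain) {
    this.gain = gain;
    this.next = null;
  }
  connect(node) {
    this.next = node;
    return node; // allow chaining, as in Web Audio
  }
  process(block) {
    const out = block.map((s) => s * this.gain);
    return this.next ? this.next.process(out) : out;
  }
}

// Chain two gain stages: 0.5 * 2.0 restores the original signal.
const chain = new GainNode(0.5);
chain.connect(new GainNode(2.0));

const result = chain.process(Float32Array.from([0.1, -0.25, 0.9]));
```

A built-in device API would let a graph like this pull its input blocks from a microphone and push its output blocks to a speaker, which is exactly the gap a userland Web Audio shim cannot fill today.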