WebAudio / web-audio-cg

W3C Web Audio Community Group
https://www.w3.org/community/audio-comgp/

Accessing input signal in AudioContext of Audio Device Client #6

Open chrisguttandin opened 5 years ago

chrisguttandin commented 5 years ago

I wonder how one would access the signal of the input device inside an associated AudioContext.

It would be possible with the current API, but I don't think it's very elegant. The following would, for example, apply a simple gain to the input:

// inputDeviceId is a variable which holds the device id of the desired input device.
const client = await navigator.mediaDevices.getAudioDeviceClient({ inputDeviceId });
const mediaStream = await navigator.mediaDevices.getUserMedia({ audio: { deviceId: { exact: inputDeviceId } } });
const context = client.getContext();
const mediaStreamSourceNode = new MediaStreamAudioSourceNode(context, { mediaStream });
const gainNode = new GainNode(context, { gain: 0.5 });

mediaStreamSourceNode
    .connect(gainNode)
    .connect(context.destination);
hoch commented 5 years ago

If you're using ADC, ADC should directly give you the stream from the input device. (The callback function has the input data.)

Or do you mean there might be a use case that needs multiple input streams (ADC and getUserMedia) at the same time? I don't think that is impossible, but as you pointed out, this "not elegant" way seems to be the only way.
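For the single-input case, the processing could then live entirely in the callback. A minimal sketch of such a render step, assuming the callback receives the input and output as arrays of per-channel Float32Arrays (the exact signature in the ADC proposal may differ):

```javascript
// Applies a gain to each channel of the input and writes the result to
// the output. Both `input` and `output` are arrays of Float32Arrays,
// one per channel, as an ADC-style callback might receive them.
function renderWithGain(input, output, gain) {
  for (let channel = 0; channel < output.length; channel += 1) {
    const inputChannel = input[channel];
    const outputChannel = output[channel];

    for (let sample = 0; sample < outputChannel.length; sample += 1) {
      outputChannel[sample] = inputChannel[sample] * gain;
    }
  }
}
```

With such a callback there would be no need to route the input through a MediaStreamAudioSourceNode at all.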

chrisguttandin commented 5 years ago

I think I am confused about the role of an AudioContext when used with an ADC. If I understand it correctly, the AudioContext is currently just another input alongside the input device. My mental model looks something like this:

┌────────────┐ ┌────────────┐
│AudioContext│ │input device│
└────────────┘ └────────────┘
       │             │
    ┌───────────────────┐
    │   ADC callback    │
    └───────────────────┘
              │
    ┌───────────────────┐
    │   output device   │
    └───────────────────┘

The ADC callback has access to the latest buffers from the AudioContext and the input device and can combine or ignore those at will to produce the buffer for the output device. I can't really think of a use case in which one would want both an AudioContext and the raw stream of the input device at the same time. Maybe it makes sense to have either an AudioContext or an input device.
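In that mental model the callback is effectively a mixer. A minimal sketch of the combine step, assuming the callback gets one Float32Array per source per channel (the names and shapes here are hypothetical, not part of the ADC proposal):

```javascript
// Mixes the latest AudioContext render and the latest input device
// buffer into the output buffer by summing them with equal weight.
// All three arguments are Float32Arrays of the same length.
function mixIntoOutput(contextBuffer, inputBuffer, outputBuffer) {
  for (let sample = 0; sample < outputBuffer.length; sample += 1) {
    outputBuffer[sample] = 0.5 * (contextBuffer[sample] + inputBuffer[sample]);
  }
}
```

Ignoring one of the two sources would just mean copying the other one straight through instead of summing.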

Another idea would be to somehow pipe the input signal through the AudioContext to allow it to be modified with regular AudioNodes. However, I have no idea what the API for something like this should look like:

   ┌────────────┐
   │input device│
   └────────────┘
         │
   ┌────────────┐
   │AudioContext│
   └────────────┘
         │
┌──────────────────┐
│   ADC callback   │
└──────────────────┘
         │
┌───────────────────┐
│   output device   │
└───────────────────┘
hoch commented 5 years ago

Notes from 4/4/2019:

chrisguttandin commented 5 years ago

I think it's obvious, but maybe worth mentioning, that the input will not be the same anymore if echo cancellation, auto gain control, or something like that is activated through the call to getUserMedia().
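That processing can be turned off explicitly via the standard MediaTrackConstraints when calling getUserMedia(). A small sketch of a helper that builds such constraints (the helper name is made up; browser support for each constraint varies):

```javascript
// Builds getUserMedia() constraints which ask the browser for the raw
// signal of the given input device, without echo cancellation, auto
// gain control or noise suppression applied.
function createRawAudioConstraints(inputDeviceId) {
  return {
    audio: {
      deviceId: { exact: inputDeviceId },
      echoCancellation: false,
      autoGainControl: false,
      noiseSuppression: false
    }
  };
}
```

It could then be used like `navigator.mediaDevices.getUserMedia(createRawAudioConstraints(inputDeviceId))` in the example above, although a browser is still free to ignore these hints unless they are wrapped in `exact`.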