orottier / web-audio-api-rs

A Rust implementation of the Web Audio API, for use in non-browser contexts
https://docs.rs/web-audio-api/
MIT License

Possible issues with soundcards with number of channels > 32 #320

Closed b-ma closed 1 year ago

b-ma commented 1 year ago

disclaimer: I didn't run the actual tests yet, so I might be mistaken; I will try to do that in the coming weeks

The problem:

The question:

orottier commented 1 year ago

I'm not entirely sure what you are asking. I think by default we start with stereo output, so that should not fail. Then you should be able to change the settings (which was fixed in #319).

There's a small caveat: for a single render quantum we may have picked up the new channelCount but not yet the channelInterpretation, so the render thread will panic. It is therefore safer to reverse the operations:

```js
audioContext.destination.channelInterpretation = 'discrete';
audioContext.destination.channelCount = 12;
```
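To illustrate why the order matters: the two assignments travel to the render thread as separate control messages, so for one render quantum the mixer can observe the new channelCount together with the old channelInterpretation. A toy model in plain Rust (hypothetical types, not the crate's actual internals):

```rust
// Toy model of the render-quantum race. Settings arrive as individual
// control messages, applied one per render quantum, so an intermediate
// (12, Speakers) configuration is briefly visible.

#[derive(Clone, Copy, PartialEq, Debug)]
enum ChannelInterpretation {
    Speakers,
    Discrete,
}

#[derive(Clone, Copy)]
enum ControlMessage {
    SetChannelCount(usize),
    SetChannelInterpretation(ChannelInterpretation),
}

struct DestinationConfig {
    channel_count: usize,
    interpretation: ChannelInterpretation,
}

impl DestinationConfig {
    fn apply(&mut self, msg: ControlMessage) {
        match msg {
            ControlMessage::SetChannelCount(n) => self.channel_count = n,
            ControlMessage::SetChannelInterpretation(i) => self.interpretation = i,
        }
    }

    // The 'speakers' mixing rules are only defined for mono, stereo,
    // quad and 5.1; any other count needs 'discrete'.
    fn is_mixable(&self) -> bool {
        self.interpretation == ChannelInterpretation::Discrete
            || matches!(self.channel_count, 1 | 2 | 4 | 6)
    }
}

fn main() {
    let mut config = DestinationConfig {
        channel_count: 2,
        interpretation: ChannelInterpretation::Speakers,
    };

    // Unsafe order: count first, interpretation second.
    let messages = [
        ControlMessage::SetChannelCount(12),
        ControlMessage::SetChannelInterpretation(ChannelInterpretation::Discrete),
    ];
    for msg in messages {
        config.apply(msg);
        // After the first message the config is (12, Speakers),
        // which the mixer cannot satisfy.
        println!("mixable: {}", config.is_mixable());
    }
}
```

Reversing the two messages keeps the configuration valid at every intermediate step, which is exactly why setting the interpretation first is the safe ordering.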

I'm wondering what the best behaviour would be for unsatisfiable upmix operations:

I think I would favour the last option. Stuff like https://github.com/ircam-ismm/node-web-audio-api/issues/20 and https://github.com/ircam-ismm/node-web-audio-api/issues/23 are not great for end users.

b-ma commented 1 year ago

> I'm not entirely sure what you are asking.

Me neither :)

> I think by default we start with stereo output so that should not fail. Then you should be able to change the settings (which was fixed with https://github.com/orottier/web-audio-api-rs/pull/319).

Yup, you are right. I just don't really understand why he got this error: NotSupportedError - Invalid number of channels: 256 is outside range [1, 32] in https://github.com/ircam-ismm/node-web-audio-api/issues/23, as if the context tried to pick the max number of channels or something like that. I will check that directly with him next week.

> I'm wondering what the best behaviour would be for unsatisfiable upmix operations:

Yup, that's a bit tricky indeed; maybe the spec says something I'm not aware of, I need to dig into it. I can also check what Firefox and Chrome do there... but I'm not sure they are fully compliant themselves; my simple tests with 8 channels gave really weird results.
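For reference, the spec's 'discrete' rules are the easy case: up-mixing fills the extra output channels with silence, and down-mixing simply drops the excess input channels. A small sketch of those rules (illustrative only, not the crate's actual mixer):

```rust
// Sketch of the spec's 'discrete' up/down-mix behaviour for planar
// audio buffers: one Vec<f32> per channel, `frames` samples each.
fn mix_discrete(input: &[Vec<f32>], output_channels: usize, frames: usize) -> Vec<Vec<f32>> {
    (0..output_channels)
        .map(|ch| {
            // Copy the input channel if it exists, otherwise emit silence.
            input
                .get(ch)
                .cloned()
                .unwrap_or_else(|| vec![0.0; frames])
        })
        .collect()
}

fn main() {
    let stereo = vec![vec![1.0, 1.0], vec![0.5, 0.5]];

    // Up-mix stereo to 4 channels: channels 2 and 3 are silent.
    let up = mix_discrete(&stereo, 4, 2);
    assert_eq!(up[2], vec![0.0, 0.0]);

    // Down-mix stereo to mono: channel 1 is dropped.
    let down = mix_discrete(&stereo, 1, 2);
    assert_eq!(down.len(), 1);
    println!("ok");
}
```

The hard question in this thread is the 'speakers' interpretation, whose mixing matrices are only defined for a handful of layouts; 'discrete' at least has a well-defined answer for any channel count.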

b-ma commented 1 year ago

Another strategy (which I recognise would only solve part of the problem) would be to put the context in the suspended state by default instead of resumed. I am more and more inclined to think that would be cleaner (and actually compliant).

orottier commented 1 year ago

> Another strategy (which I recognise would only solve part of the problem) would be to put the context in the suspended state by default instead of resumed. I am more and more inclined to think that would be cleaner (and actually compliant).

I actually think that will not bring much to the table.

Booting up the audio thread takes about 60 ms on my machine, which is an eternity, so all relevant settings will already be queued before the audio starts running.

If I read the spec right, we are compliant by starting immediately ("A user agent may disallow this initial transition"): https://webaudio.github.io/web-audio-api/#allowed-to-start

b-ma commented 1 year ago

Hum right... but the spec also says that the state is set to suspended (cf. https://webaudio.github.io/web-audio-api/#dom-baseaudiocontext-control-thread-state-slot) when creating an AudioContext.

Actually, as I understand the quote "A user agent may disallow this initial transition", it does not say that you may avoid explicitly calling await audioContext.resume(), but rather that the user agent "may disallow" you from resuming the context even if you ask politely :). This is what happens in a browser if you call resume outside a user gesture (cf. https://developer.mozilla.org/en-US/docs/Web/API/UserActivation), or more precisely if no user gesture has been recorded in the session.

In our case, I would say that the context is "allowed to start" without further requirements, but we would still have to resume it explicitly.

orottier commented 1 year ago

> Yup, you are right. I just don't really understand why he got this error: NotSupportedError - Invalid number of channels: 256 is outside range [1, 32] in https://github.com/ircam-ismm/node-web-audio-api/issues/23, as if the context tried to pick the max number of channels or something like that. I will check that directly with him next week.

Be sure to check the version of the library. We also had the issue 137bf3dec4 where we would start with the max number of channels available.

> Hum right... but the spec also says that the state is set to suspended (cf. https://webaudio.github.io/web-audio-api/#dom-baseaudiocontext-control-thread-state-slot) when creating an AudioContext.

Interesting. But I think the relevant section is https://webaudio.github.io/web-audio-api/#AudioContext-constructors which states:

> 1. Let context be a new AudioContext object.
> 2. Set a [[control thread state]] to suspended on context. <-- we do indeed deviate here by eagerly setting to active
> 3. Set a [[rendering thread state]] to suspended on context.

.. many other steps ..

> If context is allowed to start, send a control message to start processing.

So, the way I read this, we should indeed start processing as soon as possible.
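The constructor steps above can be sketched as a tiny state machine (hypothetical names, not the crate's actual code): the context is constructed in the suspended state, and the final step immediately transitions it to running when the context is allowed to start.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum AudioContextState {
    Suspended,
    Running,
}

struct AudioContextSketch {
    state: AudioContextState,
}

impl AudioContextSketch {
    // Mirrors the spec's constructor algorithm: the context starts
    // suspended, then (only if allowed to start) a control message
    // transitions it to running.
    fn new(allowed_to_start: bool) -> Self {
        let mut ctx = AudioContextSketch {
            state: AudioContextState::Suspended,
        };
        if allowed_to_start {
            // "If context is allowed to start, send a control message
            // to start processing."
            ctx.state = AudioContextState::Running;
        }
        ctx
    }
}

fn main() {
    // Outside a browser there is no user-gesture requirement, so the
    // context can always be considered allowed to start.
    let ctx = AudioContextSketch::new(true);
    println!("{:?}", ctx.state);
}
```

Under this reading, the only deviation is skipping the momentary suspended state before the start message is processed, which is unobservable in practice.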

b-ma commented 1 year ago

> Be sure to check the version of the library. We also had the issue https://github.com/orottier/web-audio-api-rs/commit/137bf3dec4a2cf592d00471e833e620284b681aa where we would start with the max number of channels available.

Yup sure, I will recheck

> If context is allowed to start, send a control message to start processing.

Haha indeed, you got the point :)