JohnWeisz closed this issue 7 years ago.
You should be able to create a waveshaper node that applies a reciprocal. The resulting signal can then be directed to the gain parameter of a gain node.
I always think of waveshaper as applying "f(x)".
If you need a wider range than -1 to +1, then you can use another gain node to scale the input signal of the waveshaper.
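A minimal sketch of that routing, using the oscA, oscB, and audioCtx names from the quoted issue below (an editorial illustration, not code from the thread; the curve length and the clamp applied near zero are arbitrary choices):

// Build a WaveShaperNode whose curve approximates f(x) = 1/x over the input range [-1, 1].
var curve = new Float32Array(4096);
for (var i = 0; i < curve.length; i++) {
  var x = (i / (curve.length - 1)) * 2 - 1;                    // map curve index to [-1, 1]
  curve[i] = (x < 0 ? -1 : 1) / Math.max(Math.abs(x), 0.01);   // clamp |1/x| to 100 near zero
}
var reciprocal = audioCtx.createWaveShaper();
reciprocal.curve = curve;

// oscA / oscB == oscA * (1 / oscB): feed the reciprocal of oscB into a gain AudioParam,
// i.e. the multiplication trick from the quoted issue below.
var divider = audioCtx.createGain();
divider.gain.value = 0;             // cancel the default gain of 1 (see rtoy's note further down)
oscA.connect(divider);
oscB.connect(reciprocal);
reciprocal.connect(divider.gain);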
On 28 Feb 2017 17:17, "John White" notifications@github.com wrote:
I'm wondering if there is a way to divide an audio signal by another one without having to resort to scripted audio processing (which, unfortunately, is really not usable for anything slightly more complex than a tech demo).
I managed to implement all other basic math ops (addition, subtraction, multiplication), but I'm struggling to achieve division. Is it possible at all?
For clarification about what I mean about math ops between audio signals, consider two audio sources, two OscillatorNodes https://developer.mozilla.org/en-US/docs/Web/API/OscillatorNode for example, oscA and oscB:
var oscA = audioCtx.createOscillator();
var oscB = audioCtx.createOscillator();
Now, consider that these oscillators are LFOs, both with low (i.e. <20Hz) frequencies, and that their signals are used to control a single destination AudioParam https://developer.mozilla.org/en-US/docs/Web/API/AudioParam, for example, the gain of a GainNode https://developer.mozilla.org/en-US/docs/Web/API/GainNode. Through various routing setups, we can define mathematical operations between these two signals.
Addition
If oscA and oscB are both directly connected to the destination AudioParam, their outputs are added together:
var dest = audioCtx.createGain();
oscA.connect(dest.gain);
oscB.connect(dest.gain);
Subtraction
If the output of oscB is first routed through another GainNode with a gain of -1, which is then connected to the destination AudioParam, then the output of oscB is effectively subtracted from that of oscA, because we are effectively doing an oscA + -oscB op. Using this trick we can subtract one signal from another one:
var dest = audioCtx.createGain();
var inverter = audioCtx.createGain();
oscA.connect(dest.gain);
oscB.connect(inverter);
inverter.gain.value = -1;
inverter.connect(dest.gain);
Multiplication
Similarly, if the output of oscA is connected to another GainNode, and the output of oscB is connected to the gain AudioParam of that GainNode, then oscB is multiplying the signal of oscA (amplitude-wise):
var dest = audioCtx.createGain();
var multiplier = audioCtx.createGain();
oscA.connect(multiplier);
oscB.connect(multiplier.gain);
multiplier.connect(dest.gain);
So how to do a division?
Yes, using a WaveShaperNode is the only way.
Also, in the multiplication example, you have to be sure to set the gain attribute of the gain multiplier to 0. Otherwise the default gain of 1 is summed with the modulating signal, and the output becomes oscA * (1 + oscB) rather than the pure product. (I always forget this.)
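Applied to the multiplication example quoted above, that means one extra line (editorial sketch):

var dest = audioCtx.createGain();
var multiplier = audioCtx.createGain();
multiplier.gain.value = 0;          // without this line the output is oscA * (1 + oscB)
oscA.connect(multiplier);
oscB.connect(multiplier.gain);
multiplier.connect(dest.gain);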
@pendragon-andyh
You should be able to create a waveshaper node that applies a reciprocal.
@rtoy
Yes, using a WaveShaperNode is the only way.
Thank you, with the help of WaveShaperNode it seems virtually all calculations can be done. Perfect for these use cases.
EDIT: Well, not perfect, but it gets the job done.
@rtoy
be sure to set the gain attribute of the gain multiplier to 0
Yes, thank you, that's taken care of, but I accidentally left it out from the example.
Based on the comments, I think we can close this. I don't expect that a new feature or node will be added to support division.
Tone.js used to have code to do exactly this, but the division feature is now deprecated. Here is how it used to be done:
I agree we can close this.
Sure, there are hacks you can do to achieve this using nodes that weren't designed for it, but why not just provide basic math operations in the Web Audio API? What's the harm in that? Surely it would be far more performant for these operations to be done completely natively. Would this really be a controversial addition to the Web Audio API? Would it be difficult to implement? Would it not just be a couple of lines of code containing / to implement this?
@mbylstra I've been looking at the code, and it certainly would be trivial to implement.
The committee would have to come up with the specs, however, and I'm not sure they are ready to go full-time into control signal processing (after all, there are other implications when taking this route, such as reduced sample rate for improved efficiency, as you will likely not need 44.1 kHz to automate a gain audio param).
There's one use case I'd like to bring up: taking the reciprocal of a frequency to get the period. That's important if you want to delay one synth in a frequency-dependent way relative to another. When the frequencies are static, you can obviously just set the delay, but if you want to apply any LFO or envelope automation it gets trickier.
I specifically ran into this issue trying to create a pulse wave. I originally wanted to use @pendragon-andyh's solution (sawtooth into a thresholding WaveShaper), but ran into problems with bandlimiting outlined here: pendragon-andyh/WebAudio-PulseOscillator#2. So I then went with the subtract-one-offset-saw-from-another method, but that's tricky to automate because of the above issue and I wanted vibrato.
In the end, I used an inverting WaveShaper as in @padenot's Tone.js, but I'm left with lingering concerns. Looking at Inverse.js, it appears to do multiple rounds of gain and waveshaping depending on the desired accuracy. I only did one round, and it's a bit tricky to reason about whether that's enough precision.
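For what it's worth, one way to add a refinement round on top of a single reciprocal shaper is a Newton-Raphson step, y1 = y0 * (2 - x * y0), built from the same gain-node multiplication trick. This is an editorial sketch and not necessarily what Inverse.js does; here x is assumed to be the divisor signal and y0 the rough reciprocal coming out of the shaper:

// correction = 2 - x * y0
var two = audioCtx.createConstantSource();
two.offset.value = 2;
two.start();
var prod = audioCtx.createGain();
prod.gain.value = 0;
x.connect(prod);                  // prod = x * y0
y0.connect(prod.gain);
var negProd = audioCtx.createGain();
negProd.gain.value = -1;
prod.connect(negProd);
var correction = audioCtx.createGain();   // a gain node sums its inputs: 2 + (-x * y0)
two.connect(correction);
negProd.connect(correction);

// y1 = y0 * (2 - x * y0): roughly squares the relative error of y0
var y1 = audioCtx.createGain();
y1.gain.value = 0;
y0.connect(y1);
correction.connect(y1.gain);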
If the current best solution to "I want to delay one oscillator by a multiple of the period of another" is to rebuild an approximation on top of an arbitrary number of WaveShapers, I think it suggests a reciprocal or division node might be useful.
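For concreteness, here is a rough sketch of the frequency-dependent delay described above (an editorial illustration, not code from the thread; the maxFreq bound, the vibrato settings, and the single-round reciprocal curve are all assumptions):

// Reciprocal shaper, as earlier in the thread.
var curve = new Float32Array(4096);
for (var i = 0; i < curve.length; i++) {
  var x = (i / (curve.length - 1)) * 2 - 1;
  curve[i] = (x < 0 ? -1 : 1) / Math.max(Math.abs(x), 0.001);   // clamp near zero
}
var reciprocal = audioCtx.createWaveShaper();
reciprocal.curve = curve;

// Frequency control signal in Hz: base pitch plus vibrato LFO.
var maxFreq = 2000;                        // assumed upper bound of the control signal
var baseFreq = audioCtx.createConstantSource();
baseFreq.offset.value = 220;
var vibrato = audioCtx.createOscillator();
vibrato.frequency.value = 5;
var vibratoDepth = audioCtx.createGain();
vibratoDepth.gain.value = 10;              // +/- 10 Hz of vibrato
vibrato.connect(vibratoDepth);

// The control signal drives the oscillator's frequency...
var osc = audioCtx.createOscillator();
osc.frequency.value = 0;                   // the connected signal fully defines the frequency
baseFreq.connect(osc.frequency);
vibratoDepth.connect(osc.frequency);

// ...and, normalized into the shaper's [-1, 1] input range, its reciprocal gives the period.
var normalize = audioCtx.createGain();
normalize.gain.value = 1 / maxFreq;
baseFreq.connect(normalize);
vibratoDepth.connect(normalize);
normalize.connect(reciprocal);
var toSeconds = audioCtx.createGain();     // (1 / maxFreq) * (maxFreq / f) = 1 / f, in seconds
toSeconds.gain.value = 1 / maxFreq;
reciprocal.connect(toSeconds);

// The period then drives a DelayNode, so the delayed copy trails by exactly one cycle.
var delay = audioCtx.createDelay(1.0);
delay.delayTime.value = 0;
toSeconds.connect(delay.delayTime);
osc.connect(audioCtx.destination);         // dry voice
osc.connect(delay);
delay.connect(audioCtx.destination);       // voice delayed by one period
baseFreq.start();
vibrato.start();
osc.start();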