pendragon-andyh / WebAudio-PulseOscillator

Create a Pulse Oscillator using the Web Audio API
MIT License

Waveform is not bandlimited #2

Open rsimmons opened 7 years ago

rsimmons commented 7 years ago

The generated pulse waveform is not "bandlimited". This may require major changes to fix, but it might be worth adding a disclaimer in the docs at least, since the aliasing can produce audible artifacts.

All the waveforms generated by the builtin Web Audio OscillatorNode are bandlimited (per the spec). The relevant part of the spec explains:

Mathematically speaking, a continuous-time periodic waveform can have very high (or infinitely high) frequency information when considered in the frequency domain. When this waveform is sampled as a discrete-time digital audio signal at a particular sample-rate, then care must be taken to discard (filter out) the high-frequency information higher than the Nyquist frequency before converting the waveform to a digital form. If this is not done, then aliasing of higher frequencies (than the Nyquist frequency) will fold back as mirror images into frequencies lower than the Nyquist frequency. In many cases this will cause audibly objectionable artifacts. This is a basic and well understood principle of audio DSP.

One way to generate a bandlimited pulse wave is to subtract two bandlimited sawtooth waves, so that might be a way to fix it if you were so inclined.
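The saw-subtraction idea can be sketched per-sample, independent of any Web Audio node graph. This is a minimal illustration, not code from this repo: a bandlimited sawtooth is a truncated Fourier series, and subtracting a copy shifted by the duty cycle yields a bandlimited pulse.

```javascript
// Bandlimited (truncated Fourier series) sawtooth.
// `phase` is in cycles (0..1); `harmonics` partials are summed,
// so choose harmonics <= sampleRate / (2 * frequency) to stay
// below Nyquist.
function bandlimitedSaw(phase, harmonics) {
  let sum = 0;
  for (let n = 1; n <= harmonics; n++) {
    sum += Math.sin(2 * Math.PI * n * phase) / n;
  }
  return (2 / Math.PI) * sum; // ideal saw spans -1..1
}

// Pulse with duty cycle `width` (0..1): the difference of two saws
// offset by `width` cycles. Subtracting (1 - 2 * width) centers the
// two levels at +/-1 (note this reintroduces the pulse's inherent
// DC component when width != 0.5).
function bandlimitedPulse(phase, width, harmonics) {
  return bandlimitedSaw(phase, harmonics)
       - bandlimitedSaw(phase - width, harmonics)
       - (1 - 2 * width);
}
```

Because each sawtooth's series stops below Nyquist, the difference contains no partials above Nyquist either, which is exactly the property the naive waveform lacks.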

pendragon-andyh commented 7 years ago

The problem with subtracting two bandlimited sawtooth waves is that it makes pulse-width modulation difficult.

A possible alternative would be to run the output through a high-pass filter that tracks just below the oscillator's frequency. This seems to be the technique that the JP-8000 used with its supersaw oscillator (www.ghostfact.com/jp-8000-supersaw/).
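One way to wire up such a tracking filter in Web Audio is to drive both the oscillator pitch and the filter cutoff from a single control signal. The following is a rough sketch, not this project's code: the function name, the 0.9x cutoff ratio, and the omission of the pulse-shaping stage are all assumptions for illustration.

```javascript
// Sketch: a highpass filter whose cutoff tracks just below the
// oscillator's fundamental. The pulse-shaping stage is omitted;
// only the tracking wiring is shown.
function createTrackedPulse(ctx, frequency = 220) {
  // One control signal drives both pitch and cutoff, so any
  // automation of `pitch.offset` moves them together.
  const pitch = new ConstantSourceNode(ctx, { offset: frequency });

  const osc = new OscillatorNode(ctx, { type: 'sawtooth', frequency: 0 });
  pitch.connect(osc.frequency);

  // Cutoff sits at 0.9x the fundamental (the exact ratio is a guess).
  const hpf = new BiquadFilterNode(ctx, { type: 'highpass', frequency: 0 });
  const scale = new GainNode(ctx, { gain: 0.9 });
  pitch.connect(scale).connect(hpf.frequency);

  osc.connect(hpf);
  pitch.start();
  osc.start();
  return { output: hpf, pitch: pitch.offset };
}
```

Note this approach attenuates the aliased components that fold down below the fundamental, but it cannot remove aliases that land between the fundamental and Nyquist.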

I plan to do some more investigation - and then republish this project.

sgentle commented 7 years ago

It should be possible to do this by using ConstantSourceNodes to represent the parameters and routing them through the right set of audio nodes into the DelayNode to do the calculation. I partly implemented this here (I automated the frequency but not the width), but there's no reason it wouldn't work for both.

You'd just need to replace the width with another ConstantSourceNode and connect it to dutyCycle.gain. Automating the DC offset could be done via a GainNode and another ConstantSourceNode, but I don't really understand why the DC offset calculation is the way it is, so there may be a simpler way to do it.
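The suggestion above can be sketched against the common saw-plus-comparator construction (sawtooth summed with a width offset, then hard-clipped by a WaveShaperNode). This is an illustration, not this repo's actual code: the function names are hypothetical, and the DC-cancellation step rests on the observation that for an ideal +/-1 sawtooth the shaped output's mean equals the width offset.

```javascript
// Comparator transfer curve: inputs below the midpoint map to -1,
// inputs above it map to +1.
function makePulseCurve(length = 256) {
  const curve = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    curve[i] = i < length / 2 ? -1 : 1;
  }
  return curve;
}

function createModulatedPulse(ctx, frequency = 110) {
  const osc = new OscillatorNode(ctx, { type: 'sawtooth', frequency });

  // Width as an audio-rate signal: an LFO or envelope can connect to
  // widthSource.offset to modulate the duty cycle.
  const widthSource = new ConstantSourceNode(ctx, { offset: 0 });

  // Saw plus width offset, hard-clipped to +/-1: raising the offset
  // pushes more of each cycle above the comparator threshold.
  const shaper = new WaveShaperNode(ctx, { curve: makePulseCurve() });
  osc.connect(shaper);
  widthSource.connect(shaper); // inputs to a node are summed

  // For an ideal +/-1 saw, the pulse's mean equals the width offset,
  // so inverting widthSource and summing cancels the DC component.
  const dcCancel = new GainNode(ctx, { gain: -1 });
  const output = new GainNode(ctx);
  shaper.connect(output);
  widthSource.connect(dcCancel).connect(output);

  osc.start();
  widthSource.start();
  return { output, width: widthSource.offset };
}
```

Here `width` is exposed as an AudioParam, so modulators connect to it exactly as they would to any builtin parameter. Note this wiring only addresses parameter automation; the output is still not bandlimited.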