Closed: hughrawlinson closed this issue 5 years ago
@echo66 That's clear, let's see how it works. Could you tell me which code you use to compute the FFT/IFFT and the complex-number operations?
Sure! I used this one: https://github.com/drom/fourier
One tip: if you can avoid allocating new arrays during the processing, you can avoid certain issues with the garbage collector. In the example I posted here, I'm using Array.concat and Array.splice, and, AFAIK, those create new arrays.
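A minimal sketch of the pattern (the names and frame size here are illustrative, not taken from the example above):

```js
// Allocate once, outside the audio callback.
var FRAME_SIZE = 1024;
var scratch = new Float32Array(FRAME_SIZE);

function processBlock(input, output) {
  // Reuse the same Float32Array on every call instead of building new arrays
  // with Array.concat / Array.splice, so the GC has nothing new to collect.
  scratch.set(input);
  // ... do the actual DSP in place on `scratch` ...
  output.set(scratch);
}
```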
Things like FFT don't belong in the Web Audio API because they aren't specific to audio and don't need to be coupled with an AudioContext. They're just operations on arrays.
But aren't filters "not specific to audio" and "just operations on arrays"? And convolution? And any other DSP technique?
The problem seems to shift constantly: sometimes it's 'they should not be owned by WAA' (but we have filters and convolvers and waveshapers, which are not exclusive to the audio domain), sometimes it's 'there are already implementations in JavaScript' (but they're considerably slower, and the same can be said for filters and convolvers, no?), sometimes it's 'they change domain, what will we do when we connect those nodes to other nodes?' (while they would probably just be another type of AudioWorker, and what an AW spits out is entirely the user's responsibility).
I really fail to see what's wrong in audio developers asking for native FFT.
It's because FFT in JavaScript is generally slow and we generally don't use it, because when we do, it feels like we're stuck in '97 with Cubase 3.5 VST, when phase-vocoding made the CPU rise to 90%, and we thought we had solved that particular problem 20 years ago.
@janesconference , +9001.
@janesconference, do you know of any reason (even a remote one) why only the AnalyserNode doesn't expose phase? Seriously, for as long as I can remember, I've never understood that design choice. I'm really, really curious about it.
@echo66 @janesconference, I fully support this!
@janesconference, I would not say that filters/convolution are just operations on arrays; they're operations on infinite streams of numbers. Not quite the same thing as being inherently audio, but much closer to it than FFTs are.
BUT, I agree with everything else you said. It's frustrating waiting for the Web platform to catch up with the rest of the world. And my point (that FFT should be in a different API) doesn't help with that, since dealing with more than one committee is even worse than dealing with one.
@echo66, I think the AnalyserNode is the way it is (including the odd spelling) because, pre-standardization, someone (Chris Rogers?) thought it would be cool to be able to make "spectrum analyzer" types of visualizations, and made it happen. Committees slow things down (hence our frustration), but without some kind of review process, you end up with strange decisions like that.
(And I'm not on the committee either, although I'm strongly considering crashing the face-to-face meeting in 2 weeks, since it happens to be around the block from my home.)
@adelespinasse , thanks for the sincere opinion! :)
Well, now I can "sense the pain" that one of my thesis supervisors mentions when talking about Semantic Web committees.
I think it's necessary to reiterate here that the lack of phase spectrum output isn't the only issue with AnalyserNode – another one is the fact that we have no way of knowing which part of the audio stream corresponds to the current magnitude data from AN. Even though you can get the synchronized time domain buffer from it, you would have to then figure out where that particular buffer belongs in the source to do any kind of further work with it. You'd also still be interfacing with the node from the JS thread.
This restricts the usability of the node to visualisation of the frequency spectrum, which IMO offers too few possibilities to justify it being a separate node. In fact, I'd totally understand if this node was a JS library.
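For context, this is roughly everything the node offers today (standard AnalyserNode calls; `audioContext` and `source` are assumed to already exist):

```js
var analyser = audioContext.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);

var magnitudes = new Float32Array(analyser.frequencyBinCount); // 1024 bins, dB magnitudes
var timeDomain = new Float32Array(analyser.fftSize);

function draw() {
  analyser.getFloatFrequencyData(magnitudes);   // magnitude spectrum only, no phase
  analyser.getFloatTimeDomainData(timeDomain);  // matching time-domain snapshot
  // ... paint a visualisation; there is no way to tell which part of the
  // stream this snapshot corresponds to, so analysis stops here ...
  requestAnimationFrame(draw);
}
draw();
```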
Another option would be to simply upgrade the AnalyserNode by giving it an onaudioprocess event handler and exposing the FFT data within that scope. However, that does sound a lot like a frequency-domain version of the ScriptProcessor, and I have a feeling nobody really likes the ScriptProcessor :)
I find it interesting that there is a need to defend to the audio programming community that the FFT is important for the WebAudio API. To address a couple of points brought up:
The FFT is an operation on an infinite stream of numbers as well. It happens to create an array from a portion of that stream, but those arrays themselves form a stream, they just cover a longer time span than a single sample. In the end, the FFT allows users to deal with sound as a spectrum instead of just as streams of samples. This is perhaps more inherently audio than a stream of samples.
Also, all of the processors that we have mentioned are not inherently audio: filtering, convolution (time-domain or frequency-domain), and modulation are used anywhere waves exist (or streams of numbers, for that matter), from coastal modeling to gravitational and electromagnetic waves. It's all the same DSP. The point is, it can be used for audio and should be available through the WebAudio API if we're trying to enable a flexible audio programming environment in the browser.
At any rate, here's my reasoning for including it. Because the FFT/IFFT is a computationally expensive process and poses the difficult programming problems of overlapping frames, windowing, etc., having a built-in way to handle it would let frequency-domain processing code develop without being encumbered by the specifics of getting into and out of the frequency domain. It should be noted that most (if not all? SuperCollider, Pd, Max, ChucK, Jamoma, Csound, ...) audio programming languages/frameworks out there provide exactly this: an FFT and IFFT object (code/function/whatever) that lets you work in the frequency domain without the details of the transform. Also, working with audio in the frequency domain is only growing in its importance in DSP, audio analysis, synthesis, etc.
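To make the "overlapping frames, windowing" point concrete, this is roughly the boilerplate every frequency-domain effect has to re-implement today (fft/ifft are placeholders for whatever library you use; the rest is generic):

```js
var FRAME = 1024, HOP = FRAME / 4;                 // 75% overlap
var hann = new Float32Array(FRAME);
for (var i = 0; i < FRAME; i++) {
  hann[i] = 0.5 * (1 - Math.cos(2 * Math.PI * i / (FRAME - 1)));
}

function stft(input, output, spectralFn) {
  var frame = new Float32Array(FRAME);
  for (var pos = 0; pos + FRAME <= input.length; pos += HOP) {
    for (var i = 0; i < FRAME; i++) frame[i] = input[pos + i] * hann[i];
    var spectrum = fft(frame);                     // placeholder: forward transform
    spectralFn(spectrum);                          // the part people actually want to write
    var resynth = ifft(spectrum);                  // placeholder: inverse transform
    for (var j = 0; j < FRAME; j++) output[pos + j] += resynth[j] * hann[j]; // overlap-add
  }
}
```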
I'm also a fan of getting a wavelet transform into the API, but it isn't used much in the audio realm yet. Its day will come...
Thanks for the corrections guys, but the FFT (fast Fourier transform) is just an algorithm for implementing the DFT (discrete Fourier transform), which operates on finite lists of samples and spits out a finite series of complex sinusoids, at least as far as I can remember.
Regarding splitting WAA and having a hypothetical DSP API, I completely agree. But, most of all, it would be nice to have a way to do efficient DSP inside the browser. Currently, whoever wants to do serious signal processing is forced to go native.
If we want the open web to compete with apps (at least in the audio niche), we should have a way to do processing at a comparable rate of efficiency.
I don't think that's WebAudio's fault. Personally, I would go talk to my favorite Javascript implementor and encourage him to make the compiler do a better job compiling up DSP code. Then everybody wins everywhere instead of this one narrow case in WebAudio.
Ray
I don't think that's WebAudio's fault. Personally, I would go talk to my favorite Javascript implementor and encourage him to make the compiler do a better job compiling up DSP code. Then everybody wins everywhere instead of this one narrow case in WebAudio.
This is not a narrow case in Web Audio. This is about having native DSP in the browser, which is comparatively easier and, most of all, more immediate than rewriting V8 to reach near-native speeds for DSP. Optimizing a JavaScript compiler to perform comparably to optimized C/C++ code in a CPU-bound operational subset seems far more unrealistic than hooking FFT code up to a worker, exactly like you did for all the other AudioNodes.
Or maybe I'm wrong, and we should do as you say: next stop, we'll rewrite libjingle in JavaScript and bully compiler vendors into making it go faster at suppressing echo in realtime communications.
Maybe the miscommunication, here, is that you're talking at a philosophical level and I'm talking about what we can do now (or in the immediate future), pragmatically, with the open Web.
I still don't understand why we have a freaking convolver, a sophisticated 3d panner, an analyzer node (that most probably uses an FFT implementation internally), but when it comes to have frequency-domain workers, it's a big horrified no-no.
When I learned about Web Audio, I was happy because working with sound in JavaScript became almost as simple and understandable for musicians with some programming background as Max/MSP, Pd, etc. But the lack of nodes for working with the spectrum, with natively optimized windowing and so on, is a little disappointing. :( I would like to know: will this come in the future, or not even in principle?
@echo66 When I use fourier.js I observe very, very high CPU usage and unstable browser behaviour. Am I using the library correctly? For the IFFT I do the same, just with fourier.idft. Code:
```js
var processor = audioContext.createScriptProcessor(1024, 2, 2);
processor.onaudioprocess = function (audioProcessingEvent) {
  var inputBuffer = audioProcessingEvent.inputBuffer;
  var outputBuffer = audioProcessingEvent.outputBuffer;
  var inputDataL = inputBuffer.getChannelData(0);
  var inputDataR = inputBuffer.getChannelData(1);
  var outputDataL = outputBuffer.getChannelData(0);
  var outputDataR = outputBuffer.getChannelData(1);
  var fft = fourier.dft(inputDataL, inputDataR);
  for (var sample = 0; sample < inputBuffer.length; sample++) {
    outputDataL[sample] = fft[0][sample];
    outputDataR[sample] = fft[1][sample];
  }
};
```
I guess it's a difference in philosophy. As I said, I never considered WebAudio as a general DSP framework, so missing some DSP stuff isn't a show stopper for me. If you think WebAudio is a DSP framework, then, yeah, it's missing tons of stuff and includes lots of strange things. And certainly that philosophy colors your expectations.
Ray
Hi all!
I actually like the idea of an AudioFrequencyWorkerNode, but I totally agree that FFT should be provided natively in general, in order to get the most performance out of it. FFT is interesting for a bunch of things, not only audio processing. So in the best-case scenario you would get such a node, natively fast, by writing just a few lines of JavaScript code under your control. There are a lot of things to consider, such as window size, window shifting and window shape. This should all be in your hands, which might blow up the practicability of the initial idea from the start.
I personally think and hope that the Web Audio API is considered a vision for the future. It is currently a rather simple API with a bunch of modules. A Web Audio API should be more than just a pure audio processing chain. It should enable everybody to create playful things, but also very complex applications, up to a DAW. For that you need to be able to divide the current audio block globally, to enable smooth looping inside your arrangement. You need to be able to have nodes inside the audio thread that distribute events (sequencing) like notes and automation to your audio-synthesis nodes. In the end it is all about solving a proper graph, which is already done. It should not be the task of the UI thread to provide events, unless they are UI events like turning a knob. The big picture is then to promote it to different platforms such as Android and iOS, trying to make it a standard for everybody to use. A man can dream :)
For Audiotool, we would love to re-create all our devices within the Web Audio API, but we are targeting different platforms in the future. So for now we are stuck creating everything inside a single ScriptProcessor, not getting the benefit of multi-threading, but looking forward to AudioWorker, where we can still only use one node since it lacks the important features mentioned above.
This message probably deserves its own thread, but I had been following the AudioFrequencyWorkerNode idea for a while and it inspired this conclusion :)
André Michelle http://www.audiotool.com
Heey, Andre! First of all, great work on Audiotool! You and I have a similar dream. :)
By the way, due to this major issue with WAA, I'm looking into ActionScript alternatives. 10 minutes ago I found out about one of your audio frameworks for AS. Tomorrow I will try AS for the first time. I'm kind of tired of this issue with FFT and WAA. After all, I have a thesis prototype to implement.
So, to add my two cents: Web Audio API v1 was never intended to be a general DSP framework for every signal processing function you might choose to do, EXCEPT through ScriptProcessor (now AudioWorker). It was, in Chris Rogers' original design, intended to provide convenient nodes for common audio processing tasks, like low-pass filtering, for example, with easy-to-grok frequency and Q parameters. That's not a replacement for arbitrary FFT processing, but it does hit the necessary scenarios and is far easier for far more developers to understand. Yes, we may need a general FFT node; I'm not against someone making a proposal for that, but the first place to start would be with an FFT library in JS, hooked in via AudioWorker. That would probably help inform the design.
@janesconference I'll point out you have a Convolver because room effects like reverb are super-common; the super-sophistication of Panner I think was a mistake, personally, and that's why I've pushed for StereoPanner in the spec, but everybody also wants to pan; there's a (relatively simplistic) Analyser because freakin' EVERYBODY wants music/audio visualization, and it's (compared to doing your own FFT when you're not a DSP expert) very easy to use. And dynamics compression is in there because it's very easy to mess up and overdrive your output signal, and digital clipping sucks.
@andremichelle note that AudioWorker will not give you "the benefit of multi-threading", per se - it just gives you the benefit of keeping your audio code out of the main UI thread. If that's what you meant, great.
I'll point out you have a Convolver because room effects like reverb are super-common; the super-sophistication of Panner I think was a mistake, personally, and that's why I've pushed for StereoPanner in the spec, but everybody also wants to pan; there's a (relatively simplistic) Analyser because freakin' EVERYBODY wants music/audio visualization, and it's (compared to doing your own FFT when you're not a DSP expert) very easy to use. And dynamics compression is in there because it's very easy to mess up and overdrive your output signal, and digital clipping sucks.
And I personally love all of them, even the 3d panner that always shifts my output samples in wav.hya.io. Compared to the old times of the now-defunct FF AudioData API (I think it was called like that), WAA is super easy to use and provides native, efficient ways to use DSP building blocks.
The first version of my phase vocoding pitchshifter was for Audio Data, in Firefox 4.0, I think, and it had been a nightmare to get right. I guess that the bottom line of all this (AD dead, WAA alive and well) is that sometimes you have to go native, and I wouldn't go back to pulling samples and getting 100% CPU after chaining 3 real time effects.
That said, I guess that, like EVERYBODY wants efficient and cool 3d panning, impulse convolving and easy filtering, there's a good percent of all those everybodies that would like to mess with the spectral frequency and do vocoding, pitch and time shifting, frequency-domain filtering, audio fingerprinting and loads of other cool stuff.
I still think we need an audio-data-level API (but much better designed, e.g. NOT in the main thread); and I also think we need hooks to enable low-level innovation like spectral frequency, pitch/time shifting, arbitrary FFT filtering, etc - but I will state that I think the number of users of those features will be far fewer (as they require much more expertise) than, say, simple panning and reverb. Not less important; just fewer.
They probably will be fewer, but what about the number of higher-level users indirectly using those features? Your users are developers, but they have end users and intermediate users in turn. Moving audio to the web could shift its potential userbase away from native apps (or at least that's what everyone in this thread hopes).
@chrislo, one question: I know this might be a little off-topic (or not, let's see), but instead of just thinking about a DSP standard for audio, why not make it general for other devices the browser might detect? Nowadays, the number of sensors and input devices in a smartphone is bigger than any "average Joe" might ever imagine. They all provide data feeds/signals to be processed in one way or another. Each device provides signals with a specific number of dimensions: the webcam is 2D, Web Audio is 1D (per channel), the gyroscope is 3D, the compass is 8D. So maybe the W3C should start sketching a way to offer DSP operations on N-dimensional data.
This is just a suggestion, and off-topic, as you might think. It's probably not even an original suggestion.
I'm personally wary of pushing more functionality down into the WAA spec.
I'll take ConvolverNode as an example. I think the multi-threaded reverb convolver implementation is really great. It works really well in that specific use case, but having it baked into the WAA with limited (only AudioParam) controls makes it unusable at times.
For example, while trying to extend http://chinpen.net/auralizr/, I needed to change the impulse responses of the ConvolverNode in pseudo-realtime. The current implementations dump the convolution buffer every time the impulse response property is changed.
Now I know my use case is pretty uncommon, but since the implementation is in the spec, it can't be changed/tweaked. If the whole thing was implemented in JS (and yes I understand the issue with performance) I could have easily changed the setter for the impulse response property.
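For what it's worth, the usual userland workaround is to keep two ConvolverNodes and crossfade between them whenever the impulse response changes; a rough sketch (all names here are mine, not from any library):

```js
function makeSwappableConvolver(ctx, fadeTime) {
  var input = ctx.createGain(), output = ctx.createGain();
  var convA = ctx.createConvolver(), convB = ctx.createConvolver();
  var gainA = ctx.createGain(), gainB = ctx.createGain();
  input.connect(convA); convA.connect(gainA); gainA.connect(output);
  input.connect(convB); convB.connect(gainB); gainB.connect(output);
  gainB.gain.value = 0;          // convolver B starts silent
  var usingA = true;

  return {
    input: input,
    output: output,
    setImpulseResponse: function (irBuffer) {
      var incoming = usingA ? convB : convA;   // load the new IR on the silent convolver
      var fadeIn   = usingA ? gainB : gainA;
      var fadeOut  = usingA ? gainA : gainB;
      incoming.buffer = irBuffer;
      var t = ctx.currentTime;
      fadeIn.gain.setValueAtTime(fadeIn.gain.value, t);
      fadeOut.gain.setValueAtTime(fadeOut.gain.value, t);
      fadeIn.gain.linearRampToValueAtTime(1, t + fadeTime);
      fadeOut.gain.linearRampToValueAtTime(0, t + fadeTime);
      usingA = !usingA;
    }
  };
}
```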
I feel there is much more to gain in working with 'userland' JS implementations of DSP functionality and getting them to perform better (asm.js, SIMD.js, etc.), for the ability to tweak them, change them and update them without needing to involve the browser vendors. Faster turnaround, and more control.
I understand we're not there yet in terms of JS performance, but pushing this into the spec will take time, and by the time it gets implemented, it might not have that much improved performance compared to JS implementations.
Finally, I really think this discussion needs real-world numbers. I started a tiny project which tries (very unscientifically) to look at the performance of JS FFTs with various libraries. Please feel free to fork/improve it. It would be great to see how much performance a browser implementation would add to this.
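If you want a quick, equally unscientific number of your own, the measurement itself is only a few lines (fftFunction here is whatever library call you are testing):

```js
// Very rough timing of a single FFT implementation.
function averageFFTTime(fftFunction, size, iterations) {
  var input = new Float32Array(size);
  for (var i = 0; i < size; i++) input[i] = Math.random() * 2 - 1;
  var t0 = performance.now();
  for (var n = 0; n < iterations; n++) fftFunction(input);
  return (performance.now() - t0) / iterations;  // average milliseconds per transform
}
```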
@artofmus ,
```js
var size = 2048;
var wantedSize = 1025; // I just want the first half of the spectrum.
var stdlib = {
  Math: Math,
  Float32Array: Float32Array,
  Float64Array: Float64Array
};
var heap = fourier.custom.alloc(size, 3);
// For each custom FFT, you may choose Float32 or Float64. Additionally, you must
// say whether you want the asm or the raw version.
var fft = fourier.custom["fft_f32_" + size + "_asm"](stdlib, null, heap);
fft.init();
var timeframe = new Float32Array(size); // your time-domain input frame (fill it with your audio data)
var real = new Float32Array(wantedSize);
var imag = new Float32Array(wantedSize);
// Forward FFT: real input at offset 0, zeroed imaginary part at offset `size`.
fourier.custom.array2heap(timeframe, new Float32Array(heap), size, 0);
fourier.custom.array2heap(new Float32Array(size), new Float32Array(heap), size, size);
fft.transform();
fourier.custom.heap2array(new Float32Array(heap), real, wantedSize, 0);
fourier.custom.heap2array(new Float32Array(heap), imag, wantedSize, size);
// Inverse FFT: put the real and imaginary parts back on the heap.
fourier.custom.array2heap(real, new Float32Array(heap), size, 0);
fourier.custom.array2heap(imag, new Float32Array(heap), size, size);
fft.transform();
timeframe.set(new Float32Array(heap, size, size));
// Do not forget to normalize the IFFT output.
for (var i = 0; i < size; i++) {
  timeframe[i] /= size;
}
```
I understand we're not there yet in terms of JS performance [...] by the time it gets implemented, it might not have that much improved performance compared to JS implementations
I wish, but my impression (and my fear) is that every year we're on the verge of being almost there, but we never get there. Like a "this is the year of Linux on desktop" situation.
Well, let's just put it this way: currently, I can't apply high-quality time stretching + pitch shifting using JS implementations of the FFT without getting A LOT of audio drops with more than two or three tracks playing in stereo (and I'm using a blank window!). Want a Traktor Pro in JavaScript? Well, too bad.
@echo66 Added some info in ReadMe. I'm travelling right now as well, so don't have much time to update the UI. PM me if you need specific info.
Based on the perf test, for a window of 1k the FFT takes ~0.5msec, but the callback is every ~22msec, which seems enough time for doing multiple channels of FFT. Unless there is something wrong in the perf test (highly likely knowing me..) or your phase-vocoder implementation has other components which take a lot of time too.
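The budget arithmetic behind that estimate, assuming a 44.1 kHz context and the ~0.5 ms figure from the perf test:

```js
var blockSize = 1024, sampleRate = 44100;
var callbackPeriodMs = blockSize / sampleRate * 1000; // ≈ 23.2 ms between 1k callbacks
var fftCostMs = 0.5;                                  // measured cost of one 1k FFT
var fftsPerCallback = callbackPeriodMs / fftCostMs;   // ≈ 46 FFTs' worth of budget per block
```

The caveat, as noted further down, is that ScriptProcessor's thread-hopping eats into that budget unpredictably.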
Can someone tell me whether AudioWorkerNode is currently available only in the form of a JS library (http://mohayonao.github.io/audio-worker-node/), or whether it is already available in some alpha versions of any browsers? If not, when is it planned to be implemented?
@echo66 if you're making that statement based on a JS implementation in ScriptProcessor, you're comparing apples and oranges - ScriptProcessor is both very costly and very poorly predictable due to its cross-thread nature. AudioWorker is intended to remove that problem, and the remaining stress will be from pure JS perf, which is both 1) rarely as bad as people think it is, and 2) improving daily.
@artofmus to my knowledge, no one has begun a browser implementation of AudioWorker yet.
@cwilso, I understand that there is a big bottleneck with ScriptProcessor, but you should not forget one thing: even in a DAW like Ableton Live, there is a point when you need to "freeze" some tracks in order to avoid distortion and audio drops due to heavy CPU load. And this happens even with C/C++ FFT implementations for effects. Of course, you only face this issue if you create many tracks. But in the browser I won't be surprised (actually, I'm betting on it) to see more audio drops and distortion than in a C/C++ app. And spectral analysis tools have a big performance footprint: the FFT is O(N log N) and even the sliding FFT is O(N). So, it is a sensitive issue.
I understand your stance regarding adding a spectral analysis node. But if you want to make the web browser very attractive to the big players in music production, the analysis-performance issue must be tackled sooner or later.
I think we all just announced our stances in this issue. No need to drag this out much more.
@artofmus , I advise you to take a look at that code. That polyfill/shim provides just the AudioWorker API. It does not use a different thread for AudioWorker.
To be clear, I don't have a "stance", per se, on a spectral analysis node (aka a native generic FFT node) - other than I think it should be first implemented via AudioWorker and an FFT library, in Extensible Web Manifesto fashion.
You can, of course, use OfflineAudioContexts to freeze tracks. Of course, you will see more CPU load from an FFT load implemented in Javascript than one implemented in C/C++. At the same time, I think you'll find that multiplier is not as big as you think it is; modern JS engines are pretty good at compilation and type optimization, and it's completely tenable to prototype an FFT library in JS before baking that API into Web Audio.
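A minimal freeze sketch with OfflineAudioContext (assuming you can rebuild the track's node graph inside the offline context; `buildTrackGraph` and `liveContext` are placeholders):

```js
// Render the track's graph offline once, then play back the result cheaply.
function freezeTrack(numChannels, lengthInSamples, sampleRate, buildTrackGraph) {
  var offline = new OfflineAudioContext(numChannels, lengthInSamples, sampleRate);
  buildTrackGraph(offline);            // placeholder: recreate the track's nodes here
  return offline.startRendering();     // resolves with the rendered AudioBuffer
}

freezeTrack(2, 44100 * 60, 44100, buildTrackGraph).then(function (rendered) {
  var playback = liveContext.createBufferSource();
  playback.buffer = rendered;          // the frozen track is now just one source node
  playback.connect(liveContext.destination);
  playback.start();
});
```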
@cwilso, regarding "track freezing" with OfflineAudioContext, it seems that the bug between ScriptProcessors and OfflineAudioContext still exists.
Of course it does. It's not worth it to fix, as we're deprecating scriptprocessors and AudioWorkers will not have this problem.
Not relevant to this issue, but what is the scriptprocessor and offline context bug?
Ray
@rtoy you can't render a ScriptProcessor in an offline context. I don't have the bug report handy atm, but I can find it later.
Thanks @echo66!
Our plan is to support this by allowing developers to use AudioWorker for initial takes on this idea, and learn from these implementations when considering a first class spectral-domain feature in future versions of the spec.
Note that the issue here is the complexity of potential options, not a lack of desire to implement such a thing. Step 1, to me, is getting AudioWorker working to remove the huge performance cost of thread-hopping in ScriptProcessor (not to mention running FFT code in the main JS thread). Step 2 is prototyping frequency-domain processors using a JS FFT library, to determine what options are necessary and sufficient. Step 2a is prototyping moving that FFT into native code to see if the speed benefits are worthwhile. I would spitball the cost of a JS FFT (in asm.js style) vs. native at MAYBE two-to-one at worst; the cost of using ScriptProcessor is far more than that in practice.
Step 3, then, is deciding whether a clear API design emerges, and whether there is a significant benefit to the FFT being in native code.
(Also as an aside, I'd expect it would be more useful to have Math.FFT-style FFT library in native than a Web-Audio-specific one.)
Marking as feature request.
Hi all. Any more on whether or not we'll see IFFT support anytime soon?
If you mean the AudioFrequencyWorkerNode, then, no; it's still marked as v.next so something to consider for the next version.
If you're asking about access to an IFFT routine, that's a different question. If this is what you want, you should file another issue on that.
It is the latter. I would like access to an IFFT routine.
This would be a pretty cool feature! If you want an example of something you can do with FFT/IFFT, I put together a frequency phase scrambler a little while back. Fun way to generate drones.
This is achievable using the audio worklet (which was not available at the time the issue was raised). An FFT library is outside of the scope of the working group and should be considered as a separate spec requirement.
Hi, do you have a link to documentation that explains how this can be achieved using an audio worklet? Thank you in advance.
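Roughly, the shape of an AudioWorklet-based approach is the following (a sketch, not from the thread; `anyFFT` stands in for whatever FFT library you bundle into the worklet module, and `source` is assumed to exist):

```js
// processor.js, loaded as an AudioWorklet module
class SpectralProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    const input = inputs[0][0];    // first channel of the first input (one render quantum)
    const output = outputs[0][0];
    if (!input) return true;
    const spectrum = anyFFT.forward(input);   // placeholder: forward transform
    // ... modify `spectrum` here (frequency-domain processing) ...
    output.set(anyFFT.inverse(spectrum));     // placeholder: inverse transform
    return true;                              // keep the processor alive
  }
}
registerProcessor('spectral-processor', SpectralProcessor);

// main thread (inside an async function)
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('processor.js');
const node = new AudioWorkletNode(ctx, 'spectral-processor');
source.connect(node).connect(ctx.destination);
```

In practice you would also buffer the 128-sample render quanta into larger, windowed frames before transforming, along the lines of the overlap-add skeleton earlier in this thread.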
At the moment, ScriptProcessorNodes and AudioWorkers are operating on the time domain buffer data. At the Web Audio Conference, it seems like there's demand for frequency domain data inside a callback that's going to get called for every audio frame.
We're thus proposing a new node called the AudioFrequencyWorkerNode, which gives the developer the option to obtain audio data, perform processing, and produce output in the frequency domain. This involves passing an options object to the createAudioFrequencyWorker method, specifying the input and output types.
Defaults
The AudioFrequencyWorkerNode should allow access to time domain and frequency domain data concurrently. If no options object is passed to createAudioFrequencyWorker, the input type would default to the amplitude/phase pair, as would the output. However, the options object would allow the user to choose between amplitude/phase, real/imaginary, and time domain data. The dataOut would default to the same as the dataIn, but could be set to a different data type, in case the user wants to read real/imaginary pairs in, and write out to the time domain for example.
Proposed processing structure of the AudioFrequencyWorkerNode
```
INPUT (time-domain)
  ↓ windowing
  ↓ FFT
  ↓
~ ~ ~ ~ ~ ~ ~ ~ ~
dataIn
  ↓ onaudioprocess
dataOut
~ ~ ~ ~ ~ ~ ~ ~ ~
  ↓ mirror
  ↓ complete data
  ↓ IFFT
  ↓ windowing
  ↓
OUTPUT (time-domain)
```
Example code:
main JS
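(The original example code is not preserved in this copy of the thread; the following is a hypothetical sketch reconstructed from the prose above. Neither createAudioFrequencyWorker nor these option names exist in any implementation.)

```js
// Hypothetical main-thread usage of the proposed node.
var freqWorker = audioContext.createAudioFrequencyWorker('freq-worker.js', {
  dataIn:  'amplitude/phase',   // or 'real/imaginary', or 'time' (defaults to amplitude/phase)
  dataOut: 'amplitude/phase',   // defaults to the same type as dataIn
  fftSize: 2048
});
source.connect(freqWorker);
freqWorker.connect(audioContext.destination);
```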
AudioWorker JS
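(Likewise hypothetical: a sketch of the worker side implied by the dataIn/dataOut/onaudioprocess description above.)

```js
// Hypothetical AudioFrequencyWorker script: a frequency-domain pass-through.
onaudioprocess = function (e) {
  var amp = e.dataIn.amplitude;      // Float32Array, one value per bin
  var phase = e.dataIn.phase;
  for (var bin = 0; bin < amp.length; bin++) {
    e.dataOut.amplitude[bin] = amp[bin];
    e.dataOut.phase[bin] = phase[bin];
  }
};
```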
Use Cases
Jesse Allison, Hugh Rawlinson, Jakub Fiala, Nevo Segal @jesseallison, @hughrawlinson, @jakubfiala, @nevosegal
Related
#248
#262