dy opened this issue 8 years ago
That does sound good! Would you like to PR this or should I?
v2 of `audio` is not released yet, it is currently in prerelease. I think it is worth using that version for this new plan. I'll add you to `audio` on npm as well.
Btw, it appears that node's `Buffer['writeInt']` etc. methods are 3-5 times slower than writing to a `TypedArray` instance. I guess replacing the `source` object with an `audio-buffer` instance is the right move?
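Roughly the kind of comparison I mean (a minimal sketch, not code from `audio` itself; the exact gap will vary by machine and Node version):

```js
// Compare Buffer's write methods vs plain TypedArray index writes.
// Illustrative micro-benchmark only.
const N = 1e6

const buf = Buffer.alloc(N * 2)
console.time('Buffer.writeInt16LE')
for (let i = 0; i < N; i++) {
  buf.writeInt16LE(i & 0x7fff, i * 2)
}
console.timeEnd('Buffer.writeInt16LE')

const arr = new Int16Array(N)
console.time('Int16Array index write')
for (let i = 0; i < N; i++) {
  arr[i] = i & 0x7fff
}
console.timeEnd('Int16Array index write')
```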
I did want to move away from Node parts in `Audio` at one point. So I guess it is haha. Nice work :+1:
Started working on the refactoring - I am afraid I have to remove almost all of the code, because it is covered by `audio-buffer` or `audio-buffer-utils`...
Also I wonder - how useful are the `read` and `write` methods really for single-sample picking? I strongly feel that having a function call just to get a sample value is bad for performance.
Also I am a bit confused by `write(sample, offset, channel)` - it makes channels uneven, right?
Also I would suggest extending them to `write(sample|samples, offset?, channel?)` and `read(offset, size)`.
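A rough sketch of what those extended signatures could look like (hypothetical bodies, assuming the data sits in an `AudioBuffer`-style structure with per-channel `Float32Array`s):

```js
// Hypothetical sketches, not the existing Audio code.

// write(sample|samples, offset?, channel?)
Audio.prototype.write = function (samples, offset = 0, channel = 0) {
  const data = this.buffer.getChannelData(channel)
  if (typeof samples === 'number') {
    data[offset] = samples       // single-sample case
  } else {
    data.set(samples, offset)    // bulk case, loops internally via .set
  }
  return this
}

// read(offset, size) returning a view of the samples (channel 0 for simplicity)
Audio.prototype.read = function (offset = 0, size) {
  const data = this.buffer.getChannelData(0)
  return data.subarray(offset, size != null ? offset + size : data.length)
}
```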
Btw, practically there is one nuance. It appears that operations on big data, like 1M+ samples, are way slower than on small arrays. That is very bad for the RT nature of audio processing (see gl-waveform after 1 min of playing). Therefore I would suggest a webworker mode, which would make the methods asynchronous but allow for instant data storing that does not block the main UI/audio thread. I was planning to do that anyway, as some components share a waveform buffer, but I didn't have such an object, so there is a chance for audio to be that awesome class. Thoughts?
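Roughly the shape I have in mind (a minimal sketch with a plain `Worker` and a transferable buffer; webworkify or a worker stream could wrap the same idea, and all names here are illustrative):

```js
// Main thread: hand samples off to a worker without copying, keep the method async.
const worker = new Worker('audio-worker.js')

function writeAsync (samples, offset, cb) {
  worker.onmessage = e => cb(null, e.data)
  // Transfer the underlying ArrayBuffer instead of copying a 1M+ sample array.
  worker.postMessage({ samples, offset }, [samples.buffer])
}

// audio-worker.js would do the heavy storing/processing off the UI/audio thread:
// onmessage = e => {
//   const { samples, offset } = e.data
//   /* ...store into the shared waveform buffer... */
//   postMessage({ ok: true, offset })
// }
```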
I'm fine with most of the code being replaced if it is for the better. There hasn't been much testing or many components made around this module, so I expect some things aren't well designed. That is also why it was in beta and not released yet: I wanted to get some testing in.
Also, with writing single values my intent was for them to be used in loops. In the previous version, 1.x, using `write(values, ...)` with an array was just doing a loop behind the scenes. In my experience, though, function calls are expensive when it comes down to performance, so you are probably right.
With `write(x, offset, channel)`, the `offset` is the block (which contains the different channels' data) and then `channel` is `1`, `2`, `3`, etc. The only time it is uneven is if you exceed the `audio.channels` number, and I probably should have had an if statement preventing overflows, whoops.
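For what it is worth, the indexing I read from that description would be roughly this (a sketch of interleaved block/channel addressing plus the missing bounds check, not the actual 1.x source):

```js
// Sketch of write(x, offset, channel) over interleaved data (0-based channel assumed).
function write (audio, x, offset, channel) {
  // The guard that was missing: a channel beyond audio.channels spills into the next block.
  if (channel >= audio.channels) throw new RangeError('channel exceeds audio.channels')

  // Each block holds one sample per channel, so the flat index is:
  const index = offset * audio.channels + channel
  audio.data[index] = x
  return audio
}
```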
Your ideas sound awesome, but that is new territory for me; I've heard a lot about it but never actually used it, so I wouldn't know how to approach it. Audio is kind of my aspiring interest, so this is all a cool new opportunity for me. :)
On the topic of performance, I've seen some cool new stuff with SIMD. Not sure if you've seen/used it already, but it looks like something we could use. (Although it is an ES7 thing.)
Looks sweet, definitely worth trying. Interesting how well it does for audio. Nice SIMD performance research.
A couple of things though. First, RT audio deals with frequent read/write operations, as it puts chunks into memory and disposes of them, and that is a weak point of SIMD: basically the overhead it adds over plain TypedArrays. Second, audio data is causal, i.e. the value of the next sample depends on the value of the previous one, which is why audio data is not easily processed in parallel, on the GPU included. Third, SIMD is grouped by blocks, which might be an issue in case of buffer editing, though not a serious one.
So SIMD is good for image/large-data processing/mining, like the Mandelbrot calculation in the example, but it is interesting what the good applications for audio are.
Maybe convolution, like reverb? Or channel merging? I guess `audio-mixer` would be the perfect use case in audio.
Perhaps the grouping could be useful in iterative operations. Like instead of 1 at a time, use 4 or 8 at a time? Or am I thinking of that wrong? Like you mention in `audio-mixer`. But maybe also with writing sample values? Or maybe experimenting with real-time sampling in the browser using the sample rate (unless that is already done)?
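For instance, channel mixing grouped four samples at a time would look something like this (plain JS sketch of the grouping idea; an actual SIMD op would replace the four scalar operations with one vector operation):

```js
// Mix two channels into out, 4 samples per iteration, then handle the tail.
function mix4 (a, b, out) {
  const n = out.length - (out.length % 4)
  for (let i = 0; i < n; i += 4) {
    out[i]     = (a[i]     + b[i])     * 0.5
    out[i + 1] = (a[i + 1] + b[i + 1]) * 0.5
    out[i + 2] = (a[i + 2] + b[i + 2]) * 0.5
    out[i + 3] = (a[i + 3] + b[i + 3]) * 0.5
  }
  for (let i = n; i < out.length; i++) out[i] = (a[i] + b[i]) * 0.5
}
```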
Concern I just had: are components supposed to use `AudioBuffer` or `Audio` now? Seems like some fragmentation; you might need to handle two different types (`Audio` and `AudioBuffer`). Also, if components don't pipe `Audio`, then you might as well just use the utils with `AudioBuffer`, which defeats the point.
Yeah, but not for all the iterative cases. E.g. `x[i] = x[i-1] * n` can't be done in parallel.
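A one-pole lowpass is the typical case of that dependency (a small made-up example; each output needs the previous output):

```js
// out[i] depends on out[i - 1], so the iterations cannot run in parallel.
function onePoleLowpass (input, coeff = 0.2) {
  const out = new Float32Array(input.length)
  let prev = 0
  for (let i = 0; i < input.length; i++) {
    prev = out[i] = prev + coeff * (input[i] - prev)
  }
  return out
}
```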
And Audio is supposed to be a high-level wrapper for the user, not a holder for data like AudioBuffer. Sure, it can be used in components where it is reasonable, like audio editors, samplers, audiosprites, bitbox machines etc., where there is lots of work with audio within a single buffer that is known beforehand. For possibly-realtime processing, streams are a better fit, and `audio-buffer` is a low-level audio data holder. Stream components are helpful to modify Audio in place, like reducers in choo.
I think of audio.js as jQuery but for audio. Basically, the same as color. Possible ideas of applications: a `load` event.
Should we make a lot of that functionality into separate packages if not already, then wrap some/all of it together here?
Here is something I found that could help with the webworker mode: https://github.com/maxogden/workerstream
Yeah, with separate packages, as far as it is reasonable.
As for the webworker, it is a very nice idea. Here I guess we can get along with webworkify alone, but for streams we should definitely try it. Maybe try placing a subprocessing pipeline into a webworker stream.
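With webworkify the wiring would be roughly this (from memory of its usual usage pattern, so take the details loosely):

```js
// main.js
var work = require('webworkify')
var w = work(require('./process-worker.js'))

w.addEventListener('message', function (ev) {
  // processed samples come back without blocking the UI/audio thread
  console.log('processed chunk of', ev.data.length, 'samples')
})
w.postMessage(new Float32Array(1024))

// process-worker.js (hypothetical worker module):
// module.exports = function (self) {
//   self.addEventListener('message', function (ev) {
//     var samples = ev.data
//     /* ...heavy processing / storing... */
//     self.postMessage(samples)
//   })
// }
```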
For example, we have the `audio.load(src, cb?)` method. Should we return a promise or the instance itself from it?
```js
// Promise:
audio.load('./sound.mp3').then(
  result => {
  },
  error => {
  });

// Chain:
audio.load('./sound.mp3').once('load', (err, result) => {
});
```
I like the second: it is the classical way and does not break the chain of calls. Also, promises are hellish in dev/support tbh.
Chaining the event listeners sounds good to me as well. :smile:
@dfcreative Do you think it is worth putting a notice in the README until we release something compatible with all the other audio modules?
Maybe. I started refactoring in the `class` branch, implemented `audio-play` for the playback API and `audio-decode` for node. Waiting for resolution of the `audio-play` name on npm and for @danigb to make `audio-loader` work in node; then we have almost everything needed to implement `audio@3.0.0`.
We pretty much need to discuss the API.
API reference: https://github.com/jiaaro/pydub/blob/master/API.markdown
API gotchas. In advanced programming languages this is done with atoms or named arguments:
```
audio.invert(from=0s, to=1s, channel=1)
audio.data(from=100, to=8000, channel=left)
audio.fade(from=1100ms, duration=500ms, gain=-20db)
```
What are possible workarounds for js?
So here are some js style conventions for named arguments: https://codeutopia.net/blog/2016/11/24/best-practices-for-javascript-function-parameters/. Also we can agree upon units, say, seconds and decibels.
```js
audio.invert({channel: 1})
audio.data(100/audio.sampleRate, 8000/audio.sampleRate, {channel: 0})
audio.fade(1.1, .5, {gain: -20})
```
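Implementation-wise, the options-object style maps cleanly onto destructuring with defaults; a minimal sketch (hypothetical method, units assumed to be seconds and dB as above):

```js
// Hypothetical fade: positional args for the common case, named extras in a trailing object.
Audio.prototype.fade = function (start = 0, duration = 0.5, { gain = -40, channel } = {}) {
  // start and duration in seconds, gain in dB, channel optional (all channels if omitted)
  const startFrame = Math.round(start * this.sampleRate)
  const frames = Math.round(duration * this.sampleRate)
  /* ...apply a gain ramp over [startFrame, startFrame + frames)... */
  return this
}
```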
Hi @jamen! Do I understand right that audio is an opinionated container for audio data with some handy API methods? Is there any sense in pivoting it from `audio-buffer` so as to provide a bunch of common methods for audio manipulation, e.g. everything from `audio-buffer-utils`?
It would be handy to require `audio` once and have all the manipulation work done on some audio data; I could use that in wavearea as well. Using `audio-buffer` and the utils directly right now is a bit wearisome.
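Usage-wise I imagine something like this (purely hypothetical sketch; none of these chained methods exist yet):

```js
const Audio = require('audio')
const audio = Audio('./sample.wav')

audio.once('load', () => {
  audio
    .normalize()
    .fade(0, 0.5)            // fade in over the first half second
    .invert({ channel: 1 })  // named-options style from above
    .download('sample-out.wav')
})
```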