audiojs / audio

Class for high-level audio manipulations [NOT MAINTAINED]

alternative audio data structures / storing ops instead of applying immediately #55

Open dy opened 6 years ago

dy commented 6 years ago

Following https://github.com/audiojs/audio-buffer-list/issues/5.

The current API approach is already covered by a lot of similar components, so it is destined for insignificant competition and questionable value. The main blocker and drawback is the core audio-buffer-list component, which does not bring much value compared to simply storing linked audio buffers.

Alternatively, audio could focus on storing the in-progress editing (the operations themselves), rather than being a data wrapper with a linear API, similar to the XRay RGA.
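
As a rough illustration of that direction, here is a minimal sketch of an op-storing wrapper; the names (`Audio`, `gain`, `trim`, `read`) and op shapes are assumptions, not an existing audiojs API. Edits are recorded as operations and only materialized when samples are actually requested:

```js
// Hypothetical op-storing wrapper: edits are recorded, not applied immediately.
class Audio {
  constructor (source) {
    this.source = source            // original samples, never mutated
    this.ops = []                   // pending edit operations
  }
  gain (factor) {
    this.ops.push({ type: 'gain', factor })
    return this
  }
  trim (start, end) {
    this.ops.push({ type: 'trim', start, end })
    return this
  }
  // materialize: apply the whole stack only when raw data is requested
  read () {
    return this.ops.reduce((data, op) => {
      if (op.type === 'gain') return data.map(v => v * op.factor)
      if (op.type === 'trim') return data.slice(op.start, op.end)
      return data
    }, this.source)
  }
}

let a = new Audio(new Float32Array(44100)).gain(0.5).trim(0, 1024)
a.ops.length   // 2, nothing applied yet
a.read()       // Float32Array(1024), ops applied on demand
```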

Principle

Pros

dy commented 5 years ago

Reference structures:

In fact, git seems to be suitable for that too.
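
A rough sketch of the git-like idea (purely illustrative, not an existing format): every edit becomes an immutable record pointing at its parent, so replay, undo and branching fall out of the structure:

```js
// Hypothetical commit log: each edit is an immutable node referencing its parent.
let id = 0
function commit (parent, op) {
  return Object.freeze({ id: id++, parent, op })
}

let base = commit(null, { type: 'source', data: new Float32Array(44100) })
let v1 = commit(base, { type: 'gain', factor: 0.8 })
let v2 = commit(v1, { type: 'trim', start: 0, end: 22050 })
let branch = commit(v1, { type: 'fade', duration: 0.5 })   // branching from v1, like a git branch

// replay = walk parents back to the root, then apply ops forward
function replay (head) {
  let chain = []
  for (let c = head; c; c = c.parent) chain.unshift(c.op)
  return chain
}
replay(v2)       // [source, gain, trim]
replay(branch)   // [source, gain, fade]
```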

Note also that the class should technically allow any underlying representation to be used: time series, STFT, formants, HPR/HPS/SPS models (https://github.com/MTG/sms-tools/tree/master/software/models), wavelets, etc.

:+1: In the case of formants, for example, transforms are theoretically much faster than on the raw time series.

:+1: An abstract interface would discard the sampleRate param and make Audio just a time-series data wrapper, possibly even with uncertain/irregular stops. We may want a separate time-series structure for that, since such a structure seems broadly in demand, from animation/gradient/colormap stops to compact storage of observations.
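
A minimal sketch of such a stops structure, assuming a `TimeSeries` container with linear interpolation between irregular stops and no sampleRate at all:

```js
// Hypothetical generic stops container: values at irregular time positions,
// no sampleRate assumption; lookup interpolates between the surrounding stops.
class TimeSeries {
  constructor (stops = []) {
    // stops: [{ t, value }], kept sorted by time
    this.stops = stops.slice().sort((a, b) => a.t - b.t)
  }
  at (t) {
    let s = this.stops
    if (!s.length) return undefined
    if (t <= s[0].t) return s[0].value
    if (t >= s[s.length - 1].t) return s[s.length - 1].value
    for (let i = 1; i < s.length; i++) {
      if (t <= s[i].t) {
        let a = s[i - 1], b = s[i]
        let k = (t - a.t) / (b.t - a.t)
        return a.value + k * (b.value - a.value)   // linear interpolation
      }
    }
  }
}

// works equally for audio envelopes, gradient stops or sparse observations
let env = new TimeSeries([{ t: 0, value: 0 }, { t: 0.3, value: 1 }, { t: 2, value: 0 }])
env.at(1)   // ~0.59
```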

dy commented 5 years ago

Lifecycle

  1. Initialize data model
  2. Input data source
    • Convert input data source to target model
  3. Modify data source
    • Create stack of modifiers/reducers/transforms
    • Modifiers can possibly be applied in real time
  4. Play data source
    • Apply the stack of transforms and play, or apply transforms per buffer
  5. Get stats
    • Should the model include stat params up front?
  6. Output data source
    • Apply the stack of transforms and output (see the sketch after this list)
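
A compact skeleton of this lifecycle, where every name is an assumption rather than the real audio API; the point is that modifiers only go onto a stack and are applied lazily, per buffer or on output:

```js
// Hypothetical lifecycle skeleton; all names here are assumptions, not the real audio API.
class Audio {
  constructor (model) { this.model = model; this.ops = [] }

  // 1-2: initialize the data model and convert the input source into it
  static from (source, convert = x => x) { return new Audio(convert(source)) }

  // 3: modifiers are only recorded on a stack, nothing is applied yet
  fade (start, duration) { this.ops.push({ type: 'fade', start, duration }); return this }
  normalize () { this.ops.push({ type: 'normalize' }); return this }

  // helper: run the recorded stack over a chunk of the model (op bodies stubbed out)
  _apply (chunk) { return this.ops.reduce((data, op) => data /* apply op here */, chunk) }

  // 4: play by applying the stack per buffer (actual playback omitted)
  play (bufferSize = 1024) {
    for (let i = 0; i < this.model.length; i += bufferSize) {
      this._apply(this.model.subarray(i, i + bufferSize))
    }
    return this
  }

  // 5: stats computed on demand rather than stored in the model up front
  stats () {
    let data = this._apply(this.model)
    let peak = data.reduce((max, v) => Math.max(max, Math.abs(v)), 0)
    return { peak }
  }

  // 6: apply the stack once and return the result for output
  output () { return this._apply(this.model) }
}

let audio = Audio.from(new Float32Array(44100)).fade(0, 0.5).normalize()
audio.play()
audio.stats()    // { peak: 0 } for the silent test buffer
audio.output()
```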

Plan

Stores