Closed: trentgill, 3 years ago
In 3.0 I plan to turn ASL into a C-based library with light Lua hooks to the functionality.
Con:
- `to` arguments will need to be literal values, or specially implemented generators

Pro:
- tight timing for synthesis duties

This tradeoff is being made as an admission that ASL can't be great as a general-purpose scheduling system while also maintaining tight timing for synthesis duties. The decision to focus on timing accuracy comes from the prospect of having the full norns clock system running in crow. Having 'Lua things' happen at a scheduled time in the future is much more in line with that concept, and indeed the clock system can set & call ASL actions in time.
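For example, a minimal sketch of that interaction, assuming the norns-style `clock` API is available in crow and that calling an output with an ASL sets & starts its action:

```lua
-- sketch: the clock system setting & calling ASL actions in time
clock.run(function()
  while true do
    clock.sync(1)             -- wake on every beat
    output[1]( to(5, 0.01) )  -- start an ASL action at a musical time
  end
end)
```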
To enable ASL to create varying waves, a number of methods will be enabled for users to interact with the data of the ASL:

- a `listener._` table, where Lua variables are shared to the C environment for dynamic updates
- `generator` functions, where a `to` variable can be generated from a data-set & a pre-defined iterator behaviour

Generators could include step behaviours (eg increment) and mod behaviours (eg wrap). These behaviours will likely need to be nested (eg increment & wrap).
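A hypothetical sketch of both mechanisms (every name here is illustrative, not a settled API):

```lua
-- a listener variable: shared with C, updatable from Lua at any time
listener.height = 2
output[1]( loop{ to(listener.height, 0.1)
               , to(0, 0.1) } )
listener.height = 4  -- the running ASL picks up the new value

-- a generator: a data-set plus nested step & mod behaviours,
-- here the increment-&-wrap pairing mentioned above
local seq = generator({0, 2, 3, 5}, 'increment', 'wrap')
output[2]( loop{ to(seq, 0.25) } )
```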
//
Basically ASL becomes much more of a 'tiny programming language for describing modulation & waveforms', rather than just an alternate syntax for coroutines with custom timing.
This is ASL2.0
- `held{}`
- `times(n,{})`
- `dyn.instant.key = val` to update `val` now, rather than at the next breakpoint
- `lock{}`
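A sketch of how these might combine in a script (the constructs are taken from the list above; their composition and exact semantics here are guesses):

```lua
output[1]( held{                       -- plays while the action is held
             times( 2,                 -- repeat the inner sequence twice
               { to(5, dyn{ t = 0.1 })
               , to(0, dyn{ t = 0.1 }) } )
           , lock{ to(-5, 1) }         -- locked: runs to completion
           } )

dyn.instant.t = 0.05  -- update val now, rather than at the next breakpoint
```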
fixed in #399
issue
The fundamental issue here is that after each breakpoint in an ASL we must call back into Lua. Because this happens inside the audio callback, we can't call directly into Lua as it may already be active from the event loop. Instead we queue an event and wait for Lua to process it.
The result is that there is always some delay in getting the next destination value, so the current audio vector just sits at the limit and waits until the next cycle (or potentially n cycles later) for the callback to be serviced. We already compensate for this delay by jumping ahead by the appropriate part of the waveform.
The effective result is a minimum cycle period of 2 audio vectors (one up, one down), which makes high-quality waveforms impossible with the current system.
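To put a rough number on that ceiling (the sample rate and vector size below are illustrative assumptions, not confirmed crow figures):

```lua
-- assumed figures for illustration only
local sample_rate = 48000                          -- samples per second
local vector_len  = 32                             -- samples per audio vector
local min_period  = 2 * vector_len / sample_rate   -- one vector up, one down
print(('max cycle rate ~= %g Hz'):format(1 / min_period))  -- ~750 Hz
```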
possible approach
ASL could be refactored such that the ASL program is compiled into a C data structure, rather than maintained as a Lua table. This makes audio-rate no problem, but adds some limitations:

- only the `to` function is allowed
- `to` arguments would need to be literals (not functions), so values could not be updated without recompiling the ASL

Currently the function handling just allows params to be functions, which are resolved only when applying that segment to the slope library. Thus, looping ASLs will continuously resolve these calculations, allowing new values to be applied at every breakpoint.
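For example, this loop relies on that behaviour: the function param is re-resolved at every breakpoint, so each cycle gets a fresh random peak. Under the compiled approach the ASL would need recompiling to change value:

```lua
-- current behaviour: math.random is called again at every breakpoint
output[1]( loop{ to( function() return math.random() * 5 end, 0.1 )
               , to( 0, 0.1 ) } )
```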
This could be ameliorated:
- `to` parameters could be backed by a userdata table where a fixed number of variables can be stored. These would be directly accessible from C, and could be updated in realtime by the script writer, eg: `listen.frequency`. There could be a set of fixed names with special behaviour (eg 'frequency' converts Hz->S). Care needs to be taken that each channel has separate access. These params would be compared every audio vector, though perhaps the 'at next breakpoint' behaviour of the current system would be interesting to keep?
- a `to` variant that forwards the input to the output. Otherwise 'replacement' types (eg 'noise') could be allowed.
- a `reset_all` that would restart, ie sync, all ASL channels.
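Pulled together, script-side usage might look like this sketch (`listen` and `reset_all` are proposals from the list above, not an existing API):

```lua
-- each output channel gets a fixed set of C-visible variables
output[1]( loop{ to( listen.height, listen.time ) } )

listen.height    = 3    -- plain variable, compared every audio vector
listen.frequency = 440  -- special fixed name: converted Hz -> S in C
                        -- (each channel would need separate access)

reset_all()             -- restart, ie sync, all ASL channels
```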