ASL stands for 'a slope language', but the reason for our focus on slopes is their close correspondence to the representation of musical expression. Yes, 'LFO' and 'ADSR' are of course covered, but ASL can also describe a 'crescendo' or a 'melodic contour'.
A number of extensions to ASL have already been proposed to move toward these ideas:
- 'absolute shapers', allowing outputs to be quantized to a user-defined musical sequence (see the sketch after this list)
- separation (and later composition) of 'actions', 'pulses' and 'offsets'
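As a sketch of the 'absolute shaper' idea, here's how a slope's output could be snapped to a user-defined set of pitches. This is plain Lua for illustration only; `quantize` and the 1V/octave assumption are hypothetical, not part of the current ASL API.

```lua
-- illustrative sketch only: snap a voltage to the nearest degree of a
-- user-defined scale, assuming a 1V/octave pitch standard.
local scale = { 0, 2, 4, 7, 9 }   -- major pentatonic, in semitones

local function quantize(volts)
  local semis = volts * 12                 -- volts -> semitones
  local oct   = math.floor(semis / 12)
  local step  = semis - oct * 12
  local best, err = scale[1], math.huge
  for _, s in ipairs(scale) do             -- find the nearest scale degree
    local d = math.abs(step - s)
    if d < err then best, err = s, d end
  end
  return (oct * 12 + best) / 12            -- semitones -> volts
end

print( quantize(0.45) )  --> 0.3333 (snapped to 4 semitones)
```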
A primary concern is how the syntax or lexical structure of ASL could be refined to speak more directly to melody and rhythm.
It would require some changes to the implementation (and perhaps introduce some limitations), but running ASL at audio rate could be conceptually interesting. Perhaps this requires further changes to the syntax as well? The idea is that of 'algorithmic waveforms', where modulations are built into the waveform descriptor.
Consider a waveform that is a simple triangle, where the 'top' of the triangle is moved about within the period of the waveform. This results in sounds ranging from sawtooth through ramp. Typically such an oscillator is custom-built with this behaviour (see Just Friends etc.), and that point can then be modulated by control voltage or some other algorithm. I propose it would be interesting to control the location of that point with an algorithm. Using ASL, that algorithm is effortlessly wrapped in a closure that calculates a new location upon each repetition.
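A minimal sketch of that triangle in crow-style ASL syntax (`to`, `loop`, `output[1].action`), assuming parameters given as functions are re-evaluated each time the segment is reached; the `period` and apex range here are arbitrary choices for illustration.

```lua
local period = 0.01   -- one cycle in seconds (100Hz if run at audio rate)
local apex   = 0.5    -- apex position as a fraction of the period

-- recalculate the apex location once per cycle
local function new_apex()
  apex = 0.1 + 0.8 * math.random()
  return apex
end

output[1].action =
  loop{ to(  5, function() return new_apex() * period end )  -- rise to the moving apex
      , to( -5, function() return (1 - apex) * period end )  -- fall for the remainder
      }
output[1]()  -- start the action
```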
Of course this idea can then be generalized, such that arbitrary waveforms can be created with an arbitrary number of modulation points. The real key is that the modulation is as much a part of the description as the frequency and the amplitudes of the points. Thus we can say that the modulation is a component of the waveform itself.
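Generalizing, one could imagine describing a waveform as a table of breakpoints where any level or duration may itself be a function, re-evaluated on every cycle. `build_wave` below is hypothetical glue, not an ASL primitive; `to` and `loop` are again crow-style.

```lua
-- hypothetical helper: turn a breakpoint table into a looping ASL action.
-- levels and segment fractions may be plain numbers or functions.
local function build_wave(points, period)
  local segs = {}
  for _, p in ipairs(points) do
    segs[#segs+1] = to( p.level, function()
        local frac = (type(p.frac) == 'function') and p.frac() or p.frac
        return frac * period
      end )
  end
  return loop(segs)
end

-- three segments; the first one's duration is recalculated every cycle,
-- so the total period drifts slightly (acceptable for a sketch).
output[1].action = build_wave(
  { { level =  5, frac = function() return 0.1 + 0.4 * math.random() end }
  , { level = -5, frac = 0.3 }
  , { level =  0, frac = 0.3 }
  }, 0.01 )
output[1]()
```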
This is the key to 'algorithmic waveforms': waveforms whose behaviour changes over time according to some context. The logical extension of algo-waves is that of 'behavioural waveforms', or the category of 'behavioural synthesis'. In this case the aforementioned 'context' would be based on an 'environment' shared across the synthesis platform.
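As a hint of what that might look like, the closures inside the waveform could read from a shared environment that other parts of the script update. The `env` table below is hypothetical; only the closure-as-parameter behaviour is assumed from ASL.

```lua
env = { brightness = 0.5 }   -- hypothetical shared context, updated elsewhere

local period = 0.01
output[1].action =
  loop{ to(  5, function() return env.brightness * period end )
      , to( -5, function() return (1 - env.brightness) * period end )
      }
output[1]()

-- elsewhere in the system, some process steers the context,
-- and the waveform's shape follows without being redefined:
function brighten() env.brightness = math.min(1.0, env.brightness + 0.05) end
```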