eefano opened this issue 1 year ago
that sounds like a good feature.. i am not yet sure what the ":" operator does in your examples, also not sure what staccato has to do with it. Generally it sounds kind of doable, we would probably need the smooth function for that to work. I also wonder what happens if the patterns do not line up, like "C D".linearbend("[0 2 0]")
I have just noticed that my choice of parameters cannot handle all the cases, in fact the third example is wrong and it cannot be expressed with my system.
By using the starting delta as the 1st parameter and the ending delta as the 2nd one (defaulting to the first one), we can express the examples like so:
Slide from C to D, starting half-way
"C D".linearbend("[0 0:2] 0")
Slide down from D to C, immediately in half-time
"D C".linearbend("[0:-2 -2] 0")
Slide from E to F, to G, stay a little, then back to F, with a single note event E
"E".linearbend("[0:1 1:3 3 3:1]"
If the patterns do not line up:
"C D".linearbend("[0 0:2 0]")
ok I think I got it! so if the second param is not used, then it's just a relative repitch? So the next question would be how to calculate that..
Let's take this as an example:
note("C D").bend("[0 0:2] 0"))
When the two patterns are joined, the pitchbend could be represented as keyframes relative to the hap duration:
[
{ note: "C", bend: [[0,0],[0,0.5],[2,1]] },
{ note: "D", bend: [[0,0],[0,1]] },
]
Here, bend is an array of keyframes, each holding two values: the pitch shift in semitones and the position relative to the hap duration (0 = onset, 1 = offset).
When the note is then triggered, those keyframes can be turned into scheduling calls, sth like:
// assume oscNode, hap, time and duration are defined
const getFreq = (repitch) => oscNode.frequency.value * Math.pow(2, repitch / 12);
const getTime = (progress) => time + progress * duration;
const f = oscNode.frequency;
const [first, ...rest] = hap.value.bend;
f.setValueAtTime(getFreq(first[0]), getTime(first[1])); // jump to the start value
const ramp = ([shift, progress]) => f.linearRampToValueAtTime(getFreq(shift), getTime(progress));
rest.forEach(ramp);
...creating the calls (shifts and relative times shown raw here; getFreq and getTime would turn them into an actual frequency and an absolute time):
oscNode.frequency.setValueAtTime(0, 0);
oscNode.frequency.linearRampToValueAtTime(0, 0.5);
oscNode.frequency.linearRampToValueAtTime(2, 1);
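To make those placeholder numbers concrete, here is a sketch that resolves the [shift, progress] keyframes of the "C" hap into actual (frequency, time) arguments, assuming C4 ≈ 261.63 Hz, a start time of 0 and a duration of 1 second (all example values, nothing taken from strudel):

```javascript
// Example values (assumed): C4 ≈ 261.63 Hz, note starts at 0, lasts 1 s.
const baseFreq = 261.63;
const time = 0;
const duration = 1;
const getFreq = (shift) => baseFreq * Math.pow(2, shift / 12);
const getTime = (progress) => time + progress * duration;

// the [shift, progress] keyframes of the "C" hap from above
const keyframes = [[0, 0], [0, 0.5], [2, 1]];
const calls = keyframes.map(([shift, progress]) => [getFreq(shift), getTime(progress)]);
// calls → [[261.63, 0], [261.63, 0.5], [~293.66, 1]]
// i.e. hold C4, then ramp up two semitones (≈ D4) over the second half
```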
... which looks good to me. Of course, the code is untested and conceptual, but I don't see a problem. Similar logic would have to be written for other types of audio nodes like AudioBufferSourceNode.
Btw this type of logic would also work for other params, like cutoff etc...
The next question would then be how to create this keyframe array from the two patterns... I will let that question simmer for now, (or let someone else think about it)
edit: maybe setValueAtTime could replace linearRampToValueAtTime when the value does not change
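That edit could be sketched like this: a hypothetical helper (toScheduleCalls is a made-up name) that emits call descriptors instead of touching a real AudioParam, picking setValueAtTime whenever a keyframe repeats the previous value:

```javascript
// Turn [value, time] keyframes into a list of scheduling-call
// descriptors: setValueAtTime when the value does not change,
// linearRampToValueAtTime when it does.
function toScheduleCalls(keyframes) {
  const calls = [];
  let prev;
  for (const [value, t] of keyframes) {
    if (calls.length === 0 || value === prev) {
      calls.push(['setValueAtTime', value, t]);
    } else {
      calls.push(['linearRampToValueAtTime', value, t]);
    }
    prev = value;
  }
  return calls;
}
// toScheduleCalls([[0, 0], [0, 0.5], [2, 1]])
// → setValueAtTime(0, 0), setValueAtTime(0, 0.5), linearRampToValueAtTime(2, 1)
```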
what if bend is just part of regular arithmetic??
note("c a f e").cutoff("1000".add("0:2000"))
with this, you could even create envelopes like that:
note("c a f e").cutoff("500".add.squeeze("0:1000"))
this could be crazy useful! if that's too much, there could also be a lerp method for each arithmetic operation:
note("c a f e").cutoff("500".addLerp.squeeze("0:1000"))
The representation would also need to change a bit... maybe
{ note: "c", cutoff: 500, lerp: { cutoff: [[500,0],[1500,1]] } }
{ note: "a", cutoff: 500, lerp: { cutoff: [[500,0],[1500,1]] } }
{ note: "f", cutoff: 500, lerp: { cutoff: [[500,0],[1500,1]] } }
{ note: "e", cutoff: 500, lerp: { cutoff: [[500,0],[1500,1]] } }
just thinking out loud...
edit: using squeeze would probably look more like this:
note("c a f e").set.squeeze(cutoff("500:1500"))
note("c a f e").cutoff("500").add.squeeze(cutoff("500:1500"))
Introducing highly varying parameters on the notes needs a complete reconsideration of how the inner loop actually works.
At the moment the note events (and all their properties: pitch, volume, duration, and so on) are "set and forget" (or at least I think you've told me so). With the introduction of fast-varying parameters, you somehow must implement a tight inner loop to control them for each playing channel that requires them.
For example, mod players "tick" at a precise frequency (usually 50 Hz); every channel that is under bend control is pitch-adjusted 50 times per second (i.e. every 20 ms). For obvious reasons, that interval should be a multiple of the global sound buffer time, so the updates can be done in the buffer callback function once every N times.
If there's no browser API equivalent, doing updates in pure JavaScript can be more performance-impacting overall.
A good article on the topic: https://www.a1k0n.net/2015/11/09/javascript-ft2-player.html
content warning: information overload :P
> With the introduction of fast-varying parameters, you somehow must implement a tight inner loop to control them for each playing channel that requires them.
If I understand correctly, I don't think this is needed when it's implemented with the Web Audio API. The methods setValueAtTime and linearRampToValueAtTime are standardized and take care of everything. They exist on each AudioParam to schedule values precisely in advance, either at sample rate (a-rate = 44.1 kHz) or sample-block rate (k-rate = 44.1 kHz / 128), depending on the parameter.
> If there's no browser API equivalent, doing updates in pure JS can be more performance-impacting overall.
So yes, there is a browser API for that.. in strudel, the actual JS scheduling runs at 20Hz (adjustable) and only calls the web audio scheduling methods. The JS is just a binding to the native web audio API implementation in the browser.
These APIs are already used in strudel, for example in the envelope. It should still be noted that the Web Audio API has its limits, for example the method cancelAndHoldAtTime is not implemented in Firefox, but it is indispensable for some scenarios.
> A good article on the topic: https://www.a1k0n.net/2015/11/09/javascript-ft2-player.html
While it certainly looks interesting (especially the idea), the article (from 2015) uses an already deprecated API (createScriptProcessor). Nowadays you'd normally use AudioWorkletProcessor, as it runs in a separate thread + you can use WASM to run your audio code (although you can still use js if you want). I haven't tested it, but I'd guess that the scheduling methods mentioned above are faster than what the article is doing.
TLDR; afaik, you either use the full Web Audio API with the (older) Tale of 2 Clocks approach to scheduling, or you only use AudioWorklet and implement everything (including the audio engine) inside a system programming language. The latter approach is certainly more powerful, but so far the former held pretty well for strudel.
The fact that we're using JS to calculate the events means we have to query in JS anyway, and that is also the performance bottleneck i think (at least right now). It would certainly be interesting to find a way to query patterns inside an AudioWorklet, though I am not sure if you can split the calculation over multiple blocks (with 128 samples at 44.1kHz, you only have 3ms for everything).
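For reference, the lookahead part of the "Tale of Two Clocks" approach boils down to a coarse JS timer (e.g. the 20 Hz scheduler mentioned above) repeatedly asking "which events start inside the next window?" and handing those to the sample-accurate web audio scheduler. The window query itself is pure and easy to sketch (shape assumed for illustration, not strudel's actual scheduler):

```javascript
// Return the events whose start time falls inside the lookahead
// window [now, now + lookahead). A real scheduler would call this on
// every timer tick and schedule the results via setValueAtTime etc.
function queryWindow(events, now, lookahead) {
  return events.filter((e) => e.time >= now && e.time < now + lookahead);
}
// e.g. with a 50 ms lookahead, only events starting within the next
// 50 ms are scheduled on each tick
```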
The article was just informative, not a guideline, and as you said, linearRampToValueAtTime does all that by itself, so it's even better from Strudel's point of view! I think we should leverage the generalized nature of ramp functions for every parameter we can abuse 😄 (filter, gain, pan...)
Would this be the equivalent of Tidal's smooth?
> Would this be the equivalent of Tidal's smooth?
it is similar, as smooth will turn a pattern of discrete numbers into a continuous lerp between them.
Having it would still not fully solve this issue, for example a pitchbend could look like:
// "C D".linearbend("[0 0:2] 0")
"C D".add("[0 [0 2]] 0".smooth())
...trying to express the first example of https://github.com/tidalcycles/strudel/issues/561#issuecomment-1536697012 .
The problem is that you still won't get a pitchbend like that, because the structure comes from the left, and the .add will just be applied to the onsets of each note.
We can test that right now, replacing the smoothed pattern with its equivalent*:
"C D".add(seq([0,saw.mul(2)],0)).note()
It can be made audible by adding .segment(16):
"C D".add(seq([0,saw.mul(2)],0)).note().segment(16)
although this creates the desired pitchbend, it also creates a bunch of onsets we don't want.
TLDR; i think smooth is not suited for this type of interpolation, as it doesn't allow specifying sudden jumps, e.g. seq(saw, 0) is not something you can express. smooth("[0 1] 0") would interpolate from 1 to 0, but we want a jump. A workaround could be to use a very steep curve: smooth("[0 [1@99 0]] 0"), but that's not something you'd want to write.
*not really equivalent, for the above reason
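For clarity, here is a sketch of the interpolation smooth is assumed to perform in this discussion: linear interpolation between successive values over equal-width segments of one cycle, wrapping at the end (which is exactly why sudden jumps can't be expressed):

```javascript
// Sample a "smoothed" discrete pattern at cycle position t (0..1).
// Assumption (for illustration only): n values split the cycle into n
// equal segments, each segment lerps to the next value, and the last
// segment wraps back to the first value.
function smoothSample(values, t) {
  const n = values.length;
  const pos = (t % 1) * n;           // which segment, and how far into it
  const i = Math.floor(pos) % n;
  const frac = pos - Math.floor(pos);
  const a = values[i];
  const b = values[(i + 1) % n];     // wraps around the cycle boundary
  return a + (b - a) * frac;
}
// smoothSample([0, 2], 0.25) → 1 (half-way through the ramp from 0 to 2)
```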
froos edit: I've renamed the issue because this is not only useful for pitchbends but also for interpolation between params in general. Make sure to read till the end :)
One could define a pattern to indicate an interpolation curve, then apply it to a delta in the pitch control of the sample. I know that the elements in the patterns have no memory of the preceding ones, but I suggest a simple approach:
In the case of linear interpolation we need two parameters: the ending pitch and the time needed to reach it, expressed in the element's temporal unit (it can be omitted if it is 1); the starting pitch is given by the note value. Other methods of interpolation may need new parameters.
For example, to slide from C to D, starting from the middle (so 2 semitones up):
"C D".linearbend("[0 2] 0")
Slide down from D to C, immediately, twice as fast:
"D C".linearbend("[-2 -2:0] 0")
Slide from E to F to G, stay a little, then back to F, without staccato:
"E".linearbend("[0 1 3 3:0 1]")
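A hypothetical sketch of this token format (parseBendToken is a made-up name): "pitch:time", where the time defaults to 1 when omitted, so "3:0" means "jump to +3 immediately" and "2" means "ramp to +2 over one temporal unit":

```javascript
// Parse a "pitch:time" bend token; the time component defaults to 1
// (one temporal unit) when omitted.
function parseBendToken(token) {
  const [pitch, time] = token.split(':').map(Number);
  return { pitch, time: time === undefined || Number.isNaN(time) ? 1 : time };
}
// parseBendToken("3:0") → { pitch: 3, time: 0 }
// parseBendToken("2")   → { pitch: 2, time: 1 }
```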