tidalcycles / strudel

Web-based environment for live coding algorithmic patterns, incorporating a faithful port of TidalCycles to JavaScript
https://strudel.cc/
GNU Affero General Public License v3.0

Param Interpolation, useful for pitch bends, envelopes etc #561

Open · eefano opened this issue 1 year ago

eefano commented 1 year ago

froos edit: I've renamed the issue because this is not only useful for pitchbends but also for interpolation between params in general. Make sure to read till the end :)

One could define a pattern to indicate an interpolation curve, then apply it to a delta in the pitch control of the sample. I know that the elements in the patterns have no memory of the preceding ones, but I suggest a simple approach:

In the case of linear interpolation we need two parameters: the ending pitch and the time needed to reach it, expressed in the element's temporal unit (it can be omitted if it is 1); the starting pitch can be taken from the note value. Other methods of interpolation may need additional parameters.

For example, to slide from C to D, starting from the middle (so 2 semitones up): `"C D".linearbend("[0 2] 0")`
Slide down from D to C, immediately, twice as fast: `"D C".linearbend("[-2 -2:0] 0")`
Slide from E to F to G, stay a little, then back to F, without staccato: `"E".linearbend("[0 1 3 3:0 1]")`

felixroos commented 1 year ago

that sounds like a good feature.. I am not yet sure what the ":" operator does in your examples, also not sure what staccato has to do with it. Generally it sounds kind of doable, we would probably need the smooth function for that to work. I also wonder what happens if the patterns do not line up, like `"C D".linearbend("[0 2 0]")`.

eefano commented 1 year ago

I have just noticed that my choice of parameters cannot handle all the cases; in fact, the third example is wrong and cannot be expressed with my system.

By using the starting delta as the 1st parameter and the ending delta as the 2nd one (defaulting it to the first one), we can express the examples like so:

Slide from C to D, starting half-way: `"C D".linearbend("[0 0:2] 0")`
Slide down from D to C, immediately, in half the time: `"D C".linearbend("[0:-2 -2] 0")`
Slide from E to F, to G, stay a little, then back to F, with a single note event E: `"E".linearbend("[0:1 1:3 3 3:1]")`

If the patterns do not line up: `"C D".linearbend("[0 0:2 0]")` (see attached image)

felixroos commented 1 year ago

ok I think I got it! so if the second param is not used, then it's just a relative repitch? So the next question would be how to calculate that..

Let's take this as an example:

note("C D").bend("[0 0:2] 0"))

When the two patterns are joined, the pitchbend could be represented as keyframes relative to the hap duration:

[
{ note: "C", bend: [[0,0],[0,0.5],[2,1]] },
{ note: "D", bend: [[0,0],[0,1]] },
]

Here, bend is an array of keyframes, each with two values:

  1. repitch
  2. normalized time (0-1 relative to hap duration)

When the note is then triggered, those keyframes can be turned into scheduling calls, sth like:

// assume oscNode, time, duration, and hap are defined
const getFreq = (repitch) => oscNode.frequency.value * Math.pow(2, repitch / 12);
const getTime = (progress) => time + progress * duration;
const f = oscNode.frequency;
// hap.value.bend is the keyframe array, e.g. [[0,0],[0,0.5],[2,1]]
const [first, ...rest] = hap.value.bend;
f.setValueAtTime(getFreq(first[0]), getTime(first[1])); // jump to the start value
const ramp = ([repitch, progress]) => f.linearRampToValueAtTime(getFreq(repitch), getTime(progress));
rest.forEach(ramp); // then ramp through the remaining keyframes

...creating the calls:

// assuming time = 0 and duration = 1, with values shown as plain repitch amounts for readability
oscNode.frequency.setValueAtTime(0, 0);
oscNode.frequency.linearRampToValueAtTime(+0, +0.5);
oscNode.frequency.linearRampToValueAtTime(2, 1);

... which looks good to me. Of course, the code is untested and conceptual, but I don't see a problem. Similar logic would have to be written for other types of audio nodes like AudioBufferSourceNode.
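As a rough, untested sketch of what that could look like for an AudioBufferSourceNode: since it has no `frequency`, one could ramp its `playbackRate` AudioParam instead. The `source`, `time`, `duration` and `bend` variables below are assumed to exist, with `bend` being the same `[[repitch, progress], ...]` keyframe array as above.

```js
// Sketch (untested): apply the bend keyframes to a sample player.
const rate = source.playbackRate;              // an AudioParam
const baseRate = rate.value;
const toRate = (repitch) => baseRate * Math.pow(2, repitch / 12);
const toTime = (progress) => time + progress * duration;

const [first, ...rest] = bend;
rate.setValueAtTime(toRate(first[0]), toTime(first[1]));
rest.forEach(([repitch, progress]) =>
  rate.linearRampToValueAtTime(toRate(repitch), toTime(progress)),
);
```

Alternatively, the `detune` AudioParam (in cents) could be ramped with `repitch * 100`, leaving `playbackRate` untouched.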

Btw this type of logic would also work for other params, like cutoff etc...

The next question would then be how to create this keyframe array from the two patterns... I will let that question simmer for now, (or let someone else think about it)

edit: maybe setValueAtTime could replace linearRampToValueAtTime when the value does not change
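A minimal sketch of that edit, assuming a generic `param` plus a `keyframes` array and `toValue`/`toTime` helpers analogous to `getFreq`/`getTime` above (all names hypothetical):

```js
// Sketch: use setValueAtTime instead of a ramp when the value doesn't change.
let [prevValue, firstProgress] = keyframes[0];
param.setValueAtTime(toValue(prevValue), toTime(firstProgress));
for (const [value, progress] of keyframes.slice(1)) {
  if (value === prevValue) {
    param.setValueAtTime(toValue(value), toTime(progress));
  } else {
    param.linearRampToValueAtTime(toValue(value), toTime(progress));
  }
  prevValue = value;
}
```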

felixroos commented 1 year ago

> Btw this type of logic would also work for other params, like cutoff etc...

what if bend is just part of regular arithmetic??

note("c a f e").cutoff("1000".add("0:2000"))

with this, you could even create envelopes like that:

note("c a f e").cutoff("500".add.squeeze("0:1000"))

this could be crazy useful! if that's too much, there could also be a lerp method for each arithmetic operation:

note("c a f e").cutoff("500".addLerp.squeeze("0:1000"))

The representation would also need to change a bit... maybe

{ note: "c", cutoff: 500, lerp: { cutoff: [[500,0],[1500,1]] } }
{ note: "a", cutoff: 500, lerp: { cutoff: [[500,0],[1500,1]] } }
{ note: "f", cutoff: 500, lerp: { cutoff: [[500,0],[1500,1]] } }
{ note: "e", cutoff: 500, lerp: { cutoff: [[500,0],[1500,1]] } }

just thinking out loud...

edit: using squeeze would probably look more like this:

note("c a f e").set.squeeze(cutoff("500:1500"))
note("c a f e").cutoff("500").add.squeeze(cutoff("500:1500"))
eefano commented 1 year ago

Introducing highly varying parameters on the notes needs a complete reconsideration of how the inner loop actually works.

At the moment the note events (and all their properties: pitch, volume, duration, and so on) are "set and forget" (or at least I think you've told me so). With the introduction of fast-varying parameters, you somehow must implement a tight inner loop to control them for each playing channel that requires them.

For example, mod players "tick" at a precise frequency (usually 50 Hz); every channel that is under bend control is pitch-adjusted 50 times per second (or every 20ms). For obvious reasons, that interval should be a multiple of the global sound buffer time, so the updates can be done in the buffer callback function once every N callbacks.
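A conceptual sketch of that tracker-style approach (not how strudel works; `channels`, `bendActive` and `bendStep` are made-up names):

```js
// Conceptual sketch of a 50Hz tick loop driven from the audio callback.
const sampleRate = 44100;
const ticksPerSecond = 50;                           // classic mod-player rate
const samplesPerTick = sampleRate / ticksPerSecond;  // 882 samples ≈ 20ms
let samplesUntilTick = 0;

function onAudioBlock(blockSize) {
  samplesUntilTick -= blockSize;
  while (samplesUntilTick <= 0) {
    samplesUntilTick += samplesPerTick;
    for (const ch of channels) {                     // hypothetical channel list
      if (ch.bendActive) ch.pitch += ch.bendStep;    // adjust pitch once per tick
    }
  }
}
```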

If there's no browser API equivalent, doing the updates in pure JS can have a bigger performance impact overall.

A good article on the topic: https://www.a1k0n.net/2015/11/09/javascript-ft2-player.html

felixroos commented 1 year ago

content warning: information overload :P

> With the introduction of fast-varying parameters, you somehow must implement a tight inner loop to control them for each playing channel that requires them.

If I understand correctly, I don't think this is needed when it's implemented with the Web Audio API. The methods setValueAtTime and linearRampToValueAtTime are standardized and take care of everything.. They exist on each AudioParam to precisely schedule in advance, either at sample rate (a-rate = 44.1kHz) or sample-block rate (k-rate = 44.1kHz/128), depending on the parameter.
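For reference, a minimal (untested) example of those two methods scheduling a glide entirely in advance, with no JS loop involved afterwards:

```js
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.connect(ctx.destination);

const t = ctx.currentTime;
osc.frequency.setValueAtTime(220, t);                // start at 220 Hz
osc.frequency.linearRampToValueAtTime(440, t + 1);   // glide to 440 Hz over 1s
osc.start(t);
osc.stop(t + 1.5);
```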

> If there's no browser API equivalent, doing the updates in pure JS can have a bigger performance impact overall.

So yes, there is a browser API for that.. in strudel, the actual JS scheduling runs at 20Hz (adjustable) and only calls the web audio scheduling methods. The JS is just a binding to the native web audio API implementation in the browser.

These APIs are already used in strudel, for example in the envelope. It should still be noted that the Web Audio API has its limits, for example the method cancelAndHoldAtTime is not implemented in Firefox, but it is indispensable for some scenarios.
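A possible workaround sketch for that Firefox gap (an approximation only, since reading `param.value` from the main thread is not sample-accurate):

```js
// Hold a param at its current value when cancelAndHoldAtTime is missing.
function cancelAndHold(param, t) {
  if (typeof param.cancelAndHoldAtTime === "function") {
    param.cancelAndHoldAtTime(t);
  } else {
    const current = param.value;        // best-effort snapshot of the value
    param.cancelScheduledValues(t);     // drop pending automation
    param.setValueAtTime(current, t);   // freeze at the snapshot
  }
}
```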

> A good article on the topic: https://www.a1k0n.net/2015/11/09/javascript-ft2-player.html

While it certainly looks interesting (especially the idea), the article (from 2015) uses an already deprecated API (createScriptProcessor). Nowadays you'd normally use AudioWorkletProcessor, as it runs in a separate thread + you can use WASM to run your audio code (although you can still use js if you want). I haven't tested it, but I'd guess that the scheduling methods mentioned above are faster than what the article is doing.
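For comparison, a minimal AudioWorkletProcessor skeleton (the names `BendProcessor`, `bend-processor` and `processor.js` are placeholders, not anything strudel uses):

```js
// processor.js — runs on the audio thread, 128 samples per process() call
class BendProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const output = outputs[0];
    for (const channel of output) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = 0; // DSP would go here
      }
    }
    return true; // keep the processor alive
  }
}
registerProcessor("bend-processor", BendProcessor);

// main thread:
// await ctx.audioWorklet.addModule("processor.js");
// new AudioWorkletNode(ctx, "bend-processor").connect(ctx.destination);
```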

TLDR; afaik, you either use the full Web Audio API with the (older) Tale of 2 Clocks approach to scheduling, or you only use AudioWorklet and implement everything (including the audio engine) in a systems programming language. The latter approach is certainly more powerful, but so far the former has held up pretty well for strudel.

The fact that we're using JS to calculate the events means we have to query in JS anyway, and that is also the performance bottleneck I think (at least right now). It would certainly be interesting to find a way to query patterns inside an AudioWorklet, though I am not sure if you can split the calculation over multiple blocks (with 128 samples at 44.1kHz, you only have 3ms for everything).
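A rough sketch of the "Tale of 2 Clocks" style lookahead scheduling mentioned above, with `queryEvents` as a hypothetical stand-in for whatever produces the events (strudel's actual scheduler differs):

```js
const ctx = new AudioContext();
const interval = 0.05;  // coarse JS timer, ~20Hz
const lookahead = 0.1;  // schedule 100ms ahead on the audio clock
let cursor = ctx.currentTime;

setInterval(() => {
  const until = ctx.currentTime + lookahead;
  for (const e of queryEvents(cursor, until)) {   // hypothetical event source
    const osc = ctx.createOscillator();
    osc.frequency.setValueAtTime(e.freq, e.time); // sample-accurate scheduling
    osc.connect(ctx.destination);
    osc.start(e.time);
    osc.stop(e.time + e.duration);
  }
  cursor = until;
}, interval * 1000);
```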

eefano commented 1 year ago

The article was just informative, not a guideline, and as you said, linearRampToValueAtTime does that all by itself, so it's even better from Strudel's point of view! I think we should leverage the generalized nature of ramp functions, for every parameter we can abuse 😄 (filter, gain, pan...)

jarmitage commented 1 year ago

Would this be the equivalent of Tidal's smooth?

https://tidalcycles.org/docs/reference/oscillators/#smooth

felixroos commented 1 year ago

> Would this be the equivalent of Tidal's smooth?
>
> https://tidalcycles.org/docs/reference/oscillators/#smooth

it is similar, as smooth will turn a pattern of discrete numbers into a continuous lerp between them. Having it would still not fully solve this issue, for example a pitchbend could look like:

// "C D".linearbend("[0 0:2] 0")
"C D".add("[0 [0 2]] 0".smooth())

...trying to express the first example of https://github.com/tidalcycles/strudel/issues/561#issuecomment-1536697012 . The problem is that you still won't get a pitchbend like that, because the structure comes from the left, and the .add will just be applied to the onsets of each note.

We can test that right now, replacing the smoothed pattern with its equivalent*:

"C D".add(seq([0,saw.mul(2)],0)).note()

repl

It can be made audible by adding .segment(16):

"C D".add(seq([0,saw.mul(2)],0)).note().segment(16)

repl

although this creates the desired pitchbend, it also creates a bunch of onsets we don't want.

CW: long chain of thought

It might work using a hypothetical alignment method `lerp`:

```js
"C D".add.lerp("[0 [0 2]] 0".smooth().segment(8)).note()
```

this could result in

```js
[[48,0],[48,.25],[48,.5],[49,.75],[50,1]]
[[50,0],[50,.25],[50,.5],[50,.75],[50,1]]
```

which could be simplified to

```js
[[48,0],[48,.5],[50,1]]
50
```

it could work by using each hap value of the inner pattern as an array item of the outer pattern: above, the outer pattern is `"C D"` => `[48, 50]` and the inner pattern is `"[0 [0 2]] 0".smooth().segment(8)` => `[0, 0, 0, 1, 0, 0, 0, 0]`, resulting in: `[[48+0, 48+0, 48+0, 48+1], [50+0, 50+0, 50+0, 50+0]]` = `[[48, 48, 48, 49], [50, 50, 50, 50]]`

the above result still misses the final value of the first hap, which would be 50, or 2 in the inner pattern. To obtain it, some mechanism would be needed that also queries the value at the offset of a hap.. Not sure how to do that..

Generally, if the above would somehow work, it could allow solving the issue without the ":" syntax. The downside is that it's much more complicated to write.. There might be a shorthand for this, like:

```js
"C D".add.smooth("[0 [0 2]] 0").note()
```

this wouldn't require a segment, and would just take the values of the inner pattern as a lerp.. here, the inner pattern is `[0 [0 2]] 0` => `[0, 0, 2, 0]` (values), `[0, 1/4, 3/8, 1/2]` (onset times). For each onset, the outer pattern `C D` could be queried to get the values `[48, 48, 48, 50]`. The `add.smooth` could then calculate: `[[0+48, 0], [0+48, 1/4], [2+48, 3/8], [0+50, 1/2]]` = `[[48, 0], [48, 1/4], [50, 3/8], [50, 1/2]]`

which is almost correct, but now the `2` is applied at the onset, so the pitchbend would be too quick, and it potentially might slide down too early.. The difference again comes down to onset vs offset values / where you sample the lerped pattern. afaik signals in strudel / tidal are currently sampled at the midpoint, which is why `console.log(saw.segment(2).firstCycleValues)` outputs `[0.25,0.75]`.

The question is, *when* are the numbers `"[0 [0 2]] 0"` applied? To match our original example, the `2` would need to be reached at the end of the hap. In contrast, the last `0` is expected to be applied at the beginning of the hap. This hints at the fact that the pattern `"[0 [0 2]] 0"` contains too little information to convey what we'd want, which somehow underpins the need for an operator to help here, because `"[0 [0:2]] 0"` has enough information, given that the first item is the value at the beginning of the hap, and the second (if given) is the value at the end.

TLDR; I think smooth is not suited for this type of interpolation, as it doesn't allow specifying sudden jumps, e.g. `seq(saw,0)` is not something you can express. `smooth("[0 1] 0")` would interpolate from 1 to 0, but we want a jump. A workaround could be to use a very steep curve: `smooth("[0 [1@99 0]] 0")`, but that's not something you'd want to write.

*not really equivalent, for the above reason

felixroos commented 4 months ago

related: https://github.com/tidalcycles/strudel/discussions/1118