Closed nick-thompson closed 7 months ago
Hi!
I added some notes about this recently, since it is tricky. Firstly, yes: it's filled with 0s initially, and also after a `.reset()` call.
However, since (for both cases) you're reading ahead in the input, you should use the new `.seek()` method. In the t=0 case, you'd use this (after a `.reset()` if needed) to start `.inputLatency()` samples ahead in the input.
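A minimal sketch of that t=0 case. The `StretchStub` type here is a hypothetical stand-in (with a made-up latency value) so the snippet is self-contained; the real object would be the library's stretcher, whose `reset()`, `inputLatency()`, and `seek(inputs, inputSamples, playbackRate)` methods are assumed to behave as described above:

```cpp
#include <vector>

// Hypothetical stand-in for the real stretcher, so this sketch compiles on its own.
struct StretchStub {
    int inputLatency() const { return 2048; } // example value, not a real figure
    void reset() {}
    // seek(): consumes input to prime the stretcher, producing no output
    template <class Buffers>
    void seek(Buffers &inputs, int inputSamples, double playbackRate) {
        (void)inputs; (void)playbackRate;
        consumed += inputSamples;
    }
    int consumed = 0; // input samples fed in so far (for illustration only)
};

// Start playback at t=0: reset, then feed the first inputLatency() samples of
// the source into seek(), so the first process() call starts in the right place.
template <class Stretch>
int primeFromStart(Stretch &stretch, std::vector<std::vector<float>> &source,
                   double playbackRate) {
    stretch.reset();
    int preRoll = stretch.inputLatency();
    stretch.seek(source, preRoll, playbackRate);
    return preRoll; // next read position in the source, in samples
}
```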
In the other case (seeking to t=N), you can use that same method to jump to a new location: feed it a suitable amount of input which ends at position `t*sampleRate + stretch.inputLatency()`. If you want the output to stay continuous (no clicks etc.), just don't call `.reset()` in the second case.
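A small helper for that window calculation. Per the note above, the fed input should *end* at `t*sampleRate + inputLatency()`; feeding exactly `inputLatency()` samples is my assumption of a "suitable amount", and the clamp near the clip start is also an assumption:

```cpp
// Input window [begin, end) in source samples to feed into seek() when
// jumping to time targetSeconds.
struct SeekWindow { int begin; int end; };

SeekWindow seekWindowFor(double targetSeconds, double sampleRate, int inputLatency) {
    // The fed input should end at targetSeconds*sampleRate + inputLatency.
    int endSample = (int)(targetSeconds * sampleRate) + inputLatency;
    // Assumption: feed exactly inputLatency samples of lead-up.
    int beginSample = endSample - inputLatency;
    if (beginSample < 0) beginSample = 0; // clamp near the start of the clip
    return {beginSample, endSample};
}
```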
The `playbackRate` argument helps Stretch figure out what to do with transients in the first block. If you want to scrub around the input (calling `.seek()` again every time the mouse moves), you can pass in `0` for this. If you're going to resume normal playback after `.seek()`, you should use a playback rate that matches your initial time-stretch rate.
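That choice can be captured in a one-line helper (the name and the `stretchFactor` parameter are mine, standing in for whatever rate your plugin uses):

```cpp
// playbackRate to pass to seek(): 0 while scrubbing (repeated seeks as the
// mouse moves), otherwise the rate that normal playback will resume at.
double seekPlaybackRate(bool scrubbing, double stretchFactor) {
    return scrubbing ? 0.0 : stretchFactor;
}
```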
This is a new method, so if you hit any issues or bugs, please do get in touch! In fact, get in touch either way so I know whether it works. 😛
Discord or email are more reliable methods than GitHub; I don't check here very often.
Meant to follow up here after catching you briefly on Discord: thanks so much for your help! I've got it working perfectly now, really excited.
Hey @geraintluff,
We chatted a while ago on Discord about the latency calculations involved in using this library, and I want to follow up with a more concrete example because I’m still not quite sure I understand. Let me explain my use case and I’ll enumerate my understanding along the way. I’m hoping you can either confirm my understanding or show me where I have it wrong!
So I’m using this library in a VST3 plugin context running in a DAW. My plugin has an audio loop (e.g. a synth melody loop) loaded into memory, and aims to play the loop perfectly in time with the DAW. Because the loop might be recorded at a BPM that differs from the DAW’s current BPM, I’m using time stretching to accommodate.
Now, because we’re dealing with a DAW and an algorithm that introduces latency, we have to report our latency to the DAW. My understanding is that the latency I should report to the DAW should be a fixed number, and that that number is `stretch.outputLatency()`. Then there is the more dynamic component of the latency calculation, `stretch.inputLatency() * stretchFactor`, and my understanding is that this part of the latency calculation is something I can address manually using the pre-roll idea, to ensure my output is always in sync with the DAW.

So my first question is: is that understanding accurate? Then, assuming it is, there comes the follow-up question of how to actually handle that pre-roll. Here I see two cases:

1. Playback starts at time t=0.
2. Playback starts at some later time t=N.
To handle the first case, there’s nothing for me to pre-fill, because before time t=0 we have just silence. But here I’m not sure: do I need to push zeros into `stretch` to fill some internal buffer? Or can I assume those 0s are already in there internally, and my expected output time will follow `stretch.outputLatency()`
samples later?

To handle the second case, there is often sample data I can and should pre-fill, from some time t’ < N. To do this, I should pre-roll input data by reading `stretch.inputLatency() * stretchFactor` samples of my input data leading up to time t=N into `stretch`. I believe I can do this by calling `stretch.process` with that input data and with a `nullptr` for the output data, so that it’s clear I’m not asking for output from my pre-roll. Immediately after (i.e. synchronously), I should then feed my input data from time t=N onward into `stretch` and take the appropriate number of output samples to hand back to the DAW.

Does the above sound accurate as well? Thank you for your time! I appreciate your help, and again I want to say that I really appreciate your work on this library — it sounds fantastic.