jarmitage opened 1 month ago
should something like this work?
```python
from signalflow import *

graph = AudioGraph()
audio_buf = Buffer(audio_path)
player = BufferPlayer(audio_buf, loop=True)
clock = Impulse(1.0)
divided_clock = ClockDivider(clock, graph.sample_rate)
counter = Counter(clock=divided_clock, min=0, max=audio_buf.num_frames)
graph.play(player)

co = counter.output_buffer[0][0]
nf = audio_buf.num_frames
conf = co / nf  # percentage way through the buffer
```
If you're looking to get the current read position of a BufferPlayer, you can use the under-utilised (and under-documented) `get_property` method, which will return the read position in seconds:

```python
>>> player.get_property("position")
15.184000015258789
```
Is this what you were looking to do? As an aside, a couple of other issues with your hypothetical code sample above:

- If you want `sample_rate` triggers per second, you would want a clock multiplier, not a divider (although `ClockMultiplier` does not actually exist, yet)
- Even then, a clock running at the sample rate would only generate `sample_rate / 2` triggers per second, because a trigger is defined as a zero-crossing (i.e., a transition from `<= 0` to `> 0`)

Ok cool - does `get_property("position")` work for all Nodes or just BufferPlayer? E.g. if I want the value of the last sample of an oscillator.
Read position is good, but then we have to use that to manually fetch the value associated with that position. Would be great if this was available via another property. Or not?
Named properties can be defined for each node class. `position` is a property that BufferPlayer specifically defines. And yes, you would then need to translate that position to an offset in samples and query the buffer's `data` property to get the value of the sample under the read head.
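That translation is only a couple of lines. A hedged sketch, assuming the buffer exposes `sample_rate`, `num_frames`, and per-channel sample data as described above; the numpy array here is just a stand-in for real buffer contents:

```python
import numpy as np

# Stand-in for a 2-second mono buffer (in SignalFlow this would be
# audio_buf.data[channel], with audio_buf.sample_rate / num_frames).
sample_rate = 48000
num_frames = sample_rate * 2
data = np.sin(2 * np.pi * 440 * np.arange(num_frames) / sample_rate)

# e.g. position_seconds = player.get_property("position")
position_seconds = 1.5

# Convert seconds -> frame index, clamped to the buffer length.
frame_index = min(int(position_seconds * sample_rate), num_frames - 1)

sample_value = data[frame_index]    # value under the read head
fraction = frame_index / num_frames # percentage way through the buffer
```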
Yes, it could be nice to have `current_position` and `current_sample` as properties of BufferPlayer. One caveat is that these would obviously only update once per processing block, meaning every ~5ms (= 188fps) with a 256-sample buffer size at 48kHz, down to every ~43ms (= 23fps) with a 2048-sample buffer. This is probably fine for most purposes.
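The block-rate figures above are easy to check. A quick sketch of the arithmetic, assuming a 48kHz sample rate:

```python
# How often a once-per-block property can update at 48 kHz,
# for the two buffer sizes mentioned above.
sample_rate = 48000
for block_size in (256, 2048):
    interval_ms = 1000 * block_size / sample_rate
    fps = sample_rate / block_size
    print(f"{block_size}-sample blocks: {interval_ms:.1f} ms per update ({fps:.1f} fps)")
```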
Ok so in practice `output_buffer[0][0]` is essentially the finest resolution Python can access for now?
Yes, exactly. And this is the same as any audio system running on a non-real-time OS - samples are passed to the audio thread in blocks, and the control thread has no insight into what sample is currently being rendered at any resolution finer than one block. You could do something with interpolation on the control side if you did want granularity better than that.
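A minimal sketch of that control-side interpolation, assuming only that you can poll the block-rate position (e.g. `player.get_property("position")`, wrapped here in a stand-in callable) and that playback runs at normal 1x speed; the class and names are illustrative, not part of SignalFlow:

```python
import time

class PlayheadEstimator:
    """Extrapolate a block-rate read position between polls using the wall clock."""

    def __init__(self, get_position):
        # get_position: e.g. lambda: player.get_property("position")
        self.get_position = get_position
        self.last_position = get_position()
        self.last_time = time.monotonic()

    def update(self):
        """Record a fresh block-rate reading (call once per poll)."""
        self.last_position = self.get_position()
        self.last_time = time.monotonic()

    def estimate(self):
        """Estimated position now, assuming playback at normal (1x) speed."""
        return self.last_position + (time.monotonic() - self.last_time)
```

You could call `update()` at whatever poll rate is convenient and `estimate()` once per frame of the UI, which sidesteps the one-block quantisation for display purposes.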
Say I wanted to draw a moving playhead on top of a buffer waveform visualisation, how would I do that?