Open yaxu opened 3 years ago
The process function is meant to be called at regular intervals, typically based on the audio block size. So if you don't have audio yourself, just set up a thread and try to call process at regular intervals (the more precise the better; remember that a "sleep" function does a sleep-at-least while "select"+fd does a sleep-at-most, so the second option is preferred if possible).
I think around 5ms should be ok for the timing interval. Report that to hylia too of course, as it needs to adjust time based on this value.
Corrected above, "sleep" function does a sleep-at-least and a "select+fd" does a sleep-at-most.
Tidal calculates events at 20Hz (I think that's every 50ms), would that work? What would I put for the 'frames' parameter then?
Would it be better to take this kind of approach instead when audio is involved: https://github.com/Ableton/link/blob/master/examples/linkaudio/AudioPlatform_Dummy.hpp#L62
i.e. just ask what beatTime etc is 'now' according to link's steady clock?
You are free to try to do some changes and submit PR.
50ms could work, just try it :) The number of frames will then depend on the sample rate, but you don't have that either since there is no audio, so some little math will be needed. The latency value is directly correlated to the buffer-size/frames. Just assume a 48kHz rate and do the same calculations as in mod-host, a bit in reverse, since you have the frequency and want to get the buffer size.
It looks like non-audio apps are supposed to use captureAppSessionState and commitAppSessionState. This is how SuperCollider does things in its non-audio client process: https://github.com/supercollider/supercollider/blob/develop/lang/LangPrimSource/SC_LinkClock.cpp
I'm not a c++ person but can try to have a go at a PR with the needed calls.
Hi, I'm trying to get my head around how to use Hylia with TidalCycles (aka Tidal). I think I see an assumption in hylia that local time is calculated from sample time: https://github.com/falkTX/Hylia/blob/master/hylia.cpp#L84 . Tidal doesn't make sound itself, so it doesn't have access to sample time; it only outputs Open Sound Control bundles according to the system clock time, usually sent to SuperCollider for synthesis.
Am I right then that for processes that aren't tied to a sample clock, an alternative version of (or option for) process is needed that accepts a hostTime as input rather than elapsed frames?