Currently, there's one working combination of `flock.enviro` and `flock.audioSystem`, the latter of which is coupled to the Web Audio API.
In order to support easier use of Flocking as a modulation source in contexts like Aconite, there should be a Flocking environment that is intended to be manually driven, and which does not output to any audio backend.
For example, in Aconite, the frame rate clock will drive this new type of environment (by binding its `generate` invoker to the clock's `onTick` event), and separately, values can be relayed from any arbitrary `flock.synth.model` to the uniforms of an Aconite shader, without needing to manually bind each model synth to the clock separately.
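The shape of this could look something like the sketch below. Note that all names here (`ManualEnviro`, `gen`, `genFn`, and the ramp generator) are purely illustrative assumptions, not the actual Flocking or Aconite API; the point is only to show the flow of control: an external clock tick drives the environment, and model values are then relayed to shader uniforms with no audio backend involved.

```javascript
// Hedged sketch of a manually driven, backend-free environment.
// None of these names are real Flocking API; they stand in for
// flock.enviro, flock.synth.model, and the generate invoker.
class ManualEnviro {
    constructor() {
        this.synths = [];   // model synths registered with this environment
    }

    register(synth) {
        this.synths.push(synth);
    }

    // Intended to be bound to a clock's onTick event. Evaluates every
    // registered synth once; consumers read synth.model.value afterwards.
    gen() {
        for (const synth of this.synths) {
            synth.model.value = synth.genFn(synth.model.value);
        }
    }
}

// A stand-in for a flock.synth.model: a modulation source reduced
// to a single value-producing function (here, a simple ramp).
const lfo = {
    model: { value: 0 },
    genFn: (prev) => prev + 0.1
};

const enviro = new ManualEnviro();
enviro.register(lfo);

// Simulated frame clock: each tick drives the environment, then relays
// the latest model value to a shader uniform.
const uniforms = { brightness: 0 };
for (let frame = 0; frame < 10; frame++) {
    enviro.gen();                        // would be bound to onTick
    uniforms.brightness = lfo.model.value;
}
```

The key property is that the environment never schedules itself: evaluation happens only when something external (here, the frame loop; in Aconite, the frame rate clock) invokes `gen()`.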