Open DevAlone opened 1 year ago
Thanks for the comment, and sorry for the terribly belated answer.
A couple of thoughts:
Right now, CLEESE doesn't work in real time (more precisely, the PhaseVocoder Engine assumes a fixed duration for the sounds it processes, dividing it into, e.g., n windows from 0 to duration), so its integration as a VST plugin does not appear easy. As the lab is currently looking into real-time experimental paradigms (e.g. real-time vocal feedback), we plan to develop a new version of CLEESE based on a real-time API (i.e. one able to manipulate audio or video buffers instead of files), but this will not be worked on in the immediate future.
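To make the distinction concrete, here is a rough sketch (illustrative names only, not CLEESE's actual API) of why a fixed-duration engine doesn't translate directly into a real-time plugin: file-based processing can pre-split the known duration into n windows, whereas a VST host hands the effect small buffers with no known end point.

```python
# Hypothetical sketch -- function and class names are illustrative,
# NOT CLEESE's actual API.

def process_file(samples, sr, n_windows, transform):
    """File-based processing: total duration is known up front, so the
    engine can split the signal into n windows from 0 to duration."""
    win_len = len(samples) // n_windows
    out = []
    for i in range(n_windows):
        chunk = samples[i * win_len:(i + 1) * win_len]
        out.extend(transform(chunk))
    return out

class StreamingEngine:
    """Buffer-based processing: handles fixed-size buffers as they
    arrive, never knowing the total duration -- what a VST host needs."""
    def __init__(self, transform):
        self.transform = transform

    def process_buffer(self, buf):
        return self.transform(buf)

# Toy transform: halve the gain of each sample.
gain = lambda chunk: [s * 0.5 for s in chunk]

samples = [1.0] * 8
offline = process_file(samples, sr=8, n_windows=4, transform=gain)

engine = StreamingEngine(gain)
streamed = engine.process_buffer([1.0, 1.0])
```

For a stateless effect like gain the two paths agree, but stateful transformations (pitch shifting with a phase vocoder, time stretching) need the windowing plan up front, which is exactly what a streaming API cannot provide.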
If the idea is to combine the existing CLEESE transformations (e.g. those available in the PhaseVocoder Engine) with transformations available in other software (e.g. random reverb), it is always possible to add these transformations as plugins in CLEESE using the Engine API (this requires a Python wrapper around the third-party effect). Right now CLEESE includes two such Engines (PhaseVocoder for pitch/stretch/eq/gain transformations of sounds, and Mediapipe for landmark transformation of face images), but further Engines are planned and we'd welcome external contributions too.
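As a sketch of what such a wrapper could look like (the `Engine` interface and names below are illustrative assumptions, not CLEESE's actual Engine API), the idea is simply to expose the third-party effect behind a uniform `process` method:

```python
# Hypothetical sketch of wrapping a third-party effect as an engine plugin.
# The Engine interface and all names here are illustrative, NOT CLEESE's API.

class Engine:
    """Minimal engine interface: transform an in-memory buffer."""
    def process(self, data):
        raise NotImplementedError

def third_party_reverb(data, wet=0.3):
    # Stand-in for an external effect (e.g. a reverb from another library):
    # here we just mix each sample with a 2-sample delayed copy of itself.
    delayed = [0.0, 0.0] + data[:-2]
    return [(1 - wet) * d + wet * e for d, e in zip(data, delayed)]

class ReverbEngine(Engine):
    """Python wrapper exposing the third-party effect as an engine."""
    def __init__(self, wet=0.3):
        self.wet = wet

    def process(self, data):
        return third_party_reverb(data, wet=self.wet)

engine = ReverbEngine(wet=0.5)
out = engine.process([1.0, 0.0, 0.0, 0.0])
```

The wrapper pattern keeps the third-party code isolated behind one method, so the rest of the toolchain can chain it with the built-in transformations without knowing anything about the external library.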
Hope this helps, JJ
I know it sounds like a whole product based on a library, but it would be super useful for quickly creating different scientific experiments. You would be able to add it to any preferred DAW, like Ableton, FL Studio, or GarageBand (which is free for MacBook users), and use it in combination with other plugins. This would allow the following:
It may also benefit music creators.