SamiPerttu / fundsp

Library for audio processing and synthesis
Apache License 2.0

Architecture for mixing multiple Waves #32

Closed: bschwind closed this issue 1 year ago

bschwind commented 1 year ago

Hi, I recently discovered this library. I haven't really used a synthesizer before, so it was exciting to write some code and have sounds come out of my speakers :) I'm also impressed by the "DSL" you created while avoiding macros, nice work!

I'm working on a small project at the moment where, among other audio generators, I want to load and play samples from WAV files at arbitrary times, letting them "overlap" and mix together, up to some maximum number of concurrent plays.

I think this is similar to playing sound effects in a video game: a particular action triggers a sound, and if you repeat the action quickly you hear the sound multiple times, perhaps getting louder or higher pitched as the many similar samples overlap.

Here's an example of what I'm going for.

At first I thought I could use the WavePlayer audio node, creating them on the fly as new sound effect "requests" come in, but I wasn't sure how to clean up a node once its sample has finished, so I figured that would eventually blow up memory usage. As far as I can tell, there isn't a way to determine when a WavePlayer is done producing samples if you don't loop it.

My next thought was to create and destroy Nets as these requests come and go, but that also doesn't seem like the right path.

Finally, I'm thinking of creating my own struct and implementing AudioNode for it. The struct could hold however many Waves I want in an arbitrarily sized vector, and the tick function would iterate through each one and grab the next sample. I could then also detect when each sound is done, because I'd have access to the state that tracks where in the sound we are.

To request a new sound to be played, I would (ab)use the Setting associated type and the listen() function to send it new sound requests over time. My main hesitation here is that AudioNode doesn't feel like something I should be implementing myself, especially with the const ID: u64 I have to specify.

Am I on the right path here, or is this straying outside the intended usage of fundsp?

Thanks!

SamiPerttu commented 1 year ago

Hi!

This is a job for Sequencer. I'm in the process of fixing it up. You can now divide it into a frontend and a backend. The backend renders audio while the frontend is used to add new events. In the future it will also support fading out existing events.

To clean up past events in a real-time situation, you can instantiate the Sequencer with retain_past_events set to false. You can use the push_relative method of Sequencer to add new events relative to real time. For finer-grained control, you can add a Timer to the backend graph to keep track of backend time.
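
Roughly, a minimal sketch of that flow might look like the following. This assumes a recent fundsp version: older releases name the type Sequencer64, and the exact push_relative arguments (in particular the Fade shape) have changed between versions, so check the docs for the version you are on. The sine here is just a placeholder for whatever unit you want to play, e.g. a WavePlayer for a loaded sample.

```rust
use fundsp::hacker::*;

fn main() {
    // Sketch only: constructor and method signatures vary across fundsp versions.
    // retain_past_events = false means finished events are dropped automatically,
    // which keeps memory bounded in a real-time situation. One output channel here.
    let mut sequencer = Sequencer::new(false, 1);

    // The backend renders the audio; move it to the audio thread and process it
    // like any other unit. The frontend (`sequencer`) stays on the control side.
    let mut backend = sequencer.backend();

    // When a new sound "request" arrives, push an event relative to the current
    // render time: start now, end after 2 seconds, with short fades to avoid clicks.
    // A WavePlayer over the loaded sample would replace the placeholder sine.
    sequencer.push_relative(
        0.0,                            // start time (seconds, relative to now)
        2.0,                            // end time
        Fade::Smooth,                   // fade shape (newer versions only)
        0.01,                           // fade-in duration
        0.01,                           // fade-out duration
        Box::new(sine_hz(440.0) * 0.2), // unit to play (placeholder)
    );

    // On the audio thread the backend is processed as usual; rendering a few
    // samples here just shows that it produces output.
    let mut output = [0.0; 64];
    for sample in output.iter_mut() {
        *sample = backend.get_mono();
    }
}
```

The frontend/backend split is what makes this usable from a control thread: events pushed on the frontend are delivered to the backend for rendering, and with retain_past_events disabled, finished events are discarded, so memory stays bounded no matter how many one-shot samples you trigger.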

bschwind commented 1 year ago

"This is a job for Sequencer"

Oh wow, I completely missed this in the docs! It looks like it's still a work in progress, but this seems like the right way to handle what I'm trying to do. Thanks for the response :)