Yellow-Dog-Man / Resonite-Issues

Issue repository for Resonite.
https://resonite.com

Implement and integrate audio DSP system for ProtoFlux #567

Open Frooxius opened 7 months ago

Frooxius commented 7 months ago

Is your feature request related to a problem? Please describe.

Currently options for realtime audio processing and filtering are very limited in Resonite.

Describe the solution you'd like

ProtoFlux supports a number of different contexts. A partial basis implementation for a DSP context already exists, which allows high-performance, multi-threaded data processing.

A specific application of this system will be for processing audio streams, which can be used for generating, filtering and processing audio in realtime.

The system could also be tied into other parts - e.g. timelines/animations, sequences and so on, once those are implemented.

This will be able to be used for a variety of purposes, for example:

- Reactive audio processing in worlds (e.g. music gets low pass filter when underwater)
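For a rough sense of what block-based audio DSP of this kind looks like, here is a minimal sketch in Python/NumPy. It is purely illustrative: the sample rate, block size, and node functions are assumptions for the example, not Resonite or ProtoFlux API.

```python
# Illustrative only -- not the Resonite/ProtoFlux API.
# Shows the general shape of block-based DSP: audio arrives in fixed-size
# buffers and each "node" transforms one buffer at a time.
import numpy as np

SAMPLE_RATE = 48000
BLOCK_SIZE = 512

def sine_source(freq_hz, start_sample):
    """Generator node: produce one block of a sine wave."""
    t = (np.arange(BLOCK_SIZE) + start_sample) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq_hz * t).astype(np.float32)

def gain_node(block, gain):
    """Processor node: scale a block's amplitude."""
    return block * gain

# Process a short stream block by block, as a realtime DSP graph would.
output = []
for block_index in range(10):
    block = sine_source(440.0, block_index * BLOCK_SIZE)
    block = gain_node(block, 0.5)
    output.append(block)

stream = np.concatenate(output)  # 10 blocks of generated, processed audio
```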

Describe alternatives you've considered

N/A

Additional Context

No response

ProbablePrime commented 7 months ago

My bad!

Enverex commented 7 months ago

Steam Audio (which I believe we're using) also allows for things like reflections, object occlusion, etc. Will that capability be exposed as part of this?

Frooxius commented 7 months ago

@Enverex No, that's completely unrelated and separate. Steam Audio comes into play after audio is processed and then outputted into the world via AudioOutput components.

5H4D0W-X commented 7 months ago

Will this only apply to specific audio source references or could there be volumes that change all sounds, similar to reverb zones? The underwater low pass example would work even better if it could just muffle every audio source, although I understand this could be a security concern

Frooxius commented 7 months ago

You would likely be able to inject a filter for the audio that's being outputted and processed, so that would be fine.

Where do you see a security concern with this?
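As a purely illustrative sketch of the "underwater low pass" idea being discussed, the math an injected output-stage filter might perform could look like the following one-pole low-pass (Python/NumPy; the function names and parameters here are hypothetical, not Resonite API):

```python
# Hypothetical sketch of muffling outgoing audio with a one-pole low-pass.
import numpy as np

def one_pole_lowpass(block, cutoff_hz, sample_rate, state):
    """Filter one block; `state` carries the last output sample across
    block boundaries so the stream stays continuous."""
    # Coefficient from the standard one-pole RC approximation.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.empty_like(block)
    y = state
    for i, x in enumerate(block):
        y = y + alpha * (x - y)
        out[i] = y
    return out, y

# Usage: heavily muffle a block while "underwater".
state = 0.0
block = np.random.randn(512).astype(np.float32)  # stand-in for real audio
muffled, state = one_pole_lowpass(block, cutoff_hz=500.0,
                                  sample_rate=48000, state=state)
```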

5H4D0W-X commented 7 months ago

The problem I see is that people could make items or avatar features that completely block any user from hearing, by putting a volume around their head that distorts everything badly enough

gameboycjp commented 7 months ago

That sounds like an issue that should be dealt with by the instance owner, or moderation.

FlameSoulis commented 4 months ago

I'll update it there, but part of the discussion mentioned Steam Audio, which went open-source a few weeks ago. Not sure if this helps at all, but figured it'd be worth pointing out, since it's already part of the workflow.

It doesn't help directly, but maybe it opens the door to more ideas.

https://steamcommunity.com/games/596420/announcements/detail/7745698166044243233

Frooxius commented 4 months ago

That doesn't really help us. It's specifically audio spatialization, which happens late in the process, after DSP has already been done.

The open sourcing is cool, but it's not going to help us much in any way - it'd mostly be applicable if we needed to make modifications or fixes to how audio spatialization works.

JackTheFoxOtter commented 2 weeks ago

#48 was closed referencing this, but I couldn't find it mentioned in the description of this issue - will real-time FFT / Fourier analysis, for example to drive visual effects in a club world, be possible with / built upon this?

shiftyscales commented 2 weeks ago

Per the request in #48:

reliably and accurately extract specific frequencies

Per this issue:

A specific application of this system will be for processing audio streams, which can be used for generating, filtering and processing audio in realtime.

As given as an example in this issue:

Reactive audio processing in worlds (e.g. music gets low pass filter when underwater)

So yes- per my understanding, being able to select and modify certain frequencies is implicitly a part of this issue, @JackTheFoxOtter.

JackTheFoxOtter commented 2 weeks ago

That was not my question. What I'm asking about is basically getting a float value (the Fourier transform for a specific frequency) out of the audio data. So my goal isn't modifying the signal, but analyzing it and driving something based on that result. I assume this will be possible; I just asked for clarification.
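For clarity, the kind of analysis being asked about boils down to something like the following (Python/NumPy, purely illustrative; the function and parameters are hypothetical and not anything ProtoFlux exposes): take an FFT of an audio block and read back a single float for a frequency band.

```python
# Hypothetical sketch: derive one float (band energy) from an audio block
# to drive a visual effect.
import numpy as np

def band_level(block, sample_rate, low_hz, high_hz):
    """Average FFT magnitude of the bins between low_hz and high_hz."""
    window = np.hanning(len(block))               # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(block * window))
    freqs = np.fft.rfftfreq(len(block), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return float(spectrum[mask].mean()) if mask.any() else 0.0

# e.g. bass energy (20-120 Hz) of a 1024-sample block at 48 kHz
block = np.random.randn(1024).astype(np.float32)  # stand-in for real audio
bass = band_level(block, 48000, 20.0, 120.0)      # float to drive visuals
```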

shiftyscales commented 2 weeks ago

I am not certain what the specifics of the implementation will look like- but I'd also assume that if it's possible to modify/write the output, it'd also be possible to just read (or 'analyze') the input.