mutationstudios opened 3 years ago
Side-chaining doesn't mean sending audio to multiple buses; it means sending the output of a bus to an effect on a different bus. That audio drives the effect, but isn't audible on that bus. For example, you could drive a compressor on the ambient-sounds bus with a gunshot, so the level of the ambient sounds is reduced during the gunshot. This creates room for the gunshot, making it appear louder than it is. In sound design this is known as "audio ducking". It is done either through an interface on the effects panel, or by adding a send output on the bus that will drive the effect. Below is an example of this UI inside Presonus Studio One's stock compressor; you turn on the blue "Sidechain" button and pick the track that will be used for side-chaining.
I just found out about this feature today because I was asked to implement something that uses it in a Unity project. Unity has a built-in Duck Volume audio effect, so I tried to find out whether Godot already has support for something like that as well (in case I need it in the future).
Here is the official Unity example: https://www.youtube.com/watch?v=UJYN4_jUIQs
It looks like side-chaining is already implemented; I just don't understand it well enough to test it right now. https://docs.godotengine.org/en/stable/tutorials/audio/audio_buses.html#compressor
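If I'm reading that page right, the compressor effect already exposes a sidechain property that takes the name of another bus to use as its key signal. Something like the GDScript below might set it up (untested; the "Music" and "SFX" bus names are just placeholders I'm assuming exist in the bus layout):

```gdscript
extends Node

# Sketch: put a compressor on the Music bus and key it from the SFX bus,
# so anything loud on SFX (e.g. a gunshot) ducks the music.
func _ready() -> void:
    var music_idx := AudioServer.get_bus_index("Music")
    var comp := AudioEffectCompressor.new()
    comp.sidechain = "SFX"   # use the SFX bus as the detection (key) signal
    comp.threshold = -20.0   # start compressing once the key signal passes -20 dB
    comp.ratio = 8.0         # heavy enough that the duck is clearly audible
    comp.attack_us = 20.0    # react quickly to transients like gunshots
    comp.release_ms = 250.0  # let the music come back up smoothly
    AudioServer.add_bus_effect(music_idx, comp)
```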
Describe the project you are working on
I'm working on a tower defense game.
Describe the problem or limitation you are having in your project
Not having side-chained compressors or EQs doesn't limit me; it just lowers the quality.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
Side-chaining is when you send an audio signal to another audio bus. In this case, it's so we can compress the original audio signal based on the secondary audio signal; this is known as "ducking".
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
As described above, there would be an option that allows audio signals to be routed between buses; I was thinking of something like how FL Studio handles it. The re-routing itself shouldn't take any processing power; the only part that should is the ducking driven by that signal. A rough mock-up is sketched below.
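As a mock-up only (the routing call below is hypothetical and does not exist in Godot's AudioServer today; the bus names are also made up), the routing could look something like this, with any compressor or EQ on the destination bus then able to pick the sent signal as its key input:

```gdscript
# Pseudo-code mock-up of the proposed routing. set_bus_send_to_sidechain()
# is a hypothetical API invented for illustration; it is NOT part of Godot.
func _setup_sidechain_routing() -> void:
    var sfx_idx := AudioServer.get_bus_index("SFX")
    var music_idx := AudioServer.get_bus_index("Music")
    # Route SFX into Music as a silent key signal only (not audible there),
    # similar to FL Studio's per-track sidechain sends.
    AudioServer.set_bus_send_to_sidechain(sfx_idx, music_idx)
```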
If this enhancement will not be used often, can it be worked around with a few lines of script?
I'm not sure.
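A rough script-only approximation might be possible by polling the SFX bus level every frame and pulling the Music bus volume down while it's loud. The sketch below assumes buses named "Music" and "SFX" exist; it's just volume automation driven by the peak level, not a real side-chained compressor:

```gdscript
extends Node

const DUCK_DB := -12.0      # how far to duck the music
const TRIGGER_DB := -30.0   # SFX peak level above which ducking kicks in
const SMOOTH_SPEED := 4.0   # how fast the music volume moves toward its target, per second

var _music_idx: int
var _sfx_idx: int
var _current_db := 0.0

func _ready() -> void:
    _music_idx = AudioServer.get_bus_index("Music")
    _sfx_idx = AudioServer.get_bus_index("SFX")

func _process(delta: float) -> void:
    var sfx_peak := AudioServer.get_bus_peak_volume_left_db(_sfx_idx, 0)
    var target_db := DUCK_DB if sfx_peak > TRIGGER_DB else 0.0
    _current_db = lerp(_current_db, target_db, clamp(SMOOTH_SPEED * delta, 0.0, 1.0))
    AudioServer.set_bus_volume_db(_music_idx, _current_db)
```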
Is there a reason why this should be core and not an add-on in the asset library?
It will vastly improve audio quality in games. Say you have background music playing and you fire a gun: you wouldn't necessarily want to make the bullet louder; you would keep it at the same level as the background music and cut the frequencies the gunshot occupies out of the background music, keeping everything level and clean and getting rid of muddiness. This is exactly how I mix, and how most major mixing engineers mix.