I second this. At the moment I haven't found a solution for dynamic music other than going the FMOD or Wwise route, which makes porting to all platforms questionable.
I didn't try it, but maybe this could be implemented with the new custom mixing capabilities? Like, in a callback to output audio frames, the game sums all the layers' samples.
It would be really useful, e.g. for rhythm games and effects like mickey-mousing. From a scripting API perspective, what I think is needed are two ingredients: a way to schedule a stream to start playing at an exact time on the audio clock, and a way to query the current time of that audio clock.
(Note: The second part of the above API is AFAIK already implemented in Godot.)
An example of an API that already does this would be Unity3D's AudioSource.PlayScheduled function and its AudioSettings.dspTime. Combining those two APIs makes the gameplay in the following 1-minute video possible.
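For reference, the second ingredient (querying the audio clock) can already be approximated in Godot today; here's a rough sketch, where MusicPlayer is a placeholder name for whatever AudioStreamPlayer holds the music:

func _process(_delta):
    # Best estimate of what the listener is hearing right now:
    # playback position, plus time elapsed since the last audio mix,
    # minus the reported output latency.
    var song_time: float = ($MusicPlayer.get_playback_position()
            + AudioServer.get_time_since_last_mix()
            - AudioServer.get_output_latency())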
@nhydock As a workaround you could autostart all the tracks you need (muted) and slowly change the volume on the tracks you want to crossfade between. We actually used this in our DungeonTracks game because we had the same desyncing problem.
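For the curious, a minimal sketch of that mute-and-crossfade workaround, assuming the layers are AudioStreamPlayer children of a placeholder Layers node:

func _ready():
    # Start every layer in the same frame so they share a playback position.
    for player in $Layers.get_children():
        player.volume_db = -80.0  # effectively silent
        player.play()

func crossfade_to(target: AudioStreamPlayer, duration := 1.0) -> void:
    # Fade the chosen layer up and every other layer down.
    for player in $Layers.get_children():
        var target_db := 0.0 if player == target else -80.0
        create_tween().tween_property(player, "volume_db", target_db, duration)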
Edit: It seems that the company that made the Unity plugin set its showcase video to private. What it showed was a top-down shooter game where the enemy animations and the player's shots (visual effects and audio samples) were in sync with the beat of the music.
This would be fantastic for a project I'm working on. Right now I'm basically doing this:
for music_track in music_tracks:
    music_track.play()
...but sometimes these are slightly unsynchronised.
Right now my plan is to create a module that adds a 'LayeredAudioStream' node that behaves like a normal AudioStreamPlayer node, except it would support multiple AudioStream resources, each with an independent volume.
I too need something like this, so I can switch between different audio files that have exactly the same BPM but where one has extra melodies on top (like Super Mario World, where the bongo track starts playing when Mario jumps on Yoshi).
What might work for me is if I could create a multitrack audio file (e.g. with 4 tracks: left and right for the base song, and another left and right for the extra melody on top), then mute or fade in specific tracks within that one audio file.
Might be worth mentioning that with a fair bit of legwork, AudioStreamGenerator can be used to mix WAVs 100% accurately - by, er, doing the mixing yourself. Here's an example:
# here, we presume these are both signed 16-bit stereo wavs of equal length:
var wav_a := preload("res://a.wav")
var wav_b := preload("res://b.wav")
var wav_size := wav_a.data.size()

var playback: AudioStreamGeneratorPlayback
var offset := 0

func _ready():
    var player := AudioStreamPlayer.new()
    var generator := AudioStreamGenerator.new()
    player.stream = generator
    add_child(player)
    player.play()
    playback = player.get_stream_playback()

func _process(_delta):
    # Fill as many frames as the generator has room for; note that
    # summing two full-scale signals can clip past the +/-1.0 range.
    for _i in playback.get_frames_available():
        playback.push_frame(Vector2(
            wav_a.data.decode_s16(offset) + wav_b.data.decode_s16(offset),
            wav_a.data.decode_s16(offset + 2) + wav_b.data.decode_s16(offset + 2)
        ) / 32768.0)
        offset = (offset + 4) % wav_size
This isn't trivial for non-PCM formats, though; there's no easy way of loading the decoded PCM from an Ogg or MP3 into a PackedByteArray this way, so all your music will have to be WAVs or similar. This is also pretty inefficient - a quote from the docs:
Note: Due to performance constraints, this class is best used from C# or from a compiled language via GDExtension. If you still want to use this class from GDScript, consider using a lower mix_rate such as 11,025 Hz or 22,050 Hz.
Correct me if I'm wrong, but the new AudioStreamSynchronized fulfills this proposal, correct?
Yep, the interactive music support does achieve what this proposal was for. I think it's good to close this proposal finally, either now or once 4.3 stable is released.
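For anyone landing here later, a minimal sketch of that 4.3 approach (the file paths and the -60 dB "muted" level are just placeholders):

var sync_stream := AudioStreamSynchronized.new()

func _ready():
    sync_stream.stream_count = 2
    sync_stream.set_sync_stream(0, load("res://music/base.ogg"))
    sync_stream.set_sync_stream(1, load("res://music/extra_melody.ogg"))
    sync_stream.set_sync_stream_volume(1, -60.0)  # extra layer starts silent

    var player := AudioStreamPlayer.new()
    player.stream = sync_stream
    add_child(player)
    player.play()

func add_extra_layer():
    # Both streams have been playing in sync the whole time,
    # so raising the volume brings the layer in on beat.
    sync_stream.set_sync_stream_volume(1, 0.0)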
Describe the project you are working on: A game with dynamic audio based on gameplay context.
Describe how this feature / enhancement will help your project: The audio in the game involves multiple tracks which are variations of one another. Transitioning between tracks should be seamless and in sync, as they are layered. In my game all the tracks I need to switch between are the same length and BPM, but this shouldn't be a requirement. Without accurate syncing, tracks that need to play simultaneously and be heard together have ended up desynced, causing extreme dissonance as the beats drift completely off.
This kind of feature is also incredibly useful for rhythm games that keep instrument tracks as separate streams, e.g. Frets On Fire or Rock Band.
Show a mock-up screenshot/video or a flow diagram explaining how your proposal will work:
Describe implementation detail for your proposal (in code), if possible: Using a manager node, allow all child audio streams within it to play simultaneously, with their positions synced across them. Position and isPlaying should be controlled by the manager instead of by each stream. Looping should still be managed by the child streams, with the sync position relative to the longest stream.
Through scripts, developers can adjust things such as fading between tracks if they only wish one to be heard at a time. A rough sketch of such a manager follows below.
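To illustrate, here is one way the manager could look as a script (SyncedStreamManager is a hypothetical name, not an existing engine class):

class_name SyncedStreamManager
extends Node

const MAX_DRIFT := 0.05  # seconds of drift tolerated before re-syncing

func play_all() -> void:
    for child in get_children():
        if child is AudioStreamPlayer:
            child.play()

func _process(_delta):
    # Crude re-sync: pull any drifting player back to the first one.
    # (A real implementation would do this on the mixing thread to
    # avoid audible seek glitches.)
    var players := get_children().filter(
            func(c): return c is AudioStreamPlayer and c.playing)
    if players.size() < 2:
        return
    var reference_pos: float = players[0].get_playback_position()
    for player in players.slice(1):
        if abs(player.get_playback_position() - reference_pos) > MAX_DRIFT:
            player.seek(reference_pos)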
If this enhancement will not be used often, can it be worked around with a few lines of script?: Not presently, without causing audio to jitter. Audio runs on a thread separate from the game loop, so attempting to modulate and keep tracks in sync from scripts is not ideal.
Is there a reason why this should be core and not an add-on in the asset library?: Core at the present time does not expose enough low-level detail about audio streams, nor the ability to keep them in sync. Most audio features are better suited to tight integration with the engine; however, if the APIs were improved to expose more details, this could be a feature better managed in an asset such as Godot-Mixing-Deck.