ZenBre4ker opened 1 year ago
memo: WRS-419
I started looking into this myself and found that OnAudioFilterRead is apparently called too late in the audio pipeline for spatialization to apply.
Others solved it by playing a clip filled with 1s and multiplying that with the filter data:
https://forum.unity.com/threads/onaudiofilterread-sound-spatialisation.362782/
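For context, the workaround from that thread looks roughly like this (a sketch under my own assumptions, not code from the package): play a dummy clip of constant 1s so the AudioSource's spatializer produces per-sample gain values, then multiply the WebRTC samples by those values inside OnAudioFilterRead.

```csharp
using UnityEngine;

// Sketch of the forum workaround (illustrative, not from com.unity.webrtc):
// a looping clip of constant 1s makes Unity compute spatialized gain, which
// arrives in OnAudioFilterRead as 'data'; multiplying the real WebRTC samples
// into 'data' applies that gain to them. _webrtcSamples is hypothetical and
// would be filled by whatever reads the native WebRTC sink.
[RequireComponent(typeof(AudioSource))]
public class OnesClipWorkaround : MonoBehaviour
{
    private float[] _webrtcSamples; // filled elsewhere by the WebRTC readout

    private void Start()
    {
        int sampleRate = AudioSettings.outputSampleRate;
        var ones = new float[sampleRate]; // one second of constant 1s
        for (int i = 0; i < ones.Length; i++) ones[i] = 1f;

        var clip = AudioClip.Create("ones", ones.Length, 1, sampleRate, false);
        clip.SetData(ones, 0);

        var source = GetComponent<AudioSource>();
        source.clip = clip;
        source.loop = true;
        source.Play();
    }

    private void OnAudioFilterRead(float[] data, int channels)
    {
        // 'data' holds the spatialized 1s, i.e. the per-sample gain.
        if (_webrtcSamples == null) return;
        for (int i = 0; i < data.Length && i < _webrtcSamples.Length; i++)
            data[i] *= _webrtcSamples[i];
    }
}
```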
Instead, I first tried to create a clip directly rather than using a filter, but somehow I can't manage to set the data the way the filter method does: https://github.com/Unity-Technologies/com.unity.webrtc/blob/main/Runtime/Scripts/AudioStreamTrack.cs#L165-L168
Manually writing a sine curve, as in the Unity example, works, but with the WebRTC method it fails or I just don't hear anything.
```csharp
private int sampleRate;
private int channels = 2;

private void AddClip(AudioSource source)
{
    sampleRate = AudioSettings.outputSampleRate;
    // 20-second looping streamed clip; Unity pulls samples via OnAudioRead
    _clip = AudioClip.Create("WebRTC-Stream", sampleRate * 20, channels, sampleRate, true, OnAudioRead, OnAudioSetPosition);
    source.clip = _clip;
    source.Play();
}

private void OnAudioRead(float[] data)
{
    // Same call the OnAudioFilterRead path uses to fill the buffer
    SetData(data, channels, sampleRate);
}
```
EDIT: I believe that NativeMethods.AudioTrackSinkProcessAudio(self, data, data.Length, channels, sampleRate);
is not able to handle regularly changing data lengths, as it rebuilds the buffer too fast to actually fill it here:
https://github.com/Unity-Technologies/com.unity.webrtc/blob/main/Plugin%7E/WebRTCPlugin/AudioTrackSinkAdapter.cpp#L82-L86
Setting the data length to a constant value actually makes it partly work (still not good, but you can hear parts of the original sound and apply stereo panning).
Using the most common length, 4096, as in NativeMethods.AudioTrackSinkProcessAudio(self, data, 4096, channels, sampleRate);
leads to a repeating, cut-off version of the original sound.
EDIT2: By storing everything that comes out of the WebRTC buffer in another float[] every time OnAudioFilterRead
is called, and instead creating a clip that iterates through that float[] at its own pace every time OnAudioRead
is called, I can perfectly recreate the source. So I guess going for a clip makes more sense here. The WebRTC buffer readout probably needs to be rewritten to stream directly into the clip instead of into the OnAudioFilterRead method.
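The decoupling described in EDIT2 could be sketched like this (my own illustrative code, not the package's implementation; BufferSamples stands in for wherever the native WebRTC readout delivers its samples):

```csharp
using UnityEngine;

// Sketch of the EDIT2 approach: the WebRTC readout deposits decoded samples
// into a ring buffer, and a streamed AudioClip drains that buffer at its own
// pace via OnAudioRead. Because the audible output now comes from a real clip
// on an AudioSource, panning and spatialization apply normally.
[RequireComponent(typeof(AudioSource))]
public class WebRtcClipStreamer : MonoBehaviour
{
    private const int Channels = 2;
    private float[] _ring;   // shared sample buffer between writer and reader
    private int _writePos;
    private int _readPos;

    private void Start()
    {
        int sampleRate = AudioSettings.outputSampleRate;
        _ring = new float[sampleRate * Channels * 2]; // ~2 seconds of headroom

        var source = GetComponent<AudioSource>();
        source.clip = AudioClip.Create("WebRTC-Stream",
            sampleRate * 20, Channels, sampleRate, true, OnAudioRead);
        source.loop = true;
        source.Play();
    }

    // Called from wherever the WebRTC data arrives (e.g. the existing
    // OnAudioFilterRead readout); it only buffers, it never plays directly.
    public void BufferSamples(float[] data)
    {
        for (int i = 0; i < data.Length; i++)
        {
            _ring[_writePos] = data[i];
            _writePos = (_writePos + 1) % _ring.Length;
        }
    }

    // Unity pulls samples for the streamed clip here, at its own pace.
    private void OnAudioRead(float[] data)
    {
        for (int i = 0; i < data.Length; i++)
        {
            data[i] = _ring[_readPos];
            _readPos = (_readPos + 1) % _ring.Length;
        }
    }
}
```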
@ZenBre4ker
Thank you for sharing the details.
Your suggestion is that we need to provide another method, instead of the OnAudioFilterRead method, to fill the audio streaming buffer. Right?
@karasusan Yeah, I believe that going through the OnAudioFilterRead method, without being able to control the timing, is the problem here. My suggestion would be to use AudioClips instead, take advantage of their ability to be streamed directly, and then play those AudioClips through an AudioSource.
@ZenBre4ker I got it. I added the task into the backlog.
Related issue #880
Package version
2.4.0-exp.11
Environment
Steps To Reproduce
Current Behavior
The volume stays the same on both sides.
Expected Behavior
The volume should only be audible on one side: left for -1 and right for +1.
Also note that the input AudioSource with a proper clip is pannable, and it also sends that information via WebRTC to the receiver.
Anything else?
No response