Unity-Technologies / com.unity.webrtc

WebRTC package for Unity

[BUG]: Output AudioSource not stereo pannable #834

Open ZenBre4ker opened 1 year ago

ZenBre4ker commented 1 year ago

Package version

2.4.0-exp.11

Environment

* OS:Windows 10
* Unity version: Unity 2021.3

Steps To Reproduce

  1. Start the WebRTC Audio Sample Scene
  2. Push the "Start"-button and then the "Call"-button
  3. Now go to the hierarchy and select the "Audiosource - Output"
  4. Apply -1 or +1 to "Stereo Pan"

Current Behavior

The volume stays the same on both sides.

Expected Behavior

The sound should only be audible on one side: left for -1 and right for +1.

Also note that the input AudioSource with a proper clip is pannable and also sends that panning information via WebRTC to the receiver.

Anything else?

No response

karasusan commented 1 year ago

memo: WRS-419

ZenBre4ker commented 1 year ago

I started looking into it myself and found that OnAudioFilterRead is apparently called too late in the audio pipeline for panning to apply. Others solved this by playing a clip filled with 1s and multiplying it with the data: https://forum.unity.com/threads/onaudiofilterread-sound-spatialisation.362782/

Instead, I first tried to create a streaming clip directly rather than using a filter, but somehow I can't manage to set the data the way the filter method does: https://github.com/Unity-Technologies/com.unity.webrtc/blob/main/Runtime/Scripts/AudioStreamTrack.cs#L165-L168

Manually writing a sine curve as in the example works, but with the WebRTC data it fails, or I just don't hear anything.

```csharp
private AudioClip _clip;
private int sampleRate;
private int channels = 2;

private void AddClip(AudioSource source)
{
    sampleRate = AudioSettings.outputSampleRate;
    // Streaming clip: Unity invokes OnAudioRead whenever it needs more samples.
    _clip = AudioClip.Create("WebRTC-Stream", sampleRate * 20, channels, sampleRate, true, OnAudioRead, OnAudioSetPosition);
    source.clip = _clip;
    source.Play();
}

private void OnAudioRead(float[] data)
{
    // Fill the requested buffer from the WebRTC track sink.
    SetData(data, channels, sampleRate);
}
```

EDIT: I believe that `NativeMethods.AudioTrackSinkProcessAudio(self, data, data.Length, channels, sampleRate);` is not able to handle regularly changing data lengths, as it rebuilds the buffer too fast to actually fill it here: https://github.com/Unity-Technologies/com.unity.webrtc/blob/main/Plugin%7E/WebRTCPlugin/AudioTrackSinkAdapter.cpp#L82-L86

Setting the length argument to a constant value actually makes it partly work (still not good, but you can hear parts of the original sound and apply stereo panning). Using the most common length, 4096, in `NativeMethods.AudioTrackSinkProcessAudio(self, data, 4096, channels, sampleRate);` leads to a repeating, cut-off version of the original sound.

EDIT2: By storing everything that comes out of the WebRTC buffer in another float[] every time OnAudioFilterRead is called, and instead creating a clip that iterates through that float[] at its own pace every time OnAudioRead is called, I can perfectly recreate the source. So I guess going for a clip makes more sense here. The WebRTC buffer readout probably needs to be rewritten to stream directly into the clip instead of into the OnAudioFilterRead method.

karasusan commented 1 year ago

@ZenBre4ker Thank you for sharing the details. Your suggestion is that we need to provide another method, instead of the OnAudioFilterRead method, to fill the audio streaming buffer. Right?

ZenBre4ker commented 1 year ago

@karasusan Yeah, I believe that going through the OnAudioFilterRead method, without any control over when it is invoked, is the problem here. My suggestion would be to use AudioClips instead, stream the data into them directly, and then play those AudioClips through an AudioSource.

karasusan commented 1 year ago

@ZenBre4ker Got it. I added the task to the backlog.

karasusan commented 1 year ago

Related issue #880