perchco / perchrtc

An iOS WebRTC demo using XirSys servers
MIT License

Can core audio influence audio input and output? #13

Open piofficetwo opened 9 years ago

piofficetwo commented 9 years ago

So far, it seems like trying to change the audio input and output using Core Audio does not work. For example, when attaching a render callback to the audio data about to be output and setting all of the samples to 0, the audio is still played back normally through the speakers.
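Roughly, the kind of render callback I am describing looks like the sketch below (`remoteIOUnit` is a stand-in for whatever playback audio unit WebRTC creates internally; I don't actually have a handle to it from the app, which may be part of the problem):

```swift
import AudioToolbox
import Foundation

// Post-render notify callback that zeroes every output buffer, which should
// (in theory) silence playback if it is attached to the right audio unit.
let silencingCallback: AURenderCallback = { _, ioActionFlags, _, _, _, ioData in
    guard ioActionFlags.pointee.contains(.unitRenderAction_PostRender),
          let bufferList = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(bufferList) {
        if let data = buffer.mData {
            memset(data, 0, Int(buffer.mDataByteSize))
        }
    }
    return noErr
}

// remoteIOUnit is hypothetical; the app never gets a reference to the
// audio unit owned by WebRTC's audio device, so this call is illustrative.
// AudioUnitAddRenderNotify(remoteIOUnit, silencingCallback, nil)
```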

So I am wondering: where exactly is the incoming audio and video data fed into the audio session for playback, and how would I be able to tap into that data?

Thanks!

ceaglest commented 9 years ago

@piofficetwo thanks for the question.

This project doesn't implement a custom iOS audio device, but it does override the AVAudioSession configuration to be better suited for video conferencing (see PHAudioSessionController). If you want access to realtime Core Audio I/O, you need to implement your own webrtc::AudioDeviceGeneric for iOS, or modify the existing implementation within the WebRTC source code. This would be really cool, as the default audio device is limited to 16 kHz / mono and uses AVAudioSessionModeVoiceChat rather than AVAudioSessionModeVideoChat.
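For reference, the session override itself is only a few lines. This is just a sketch of the idea, not the actual PHAudioSessionController code:

```swift
import AVFoundation

// Sketch of a video-chat oriented session override (not the project's
// PHAudioSessionController): play-and-record category, video-chat mode,
// and a higher preferred sample rate than the 16 kHz default device uses.
func configureSessionForVideoChat() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .videoChat,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setPreferredSampleRate(48_000)
    try session.setActive(true)
}
```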

piofficetwo commented 9 years ago

Thanks for your response!

I thought of something, though I'm not sure whether it will work: do you think it is possible to shut off audio/video and then use the data stream to send audio that gets fed into the Core Audio output? I am skeptical about whether it is even possible to access the Core Audio I/O even after I shut off audio or video.

Any input and advice really helps!

ceaglest commented 9 years ago

@piofficetwo If I understand correctly, you essentially want input monitoring? This is possible with Core Audio directly; the problem is that WebRTC's audio device implementation for iOS is pretty much the minimum required to get video conferencing working, and no more. That means it leaves out features most people wouldn't use, such as input monitoring.
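Outside of WebRTC, input monitoring on its own is straightforward. As a rough sketch (using AVAudioEngine rather than the raw Core Audio APIs discussed above), you can route the microphone directly to the output:

```swift
import AVFoundation

// Rough input-monitoring sketch, independent of WebRTC's audio device:
// connect the microphone input straight to the mixer so you hear yourself.
// Expect feedback unless you are wearing headphones.
let engine = AVAudioEngine()

func startInputMonitoring() throws {
    let input = engine.inputNode
    let format = input.inputFormat(forBus: 0)
    engine.connect(input, to: engine.mainMixerNode, format: format)
    try engine.start()
}
```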

I'm not planning on adding this particular feature any time soon, but I may go ahead and make some small modifications to the audio device implementation for future WebRTC builds (targeting the Chrome 45 branch at the moment).

Good luck on your quest. Though I haven't kept up with it recently, The Amazing Audio Engine uses Core Audio directly and supports input monitoring, if you're looking for pointers.

zevarito commented 9 years ago

@ceaglest can you explain to me how the audio is routed from mediaStream.audioTracks to actually being played back? Thanks!

ceaglest commented 9 years ago

@zevarito I don't know everything in between, but local audio is captured and remote audio is reproduced in the iOS Audio Device. The local audio track's enabled property allows you as an application developer to control what is published: either your captured audio or muted silence.
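In application code that boils down to toggling a single property on the local track. A sketch is below; depending on the WebRTC revision and language bindings, the property is exposed as enabled or isEnabled:

```swift
// localAudioTrack is assumed to be the RTCAudioTrack added to the published
// stream. Disabling it keeps the track in place but publishes muted silence.
localAudioTrack.isEnabled = false   // peers receive silence
localAudioTrack.isEnabled = true    // peers receive captured audio again
```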

There is also the matter of encoding, decoding, and mixing audio which I don't claim to be an expert in. :)

zevarito commented 9 years ago

@ceaglest Thanks for your answer; the piece of code you just pointed to is helpful to me.