moarzumanov closed this issue 6 years ago.
Hello,
The example has been updated to use 2.0.0-preview7
which went live earlier today.
https://www.twilio.com/docs/api/video/changelogs/ios#200-preview7-november-9-2017
At this point we consider the feature work for AudioSink to be complete. One potential improvement we've imagined would be to allow the developer to determine the frequency of the renderSample: callbacks. Only time, and your feedback, will tell us if this is worthwhile to pursue.
Elsewhere on the audio front, we are considering the ability to provide your own custom Audio Device. This would allow you to control the mechanism by which audio playback and recording occurs in a Room. We think this would help Twilio Video satisfy more product use cases, especially those that fall outside of traditional microphone / speaker audio routes. Expect to hear more about this in the future.
Regards, Chris Eagleston
Hey @ceaglest, is there currently a way to change the sample rate of the AudioDevice?
Hi @Gabriel-Lewis,
The AudioDevice prefers that AVAudioSession operates at 48 kHz (and in mono for echo cancellation purposes). However, attaching a bluetooth headset or other full duplex playback/recording device might cause a different sample rate to be chosen by AVAudioSession at runtime. When this occurs the device will react to the change.
You might also be able to override the sample rate yourself by using our CallKit audio APIs for manual control and then changing the preferred sample rate of the AVAudioSession yourself. We are actively working on improving our APIs in this area, so expect some changes in time for the next 2.0-preview release.
Best, Chris Eagleston
Why can't I get access to the same raw audio buffers within the C# SDK?
@unicomp21 Can you provide more details about your development setup? (Currently, the Video SDK does not support Xamarin officially.)
The intention would be a server side receiving a WebRTC connection and grabbing the raw buffers out of the audio stream. The other end of the WebRTC connection would be a JavaScript browser client, which runs in any iOS/Android/desktop browser.
Thanks @unicomp21. Can you provide the following information: Are you using Xamarin, or the C# helper library for the REST APIs? Are you using Twilio's JavaScript Video SDK or the iOS Video SDK in your app?
Using our client SDKs (both iOS and JavaScript), you should be able to retrieve the audio buffers. Here are a few example techniques for the JavaScript SDK:
- In DOM contexts, the Web Audio APIs give access to audio samples (and the Canvas APIs to video frames).
- Another technique: https://github.com/mappum/electron-webrtc
Please let me know if you have any questions.
JavaScript Video SDK for the client.
On the server-side end of the connection, preferably in .NET Core, I need access to the raw audio buffers.
The JavaScript client: can I use it with Node.js on the server side?
Hi @unicomp21,
Why can't I get access to the same raw audio buffers within the C# SDK?
We only have a C# Helper Library for interacting with the REST API. We do not have a Video SDK for C#.
On the server side end of the connection, preferably in .net core, I need access to the raw audio buffers.
You can possibly get creative using twilio-video.js and something like electron-webrtc. For example, your .NET server could launch a separate process running electron-webrtc and twilio-video.js. This separate process could manipulate/handle the incoming audio/video as necessary (probably using the Web Audio APIs). Any communication back to your .NET server could be done via IPC.
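To make the IPC leg concrete, here is a minimal sketch of how the electron-webrtc/twilio-video.js process might package raw audio samples into a JSON-safe message for the .NET server. The helpers (`encodeAudioMessage`, `decodeAudioMessage`) and the message shape (`type`/`sampleRate`/`samples` fields) are our own invention for illustration, not part of twilio-video.js or electron-webrtc:

```javascript
// Hypothetical helpers for shuttling raw PCM audio over a JSON IPC channel.
// The message shape here is an assumption, not a Twilio API.

// Encode a Float32Array of PCM samples as a JSON string.
function encodeAudioMessage(sampleRate, samples) {
  return JSON.stringify({
    type: 'audio',
    sampleRate,
    // Base64-encode the raw bytes so they survive JSON transport.
    samples: Buffer.from(samples.buffer, samples.byteOffset, samples.byteLength)
      .toString('base64'),
  })
}

// Decode the message back into a Float32Array on the receiving side.
function decodeAudioMessage(message) {
  const { sampleRate, samples } = JSON.parse(message)
  const bytes = Buffer.from(samples, 'base64')
  return {
    sampleRate,
    // Copy into a fresh ArrayBuffer so the Float32Array view is aligned.
    samples: new Float32Array(
      bytes.buffer.slice(bytes.byteOffset, bytes.byteOffset + bytes.byteLength)
    ),
  }
}
```

The same encode/decode pair works whether the transport is `process.send` between Node processes or stdout piped to the .NET host.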
The JavaScript client: can I use it with Node.js on the server side?
You could, but it would take a lot of tweaking. It's not supported out-of-the-box. The main challenge is making the WebRTC APIs available. There are projects like electron-webrtc and node-webrtc, but they're pretty experimental.
We are also considering publishing a C++ SDK. Would this help you?
-Mark
I'm looking for any WebRTC client which I can use within a server-side process and get access to the raw buffers for the audio stream. I don't care about the language or framework. Whatever is available, I'll use it. The key part is getting real-time access to the raw audio buffers.
Yes, absolutely, a C++ SDK would be very helpful.
@ptankTwilio the electron-webrtc thing just clicked; I think this could work. My fear, though, is the overhead of the user/kernel context switch now required for all the I/O. Especially with all the security patches that just went out due to Intel vulnerabilities (i.e. Meltdown, Spectre), this will get much worse. Is anyone aware of a container concept, or other mechanism, which can eliminate this overhead?
@markandrus after further research, I think a C++ SDK (i.e. an RTCPeerConnection equivalent) which allows access to the raw audio buffers would be ideal.
@ptankTwilio for the life of me, I can't find a Web Audio example which populates an AudioBuffer from the MediaStream of an RTCPeerConnection. Any recommendation on where to look?
@unicomp21 in twilio-video.js, the AudioTrack and VideoTrack classes have a mediaStreamTrack property. To pipe this into the Web Audio APIs, you'll want to:

1. Create an AudioContext, e.g.

   ```javascript
   const audioContext = new AudioContext()
   ```

2. Wrap the MediaStreamTrack in a MediaStream, e.g.

   ```javascript
   // Given some twilio-video.js AudioTrack...
   const audioTrack = /* ... */
   const mediaStream = new MediaStream() // Create a MediaStream
   mediaStream.addTrack(audioTrack.mediaStreamTrack) // And add the MediaStreamTrack to it
   ```

3. Create a MediaStreamAudioSourceNode from the MediaStream with createMediaStreamSource, e.g.

   ```javascript
   const sourceNode = audioContext.createMediaStreamSource(mediaStream)
   ```

4. Create an AnalyserNode with createAnalyser, e.g.

   ```javascript
   const analyserNode = audioContext.createAnalyser()
   ```

5. Connect the MediaStreamAudioSourceNode to the AnalyserNode, e.g.

   ```javascript
   sourceNode.connect(analyserNode)
   ```

6. Use one of getFloatFrequencyData, getByteFrequencyData, getByteTimeDomainData, or getFloatTimeDomainData to get data out of the AnalyserNode. These APIs write into a typed array such as a Uint8Array or Float32Array.

For a full example where these APIs are used to visualize audio, check out this example from MDN.
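As a concrete illustration of that last step: once getByteTimeDomainData has filled a Uint8Array (where each sample is an unsigned byte centered at 128, i.e. 128 means silence), you can do ordinary math on the samples. The `rmsFromByteTimeDomain` helper below is our own sketch of a simple volume meter, not a twilio-video.js or Web Audio API:

```javascript
// Compute a 0..1 RMS level from byte time-domain data, where each sample
// is an unsigned byte centered at 128 (128 = silence).
function rmsFromByteTimeDomain(data) {
  let sumOfSquares = 0
  for (let i = 0; i < data.length; i++) {
    const normalized = (data[i] - 128) / 128 // map 0..255 to roughly -1..1
    sumOfSquares += normalized * normalized
  }
  return Math.sqrt(sumOfSquares / data.length)
}

// In the browser you would feed it live data from the AnalyserNode:
//   const data = new Uint8Array(analyserNode.fftSize)
//   analyserNode.getByteTimeDomainData(data)
//   const level = rmsFromByteTimeDomain(data)
```

The same pattern applies to the float variants; only the normalization step changes, since getFloatTimeDomainData already yields samples in the -1..1 range.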
Thanks @markandrus!
Within Firefox and Chromium, running headless on the server side, is there any limit on the number of RTCPeerConnections one can create?
Is there any way to get an AVAssetTrack from a TVIAudioTrack?