Closed: jagged91 closed this issue 4 years ago.
@jgadsby Our Video SDK provides APIs where you could write your own custom video renderer by implementing TVIVideoRenderer and receive video frames from a track. In future, we plan to provide APIs where you can edit video frames.
Let me know if you have any questions.
@piyushtank Thanks! Is there any example code that shows using TVIVideoRenderer in this way? I know there used to be an example for screen recording but I'm not aware of how to use TVIVideoRenderer to access video frames / the camera track itself.
@jgadsby Unfortunately, there is no way to get frames out of TVIVideoView at present. TVIVideoView implements TVIVideoRenderer. With our current set of APIs, you will have to implement your own custom video renderer to render the video. We are working on sample code that demonstrates the use of the TVIVideoRenderer APIs; see this work-in-progress pull request.
Let me know if you have any questions.
Thanks for the follow up. It sounds like we’re dependent on the example being ready before following up with you on how to actually access / intercept the camera frames. Is that correct?
@jgadsby The PR does not implement your use case; it demonstrates how to use the TVIVideoRenderer APIs. You can look at the PR to see how to implement your own custom video renderer.
@piyushtank Thanks for clarifying. Just to confirm: we would have to implement our own video renderer that accesses the camera within that renderer class, correct?
@jgadsby The Video Renderer APIs will provide you with video frames for both the local video track and remote video tracks. You can use your camera as the video source for your local video track.
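A hedged sketch of that wiring, using stand-in types that only mirror the SDK's pattern of adding a renderer to a track (in the real SDK this is roughly `TVIVideoTrack.addRenderer(_:)`; verify the exact names against the current release):

```swift
import Foundation

// Stand-ins mirroring the SDK pattern: both local and remote video tracks
// accept a renderer, so one renderer class can receive frames from either.
protocol VideoRenderer: AnyObject {}

class VideoTrack {
    private(set) var renderers: [VideoRenderer] = []
    func addRenderer(_ renderer: VideoRenderer) {
        renderers.append(renderer)
    }
}

final class LocalVideoTrack: VideoTrack {}   // fed by the camera source
final class RemoteVideoTrack: VideoTrack {}  // fed by the decoded remote stream

final class MyRenderer: VideoRenderer {}

let renderer = MyRenderer()
let local = LocalVideoTrack()
let remote = RemoteVideoTrack()
local.addRenderer(renderer)
remote.addRenderer(renderer)
```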
hi @piyushtank, did you make any progress on exposing video frames from the VideoView so that they can be modified and/or processed prior to rendering? I looked at the renderer PR, but I am not sure how complete it is or how reliable that code is as a basis for implementing something like this. Can you advise whether ExampleSampleBufferView is the example we should be looking at in that PR?
@Horatiu We haven't had a chance to complete the Video Renderer example https://github.com/twilio/video-quickstart-ios/pull/286, as our team is busy with other priorities for the Video SDK. If you want to edit frames before they are rendered on screen, you will have to write your own custom video renderer by implementing the TVIVideoRenderer protocol. Yes, you can look at ExampleSampleBufferView in https://github.com/twilio/video-quickstart-ios/pull/286 to see how to implement TVIVideoRenderer.
Let me know if you have any questions.
Great, thanks for your reply.
@piyushtank Hi again, sorry to bother you on this thread, but I cannot find the answer to my question anywhere, and I am pretty sure you are the right person to be asking. Here it goes:
I am using your iOS SDK to start a live video chat; the next thing I would like to do is use the AVDepthData map to remove or blur the background behind the subject on compatible devices.
Would something like that be available with your latest SDK?
Or is there an existing feature in your TVIVideoView to remove/replace the background of a subject with a different image/texture?
OR, could I use the face tracking that ARKit provides to approximate and track where the subject is, and blur based on that, in a Twilio (TVIVideoView) live video chat?
Thank you so much for giving this some attention and sorry that I am posting in the wrong place.
@Horatiu The Video SDK renderer just renders images on screen; while it does let you edit an image before rendering it, that alone will not satisfy your use case.
Unfortunately, at present, our TVICameraSource does not provide image filtering features or APIs for AVDepthData, but we plan to open up image filtering APIs early next year, and I will try to add AVDepthData APIs as well. Thanks for bringing this to our attention.
For now, with our current set of APIs, you will have to write your own custom video source using AVCaptureSession to achieve your use case.
For an example of how to write your own custom video source, see this. For another example of writing a video source, see this.
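As a sketch of the custom-source shape: the `VideoSink`/`VideoSource` protocols below are hypothetical stand-ins that only mirror the SDK's source/sink pairing, and the AVCaptureSession plumbing is elided. In a real implementation, the capture delegate callback (an `AVCaptureVideoDataOutputSampleBufferDelegate` method) would process each sample buffer, for example blurring the background using depth data, before forwarding it:

```swift
import Foundation

// Hypothetical stand-ins for the SDK's sink/source pair; the real types
// come from the TwilioVideo framework.
struct CapturedFrame {
    let timestamp: TimeInterval
}

protocol VideoSink: AnyObject {
    func onFrame(_ frame: CapturedFrame)
}

protocol VideoSource: AnyObject {
    var sink: VideoSink? { get set }
}

// A custom source would own an AVCaptureSession and receive sample buffers
// from its capture delegate; each frame can be filtered (background blur,
// depth masking) before being handed to the sink for encoding/transport.
final class CustomCameraSource: VideoSource {
    weak var sink: VideoSink?

    // Stand-in for the capture delegate callback.
    func captureDidOutput(_ frame: CapturedFrame) {
        // ... apply any per-frame processing here ...
        sink?.onFrame(frame)
    }
}
```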
I am sorry I do not have a better answer for you right now. Our team was very busy last year improving the stability of the SDK, but we plan to work on supporting advanced features in 2020.
@piyushtank Thank you for your response.
I imagined this was not yet supported. It would be a nice addition, or at least a way for the CameraSource or the VideoView to expose raw frames before they are packaged for transport, even if that means the Twilio SDK enforces limits on per-frame processing time.
I understand the code under the first link (https://github.com/twilio/video-quickstart-ios/blob/master/ARKitExample/ViewController.swift#L55). Writing a TVIVideoSource seems to make what I want to achieve doable. Essentially building a custom TVICameraSource.
Does the Twilio SDK support the addition/injection of a 'custom' CameraSource? If so, we could implement one and pass it to the TVILocalVideoTrack.
I am not clear how the second link (objc code) you posted would be useful in the context of dealing with the Twilio SDK. Can you please clarify that in one line?
Thanks again for taking the time to reply to me.
@Horatiu Yes, the Video SDK supports injecting a custom camera source. You will have to implement the TVIVideoSource protocol. See this line in the ARKitExample.
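The injection pattern in sketch form; `VideoSource` and `LocalVideoTrack` below are stand-ins for the SDK's `TVIVideoSource` and `TVILocalVideoTrack` (whose real initializer accepts a source, roughly `TVILocalVideoTrack(source:)`), so treat the names as illustrative:

```swift
import Foundation

// Stand-ins for TVIVideoSource / TVILocalVideoTrack showing the injection
// pattern; any conforming source, built-in or custom, can back the track.
protocol VideoSource: AnyObject {}

final class CustomCameraSource: VideoSource {}

final class LocalVideoTrack {
    let source: VideoSource
    init(source: VideoSource) {
        self.source = source
    }
}

// A custom source plugs in exactly where the stock camera source would.
let track = LocalVideoTrack(source: CustomCameraSource())
```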
Let me know if you have any questions.
hi @piyushtank , Can you help me set a fullscreen image background (virtual background)?
Hi, just a quick question - I know this SDK doesn't support video filters (e.g. blur/cartoon style, etc) - do you have any idea whether it's possible to:
Thanks for your help!