Hello! I have been following the ExampleSampleBufferView from #286 for a custom renderer in order to do face detection on local frames and display the results. I was following @ceaglest's comments on #349 to take incoming local frames, unpack CVImageBuffers, and feed them into my OpenCV face detection. Now I want to display the results of my face detection, which gives me a UIImage. In the renderFrame function in ExampleSampleBufferView, it enqueues a sampleBuffer. I've seen in the ARKit Example how to transform images to CVPixelBuffers. However, when I transform a CVPixelBuffer into a CMSampleBuffer, everything runs but no image shows up. I would love some advice on how I can display the UIImage in a UIStackView by properly enqueuing this newly created sampleBuffer to the display. Do you have any advice or idea what could be the issue? I'm not sure if I am correctly getting all the data into the CVPixelBuffer and CMSampleBuffers. I'd greatly appreciate the help!
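For reference, a common reason "everything runs but no image shows up" with AVSampleBufferDisplayLayer is that the wrapped sample buffer lacks valid timing or the DisplayImmediately attachment. Here is a minimal sketch of wrapping a CVPixelBuffer in a CMSampleBuffer and enqueuing it; the function name and structure are illustrative, not from the example code:

```swift
import AVFoundation
import CoreMedia

/// Sketch: wrap a CVPixelBuffer in a CMSampleBuffer and enqueue it on an
/// AVSampleBufferDisplayLayer. Assumes the layer's status is not .failed.
func enqueue(pixelBuffer: CVPixelBuffer, on layer: AVSampleBufferDisplayLayer) {
    // Build a format description that matches the pixel buffer.
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let format = formatDescription else { return }

    // With invalid timestamps, the DisplayImmediately attachment below tells
    // the layer to render the frame as soon as it is enqueued.
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: .invalid,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: format,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    guard let buffer = sampleBuffer else { return }

    // Mark the sample for immediate display; without this (or real
    // timestamps against the layer's timebase), nothing appears.
    if let attachments = CMSampleBufferGetSampleAttachmentsArray(buffer, createIfNecessary: true) as? [NSMutableDictionary],
       let first = attachments.first {
        first[kCMSampleAttachmentKey_DisplayImmediately] = kCFBooleanTrue
    }
    layer.enqueue(buffer)
}
```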
*Edit - I have figured out the issue! I wasn't taking the video format into consideration when transforming into a UIImage and was using an RGB conversion process. Thank you!
Expected Behavior
Get local video frame -> access CVImageBufferRef -> create a CMSampleBuffer -> get image from CMSampleBuffer -> perform face detection and get UIImage -> transform UIImage to PixelBuffer -> transform PixelBuffer to CMSampleBuffer -> enqueue new CMSampleBuffer to display layer -> see local frame with face detection
Actual Behavior
Get local video frame -> access CVImageBufferRef -> create a CMSampleBuffer -> get image from CMSampleBuffer -> perform face detection and get UIImage -> transform UIImage to PixelBuffer -> transform PixelBuffer to CMSampleBuffer -> enqueue new CMSampleBuffer to display layer -> don't see any local frame
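The "transform UIImage to PixelBuffer" step in the pipeline above can be sketched as follows; the helper name and the choice of a 32BGRA output buffer are assumptions for illustration, not taken from the ARKit Example:

```swift
import UIKit
import CoreVideo

/// Sketch: render a UIImage into a newly allocated BGRA CVPixelBuffer.
func makePixelBuffer(from image: UIImage) -> CVPixelBuffer? {
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32BGRA,
                                     attrs as CFDictionary, &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    // Lock the buffer while CoreGraphics draws into its backing memory.
    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let cgImage = image.cgImage,
          let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                      | CGBitmapInfo.byteOrder32Little.rawValue)
    else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return buffer
}
```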
Versions
All relevant version information for the issue.
Xcode
[11.5]
iOS Version
[13.5.1]
iOS Device
[iPhone 8, iPhone 11 Pro Max]