What are you trying to achieve?

I’m currently trying to integrate the following code, which retrieves the camera intrinsic matrix from the CMSampleBuffer and computes the field of view (FOV):
if let captureConnection = videoDataOutput.connection(with: .video) {
    captureConnection.isEnabled = true
    // Enabling intrinsic delivery on a connection that doesn't support it
    // throws an exception, so check for support first.
    if captureConnection.isCameraIntrinsicMatrixDeliverySupported {
        captureConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}
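For context, videoDataOutput here is assumed to be an AVCaptureVideoDataOutput that has already been added to a capture session, roughly like this (the names are my own, not from the SDK):

import AVFoundation

let session = AVCaptureSession()
let videoDataOutput = AVCaptureVideoDataOutput()

// Camera input setup elided; connection(with: .video) only returns a
// connection once the output has been added to the session.
if session.canAddOutput(videoDataOutput) {
    session.addOutput(videoDataOutput)
}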
import CoreMedia
import Foundation
import simd

nonisolated func computeFOV(_ sampleBuffer: CMSampleBuffer) -> Double? {
    // The intrinsic matrix arrives as a Data attachment once
    // isCameraIntrinsicMatrixDeliveryEnabled is set on the connection.
    guard let camData = CMGetAttachment(
        sampleBuffer,
        key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
        attachmentModeOut: nil
    ) as? Data else { return nil }

    // Data's storage is not guaranteed to be aligned for matrix_float3x3,
    // so copy it out with loadUnaligned instead of binding memory.
    guard camData.count >= MemoryLayout<matrix_float3x3>.size else { return nil }
    let intrinsics = camData.withUnsafeBytes {
        $0.loadUnaligned(as: matrix_float3x3.self)
    }

    let fx = intrinsics[0][0]     // focal length in pixels (column 0, row 0)
    let w = 2 * intrinsics[2][0]  // principal point x is roughly half the image width
    // Horizontal FOV in radians: 2 * atan((w / 2) / fx).
    return Double(2 * atan2(w / 2, fx))
}
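Calling it from the sample-buffer callback, the result is in radians, e.g.:

if let fov = computeFOV(sampleBuffer) {
    print("Horizontal FOV: \(fov * 180 / .pi) degrees")
}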
However, I’m not very familiar with WebRTC on iOS, and I’m wondering where in the Sources/StreamVideo package I can find the typical captureOutput callback that receives the CMSampleBuffer. I would appreciate any guidance or suggestions on where to integrate this functionality into the existing codebase; a sketch of the callback I mean follows below.
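For reference, this is the shape of the callback I’m looking for, sketched against a plain AVCaptureVideoDataOutputSampleBufferDelegate (the class name is my own, not from the SDK):

import AVFoundation

final class FrameTap: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(
        _ output: AVCaptureOutput,
        didOutput sampleBuffer: CMSampleBuffer,
        from connection: AVCaptureConnection
    ) {
        // Per-frame hook: this is where computeFOV(_:) would be called.
    }
}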
Thanks for your help!
Best regards
If possible, how can you achieve this currently?
It may be possible, but I’m not sure how.
What would be the better way?
I don't know right now.