Meonardo / janus-videoroom-ios

An iOS AppRTC example for janus videoroom plugin, written in Swift.

iOS 15 now supports PiP — have you tried it with this project? #11

Open fukemy opened 2 years ago

fukemy commented 2 years ago

Hi, I read the doc about PiP for video calls, but I couldn't get it working on my own — it feels too hard for me. So I want to ask: do you plan to add PiP mode for video calls? https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls?changes=_1

Meonardo commented 2 years ago

https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls?changes=_1 This is good news; I will try it this weekend.
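For reference, the setup that doc describes boils down to creating an `AVPictureInPictureController` from a video-call content source (iOS 15+). A minimal sketch, assuming a hypothetical `CallViewController` whose `sourceView` hosts the remote video layer:

```swift
import AVKit
import UIKit

final class CallViewController: UIViewController {
    private var pipController: AVPictureInPictureController?
    // The view controller whose contents appear in the PiP window;
    // the remote video layer gets added to its view.
    private let pipVideoCallViewController = AVPictureInPictureVideoCallViewController()

    func setUpPictureInPicture(sourceView: UIView) {
        guard AVPictureInPictureController.isPictureInPictureSupported() else { return }
        // Pair the in-app video view with the PiP content view controller.
        let contentSource = AVPictureInPictureController.ContentSource(
            activeVideoCallSourceView: sourceView,
            contentViewController: pipVideoCallViewController)
        pipController = AVPictureInPictureController(contentSource: contentSource)
        pipController?.canStartPictureInPictureAutomaticallyFromInline = true
    }
}
```

The app also needs the `voip` background mode; without it PiP will not start when the app is backgrounded.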

fukemy commented 2 years ago

Thanks! I'm stuck on converting the buffer for AVSampleBufferDisplayLayer — waiting for your help now.
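For anyone stuck at the same step: AVSampleBufferDisplayLayer takes CMSampleBuffers, so a raw CVPixelBuffer has to be wrapped first. A sketch of one way to do that (the function name is just for illustration):

```swift
import AVFoundation
import CoreMedia

// Wrap a decoded CVPixelBuffer in a CMSampleBuffer so that
// AVSampleBufferDisplayLayer can render it.
func makeSampleBuffer(from pixelBuffer: CVPixelBuffer,
                      presentationTime: CMTime) -> CMSampleBuffer? {
    // Describe the pixel buffer's format for the sample buffer.
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        formatDescriptionOut: &formatDescription)
    guard let format = formatDescription else { return nil }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        formatDescription: format,
        sampleTiming: &timing,
        sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}
```

Usage: call `displayLayer.enqueue(sampleBuffer)` on the AVSampleBufferDisplayLayer for each wrapped frame.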

fukemy commented 2 years ago

One downside: you need to register your app with Apple to get the multitasking camera-access entitlement used for PiP calls, which is unfortunate. You can see the docs here:

https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_developer_avfoundation_multitasking-camera-access https://developer.apple.com/contact/request/multitasking-camera-access/
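For reference, once Apple approves the request, the entitlement is a single boolean key in the app's `.entitlements` file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Granted by Apple only after the request form above is approved. -->
    <key>com.apple.developer.avfoundation.multitasking-camera-access</key>
    <true/>
</dict>
</plist>
```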

Meonardo commented 2 years ago

Yes, I saw those requirements. I made a request earlier and am waiting for Apple to approve...

fukemy commented 2 years ago

OK, hope you get this PiP feature into the demo <3

Meonardo commented 2 years ago

@fukemy Hi, I currently have an urgent task to finish, so I will do my best to work on this once it's done.

fukemy commented 2 years ago

Don't worry, we have time; I can wait. If you have any ideas, you can post them here.

Derad6709 commented 2 years ago

Any updates?

fukemy commented 2 years ago

Hi, I have a new idea based on PiPKit. It renders PiP mode from a captured view — can you check whether that approach would work for us? I tried it but it returns a black screen; still, I think it might work?

Meonardo commented 2 years ago

Sorry for the long silence, @fukemy. Can you try the latest main branch?

fukemy commented 2 years ago

Hi, welcome back @Meonardo. Could you tell me what's new in the latest main branch? The current version I'm using works normally, so I think I will check against the Janus server.

Meonardo commented 2 years ago

I added PiP mode support, but it only works when the videoroom is configured with the H.264 codec. I'm not sure what the problem is — still working on it...

fukemy commented 2 years ago

OK, I can switch the codec to VP8 on iOS to test; I'll try when I have free time, thanks. If you want to check VP8 with me, see the code below:

import Accelerate
import CoreMedia
import ReplayKit
import UIKit
import WebRTC

protocol ScreenSampleCapturerDelegate: AnyObject {
    func didCaptureVideo(sampleBuffer: CMSampleBuffer)
}

enum VideoRotation: Int {
    case _0 = 0
    case _90 = 90
    case _180 = 180
    case _270 = 270
}

open class ScreenSampleCapturer: RTCVideoCapturer, ScreenSampleCapturerDelegate {
    var kDownScaledFrameWidth = 360
    var kDownScaledFrameHeight = 960

    override init(delegate: RTCVideoCapturerDelegate) {
        super.init(delegate: delegate)
        // Scale frames down to a fixed width, preserving the screen's aspect ratio.
        let width = Int(UIScreen.main.bounds.width)
        let height = Int(UIScreen.main.bounds.height)
        kDownScaledFrameHeight = height * kDownScaledFrameWidth / width
        print("kDownScaledFrameWidth: \(kDownScaledFrameWidth) - kDownScaledFrameHeight: \(kDownScaledFrameHeight)")
    }

    func didCaptureVideo(sampleBuffer: CMSampleBuffer) {
        if sampleBuffer.numSamples != 1 || !sampleBuffer.isValid || !CMSampleBufferDataIsReady(sampleBuffer) {
            return
        }
        guard let pixelBuffer = sampleBuffer.imageBuffer else { return }

        // Map the ReplayKit orientation attachment to a video rotation.
        var videoOrientation = VideoRotation._0
        guard let orientationAttachment = CMGetAttachment(sampleBuffer, key: RPVideoSampleOrientationKey as CFString, attachmentModeOut: nil) as? NSNumber else { return }
        let orientation = CGImagePropertyOrientation(rawValue: orientationAttachment.uint32Value) ?? .up
        switch orientation {
        case .up, .upMirrored, .down, .downMirrored:
            videoOrientation = ._0
        case .left, .leftMirrored:
            videoOrientation = ._90
        case .right, .rightMirrored:
            videoOrientation = ._270
        }
        let rotation = RTCVideoRotation(rawValue: videoOrientation.rawValue) ?? ._0

        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        // Unlock on every exit path, including the early returns below.
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

        let pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer)
        if pixelFormat != kCVPixelFormatType_420YpCbCr8BiPlanarFullRange {
            print("Extension assumes the incoming frames are of type NV12")
            return
        }

        var outBuffer: CVPixelBuffer? = nil
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         kDownScaledFrameWidth,
                                         kDownScaledFrameHeight,
                                         pixelFormat,
                                         nil,
                                         &outBuffer)
        guard status == kCVReturnSuccess, let outPixelBuffer = outBuffer else {
            print("Failed to create pixel buffer")
            return
        }

        CVPixelBufferLockBaseAddress(outPixelBuffer, [])
        defer { CVPixelBufferUnlockBaseAddress(outPixelBuffer, []) }

        // Prepare source plane descriptors.
        var sourceImageY = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
                                         height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)),
                                         width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)),
                                         rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0))

        var sourceImageUV = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
                                          height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 1)),
                                          width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 1)),
                                          rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1))

        // Prepare destination plane descriptors.
        var outImageY = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(outPixelBuffer, 0),
                                      height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(outPixelBuffer, 0)),
                                      width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(outPixelBuffer, 0)),
                                      rowBytes: CVPixelBufferGetBytesPerRowOfPlane(outPixelBuffer, 0))

        var outImageUV = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(outPixelBuffer, 1),
                                       height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(outPixelBuffer, 1)),
                                       width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(outPixelBuffer, 1)),
                                       rowBytes: CVPixelBufferGetBytesPerRowOfPlane(outPixelBuffer, 1))

        // Down-scale the luma plane, then the interleaved chroma plane.
        var error = vImageScale_Planar8(&sourceImageY, &outImageY, nil, vImage_Flags(0))
        if error != kvImageNoError {
            print("Failed to down scale the Y plane")
            return
        }

        error = vImageScale_CbCr8(&sourceImageUV, &outImageUV, nil, vImage_Flags(0))
        if error != kvImageNoError {
            print("Failed to down scale the CbCr plane")
            return
        }

        let timeStamp = Int64(CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1_000_000_000)
        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: outPixelBuffer)
        let videoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: rotation, timeStampNs: timeStamp)
        delegate?.capturer(self, didCapture: videoFrame)
    }
}

fukemy commented 2 years ago

kDownScaledFrameWidth and kDownScaledFrameHeight were chosen because of memory problems (especially with VP8).

Meonardo commented 2 years ago

I didn't mean the screen capturer; it's about rendering the pixel buffer in AVSampleBufferDisplayLayer. I can get an RTCCVPixelBuffer directly from WebRTC when the codec is H.264, but when I change the codec to VP8 I get an RTCI420Buffer, so there is some conversion work to do.
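For the VP8 path, a naive CPU-side sketch of that conversion: copy the Y plane and interleave U/V into an NV12 CVPixelBuffer that AVSampleBufferDisplayLayer can take. The function name is an assumption; in production, libyuv's `I420ToNV12` would be the faster choice:

```swift
import CoreVideo
import WebRTC

// Copy an RTCI420Buffer's planes into a new NV12 CVPixelBuffer.
func pixelBuffer(from i420: RTCI420Buffer) -> CVPixelBuffer? {
    let width = Int(i420.width), height = Int(i420.height)
    var pb: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                              nil, &pb) == kCVReturnSuccess,
          let pixelBuffer = pb else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Copy the Y plane row by row (row strides may differ).
    let dstY = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)!
        .assumingMemoryBound(to: UInt8.self)
    let dstYStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    for row in 0..<height {
        memcpy(dstY + row * dstYStride,
               i420.dataY + row * Int(i420.strideY),
               width)
    }

    // Interleave the U and V planes into the single NV12 CbCr plane.
    let dstUV = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)!
        .assumingMemoryBound(to: UInt8.self)
    let dstUVStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
    for row in 0..<(height / 2) {
        for col in 0..<(width / 2) {
            dstUV[row * dstUVStride + col * 2]     = i420.dataU[row * Int(i420.strideU) + col]
            dstUV[row * dstUVStride + col * 2 + 1] = i420.dataV[row * Int(i420.strideV) + col]
        }
    }
    return pixelBuffer
}
```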

fukemy commented 2 years ago

Hi Meonardo, I just found a big problem. The camera stops right after the app goes to the background. I'm so sad, but I think this approach cannot work, because we can't do anything once the camera stops capturing.

For context: while the app is still active and PiP is active, everything works well (the picture below is the "Hi" action for you) IMG_0983

But when the app goes to the background the camera stops — maybe it's a WebRTC rule IMG_0984

fukemy commented 2 years ago

I think the problem is that we don't have the multitasking camera-access entitlement from Apple. https://developer.apple.com/documentation/bundleresources/entitlements/com_apple_developer_avfoundation_multitasking-camera-access

Maybe I will try to register again.

Meonardo commented 2 years ago

Hi, I'm back! Yes, you're right! I haven't got the entitlement yet, so the camera capture session pauses when the app goes to the background. Usually we want only the remote peer to show in the PiP window anyway, just like a FaceTime call.