googlevr / gvr-ios-sdk

Google VR SDK for iOS
http://developers.google.com/vr/ios/

Any way we can load video frames individually in the iOS sdk? #245

Closed · SteveLobdell closed this issue 7 years ago

SteveLobdell commented 7 years ago

I'm currently getting individual frames that I process and then display on screen by enqueuing a CMSampleBufferRef into an AVSampleBufferDisplayLayer. Is there any way I could send the CMSampleBufferRef into the SDK and have it displayed stereoscopically? Thanks

EDIT: I could also use a CVImageBufferRef if that were possible.
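For reference, the current display path looks roughly like this (a sketch; _displayLayer stands in for the AVSampleBufferDisplayLayer I'm enqueuing into):

- (void)displayProcessedFrame:(CMSampleBufferRef)sampleBuffer {
  // Hand the processed frame to the display layer once it can accept more data.
  if (_displayLayer.readyForMoreMediaData) {
    [_displayLayer enqueueSampleBuffer:sampleBuffer];
  }
}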

Steve

sanjayc77 commented 7 years ago

This is quite complicated, but possible using the GVR NDK directly. You will have to build your own Objective-C SDK on top of it.

SteveLobdell commented 7 years ago

Thanks for absolutely not giving any insight into my question and then closing the topic. Super helpful.

sanjayc77 commented 7 years ago

Sorry about the abrupt reply, but we don't have a solution or workaround that can help you here. As far as insight goes, you will have to build a proper OpenGL-based renderer on top of the GVR NDK that can display video textures. Look at the TreasureHuntNDK sample to see how to do that.

The following code pulls textures from an AVPlayerItemVideoOutput at a given time:

  CMTime itemTime = [_videoOutput itemTimeForHostTime:headPose.nextFrameTime];
  if ([_videoOutput hasNewPixelBufferForItemTime:itemTime]) {
    CVPixelBufferRef pixelBuffer =
        [_videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
    if (pixelBuffer) {
      [self cleanUpTextures];
      int videoWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
      int videoHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
      BOOL requiresChannelSizes = EAGLContext.currentContext.API > kEAGLRenderingAPIOpenGLES2;

      // Create Y and UV textures from the pixel buffer. RGB is not supported.
      _lumaTexture = [self createSourceTexture:pixelBuffer
                                         index:0
                                        format:requiresChannelSizes ? GL_RED : GL_RED_EXT
                                internalFormat:requiresChannelSizes ? GL_R8 : GL_RED_EXT
                                         width:videoWidth
                                        height:videoHeight];
      // UV-plane.
      _chromaTexture = [self createSourceTexture:pixelBuffer
                                           index:1
                                          format:requiresChannelSizes ? GL_RG : GL_RG_EXT
                                  internalFormat:requiresChannelSizes ? GL_RG8 : GL_RG_EXT
                                           width:(videoWidth + 1) / 2
                                          height:(videoHeight + 1) / 2];

      // Use the color attachment to determine the appropriate color conversion matrix.
      CFTypeRef colorAttachments =
          CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, NULL);
      GLKMatrix4 colorConversionMatrix =
          CFEqual(colorAttachments, kCVImageBufferYCbCrMatrix_ITU_R_601_4)
              ? kColorConversionMatrix601
              : kColorConversionMatrix709;

      GLuint lumaTextureId = CVOpenGLESTextureGetName(_lumaTexture);
      GLuint chromaTextureId = CVOpenGLESTextureGetName(_chromaTexture);
      [self setVideoYTextureId:lumaTextureId
                    uvTextureId:chromaTextureId
          colorConversionMatrix:colorConversionMatrix];

      CFRelease(pixelBuffer);
      pixelBuffer = 0;
    }
  }

This uses the CVPixelBuffer to create the textures. Hopefully this gives you some direction. We might have something better in the future.
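For reference, a possible shape for the -createSourceTexture:index:format:internalFormat:width:height: helper referenced above is a CVOpenGLESTextureCache lookup, along these lines (a sketch, not the SDK's actual code; _textureCache is assumed to have been created earlier with CVOpenGLESTextureCacheCreate against the rendering EAGLContext):

// Creates a texture for one plane (0 = Y, 1 = UV) of the pixel buffer.
- (CVOpenGLESTextureRef)createSourceTexture:(CVPixelBufferRef)pixelBuffer
                                      index:(int)planeIndex
                                     format:(GLenum)format
                             internalFormat:(GLint)internalFormat
                                      width:(int)width
                                     height:(int)height {
  CVOpenGLESTextureRef texture = NULL;
  CVReturn result = CVOpenGLESTextureCacheCreateTextureFromImage(
      kCFAllocatorDefault,
      _textureCache,  // Assumed CVOpenGLESTextureCacheRef tied to the current EAGLContext.
      pixelBuffer,
      NULL,
      GL_TEXTURE_2D,
      internalFormat,
      width,
      height,
      format,
      GL_UNSIGNED_BYTE,
      planeIndex,
      &texture);
  if (result != kCVReturnSuccess || !texture) {
    NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed: %d", result);
    return NULL;
  }
  // Texture-cache textures still need filtering and wrap modes set.
  GLenum target = CVOpenGLESTextureGetTarget(texture);
  glBindTexture(target, CVOpenGLESTextureGetName(texture));
  glTexParameteri(target, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(target, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexParameteri(target, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(target, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  return texture;
}

The returned CVOpenGLESTextureRef objects are what a -cleanUpTextures step would release (CFRelease plus a CVOpenGLESTextureCacheFlush) before creating textures for the next frame.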

Briahas commented 6 years ago

Hi, but what should I do next with that pixel buffer? Where should I put it in GVRKit?

bsabiston commented 6 years ago

You use the pixel buffer just as a texture map, and map it onto an orthographic, full-screen quad if you want the image to fill the screen. Basically the buffer becomes a texture map, and you use it the way you would use any OpenGL texture map...
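If it helps, here is a minimal sketch of that full-screen quad draw, assuming the Y/UV textures from the earlier snippet and placeholder handles (_program, _quadPositionAttrib, _quadTexCoordAttrib, _ySampler, _uvSampler); the fragment shader is assumed to sample both planes and apply the YUV-to-RGB conversion matrix:

// Interleaved x, y, u, v for a full-screen triangle strip in clip space.
static const GLfloat kQuadVertices[] = {
  -1.0f, -1.0f, 0.0f, 1.0f,
   1.0f, -1.0f, 1.0f, 1.0f,
  -1.0f,  1.0f, 0.0f, 0.0f,
   1.0f,  1.0f, 1.0f, 0.0f,
};

- (void)drawVideoQuad {
  glUseProgram(_program);

  // Bind the luma and chroma textures produced from the pixel buffer.
  glActiveTexture(GL_TEXTURE0);
  glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(_lumaTexture));
  glUniform1i(_ySampler, 0);
  glActiveTexture(GL_TEXTURE1);
  glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(_chromaTexture));
  glUniform1i(_uvSampler, 1);

  // Positions are already in clip space, so no projection is needed here; an
  // explicit orthographic matrix would give the same result.
  glVertexAttribPointer(_quadPositionAttrib, 2, GL_FLOAT, GL_FALSE,
                        4 * sizeof(GLfloat), kQuadVertices);
  glEnableVertexAttribArray(_quadPositionAttrib);
  glVertexAttribPointer(_quadTexCoordAttrib, 2, GL_FLOAT, GL_FALSE,
                        4 * sizeof(GLfloat), kQuadVertices + 2);
  glEnableVertexAttribArray(_quadTexCoordAttrib);

  glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}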