shogo4405 / HaishinKit.swift

Camera and Microphone streaming library via RTMP and SRT for iOS, macOS, tvOS and visionOS.
https://docs.haishinkit.com/swift/latest
BSD 3-Clause "New" or "Revised" License

Question about multiple video effects and layering #169

Closed by mzmiric5 7 years ago

mzmiric5 commented 7 years ago

I was wondering if it was possible to do layering and composition with the library.

One of the things I'm wondering about: is it possible to change the size of the camera image on the output stream without changing the output stream resolution? An example would be having auto-rotation enabled for the camera and resizing the camera image that is pushed to the output, while the actual output resolution doesn't change and stays a constant 720p, for example. A service we are streaming to has serious issues with its ingest encoders if the incoming stream resolution changes while the stream is active. So what we would like to accomplish is for the resolution to stay 720p in both landscape and portrait orientation, but in portrait to have the camera output resized to fit within the canvas of the 720p stream. (By default the library seems to apply something similar to the video gravity aspect behaviour, but I'm not sure how to turn that off.)

Secondly, I was wondering if it was possible to push visual effects behind the camera. So in the case where above camera resizing was possible, we could inject a visual effect of some sort of an image behind the camera, which would become visible instead of the black bars when the camera would rotate.

Would this all just be a matter of not actually attaching a camera or a screen to the stream, adding a background image visual effect and then adding the resizing/rotating camera visual effect on top of it. Is it possible to stack multiple effects like that?

Thanks for all the great work, love the library so far :)

shogo4405 commented 7 years ago

One of the things

Please see this issue: #147. See also https://github.com/shogo4405/lf.swift/blob/master/Sources/Codec/AVCEncoder.swift#L182.

Secondly,

Good idea. I think the framework can do this, but I'm worried about a performance (FPS) drop.

Is it possible to stack multiple effects like that?

Yes. You can use CIFilters. https://github.com/shogo4405/lf.swift/blob/master/Examples/iOS/VisualEffect.swift#L32
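Building on the linked example, chaining filters just means feeding one filter's `outputImage` into the next filter's input inside a single effect. A minimal sketch, assuming the `VisualEffect` base class and `execute(_:)` signature from the linked `VisualEffect.swift` (the filter names are standard Core Image filters; the class name here is made up):

```swift
import CoreImage

// Sketch: stacking two CIFilters inside one effect. execute(_:) receives
// the current frame and returns the filtered frame, as in the linked example.
final class ChainedEffect: VisualEffect {
    let mono = CIFilter(name: "CIPhotoEffectMono")
    let blur = CIFilter(name: "CIGaussianBlur")

    override func execute(_ image: CIImage) -> CIImage {
        guard let mono = mono, let blur = blur else { return image }
        // First pass: monochrome.
        mono.setValue(image, forKey: kCIInputImageKey)
        let step1 = mono.outputImage ?? image
        // Second pass: blur the first pass's output.
        blur.setValue(step1, forKey: kCIInputImageKey)
        blur.setValue(2.0, forKey: kCIInputRadiusKey)
        return blur.outputImage ?? step1
    }
}
```

Each `execute(_:)` call stays pure image-in/image-out, so any number of filters can be composed this way, at the cost of one render pass per filter.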

mzmiric5 commented 7 years ago

Yeah, I already checked #147, and while that works as expected (it adds the black bars on the sides when viewed in a 16:9 player), it causes issues with the streaming service if we change the video resolution while we are live.

Basically, we are attempting to stream to twitch.tv, and the best I can deduce from what happens is that their RTMP ingest encoder does not like it when the input stream changes video resolution. When watching the stream, we end up getting it at the same resolution as it was originally pushed (before rotation changed the video output resolution), but the actual video just becomes blocky grey squares; rotating back to the original position fixes it. This issue doesn't happen if we rotate while offline and then start the stream. So while #147 works if I lock rotation once the app goes live, this solution doesn't work if I let the user rotate the camera while they are live.

I'm definitely going to look at the PixelTransferProperties ScalingMode and see if I can make changes that would work for what we are trying to do.

And yeah for the 2nd idea, the FPS impact is also what I was worried about. I guess I'll just try it and let you know how it goes.

Thanks for pointing me in the right direction.

mzmiric5 commented 7 years ago

Ok, so switching the PixelTransferProperties ScalingMode to Letterbox definitely does what I want: it keeps the output video resolution at 1280x720 and scales the video down to fit within that resolution, giving it the two black bars on the sides.
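For reference, the setting being described maps onto real VideoToolbox compression-session properties. A hedged sketch of what the encoder-side configuration looks like (the function name is made up; the keys and scaling-mode constant are the actual VideoToolbox ones, and this mirrors the area of AVCEncoder.swift linked earlier rather than quoting it):

```swift
import VideoToolbox

// Sketch: ask the compression session to letterbox the input frame into
// the configured output dimensions instead of scaling to fill.
func configureLetterbox(_ session: VTCompressionSession) {
    let transferProperties: [NSString: Any] = [
        kVTPixelTransferPropertyKey_ScalingMode: kVTScalingMode_Letterbox
    ]
    VTSessionSetProperty(
        session,
        kVTCompressionPropertyKey_PixelTransferProperties,
        transferProperties as CFDictionary
    )
}
```

With this set, a 720x1280 portrait frame fed into a 1280x720 session comes out pillarboxed at a constant 1280x720, which is exactly what keeps the ingest encoder happy.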

I have now tried implementing the visual effect I wanted to use to cover the black bars with my custom content. I've essentially created a new CIFilter(name: "CISourceOverCompositing") where I set my image as inputBackgroundImage and my video feed as the inputImage, so that the video is composited over the image. And while this seemingly works, it looks like the visual effects get applied before the letterboxing happens, so the image the filter is working with is still 720x1280 instead of 1280x720, causing the background image I'm putting behind the video to also be letterboxed. Is there a way for me to either resize the visual effect image, or insert the effect after the letterboxing happens, so I can have the letterbox black bars filled with my image?

I guess a better question is whether or not it is possible to apply visual effects in post-processing, so that they don't affect the preview but are applied to the output video, in order to get the desired effect.

shogo4405 commented 7 years ago

Thank you. A PR is welcome.

so I can have the letterbox black bars filled with my image?

I can... but I cannot explain how to create the letterbox black bars. https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/#//apple_ref/doc/filter/ci/CISourceOverCompositing

whether or not it is possible to apply visual effects in postprocessing so that they don't affect the preview

Please try LFView instead of GLLFView.

mzmiric5 commented 7 years ago

Oh yeah, thanks. I totally forgot that LFView doesn't render effects, so that solves that part of the issue.

For the visual effect, yeah, I'm using CISourceOverCompositing, but the problem I end up having is that the input image is 720x1280 (the camera is in portrait mode), so it is not letterboxed yet, and because of this I don't have a 1280x720 canvas to work with when applying the effect. I tried instead making the effect's CI context 1280x1280 and setting my background image in that, hoping that when the letterbox pixel transfer property in the encoder kicks in, it would scale that down together with the video, resulting in a 1280x720 video that has my image instead of the black bars. This, however, does not happen, and I just end up receiving the same video feed as I do without any effects (just the portrait video with the black bars; the visual effect that should be applied in the background is not visible on the output stream). Using different modes on the pixel transfer property does not result in any different behaviour.

Here's example code from my VisualEffect, just in case I'm doing something wrong there:

final class MyEffect: VisualEffect {
  let filter: CIFilter? = CIFilter(name: "CISourceOverCompositing")
  var effectImage: CIImage?

  var extent: CGRect = CGRect.zero {
    didSet {
      guard extent != oldValue else { return }
      // Draw the background on an oversized 1280x1280 canvas, hoping the
      // encoder's letterbox pass scales it down together with the video.
      // UIGraphicsBeginImageContext(extent.size)
      UIGraphicsBeginImageContext(CGSize(width: 1280, height: 1280))
      defer { UIGraphicsEndImageContext() }
      guard let image = UIImage(named: "myBackground.png") else { return }
      image.draw(at: CGPoint(x: 0, y: 320))
      if let rendered = UIGraphicsGetImageFromCurrentImageContext() {
        effectImage = CIImage(image: rendered)
      }
    }
  }

  override func execute(_ image: CIImage) -> CIImage {
    guard let filter = filter, effectImage != nil else {
      return image
    }
    extent = image.extent
    // Composite the camera frame over the background image.
    filter.setValue(image, forKey: "inputImage")
    filter.setValue(effectImage, forKey: "inputBackgroundImage")
    return filter.outputImage ?? image
  }
}
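One way around the "effects run before letterboxing" problem discussed above is to do the letterboxing inside the effect itself: scale the portrait frame down onto a fixed 1280x720 canvas and composite it over the background there, so the encoder never needs to rescale. A sketch under assumptions: `backgroundImage` is a hypothetical pre-built 1280x720 CIImage, and the `transformed(by:)`, `composited(over:)`, and `cropped(to:)` names are from the modern Core Image Swift API (older SDKs spell them differently):

```swift
import CoreImage

// Sketch: fit a frame (e.g. 720x1280 portrait) inside a fixed 1280x720
// canvas and composite it over a background image of the same canvas size.
func composite(frame: CIImage, over backgroundImage: CIImage) -> CIImage {
    let canvas = CGRect(x: 0, y: 0, width: 1280, height: 720)
    // Uniform scale so the frame fits entirely inside the canvas.
    let scale = min(canvas.width / frame.extent.width,
                    canvas.height / frame.extent.height)
    let scaled = frame.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    // Center the scaled frame on the canvas.
    let dx = (canvas.width - scaled.extent.width) / 2 - scaled.extent.minX
    let dy = (canvas.height - scaled.extent.height) / 2 - scaled.extent.minY
    let centered = scaled.transformed(by: CGAffineTransform(translationX: dx, y: dy))
    // The background shows through wherever the frame doesn't cover the canvas.
    return centered
        .composited(over: backgroundImage)
        .cropped(to: canvas)
}
```

Because the output extent is always exactly 1280x720 regardless of camera orientation, the encoder's own scaling mode no longer matters for the bars.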

mzmiric5 commented 7 years ago

@shogo4405 OK, so after experimenting with the visual effects further, I discovered I can probably create the effect I'm after by applying a scale transform to the input image from the camera and then compositing it with my background. However, this comes with the problem of getting a lot of image-in-image effects (still not sure why this happens), and also a bigger issue: when I apply an effect that completely replaces the input image, for example, I still get the original video in the background of the video effect on the stream.

Also, after trying LFView, the visual effects are not actually visible on the stream at all. So while it accomplishes what I wanted (the preview not being affected by the effects), it doesn't apply them to the output like I thought it would (in post-processing, essentially).

shogo4405 commented 7 years ago

The CIFilter and CMSampleBuffer technologies are difficult for us. Please consider asking on Stack Overflow.

after trying to use the LFView, the VisualEffects are not actually visible on the stream at all,

Well, I think it's the videoGravity property. How about LFView.videoGravity = AVLayerVideoGravityResizeAspect?

The rendering process is here: https://github.com/shogo4405/lf.swift/blob/master/Sources/Media/VideoIOComponent.swift#L368

mazharhameed25 commented 3 years ago

Can anyone tell me how to stream an image in place of video?

onebuckgames commented 3 years ago

Just draw it into the CMSampleBuffer in a VideoEffect.
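The suggestion above can be sketched as an effect that ignores the incoming camera frame and returns a static image instead. This assumes the same `execute(_:)` shape as the effect examples earlier in the thread; the class name and the pre-loaded `still` image are hypothetical:

```swift
import CoreImage

// Sketch: a video effect that replaces every camera frame with a still image.
final class StillImageEffect: VideoEffect {
    // Hypothetical pre-loaded image; should cover the stream's full extent.
    var still: CIImage?

    override func execute(_ image: CIImage) -> CIImage {
        guard let still = still else { return image }
        // Crop the still to the incoming frame's extent so the encoder
        // always receives a consistently sized image.
        return still.cropped(to: image.extent)
    }
}
```

The camera keeps driving the frame clock; only the pixel content is swapped, so timing and encoding behave exactly as for a normal camera stream.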

abubaqr70 commented 3 years ago

Can anyone tell me how to stream a video from the gallery?

ldhios commented 1 year ago


@mzmiric5 After such a long time, I've hit the same problem. How did you solve it in the end?