Closed cyodrew closed 3 years ago
Any update on this?
Hey @cyodrew. I'll take a closer look at this issue soon and provide feedback. Sorry for the delay. Thanks!
Hey @cyodrew. I started taking a closer look at this one recently. Unfortunately, I can't give you a quick answer on how to resolve the issue at the moment. However, I will be focusing on this issue this sprint and will work on an example from scratch to take a crack at it. More updates to come soon! Thanks for your patience.
@Alton09 Awesome, I appreciate the follow up. If there's a better way to do this, I'm open to that as well. I know using an input surface over input buffers is another way to use `MediaCodec`, but most examples use the Android Camera APIs directly for that.
@cyodrew I'm honestly not sure off the top of my head; I think your approach looks good overall. I'll let you know what I find out when digging into this a bit more soon.
@cyodrew Just to provide an update on this one. I've created the branch `escalation/video-4836-rotate-video` with a new example module called `exampleRotateVideoFrames` that attempts to recreate this use case. I'm not sure what I'm missing from your code example, but the mp4 that is created is completely off. Do you mind taking a look to see what I'm missing? I'd like to get a public branch that reproduces the issue you are experiencing so I can focus on the frame rotation problem. It will also be a useful example for other developers to learn from.
I expect that `YuvHelper.I420Rotate` should achieve the desired rotation. Since Google recommends using a `Surface` for better efficiency when reading raw video, an `Image` object can be retrieved from it. I wonder if there is an API that can rotate these `Image` instances 🤔
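For reference, the per-plane index math behind a 90 degree I420 rotation can be sketched in plain Kotlin. This is a hypothetical illustration, not the SDK's implementation; `YuvHelper.I420Rotate` does this natively for all three planes:

```kotlin
// Rotate one 8-bit plane (e.g. the Y plane of an I420 buffer) 90 degrees
// clockwise. The chroma planes would get the same treatment at half size.
fun rotatePlane90(src: ByteArray, width: Int, height: Int): ByteArray {
    val dst = ByteArray(src.size)
    for (y in 0 until height) {
        for (x in 0 until width) {
            // Pixel (x, y) moves to column (height - 1 - y) of row x
            // in the rotated plane, whose row stride is the old height.
            dst[x * height + (height - 1 - y)] = src[y * width + x]
        }
    }
    return dst
}
```

Note that rotating by 90 or 270 swaps width and height, which is why the encoder's `MediaFormat` dimensions have to be swapped to match.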
@Alton09 Excellent, I appreciate you taking a deeper look into this. No problem. In my original example, `MediaHandler` hardcodes the width and height at 1080x1920, and the `localVideoTrack` defaults to 640x480 unless you pass a `VideoFormat` with the right dimensions.

All you need to do here is create a `VideoFormat` instance with the desired `VideoDimensions` and frame rate, and pass it to both the `localVideoTrack` and the `MediaHandler` constructor, which you can then use to set the `MediaFormat` correctly.
```kotlin
class MediaHandler(
    context: Context,
    private val videoFormat: VideoFormat,
    private val externalScope: CoroutineScope
) {
    private var videoEncoderDone = CompletableDeferred<Unit>()
    private lateinit var encodeVideoJob: Job

    private val videoMediaFormat = MediaFormat().apply {
        setString(MediaFormat.KEY_MIME, MediaFormat.MIMETYPE_VIDEO_AVC)
        setInteger(MediaFormat.KEY_WIDTH, videoFormat.dimensions.width)
        setInteger(MediaFormat.KEY_HEIGHT, videoFormat.dimensions.height)
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)
        setInteger(MediaFormat.KEY_BIT_RATE, 1080 * 1920 * 5)
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
    }
    // ...
}
```
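To tie the pieces together, here is a minimal sketch of the wiring described above. The exact `LocalVideoTrack.create` overload and the `MediaHandler` constructor arguments are assumptions based on this thread, so verify them against your SDK version:

```kotlin
// Sketch only (Twilio Video Android APIs, signatures assumed from this
// thread). One shared VideoFormat drives both the capture pipeline and
// the encoder, so the dimensions can't drift apart.
val videoFormat = VideoFormat(VideoDimensions(1080, 1920), 30)

val localVideoTrack = LocalVideoTrack.create(context, true, cameraCapturer, videoFormat)
val mediaHandler = MediaHandler(context, videoFormat, lifecycleScope)
```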
As you'll observe though, the `i420Copy` output looks correct except that the orientation is wrong. If you swap the width and height in the `MediaFormat` and use `i420Rotate`, the output MP4 does not look visually correct.
I'd prefer to use `Image` here as well if such an API exists or is accessible, but I haven't found one yet. I think there is a way to get the `eglBase` from the video track through reflection, which can be used to call `eglBase.createSurface(videoCodec.createInputSurface())`. It's not particularly pretty or intuitive though, and not a preferred way to do it. With that method and a few other WebRTC methods, I got a correct video, but it occasionally gives a SIGSEGV crash at a lower level, so it's not very stable.
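The reflection trick mentioned above can be illustrated with a generic helper in plain Kotlin. The actual field name on the real SDK class is unknown here, so the `Capturer` class below is a stand-in for demonstration only:

```kotlin
// Generic technique: read a private field by name from any object.
// On Android you would point this at the SDK class holding the member
// you need, after verifying the field name for your SDK version.
fun readPrivateField(target: Any, fieldName: String): Any? {
    val field = target.javaClass.getDeclaredField(fieldName)
    field.isAccessible = true
    return field.get(target)
}

// Stand-in for demonstration; the real target would be e.g. a capturer
// class holding a private SurfaceTextureHelper.
class Capturer { private val helper: String = "surfaceTextureHelper" }
```

Keep in mind reflection like this is fragile: an SDK update that renames or removes the field breaks it silently at runtime.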
Thanks for those tips @cyodrew! I am definitely seeing some artifacts in the recording. Are you seeing a green tint as well? Here's what it looks like when running it on a Pixel One XL:
https://user-images.githubusercontent.com/2661383/117883511-19460580-b271-11eb-9c3d-aaa12ad6284e.mp4
> I'd prefer to use Image here as well if such an API exists or is accessible, but I haven't found one yet. I think there is a way to get the eglBase from the video track through reflection which can be used to call eglBase.createSurface(videoCodec.createInputSurface()). Not particularly pretty or intuitive though, and not a preferred way to do it. With that method and a few other webrtc methods, I got a correct video but it occasionally seems to give a SIGSEGV crash at a lower-level so it's not very stable.
Yeah that's not ideal. I'll sync up with a coworker on this one to get more ideas on a stable solution. Thanks for your patience!
@Alton09 Oddly enough, I hadn't tried using both `i420Copy` and `i420Rotate` together, but that seemed to work (for my device, a Pixel 3a). The result you got indicates it may not produce the right result for every device, so we may have to dig deeper. I have seen the green artifacts before, but I saw you added `COLOR_FormatYUV420Flexible` to the format, which is what I was going to suggest trying, since most answers I found recommended it.
> Yeah that's not ideal. I'll sync up with a coworker on this one to get more ideas on a stable solution. Thanks for your patience!
Thank you, I look forward to it!
Just tested on my Pixel 5 and it looks perfect like what you have seen on the Pixel 3 👍🏻
> The result you got indicates it may not produce the right result for every device, so we may have to dig deeper
Yep, agreed. I think the media format config (color and bitrate) needs to be set to specific values based on the encoder the device uses for AVC video. We've seen issues with specific hardware encoders in the past when given incorrect values for color and bitrate.
I think there are some utility functions built in to resolve that, but it sounds like it's on the right track 👍
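One such built-in worth checking: the platform can report which color formats a device's AVC encoder actually accepts, which helps avoid guessing a `KEY_COLOR_FORMAT` value the hardware rejects. A sketch using the standard Android APIs (not runnable off-device):

```kotlin
// Query the color formats every AVC encoder on this device supports,
// so MediaFormat.KEY_COLOR_FORMAT can be chosen from a known-good set.
fun supportedAvcColorFormats(): List<Int> {
    val codecInfos = MediaCodecList(MediaCodecList.REGULAR_CODECS).codecInfos
    return codecInfos
        .filter { it.isEncoder && it.supportedTypes.contains(MediaFormat.MIMETYPE_VIDEO_AVC) }
        .flatMap { it.getCapabilitiesForType(MediaFormat.MIMETYPE_VIDEO_AVC).colorFormats.toList() }
        .distinct()
}
```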
@cyodrew I haven't had luck getting the codecs to work on other devices yet by changing the supported color types in the `MediaCodec` configuration. Another thing to try is to compare the configuration WebRTC uses when creating codecs in the `HardwareVideoEncoder` class to see if we are missing anything.
Were you getting results similar to the video you previously posted that contained artifacts? That may be worth adding to the initial configuration; there are a few things there that were not in my original code.
Hey @cyodrew. I've done all I can for this issue at the moment and need to look at other ongoing issues. It appears the solution is almost there when using raw `ByteBuffer`s from a custom `VideoProcessor`, with just some configuration changes needed on the `MediaCodec` to get it working properly on other OEMs. Again, I think taking a look at the WebRTC APIs can be helpful here. Also, I found a really helpful blog post that goes into detail about proper hardware `MediaCodec` configuration and may provide more insight.
I think the preferred way to solve this is to provide a `Surface` to the `MediaCodec` (also recommended by Google) and avoid using the `VideoProcessor`. After syncing with @aaalaniz on this, he had a few more ideas to get this working. First, reflection can be used to get the `SurfaceTexture` from the camera. The `SurfaceTextureHelper` has a `getSurfaceTexture()` method which can be used to retrieve the `SurfaceTexture`. It is referenced by the `CameraCapturer` and `Camera2Capturer` classes, so reflection would be needed to get a reference to the private `SurfaceTextureHelper` member. Also, there are many examples in the Grafika repo performing features similar to what you are trying to achieve here, so definitely take a look there as well. The `VideoEncoderCore` class looks interesting and could be helpful.
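For context, a minimal sketch of the Surface-input path on the `MediaCodec` side, using standard Android APIs (the EGL rendering onto the surface, which is the hard part discussed above, is left out):

```kotlin
// Surface-input encoding: instead of queuing ByteBuffers, hand the
// encoder's input Surface to the rendering side.
val codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
codec.configure(videoMediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)

// createInputSurface() must be called after configure() and before start().
val inputSurface: Surface = codec.createInputSurface()
codec.start()

// Frames rendered onto inputSurface (e.g. via EGL) are encoded directly;
// rotation is then a matter of the GL transform rather than CPU copies.
```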
Also, you may already be aware of this, but we do have a REST API for recording a video room. It may not meet your use case, but here it is just in case.
I'll close this issue for now, but please feel free to open tickets for any new issues as needed. Thanks for your patience and collaboration!
Hey @Alton09,
I appreciate all the help regarding this issue. I also believe the `Surface` method with `MediaCodec` would be the best option moving forward. What I'm trying to understand is what to do with the `SurfaceTexture` (accessed through reflection as you suggested), as many of the examples in Grafika are a bit bloated, and it appears many of the EGL-related classes are created and referenced internally by WebRTC, so I'm not sure if I'll also need reflection to get those or if two instances can coexist.
I'll keep the search up!
Happy to help @cyodrew! Thank you, and please do share any findings to help others who may be facing the same issue.
### Description

I'm attempting to pull frames off the camera prior to their being adapted for WebRTC. Based on the docs, it seems the best way to do this is to create a custom `VideoProcessor` and obtain the frame in the overridden method `onFrameCaptured(VideoFrame frame, VideoProcessor.FrameAdaptationParameters parameters)`. The video is captured in portrait orientation, so the frame has a rotation value of 270 or 90, depending on whether it's the front or back camera. I'm feeding the frame to the Android encoder `MediaCodec` using `YuvHelper.i420Rotate` by copying it into the input buffer. After using `MediaMuxer`, the resulting MP4 appears wrong.

Video captured with `MediaCodec` width set to 1080 and height set to 1920.

Note: I hardcoded the height, width, and framerate here for brevity. If I use `YuvHelper.i420Copy` and the unrotated width and height when setting up the `MediaCodec`, the video looks correct but is oriented horizontally (see below).

Video captured with `MediaCodec` width set to 1920 and height set to 1080.

### Steps to Reproduce
1. Create a custom `VideoProcessor`
2. Set up a `MediaCodec` for encoding
3. Apply `i420Rotate` to the `VideoFrame.i420Buffer`
4. Queue the rotated buffer into the `MediaCodec`
5. Write the encoded output with `MediaMuxer`
### Code

- `RecordingVideoProcessor.kt`
- `MediaHandler.kt`
- `VideoActivity.kt` (in the quickStartKotlin project)

### Expected Behavior
The output file should look correct regardless of whether the front or back camera is used.
### Actual Behavior
The output file does not look right.
### Reproduces How Often
100%
### Logs

### Versions

All relevant version information for the issue.

- Video Android SDK: 6.2.1
- Android API: 30
- Android Device: Pixel 3a