Only the first "mVideoBuffMax" frames are rendered successfully; a runtime error occurs when trying to render frame "mVideoBuffMax + 1".
According to my logging in localVideoContext->textureObj->upload():
std::thread YThread = std::thread([&]() {
    if (mWidthY == rowPitchY) {
        LOG("----------------- [Y2] before memcpy\n");
        memcpy(ptrMappedY, ych, mLengthY); // error occurs here
        LOG("----------------- [Y3] after memcpy\n");
    }
});
The log line "[Y3] after memcpy" is never printed when the current frame number is mVideoBuffMax + 1.
When I checked mLengthY, its value looked correct: [upscaled width * upscaled height].
Why does this error happen, and how can I fix it?
I am not sure what the root cause is from the code you posted here, but I have some suggestions:
Remember to release the pBuffer allocated in scaleResolution.
sws_scale is a CPU-based upscale. One alternative is to render your frame to a render target at the resolution you want.
You could attach your native project to Unity and step through each line for more debugging detail.
Please check and try again. If you still have difficulties, please provide more information for further evaluation.
Thanks.
Hello. I want to add the sws_scale function, so I made a DecoderFFmpeg::scaledResolution() function which is called in DecoderFFmpeg::UpdateVideoFrame.
Now the frames in the mVideoFrames queue have the upscaled resolution, so I modified the ViveMediaDecoder::DoRendering() function as shown in the code above.
Thank you.