In this way, luma and chroma are processed independently, which goes against the idea of 'cmode'. EDIT: No, I had read it wrong, my mistake. Luma is processed 100% independently, and chroma uses a downscaled luma.
This is what I use when I want different strengths for luma and chroma denoising.
If I want the same strength, I use the KNLMeansCL wrapper in havsfunc.
https://github.com/HomeOfVapourSynthEvolution/havsfunc/blob/master/havsfunc.py
# Imports as in havsfunc's module header, added so this excerpt is self-contained.
import vapoursynth as vs
import mvsfunc as mvf

def KNLMeansCL(clip, d=None, a=None, s=None, h=None, wmode=None, wref=None, device_type=None, device_id=None, info=None):
    core = vs.get_core()

    if not isinstance(clip, vs.VideoNode):
        raise TypeError('KNLMeansCL: This is not a clip')
    if clip.format.color_family not in [vs.YUV, vs.YCOCG]:
        raise TypeError('KNLMeansCL: This wrapper is intended to be used for color family of YUV and YCOCG only')

    nrY = core.knlm.KNLMeansCL(clip, d=d, a=a, s=s, h=h, wmode=wmode, wref=wref, device_type=device_type, device_id=device_id, info=info)

    if clip.format.subsampling_w > 0 or clip.format.subsampling_h > 0:
        subY = core.resize.Bicubic(mvf.GetPlane(clip, 0), clip.width >> clip.format.subsampling_w, clip.height >> clip.format.subsampling_h,
                                   src_left=-0.5 * (1 << clip.format.subsampling_w) + 0.5, filter_param_a=0, filter_param_b=0.5)
        yuv444 = core.std.ShufflePlanes([subY, clip], planes=[0, 1, 2], colorfamily=clip.format.color_family)
        nrUV = core.knlm.KNLMeansCL(yuv444, d=d, a=a, s=s, h=h, cmode=True, wmode=wmode, wref=wref, device_type=device_type, device_id=device_id)
    else:
        nrUV = core.knlm.KNLMeansCL(clip, d=d, a=a, s=s, h=h, cmode=True, wmode=wmode, wref=wref, device_type=device_type, device_id=device_id)

    return core.std.ShufflePlanes([nrY, nrUV], planes=[0, 1, 2], colorfamily=clip.format.color_family)
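For illustration only, a minimal sketch of the "different strength" case on a 4:4:4 clip (the h values and the BlankClip source are placeholders, not taken from this thread; subsampled input would need the same downscaled-luma trick as the wrapper above):

import vapoursynth as vs
core = vs.get_core()

# Placeholder 4:4:4 source so cmode=True can be used directly.
clip = core.std.BlankClip(format=vs.YUV444P16, width=1920, height=1080)

# Luma pass: without cmode only the Y plane is filtered.
nr_y = core.knlm.KNLMeansCL(clip, d=1, a=2, s=4, h=1.2)

# Chroma pass: cmode=True also filters U/V; use a different (weaker) h here.
nr_uv = core.knlm.KNLMeansCL(clip, d=1, a=2, s=4, h=0.6, cmode=True)

# Keep Y from the first pass and U/V from the second.
out = core.std.ShufflePlanes([nr_y, nr_uv], planes=[0, 1, 2], colorfamily=vs.YUV)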
Something like the chroma upscaling?
clip = core.resize.Bicubic(clip, format=vs.YUV444P8, matrix_s="709", matrix_in_s="709")
clip = core.knlm.KNLMeansCL(clip, cmode=True)
clip = core.resize.Bicubic(clip, format=vs.YUV420P8, matrix_s="709", matrix_in_s="709")
1. The center shift is not fixed.
2. In my tests with 0.7.6 this made the chroma worse and it was slower (GTX 670, 1080p, 16-bit input).
3. That's why I didn't use the upsampling approach.
PS: I can't test 0.7.7 right now because I'm encoding something.
Process the Y with the UV upscaled, and process the UV with the Y downscaled. The problem is how to make it fast; otherwise it is useless.
Also, you are making me doubt whether the YUV444 clip is processed correctly.
EDIT: Typo.
KNLMeansCL 0.7.7.
I can just write the code like this; for example, for a 1920x1080 YUV420P16 input:
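As a rough sketch of that idea (not the script actually used in this thread; the h value and the bicubic settings are copied from the wrapper above as placeholders), processing Y with the UV upscaled and UV with the Y downscaled could look like this:

import vapoursynth as vs
core = vs.get_core()

# Placeholder 1920x1080 YUV420P16 source.
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# Y pass: upscale chroma to 4:4:4 so U/V can guide the luma, then keep only the filtered Y.
up444 = core.resize.Bicubic(clip, format=vs.YUV444P16)
nr_y = core.knlm.KNLMeansCL(up444, d=1, a=2, s=4, h=1.2, cmode=True)

# UV pass: downscale the luma to chroma resolution (960x540) and rebuild a
# 4:4:4 clip at that size, as the havsfunc wrapper does, then filter U/V.
gray_y = core.std.ShufflePlanes(clip, planes=0, colorfamily=vs.GRAY)
sub_y = core.resize.Bicubic(gray_y, 960, 540, src_left=-0.5, filter_param_a=0, filter_param_b=0.5)
small444 = core.std.ShufflePlanes([sub_y, clip], planes=[0, 1, 2], colorfamily=vs.YUV)
nr_uv = core.knlm.KNLMeansCL(small444, d=1, a=2, s=4, h=1.2, cmode=True)

# Recombine: full-resolution Y from the first pass, half-resolution U/V from
# the second, giving a YUV420P16 clip again.
out = core.std.ShufflePlanes([nr_y, nr_uv], planes=[0, 1, 2], colorfamily=vs.YUV)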