aleksati closed this issue 4 years ago
Update: in [VA_motion_image], the [jit.rgb2luma] converts all grayscale videos from 4-plane to 1-plane matrices. However, when this happens, the [jit.op @op + pass pass pass @val 255] in [VA_motion_average_image] makes these 1-plane videos completely white.
The only reason I can see for making all grayscale videos 1-plane matrices before the noise reduction is that this somehow improves the noise reduction. In every step after this, the videos are converted back to 4 planes, either by [jit.matrix 4] or by [mgt.luma2rgb].
But since all color videos go through the noise reduction as 4-plane matrices, I think the [jit.rgb2luma] was just an earlier way of converting videos to grayscale, and that now, with our [jit.brcosa] implementation, it is unnecessary.
Please correct me if I'm wrong here, or if I'm just missing something. @balintlaczko, any comment on this?
Generally, it is efficient to convert a matrix to grayscale by reducing it to a 1-plane one, and all subsequent operations should then be able to handle 4- or 1-plane matrices appropriately. But I remember there was a problem with the 1-plane matrix further down the line: maybe some of the externals could only handle 4-plane matrices, or it was the situation you also mentioned (that matrices are hardcoded to be 4-plane almost everywhere). That led me to initially implement the grayscale conversion with the brcosa, but that is sort of a hack. I will look into the problem now.
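To illustrate the efficiency point, here is a minimal numpy sketch of what a [jit.rgb2luma]-style conversion does: the three color planes collapse into a single luma plane, so the result carries a quarter of the data of a 4-plane char matrix. The BT.601 weights are an assumption (jit.rgb2luma exposes its scaling factors as attributes), and the 3-plane input is a simplification of Jitter's 4-plane ARGB layout:

```python
import numpy as np

def rgb2luma(rgb):
    """Sketch of an rgb2luma-style conversion on a uint8 matrix.

    Assumes standard BT.601 luma weights; the real object lets you
    set the per-plane scale factors via attributes.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    # round and clamp back into char (uint8) range, keep a plane axis
    return np.clip(np.rint(luma), 0, 255).astype(np.uint8)[..., None]

# a flat mid-gray frame stays mid-gray, but as a single plane
frame = np.full((240, 320, 3), 128, dtype=np.uint8)
luma = rgb2luma(frame)
```

Downstream objects then only need to process one plane per cell instead of four, which is where the efficiency win comes from.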
This has something to do with the [VA_motion_average_image] abstraction. The way it is implemented now, it uses [jit.op @op + pass pass pass @val 255] to "hack" the [jit.mean] and create the motion average images. However, this makes the average images all white when running grayscale videos.
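A small numpy sketch of why this whites out 1-plane matrices, assuming (as described above) that jit.op applies its operator list plane by plane: on a 4-plane ARGB char matrix, the `+ 255` only saturates plane 0 (alpha), but on a 1-plane matrix the single plane gets the `+ 255` and every cell clips to white:

```python
import numpy as np

def op_plus_pass(m, val=255):
    """Sketch of [jit.op @op + pass pass pass @val 255] on a char matrix.

    Plane 0 gets a saturating '+ val'; any remaining planes are passed
    through unchanged (the 'pass' operators).
    """
    out = m.copy()
    # saturating uint8 add on plane 0 only
    out[..., 0] = np.clip(out[..., 0].astype(np.int16) + val, 0, 255).astype(np.uint8)
    return out

# 4-plane ARGB matrix: only plane 0 (alpha) is forced to 255
argb = np.zeros((4, 4, 4), dtype=np.uint8)
res4 = op_plus_pass(argb)

# 1-plane grayscale matrix: the single plane gets '+ 255' -> all white
gray = np.zeros((4, 4, 1), dtype=np.uint8)
res1 = op_plus_pass(gray)
```

So the "hack" is harmless on 4-plane videos (it only pins the alpha plane to opaque before [jit.mean]) but destroys the content of a 1-plane video, which matches the all-white average images described above.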