cameng318 opened 2 years ago
Interesting idea. To judge whether it is worth implementing, please provide some sample images that show the differences and similarities between your algorithm and the existing Markesteijn algorithm implemented in RawTherapee, preferably zoomed-in sections.
I don't know how the Markesteijn algorithm works, and I can't find literature describing it. So I'm going to show comparisons against my Gaussian filter method instead. Basically, it weights the few nearby pixels with a Gaussian function. Performance-wise it should be equivalent to the bilinear interpolation algorithm for a Bayer sensor.
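The exact weighting isn't spelled out above, but a Gaussian reference method like this is commonly done as normalized convolution: blur each channel's sparse samples and divide by the blurred sampling mask. Here is a minimal numpy sketch under that assumption; the function names are mine, the kernel is a small separable Gaussian-like [1 2 1]/4, and the 6x6 layout is the standard X-Trans CFA:

```python
import numpy as np

# Standard 6x6 X-Trans CFA layout (one period).
XTRANS = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]

def cfa_masks(h, w):
    """Boolean sampling mask per channel for an h x w X-Trans mosaic."""
    pat = np.array([list(row) for row in XTRANS])
    tiled = np.tile(pat, (h // 6 + 1, w // 6 + 1))[:h, :w]
    return {ch: tiled == ch for ch in "RGB"}

def conv2(img, k):
    """2-D convolution with reflect padding (no external dependencies)."""
    H, W = img.shape
    pad = k.shape[0] // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros((H, W))
    for dy in range(k.shape[0]):
        for dx in range(k.shape[1]):
            out += k[dy, dx] * p[dy:dy + H, dx:dx + W]
    return out

# Separable 3x3 Gaussian-like kernel: [1 2 1]/4 in each direction.
K = np.outer([1, 2, 1], [1, 2, 1]) / 16.0

def gaussian_demosaic(raw):
    """Per-channel normalized convolution: blur the masked samples and
    divide by the blurred mask so the Gaussian weights renormalize over
    however many samples of that channel fall inside the footprint."""
    h, w = raw.shape
    masks = cfa_masks(h, w)
    out = np.empty((h, w, 3))
    for i, ch in enumerate("RGB"):
        m = masks[ch].astype(float)
        num = conv2(raw * m, K)
        den = conv2(m, K)
        out[..., i] = num / np.maximum(den, 1e-12)
    return out
```

As a sanity check, a flat gray mosaic comes back flat gray wherever a channel has at least one sample inside the kernel footprint, since numerator and denominator scale together.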
The test images are from this medium post by Jonathan Moore Liles. Actually, some of the images processed with the Markesteijn algorithm are in that post, but I'm not sure how much additional processing was done to them.
The top images are processed with the Gaussian method, and the bottom images are processed with this 3x3 to 2x2 method with G3 split into 4 parts.
I figured out how to compare this method with Markesteijn. The test image comes from DPReview's Fuji X-T4 review.
(Comparison crops, images not shown: this method vs. Markesteijn 3-pass, 1-pass, and fast.)
By my observation, this method doesn't correct the color filtering and moire artifacts the way the Markesteijn algorithm does. However, its performance is nearly identical to the fast method. Although the pixel count is reduced by half, this method doesn't seem to lose any sharpness or contrast. I think it would be a viable way to reduce post-processing file size, which might be useful for Fuji's 40MP camera that's about to come out.
By the way, I would like to know more about how the Markesteijn algorithm works. Is there a paper about it? Maybe I can combine the Markesteijn algorithm with this method to address the artifacts while still reducing the file size.
As more food for thought, maybe the center green pixel in the X-Trans array could be replaced with a white pixel to improve low-light response. Perhaps then it could shoot monochrome pictures nearly as well as dedicated monochrome cameras.
I have a Fuji X-T20 with an X-Trans III sensor. I am happy to provide raw images for testing if you are interested. Is there a particular type of subject matter you are looking for?
Here are some I happen to have already shared for various reasons:
https://discuss.pixls.us/t/is-defringe-supposed-to-have-an-effect-when-equalizer-is-zeroed-out/31404?u=chaimav
https://discuss.pixls.us/t/what-causes-these-hot-pixels/27893?u=chaimav
https://discuss.pixls.us/t/tutorial-using-local-adjustments-to-create-off-center-vignette/25193?u=chaimav
https://discuss.pixls.us/t/hazy-snowy-bridge/23605?u=chaimav
I came up with a fast way to demosaic the X-Trans sensor. I'm not sure if it's a new idea, but I didn't find it elsewhere.
Basically, the 6x6 X-Trans pattern can be broken down into the following 3x3 pattern and its 90-degree rotation:

```
G1 B1 G2
R1 G3 R2
G4 B2 G5
```
We can demosaic each 3x3 block into the overlapping 2x2 pixel groups below, one per channel:

```
R1' R2'    G1' G2'    B1' B2'
R3' R4'    G3' G4'    B3' B4'
```
For red and blue, we can just copy the values over with the following operations:

```
R1' = R3' = R1    R2' = R4' = R2
B1' = B2' = B1    B3' = B4' = B2
```
For the green, we can divide G3 into 4 pieces and add them to the corners:

```
G1' = G1*0.8 + G3*0.2
G2' = G2*0.8 + G3*0.2
G3' = G4*0.8 + G3*0.2
G4' = G5*0.8 + G3*0.2
```
or just ignore G3:

```
G1' = G1    G2' = G2
G3' = G4    G4' = G5
```

Ignoring G3 produces a little more moire in my simulation, but sharpness seems a little better.
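To make the per-cell mapping concrete, here is a minimal numpy sketch of the rules above. The function name and the flat-field example are mine, not from any existing implementation:

```python
import numpy as np

def demosaic_cell(cell, split_g3=True):
    """Map one 3x3 X-Trans cell to a 2x2 RGB group.

    `cell` is laid out as in the pattern above:
        G1 B1 G2
        R1 G3 R2
        G4 B2 G5
    Returns a (2, 2, 3) array in R, G, B order.
    """
    (G1, B1, G2), (R1, G3, R2), (G4, B2, G5) = np.asarray(cell, float)
    # Red and blue are simply copied: red along columns, blue along rows.
    R = np.array([[R1, R2], [R1, R2]])
    B = np.array([[B1, B1], [B2, B2]])
    corners = np.array([[G1, G2], [G4, G5]])
    # Either spread the center green into the four corners, or drop it.
    G = 0.8 * corners + 0.2 * G3 if split_g3 else corners
    return np.stack([R, G, B], axis=-1)
```

A flat field passes through unchanged, since the green weights sum to one (0.8 + 0.2): `demosaic_cell(np.full((3, 3), 100.0))` gives 100.0 in every channel.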
Each input cell has 2 red, 5 green, and 2 blue pixels, and each output group has 4 red, 4 green, and 4 blue values. Thus we only lose a little bit of information in the green, and we still have at least twice as much green information as red or blue.
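As a sanity check on those counts, the script below (my own bookkeeping, using the standard 6x6 X-Trans layout) verifies that every 3x3 cell of a tile really contains 2 red, 5 green, and 2 blue samples, and that 36 raw samples per tile become 16 RGB output pixels, a bit under half:

```python
import numpy as np

# Standard 6x6 X-Trans CFA layout (one period).
XTRANS = np.array([list(row) for row in [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]])

# Whole tile: 8 red, 20 green, 8 blue samples.
totals = {ch: int((XTRANS == ch).sum()) for ch in "RGB"}
assert totals == {"R": 8, "G": 20, "B": 8}

# Each 6x6 tile splits into four 3x3 cells; every cell
# contains 2 red, 5 green, and 2 blue samples.
for r in (0, 3):
    for c in (0, 3):
        cell = XTRANS[r:r + 3, c:c + 3]
        counts = {ch: int((cell == ch).sum()) for ch in "RGB"}
        assert counts == {"R": 2, "G": 5, "B": 2}

# 36 raw samples per tile -> 4 cells x (2x2 RGB) = 16 output pixels.
print(36, "->", 4 * 4)
```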
I've simulated it in Matlab; the result is very similar to demosaicing at 1:1 pixel count with Gaussian weights, except it has a little more color fringing on high-contrast edges. The fringe pattern isn't like the superpixel artifacts of Bayer sensors, which are blue on one side and red on the other. Instead, the fringes alternate between red and blue thanks to the nature of the X-Trans sensor, which is more pleasing to the eye in my opinion. The fringes are also much weaker than those from a superpixeled Bayer sensor, because the RGB pixels in each 3x3 grid average to the same center.
The difference in sharpness compared to 1:1 demosaicing with Gaussian weights is negligible, despite the output having less than half the pixel count. This method does not attempt to correct artifacts caused by the CFA. The result might seem soft, but it won't introduce digital artifacts like worms or orange-peel texture. That could make it desirable for portraits.
It's basically bilinear interpolation for the X-Trans sensor that also reduces file size, with only a small compromise.
Pro: roughly as fast as bilinear interpolation; output pixel count cut to less than half with negligible loss of sharpness and contrast; no worm or orange-peel artifacts.
Con: does not correct color filtering and moire artifacts the way Markesteijn does; slightly more color fringing on high-contrast edges.
I don't actually have a Fuji camera; all of the ideas above come from my Matlab simulation. I'm interested to see how this method performs on actual images taken with an X-Trans sensor, or whether it has already been implemented somewhere.