cpixip / simple_picam_lens_compensation

Calculating and using the new lens compensation in picamera with a v1-camera

Sensor mode issue #1

Open zbarna opened 5 years ago

zbarna commented 5 years ago

Hi @cpixip!

Regarding our conversation here: https://github.com/waveform80/picamera/pull/470, here are my remarks about your code.

So the code is working well, and I made some tests with different camera configs (I took pictures of a yellow painted plastic item): sensor_mode = 1, awb_mode = off, awb_gains=(1.5, 1.0) (this is perfect!) 2018-11-02_190228

sensor_mode = 2, awb_mode = off, awb_gains=(1.5, 1.0) (at the sides it's a little bit whiter) 2018-11-02_190254

sensor_mode = 2, awb_mode = on (at the sides it's a little bit whiter) 2018-11-02_190202

Here you can see the meaning of the sensor modes: https://picamera.readthedocs.io/en/release-1.13/fov.html#sensor-modes

I would like to use sensor_mode = 2, because in that mode I can see the full field of view of the camera.

I think your code calibrates the sensor_mode = 1 picture. I tried to modify the camera config in the script, but it didn't help. What do you think? Do you have this issue as well?

The other thing: after a few tests, I realized that if the awb is on, the lens shading table is not working well. What's your opinion?

Thanks! Best Regards, zbarna

cpixip commented 5 years ago

Hi @zbarna ,

I rearranged the repository and uploaded a new script. Check it out.

Basically, my routine which converts the raw image to a 4-channel image used for processing assumes implicitly that we work with camera.sensor_mode = 2. The new script makes sure that this is indeed the case. Once the table has been calculated, it should not matter in which mode you are using it. I have checked a few modes (including mode 2), but not all.
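For reference, a full-resolution raw capture in mode 2 with picamera could look roughly like the following sketch (only an illustration of the idea using picamera's PiBayerArray helper, not the exact code of the script):

    import time
    import picamera
    import picamera.array

    # Assumption: a v1 camera (OV5647); sensor mode 2 is the full-resolution
    # 2592x1944 mode from which the lens compensation table is calculated.
    with picamera.PiCamera(sensor_mode=2) as camera:
        camera.resolution = (2592, 1944)
        time.sleep(2)                       # let exposure and gains settle

        with picamera.array.PiBayerArray(camera) as stream:
            camera.capture(stream, 'jpeg', bayer=True)
            # stream.array is a (rows, cols, 3) array in which, per pixel,
            # only the channel actually sampled by the Bayer pattern is
            # non-zero; stream.demosaic() would give a naive demosaic instead.
            raw = stream.array
            print('raw shape:', raw.shape)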

The new script also handles various hflip- and vflip-settings. The script figures this out from the header of the raw image data (the Bayer order) and remaps the table accordingly. Furthermore, I included the proper handling of a shift of the origin of the lens compensation table which I observed during testing.
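Just to illustrate the flip handling (a hypothetical sketch, not the code of the script): if the table is kept as one 2D gain grid per raw channel, a flipped sensor orientation can be handled by flipping each grid; which grids additionally have to swap places depends on the Bayer order reported in the raw header, which the script reads out.

    import numpy as np

    def remap_shading_table(table, hflip=False, vflip=False):
        """Hypothetical helper: 'table' maps channel names ('R', 'G1',
        'G2', 'B') to 2D gain grids. Flipping the sensor flips each grid;
        any additional channel swaps depend on the native Bayer order
        reported in the raw image header (not handled here)."""
        out = {}
        for name, grid in table.items():
            g = np.asarray(grid)
            if hflip:
                g = g[:, ::-1]
            if vflip:
                g = g[::-1, :]
            out[name] = g
        return out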

The raw image format changes if you do not use mode 2 - it is quite complicated and handled in waveform80's PiBayerArray class. I will however not implement any of the other modes, as capturing the raw image data at full resolution is the only sensible basis for calculating the lens compensation table. The routine which decodes the raw image should throw an error if supplied with raw data acquired in the wrong mode.

Let me know if the new script does what you want to achieve.

I remember faintly that one of the engineers at the Raspberry Pi forums was mentioning that the white balance algorithm somewhat modifies the lens compensation table supplied to the camera, at least on some occasions. But I cannot find that quote. In my applications, I usually fix the white balance gains, so I never observe that effect.

Try to use a gray surface for the calibration image and record the white balance gains used; later, during normal image operations, just use these gains. As long as you do not change the illumination of the scene, the colors will be ok.
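In picamera terms, recording and re-using the gains could look roughly like this (a sketch; the delay and filename are just placeholders):

    import time
    import picamera

    with picamera.PiCamera(sensor_mode=2) as camera:
        # let the auto white balance settle on the gray calibration surface ...
        camera.awb_mode = 'auto'
        time.sleep(2)
        gains = camera.awb_gains            # e.g. (Fraction(3, 2), Fraction(5, 4))
        print('fixing awb gains at:', gains)

        # ... then freeze exactly these gains for all subsequent captures
        camera.awb_mode = 'off'
        camera.awb_gains = gains
        camera.capture('capture_with_fixed_gains.jpg')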

Best, cpixip

zbarna commented 5 years ago

Dear @cpixip!

Finally I had time to check your new script. Unfortunately I got almost the same result as last time. :(

I used the case with no hflip and vflip, i.e. table_B1.h. As I see it, the script uses auto awb by default. I used a white surface for the calibration, and the awb_gains were (1.6, 1.2) when the raw picture was taken. Your script created the following pictures: raw_B1: raw_b1 x_raw_B1: x_raw_b1

I took pictures with my program with the following settings: sensor_mode = 2, awb_mode = off, awb_gains=(1.6, 1.2). This should be yellow: 2018-11-07_131858

sensor_mode = 2, awb_mode = off, awb_gains=(1.6, 0.9). This color is perfect: 2018-11-07_132017 This should be blue, but at the sides it's wrong :( 2018-11-07_132004

So if I change the awb_gains to fix the colors of the yellow item, the blue item becomes wrong. I don't know what to do :( . I tried to use a gray surface for the calibration but got the same result :( .

My only hope is that you wrote this solution is working for you. Could you show me some sample pictures from your project? What kind of lens do you use? How do you calibrate? Maybe it can help me.

The other thing, which maybe you can explain to me, although it's not related to the lens shading table: if I use sensor_mode = 1, the native resolution is 1920x1080, but if I set camera.resolution to 2592x1944 (5 megapixels) and take a photo, it creates an image with 2592x1944 resolution. How can the resolution be larger than 1920x1080 in sensor_mode = 1, according to this: https://picamera.readthedocs.io/en/release-1.13/fov.html#sensor-modes

Thanks for your help in advance! Best Regards, zbarna

cpixip commented 5 years ago

Hi @zbarna,

your image "x_raw_B1" in comparison to the starting point, image "raw_B1", shows me that the lens compensation is working, at least in principle.

I can think of several different causes for what you are experiencing:

  1. Your illumination source might have a problem. For the lens compensation calculations to work well, you want an illumination which exposes the raw image equally well in all 4 color channels. I can check this if you make the raw image available to me. Also, the computed lens compensation table would give some hints.
  2. Was the "yellow" compensated image really taken with awb_gain=(1.6,1.2), the same gain which was used to take the raw reference image? If so, the yellow paper image should be yellow, which it clearly is not. Can you take a raw image with the yellow paper and make it available to me?
  3. You might encounter too much cross-talk between the color channels in the edge regions of the image. That will depend on the specific optical path you are using. Please give me some hints about what you are actually using as a lens. If you additionally take a raw image of the blue paper, I might get some further information about the cross-talk behavior.

You could make your files, especially the raw image used for the calibration and the lens compensation table (the .h-file), available to me via Dropbox or so. Also, please tell me the focal length of the lens you are using. Then I could maybe make a more educated guess.

I have uploaded some example images in the geo_05.py directory. Check them out. Why sensor_mode = 1 creates a full-resolution image, I do not know.

Best, cpixip.

zbarna commented 5 years ago

Hi!

Thanks a lot for your time again!

I made a lot of test files.

Here you can download the test files I created; please check my explanation of the items below: https://drive.google.com/open?id=1tq8WB1By8GnQ5HD7PaGbrTp1YDGfme_j

calib_grey_surface folder:

I named the test pictures this way: raw_"color of the item" surface_awb "awb_gains[0] in float"__"awb_gains[1] in float".jpg

calib_white_surface folder: Here you can find almost the same as in the calib_grey_surface folder. I didn't take test pictures here because the result was nearly the same as with the grey surface calibration. If you want, I can do it.

Photos_about_the_camera_lens_and_other_stuff folder: Here you can find pictures of the microscope, lens, camera adapter, illumination, etc. The file names explain what you see in the pictures.

The lens is a simple lens used for microscopes, and the focal length is around 12 mm (the distance between the lens and the picamera sensor).

python_codes folder:

Hopefully this information will give you a better view of my issue.

I really appreciate your help! Waiting for your answer!

Best Regards, zbarna

cpixip commented 5 years ago

Hi @zbarna,

thanks for the upload. I have checked the images. At this point in time, I cannot pinpoint the problem. Basically, the computed lens compensation tables are doing what they are supposed to do: counteracting illumination differences in such a way that a more or less homogeneous image is obtained. The proof is in the images "x_raw_B{0/1/2/3}.jpg" of your test runs: they show a rather homogeneous grey image. The little black dots are dust particles on your sensor which are too small to be compensated.

Looking at one of the raw images,

image

what is striking is the lack of power in the red channel (red channel top-right; then G1, Blue and G2 channels, clockwise) in an image with a gray surface. This points me to your LED source not providing a power spectrum similar enough to daylight. This situation is also visible in the computed lens compensation table (same color channel arrangement as above):

image

Of course, the lens compensation table is just a coarse inverted raw image. Here, the largest scale factors are in the red channel (up to about 1.4). Note also that there is some structure visible in the lens compensation table which is probably caused by structures in the gray paper you are imaging.
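Conceptually, each channel of the table is little more than the inverted, normalized reference channel multiplied by the default scaler of 32 (a simplified sketch, not the code of the script; the real table also has to be brought down to the grid resolution the firmware expects):

    import numpy as np

    def shading_gains(channel, scaler=32):
        """Per-pixel gains for one raw color channel of the gray reference
        image: the brightest area gets gain 1.0, darker (vignetted) areas
        get proportionally larger gains. The firmware stores gain * scaler
        as an integer, hence the coarse quantization."""
        channel = channel.astype(np.float64)
        gains = channel.max() / np.maximum(channel, 1.0)    # inverted, normalized
        return np.clip(np.round(gains * scaler), 0, 255).astype(np.uint8)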

Now, it is kind of important to have a really good illumination and a really structure-less gray surface. In any case, the results obtained (the "x_raw_B{0/1/2/3}.jpg" images) seem ok to me.

Disaster strikes as soon as you are imaging colored stuff. I do not know why you were using the specific awb_gains encoded in the image names - in any case, these gains are most certainly wrong. But first things first.

A histogram of the 4 raw channels of one of the calibration images looks like this:

image

Note again that the red channel shows less power than the other color channels.
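If you want to reproduce such a plot yourself, the per-channel histograms can be computed directly from the 4-plane raw array (a sketch assuming the cplane layout of the script, channels 0..3 = red, green1, green2, blue):

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_raw_histograms(cplane, bits=10):
        """Plot a histogram for each of the four raw Bayer channels;
        'cplane' is assumed to be an (h, w, 4) array of raw values."""
        for i, name in enumerate(['red', 'green1', 'green2', 'blue']):
            hist, edges = np.histogram(cplane[:, :, i].ravel(),
                                       bins=256, range=(0, 2 ** bits))
            plt.plot(edges[:-1], hist, label=name)
        plt.xlabel('raw value')
        plt.ylabel('pixel count')
        plt.legend()
        plt.show()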

Now, the two images marked by you as "...gray_surface..." should show the same histogram. But they don't:

image

It looks like these images were taken with another exposure setting. Or did you use a different surface for the calibration and test images (the latter being darker)? In any case, neither awb_gains = (1.6, 0.9) nor awb_gains = (1.6, 1.2) gives you a gray surface in the .jpg-images in the end. One of the images is too yellow, the other too red.

The question is: why are you using these gains? I suggest that you print out the gains of the auto white balance algorithm my script is using when taking the "x_raw_B{0/1/2/3}.jpg" and fix the awb_gains at this value. If you look at the RGB-histogram of one of these images (you can do this yourself in GIMP, for example), you will see that it is perfectly white balanced:

image

All 3 color channels (this is the histogram of the .jpg-image, not the raw data) are at the same spot. The histogram of the blue channel is drawn last, so it covers the histograms for red and green.

Some general remarks: the imaging pipeline of the Raspi works with 10bit or 8bit numbers, so it has a limited numerical resolution. If your light source is not perfect (which I think is the case) or you are using too high gains (which I think is also the case), the results will show this limited numerical resolution. Try setting the awb_gains = (1.0,1.0) to see what changes. By the way, if the awb_gains differ too much from this value, you can see an increase in the saturation of the image.

It is certainly best to fix the awb_gains to the gains used during the capture of the "x_raw_B{0/1/2/3}.jpg" images. See what happens: gray should stay gray and any white surface should appear white as well. My final suggestion: try to add one or two red LEDs to your light source and see. Also check whether the area you are imaging is evenly lit.

Guess that's all I can remark at the moment. In summary, the computed lens compensation table seems to be ok for the calibration images captured. I think your light source lacks enough power in the red, and it seems that it is also illuminating the scene unevenly. You should find a value for awb_gains which really gives you a gray image with a gray surface. Then stick to these values when taking images of colored objects. Maybe test with some less saturated surfaces, like a stone surface or so.

Ok... - time for the weekend. Have fun and let me know if you get some improvements.

All the best, cpixip.

zbarna commented 5 years ago

Hi!

Thanks for your explanation!

I tried to use different kinds of light sources (warm light, cold light, ...), but I got stuck with the same problem. I uploaded some new test files here: https://drive.google.com/open?id=14erFpHoCGpjicgKUp3U_jHrsOt9_hOxa

You were right, the fixed awb_gains I used weren't correct. I had printed out the awb_gains when the raw picture was taken without lens shading compensation. I had to change it to print out the awb_gains when the picture is taken with the lens shading compensation.

The new values are awb_gains = (1.5039, 1.2382). Now the histogram of the gray surfaced item is almost the same as that of the picture from your calibration script.

With the calibrated lens shading matrix and the fixed awb_gains = (1.5039, 1.2382), the gray surface is gray and the white is white, but if I put a colored item under the lens, the lens shading calibration is not working. I tried to play with the saturation, but that didn't help either :( . I uploaded pictures with awb_gains = (1.0, 1.0) and with different saturation values.

Maybe the problem is with the lens I use to look inside the microscope, and the v1 cam sensor cannot handle it either, just like the v2 cam couldn't :( .

What do you think? I really have no more ideas.

By the way, how did you open the raw images to see those grayscale images of the color channels?

Best Regards, zbarna

cpixip commented 5 years ago

Hi @zbarna,

thanks for the new data. At the moment I do not have a definite answer to your problem, but it seems to me that you are running into the same problem with the v1-sensor that I discovered for myself with the v2-sensor. It looks like the micro-lenses on the edge of the sensor are projecting the incoming light in such a way on the Bayer-pattern that the color information is not recoverable in these areas. However, I am not 100% sure that this is actually the case, so let me elaborate a little bit.

The micro-lenses on the Raspi cams have a different pitch than the sensor array. The reason is that they tilt the line of view of the outer sensor pixels more towards the center of the lens mounted above the sensor. If the different pitches are properly matched, this improves the performance of the sensor. Basically, less vignetting occurs that way.

Now, if you exchange the lens for another one, the "view geometry" of the combined sensor and micro-lens array changes; it no longer matches. Therefore, the vignetting also changes, so more work has to be done by the lens shading compensation algorithm. But there is another effect which also happens due to a lens change: the rays from the lens, originally focused by the micro-lens array directly onto a single pixel, will now also fall onto neighboring pixels. This effect will cause cross-talk between those pixels. For example, with a changed lens a red surface will also slightly illuminate neighboring blue and green pixels. This cross-talk shows up in the raw image as a drop in saturation: the color channels in the outer parts of the sensor are mixed too much to show sufficiently saturated colors.

I observed this behavior with the v2-sensor, and I am now seeing the same drop in saturation in the raw images you made available to me.

Here's such a saturation signal calculated from one of your raw images (the white surface):

saturation_white_surface

Note the brighter rectangular area in the center of the image indicating a good saturation value. But note also the reduction in the saturation towards the edges of the image, especially toward the right edge. Again, that is an analysis of the raw sensor data - so neither lens shading compensation nor the white balance algorithm has been applied at that early stage.
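For reference, one simple way to compute such a saturation map from the 4-plane raw data is to treat (R, (G1+G2)/2, B) as an RGB triple per position and use the usual (max - min)/max definition (just one possible definition, not necessarily the exact one used for the plot above):

    import numpy as np

    def raw_saturation_map(cplane):
        """Approximate per-pixel color saturation of the raw data;
        'cplane' is assumed to be an (h, w, 4) array with the channels
        red, green1, green2, blue."""
        r = cplane[:, :, 0].astype(np.float64)
        g = 0.5 * (cplane[:, :, 1].astype(np.float64) +
                   cplane[:, :, 2].astype(np.float64))
        b = cplane[:, :, 3].astype(np.float64)
        rgb = np.stack([r, g, b], axis=-1)
        mx = rgb.max(axis=-1)
        mn = rgb.min(axis=-1)
        return (mx - mn) / np.maximum(mx, 1.0)    # HSV-style saturation in [0, 1]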

I am very familiar with this type of behavior from my experiments with the v2-sensor. I did however not notice this - until now - from the v1-sensors. However, I plan to do some further experiments to get more insights into this.

A similar behavior like the one pictured above can be seen in all of your images. However, there is also an asymmetry between the left and right image border - currently I have no clue where this comes from. The saturation pattern should be in principle symmetric. Again, more experiments are needed to give a good answer.

If my suspicion is correct, this is a problem which can not be solved via the simple lens shading compensation we have available. More advanced compensation schemes are possible, but they would have to be implemented in software and they are way more difficult to calibrate and calculate. Also, they would lower the signal-to-noise ratio towards the outer sensor edges.

Frankly, if my suspicion is correct, the best option would be just to use the inner rectangle with well-defined color signals and cut out the edges.

Well, I will investigate more. Your lens is only slightly different from the standard lens of the camera (about 4 times the original focal length if I remember correctly) and I would not have expected this lens change to have such a big effect on the image. I am using a lens which is more than 10 times larger in focal length and I did not see such a thing. Again, further analysis is required here.

Next... - with my remark in a previous answer, "By the way, if the awb_gains differ too much from this value, you can see an increase in the saturation of the image", I didn't want to suggest that you take images with different saturation settings. It is only that large awb_gains tend to have the visual effect of increasing the saturation of an image. In my setup, I can white balance with my illumination (resulting in awb_gains = 1 and a rather dull picture) or I can white balance in the camera (resulting in awb_gains > 1 and enhanced color saturation). However, never mind, as this is an unimportant side track.

Now some more analysis with respect to your light source...

Are you using, by any chance, white LEDs? Well, these kinds of LEDs might not have the ideal spectrum for imaging applications. Indeed, if you look at a raw image taken under your "coldlight",

coldlight

you see a very low amplitude in the red channel of the image. If you look on the other hand at the raw image taken with the "warmlight",

warmlight

the same issue is seen in the blue color channel. Note that in both cases, the two green channels are brighter than the other channels.

Now, I assume that you just combined the two sources for your "warmlight_coldlight" image.

warmlight_coldlight

But that image shows the deficits of your illumination in both the red and blue channels! The reason is that both illuminations work well in the green channel, but in the red and blue channels, only one of the two illuminations works well enough. So ironically, the sum of the two illumination sources does not improve the situation...

You could try a real RGB-LED as illumination. By using an LED for each color channel, you can adjust the illumination in such a way that you get a perfect spectrum. But be careful - RGB-LEDs need to be driven by constant current sources when used for imaging purposes. Never use an RGB-LED which is driven by PWM - however, that is mostly what is on sale.

Anyway. One idea which occurred to me while writing this answer to your post is the following: do you have an illumination which is made up of purely red LEDs (or blue LEDs)? These LEDs should be a more or less monochromatic light source, and ideally one would expect (if you image a nice white surface) that only the red (or blue) channel would show up in the raw image. However, if the cross-talk I was describing above is real, the red (or blue) illumination should also cause a signal in the other color channels. This would be a way to check whether this cross-talk really occurs in your optical setup.
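Once you have such a raw capture under (nearly) monochromatic light, a quick way to quantify the spill would be to compare the channel means (a sketch, again assuming the (h, w, 4) cplane layout of the script):

    def crosstalk_report(cplane, primary=0):
        """Print each raw channel mean relative to the channel the (nearly
        monochromatic) light source should excite; 'primary' is 0 = red,
        1/2 = green, 3 = blue. Without cross-talk, the other channels should
        only show their residual spectral sensitivity."""
        names = ['red', 'green1', 'green2', 'blue']
        means = [float(cplane[:, :, i].mean()) for i in range(4)]
        ref = max(means[primary], 1e-6)
        for name, mean in zip(names, means):
            print('%-7s mean %8.2f  (%5.1f%% of %s)'
                  % (name, mean, 100.0 * mean / ref, names[primary]))

To see the spatial variation rather than just the average spill, one could divide each channel image by the primary channel image instead.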

Lastly, I am using a special program I coded myself to look at the raw image data. If you look however at my first script,

    # read the whole buffer (well, almost)
    # and convert it to the cv2-format we are using
    cplane = readRaw(stream.getvalue()[-6404096:])

    # just for debug information - normally commented out
    print 'Got data with dims:',cplane.shape,' with',cplane.dtype    
    print 'Max red:',cplane[:,:,0].max()
    print 'Max green1:',cplane[:,:,1].max()
    print 'Max green2:',cplane[:,:,2].max()
    print 'Max blue:',cplane[:,:,3].max()    

    # just for fun, writing out the different color planes
    cv2.imwrite('raw_red.jpg',cplane[:,:,0])
    cv2.imwrite('raw_green1.jpg',cplane[:,:,1])
    cv2.imwrite('raw_green2.jpg',cplane[:,:,2])
    cv2.imwrite('raw_blue.jpg',cplane[:,:,3]) 

you see an example of how to access the 4 separate color channels and write them to disk. From there you can combine the images into a mosaic, either using cv2 or GIMP.
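If you prefer a single overview image instead of four separate files, the planes can also be tiled into one 2x2 mosaic (a sketch; it assumes the channel values are already scaled to 8 bit, as in the script above):

    import numpy as np
    import cv2

    def write_channel_mosaic(cplane, filename='raw_mosaic.jpg'):
        """Tile the four raw channels into a 2x2 mosaic:
        red | green1 on top, green2 | blue below."""
        top = np.hstack([cplane[:, :, 0], cplane[:, :, 1]])
        bottom = np.hstack([cplane[:, :, 2], cplane[:, :, 3]])
        cv2.imwrite(filename, np.vstack([top, bottom]))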

Anyway, a long reply, so let me close for today. Best, cpixip.

cpixip commented 5 years ago

sorry ... closed the topic by accident (still learning to use github) ...

zbarna commented 5 years ago

Hi!

No problem about closing this topic accidentally. At first I thought that you were fed up with this issue :D .

First of all, sorry for my late answer, and thanks for your long answer again! :)

I also realized that the color aberration is not symmetric in my pictures, but I don't know why :S .

Could you show me your raw pictures with your lens system? I'm curious what they should look like :) .

I'm also thinking about using the camera in sensor_mode = 1, which means the inner rectangle of the sensor, but it has a resolution of 1920x1080, which means 2 Mpx instead of 5 Mpx :( . sensor_area_1

Yes, if I'm correct, my focal length is around 12 mm; you can also check it: camera_adapter_length camera_adapter_and_lens

More pictures here: https://drive.google.com/open?id=1ANZK8hwhZr_IS2nhJNhUMtfz_5R-Dcsi

Yes, I'm using LED lights to illuminate the table of the microscope. You are right, in the case of "warmlight_coldlight" I used both the cold and warm lights, but it didn't help.

Unfortunately I don't have real RGB LEDs, or pure red or pure blue LED lights. It's a good idea to try only pure red light to check the cross-talk. I will order the LED you mentioned. Could you show me an example of which type of LED would be good for this test? (for example a link from eBay, or the type name)

Thanks! Best Regards, zbarna

cpixip commented 5 years ago

Hi @zbarna,

there are some raw images from my setup already in this repository. Check out raw_original.jpg and any of the raw images in the folder v1_geometry_and_modes/example_results.

The raw_original.jpg was actually taken with an RGB-source, but I did not bother to adjust the RGB-values to an equal setting:

image

As you can see, Red and G1/G2 are about equal in raw signal strength, but the blue channel is too bright in this image. If you look closely, there is a short horizontal lightness variation in the red channel - this is caused by interference in the current breadboard setup (the tunable current sources are implemented on a breadboard, and that has some high frequency crosstalk).

One of the raw images from v1_geometry_and_modes/example_results

image

is actually taken under a warmlight white LED (similar to your illumination), and as you can see, the blue channel shows a slightly reduced amplitude.

Using sensor mode 1 might be an option for utilizing only the center of the image. I cannot comment on this, as I only work in mode 0 (auto) or 2.

Actually, I cannot recommend any RGB-LED out of the box. For my experiments I am using either CREE or Osram LEDs in the 3 Watt range. They come with good data sheets. I have designed a simple constant current source for these experiments which I can adjust in software over an SPI interface.

However, I would not bother to go down this road - for all practical purposes the illumination you are currently using should be perfect.

If you do want to do some experiments with colored light, however, you might consider switching off your normal light source and instead putting the LCD of your phone directly under your microscope. Depending on the type of display your phone has, you might see a nice display of colored dots. If you prepare pure white, red, blue and green images and display them on your phone screen, you can selectively switch on all dots, or only a pattern of a specific color. An experiment along these lines might give at least some hints toward the variation of the color spill (not the amount, because your phone's LCD will not be pure "red", for example). Ideally, the colored dots would all have the same intensity, but I think this will not be the case. Also, the red dots should show up only mildly in the other color channels. Any variation could give you hints on what is happening. Again, just an idea.

As I remarked above, I will do some further experiments on my own as soon as time permits - that will however take one or two weeks, because I am currently busy with other projects. I will report the results in this thread as soon as I have them.

Have fun!

Best, cpixip

cpixip commented 5 years ago

Hi @zbarna,

here are some results of further experiments I did. I used an integrating sphere with different LEDs to test the v1-camera sensor with the standard lens and with my Schneider-Componon-S 50mm lens.

The integrating sphere (or Ulbricht sphere) ensures that the light falling through the test window features a more or less homogeneous light distribution. Now, using a 451nm blue LED as light source, you would expect only a minor amount of crosstalk occurring in the green and red channels - according to some diagrams floating around the internet, the sensitivity of the red and green channel is less than 15% at that wavelength (a little bit more for the green than the red channel).

Well, here's the result:

crosstalk_blue_channel_standard_lens_a

So there is indeed a little crosstalk, a little bit more in the two green channels than in the red channel. But the important thing is that the crosstalk is more or less evenly varying across the image. This type of crosstalk will not affect color fidelity too much - it will only reduce the saturation of the colors slightly, which of course can be easily corrected in a later processing stage.

Now let's have a look at how the situation changes when we swap lenses. With the Schneider-Componon-S lens, the microlenses no longer project the light coming from the lens directly toward the pixel they are in front of, but slightly toward neighboring pixels. So one would expect a spatially varying crosstalk. And, as the following raw image shows

crosstalk_blue_channel_schneider-componon-s-50_mm_a

this is indeed the case. We see more crosstalk especially in the green channels, specifically towards the left and right sides of the Green_1 channel and at the top and bottom sides of the Green_2 channel.

Now, this spatially varying crosstalk could be in principle corrected, but not within the lens shading compensation algorithm we have available within the Raspi processing pipeline. It will manifest itself in slight color shifts toward the edges of the image frame. At least, you will see some drop in color saturation toward the edges of the frame. From my experiments so far, this effect seems to be more visible with v2-camera sensors than with v1-camera sensors.

Another point which is important to note: this increased, spatially varying crosstalk has the potential of ruining the standard lens compensation (as performed by the scripts in this repository, or any other script I know of). You need to make sure that the reference image you use to calculate the lens shading table from really has equal amplitudes in all four color channels. If so, the effects of the spatially varying crosstalk will be minimized. If not, the crosstalk will ruin the lens compensation in the color channels having a lower amplitude than the other channels. I think that is a major problem you are encountering in your setup. Remember, the amplitudes in your example images are not comparable to each other - either the red or blue channel had too low an amplitude, depending on whether you were using the "coldlight" or "warmlight" source. And the combination of both won't help you either - in this case both the red and the blue color channels have a much lower amplitude than the green channel.

As the lens compensation takes both the normal color signal and the crosstalk signals into account, the colors come out right if you test this lens shading with an input similar to the one you used to calibrate with. So your results with the gray or white surface are ok. But as soon as you change to another colored surface, you change the amplitudes in the color channels, the balance between vignetting compensation and crosstalk is lost, and you encounter color variations in areas with larger crosstalk.

So.... - I think one thing you could try is to improve your light source, to make it more similar to normal daylight. You could do something like


    # print the mean value of each raw color channel
    print 'Mean red:   ',cplane[:,:,0].mean()
    print 'Mean green1:',cplane[:,:,1].mean()
    print 'Mean green2:',cplane[:,:,2].mean()
    print 'Mean blue:  ',cplane[:,:,3].mean()

to get an idea whether the amplitudes in the color channels are similar to each other. Calculate the compensation only if all channels show the same mean.
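A small sketch of such a check (again assuming the (h, w, 4) cplane layout; the 5% tolerance is just a guess, not a tested value):

    import numpy as np

    def channels_balanced(cplane, tolerance=0.05):
        """Return True if the means of the four raw channels agree within
        the given relative tolerance; only then does it make sense to
        compute the lens shading table from this reference image."""
        means = np.array([cplane[:, :, i].mean() for i in range(4)])
        spread = (means.max() - means.min()) / means.mean()
        print('channel means:', means, ' relative spread: %.3f' % spread)
        return spread <= tolerance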

I must confess that I do not know whether that will really improve the situation. Under these circumstances I got at least some reasonable results with my Schneider-Componon lens. I will continue with some further experiments. If anything comes out of these experiments, I will post here in this thread.

Best, cpixip

zbarna commented 5 years ago

Hi!

Thanks for your replies, and for the ideas!

I made some tests according to your comments.

Here you can find the new test files: https://drive.google.com/open?id=1ttlqu_YygXupBVJ-F6du-B_SxI9eK8JB

I have a phone with AMOLED display, and used it to test the cross-talk as you mentioned:

I set the phone screen to blue (I downloaded a screen test application) and captured a raw photo; you can see the result in the blue_calib folder. I also tested with a red screen (results in the red_calib folder) and a green screen (results in the green_calib folder).

I also tested the light values with gray and white surfaces and different illuminations: the results are in light_test_values.txt.

Unfortunately I could not get the 4 values to be the same. I downloaded an application to my phone with which I could mix the RGB colors of my screen. I put the screen under the microscope lens, played with the RGB values and tried to reach the same values for the 4 channels. I had to set the RGB values to the following: R=255, G=178, B=237

I get these light values (the values are very similar):

    Max red: 168.213309074
    Max green1: 171.196699722
    Max green2: 166.53029502
    Max blue: 169.799464481

With these RGB values, I ran your calibration script while my phone screen was under the microscope lens. The result is here: https://drive.google.com/drive/folders/1ttlqu_YygXupBVJ-F6du-B_SxI9eK8JB

What do you think about the results above?

If something is not clear, please tell me, and I will try to explain it better!

One more thing! You mentioned this integrating sphere. I've also read about it in the description of your GitHub project. You wrote that you used a 3D printer to create this sphere? Maybe I can also print it out with a 3D printer here and use it for the calibration. What do you think, should I try this approach? If yes, could you share the design file of the sphere for the 3D printer?

Thanks!

Best Regards, zbarna

cpixip commented 5 years ago

Hi @zbarna,

the integrating sphere you asked about is designed to be used in a Super-8 film scanner. It is very much still in the design phase. Furthermore, it has only a very small window, sufficient for a film frame to be scanned, so it is probably not something that you can use in your setup.

Here's a picture to give you some idea what it looks like:

49311817-85d64b00-f4e2-11e8-8382-e13c2e888d01

The integrating sphere is the glowing thing on the right.

Well, printing takes a lot of time (about 2 days in total) and some material. The sphere furthermore requires a lot of supporting electronics and software to be usable. That stuff is also still in the design phase (currently using a Raspi/Arduino; the plan is to drop either the Arduino or the Raspi). So, I do not think this will help you too much, at least not in the current design phase. Once the design is settled, I will probably put it on Thingiverse or so.

It is great that you did the experiments with the phone as light source. In this way, you could adjust all 4 raw channels to a similar mean value. Interestingly, you then end up with a lens compensation table

p2

which is very similar to the compensation tables I get with my setup. And totally different from the ones you got with either your cold or warm light source!

Now, note that this compensation table only displays rather coarsely digitized regions. Indeed, the values in the table range only between 32 and 40. The reason is that the coding of the lens shading table in the Raspi pipeline is not exactly perfectly chosen (as I already remarked in a discussion on the Raspberry Pi forum). Because of this, you are going to end up with a coarsely digitized table which will introduce tiny little color variations in your final image. Look at your image, which you took with the above lens compensation table

p1

I white-balanced your image and increased the intensity range to the maximum. Note that there are patches with varying color across this image. These are color variations caused by the poorly chosen coding of ls_table.h in the implementation of the Raspberry image pipeline.

Since you supplied me with the raw images, I can show you here what your lens compensation table would look like if it was coded as float:

p3

a rather dramatic difference! Of course, the slightly colored patches would be gone if this "floating point" lens compensation table could actually be used.

One remark here: you could increase the scaler (32 by default) to a higher value. This would increase the digital precision of the lens compensation table. It would also lead to an increased sensitivity of the camera and an increase in the noise level. And it could saturate the processing during later stages of the camera pipeline. However, from my experience, scalers of 48 or 64 are fine. They will increase the digital precision of the lens compensation table to a point where the aforementioned color patches are no longer noticeable.
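To illustrate why a larger scaler helps: a gain is stored in the table as round(gain * scaler), so the default scaler of 32 only allows the gain to change in steps of 1/32 (about 3%), while 64 halves that step. A tiny demonstration:

    # quantization of a lens compensation gain for two different scalers
    for scaler in (32, 64):
        gain = 1.20
        stored = round(gain * scaler)
        # scaler 32: stored 38 -> effective gain 1.1875
        # scaler 64: stored 77 -> effective gain ~1.2031
        print('scaler %2d: stored %3d -> effective gain %.4f (step %.5f)'
              % (scaler, stored, stored / scaler, 1.0 / scaler))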

Otherwise, I think the lens compensation did do a good job here. However, I guess you did not succeed in getting the same nice results from arbitrary surfaces with your illumination source. Let us explore why...

So, back to the main issue of this discussion. Thanks again for sharing your raw images with the phone as light source. If you look at one of your "blue" images,

p4

you see exactly the same pattern with your optical setup as the one I found with my lens (the Green1 and Green2 channels are swapped in the image above, as compared to my setup posted earlier).

You can see in both images that the color spill from the blue channel creates a nice pattern in the other channels. And, importantly, the pattern is identical in your and my setup (save for the swap of the green channels, as noted).

Well, this pattern displays the crosstalk I was talking about. The spatially varying spill from the blue color channel is caused by the mismatch between the microlens/Bayer pattern and our non-matching lenses. If you think about it geometrically, you would expect in one of the green channels a blue spill which is strongest at the top and bottom of the frame (Gr, top-right) and in the other green channel a spill which is strongest at the left and right sides of the frame (Gb, bottom-right). Also, the spill from the blue channel should be roughly circularly symmetric in the red channel (top-left) and reduce the values in the blue channel similar to "normal" vignetting (bottom-right). As you can see, this is exactly the case. And again, this color spill pattern shows up with your setup as well as with my lens.

If you analyze the color spill from the red channel, it turns out very similar to the spill of the blue channel. Just the roles of the Gr and Gb channels are swapped (again, this is what one would expect from geometric arguments). The spill from the green channels is different. Here, a circular spill into the red and blue channels is observed. But again, this also follows from the same geometrical argument.

So, at this point in time I am pretty sure that what I suspected some time ago in a discussion on the Raspi forums turns out to be true: as soon as we change the optical pathway substantially from the imaging geometry of the standard lens, we will have a spatially varying color spill.

Well, the spill will manifest itself in several ways, all of which will make it difficult to obtain high-quality color images. First, the colors recorded towards the edges of the frame will shift, compared to the center. The white balance algorithm of the Raspberry pipeline will tend to enhance this color shift. This is what you are noticing in many of your images of surfaces with varying color.

Also, there will be a desaturation of color values toward the edges - something I have also seen in many images posted on the web where the standard lens of the v1- or v2-camera was removed. As you know, the effect is much more pronounced with the v2-camera, due to the larger CRA-compensation used. This can also be seen in many images available on the net.

Furthermore, a light source which adds uneven power in the color channels will worsen the situation. As your table of light measurements indicates, you have this uneven illumination with your cold/warm light source.

So, I don't know... - the thing is: we can do nothing to compensate this color spill effect within the current Raspi image pipeline. What could be done (and is probably done in the Raspi processing pipeline) is to counteract a more or less constant color spill across the image. However, this is simply not the case with our setups. A spatially varying color spill can not be counteracted with the available hardware pipeline in the Raspi.

At this point in time, I can only see a few approaches which might be possible:

1) one could use only a small subframe of the full image, where the color spill is not too large, for image acquisition. I think this is doable, but the resolution will probably be limited to 800x600 at most. From my experience, the situation is better with the v1-sensor than the v2-sensor, as the latter has a stronger CRA-compensation built into the microlenses.

2) one could forget about the Raspberry Pi image processing pipeline altogether, simply capture the raw images and do all the processing separately. The advantage would be that you can use more sophisticated algorithms, even ones that are able to counter the spatially varying crosstalk. The primary disadvantage is that your frame rate drops to rather low values of about a single frame per second. Not reasonable for most applications (certainly not applicable for mine). And of course, you need a lot of knowledge to implement appropriate algorithms. Furthermore, I have seen a total desaturation towards the edges in images taken with the v2-sensor, which indicates to me that the crosstalk is so strong in this case that you will never be able to recover a meaningful color signal.

3) one could modify the optical pathway in such a way that one can work with the standard lens still attached to the camera. The advantage would be that the camera is used as it was designed to be used. Specifically, the image quality will not be compromised by any mismatch of the microlenses. One could even upgrade to the higher-resolution v2-camera in this case. Actually, some people have realized a Super-8 scanner in that way, simply by prefixing a sufficiently strong lens in front of the normal Raspi cam. I tried this approach about a year ago, but I could not obtain a lens with sufficient magnification and good enough optical properties to map the Super-8 frame to the full sensor. In your case, which is, I assume, microscopic work, you could simply try to have the standard camera look through a normal microscope eyepiece. I know that some people have had quite some success with mobile phones attached to a microscope in that way, and the Raspi cams are mobile phone cameras at heart. You might try that road.

4) one could use the camera in monochrome mode only. Not a solution for my project, but astronomy applications might be able to employ such a mode.

Well, that's it for the moment. In case I come up with another solution, I will let you know. At this point in time, I have no other ideas. Actually, I am currently thinking of going back to an option I was already considering about a year ago, namely using a more expensive machine vision camera for my project.

Best, cpixip

zbarna commented 5 years ago

Hi!

Thanks a lot for analyzing my images and explaining the situation.

Unfortunately I can see that it won't work with the current Raspberry Pi lens compensation possibilities.

Thanks for the suggestions! I think we will try to find an optical solution for this issue.

If we make a step forward, I will let you know as well!

Have a nice day! :) Best Regards, Zoltán