Kalpanika / x3f

Tools for manipulating X3F files from Sigma cameras

Bug 95 firmware 1.0.4 error - a.k.a. - adding left/right black shields #99

Closed rolkar closed 7 years ago

rolkar commented 7 years ago

Hi, this pull request is currently WIP (work in progress), so it should not be merged yet.

The solution seems to work.

In the x3f_process.c file there is a debug printout (search for XXXX) that writes the current mean values for the four areas. They are VERY similar - so that is good. Very good. The new left/right areas give the same value as the top/bottom. Nice!

In the x3f_image.c file, the heart of the solution is in the function x3f_get_fake_camf_matrix. When asked for the two fake matrix names, it instead fetches the value from DarkShieldColRange and constructs the matrices. That way, nearly no other code needs to be touched. Is that a good solution? I tried to find a better one, but could not. But no, it sux. Let's ignore that for now :)

I have two TODOs in that function. I have no idea whether image->rows is the correct value to set, or if you need to take keep_area into account.
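
Roughly, the idea is something like this minimal sketch (illustrative only; FakeShieldLeft/FakeShieldRight, lookup_camf_range() and the rectangle layout are made-up placeholders, not the actual x3f code):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint32_t top, left, bottom, right; } rect_t;

/* Stand-in for reading the DarkShieldColRange entry from the CAMF block:
   first/last shielded column on the left and right side (example values). */
static void lookup_camf_range(uint32_t left_range[2], uint32_t right_range[2])
{
  left_range[0]  = 0;    left_range[1]  = 15;
  right_range[0] = 6000; right_range[1] = 6015;
}

/* Build a fake crop rectangle on request, so the rest of the code can treat
   the left/right shields just like the existing top/bottom ones. */
static int get_fake_shield_rect(const char *name, uint32_t rows, rect_t *r)
{
  uint32_t lr[2], rr[2];
  lookup_camf_range(lr, rr);

  if (strcmp(name, "FakeShieldLeft") == 0) {
    r->left = lr[0]; r->right = lr[1];
  } else if (strcmp(name, "FakeShieldRight") == 0) {
    r->left = rr[0]; r->right = rr[1];
  } else {
    return 0;                 /* not one of the fake matrices */
  }
  r->top = 0;
  r->bottom = rows;           /* unclear if image->rows is right here,
                                 or if keep_area must be taken into account */
  return 1;
}

int main(void)
{
  rect_t r;
  if (get_fake_shield_rect("FakeShieldLeft", 4000, &r))
    printf("left shield: cols %u-%u, rows %u-%u\n",
           (unsigned)r.left, (unsigned)r.right,
           (unsigned)r.top, (unsigned)r.bottom);
  return 0;
}
```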

So - that is it. Please take a look and see if you like it and if you believe in it.

As I said, it makes beautiful pictures from the image that previously crashed.

rolkar commented 7 years ago

I have now fixed it so that we no longer need the kludge where code deep in the library knows about the fake crop rectangles, just as Erik recommended.

I would say that it might be ready for merge now.

There is only one fly in the ointment. Looking at lots of X3F files, there are a few where one of the areas gives a far too high black level, like 1000 when it should be 7. I assume this shielded area is broken somehow. To solve this, we need a better algorithm for choosing the black level. Today we sum up all areas before calculating the black level, and that is not good enough.
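
To make the problem concrete, here is a small illustrative sketch of the pooled estimate described above (made-up numbers, not the code in x3f_process.c); one broken area drags the whole black level up:

```c
#include <stdio.h>

/* Pool the pixel sums of all usable shield areas, then take one mean. */
static double pooled_black_level(const double area_sum[4],
                                 const int area_pixels[4],
                                 const int use[4])
{
  double sum = 0.0;
  int pixels = 0, i;

  for (i = 0; i < 4; i++)
    if (use[i]) {                  /* top, bottom, left, right shields */
      sum += area_sum[i];
      pixels += area_pixels[i];
    }

  return pixels ? sum / pixels : 0.0;
}

int main(void)
{
  /* three healthy areas averaging 7, one broken area averaging 1000 */
  double sums[4]   = { 7.0 * 1000, 7.0 * 1000, 7.0 * 1000, 1000.0 * 1000 };
  int    pixels[4] = { 1000, 1000, 1000, 1000 };
  int    use[4]    = { 1, 1, 1, 1 };

  /* prints roughly 255 instead of 7, which is why a more robust
     choice of black level is needed */
  printf("pooled black level = %g\n", pooled_black_level(sums, pixels, use));
  return 0;
}
```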

But I assume this is another issue and does not stop us from merging.

One note: I have three sd Quattro H files, and in two of them the top shield seems broken. Maybe just a coincidence, as all three images are from the same camera.

Still, I recommend that we file a new issue instead of delaying this merge.

rolkar commented 7 years ago

I have now fixed all of the above. The "get column" function now takes an enum to decide left or right. I think that is OK. If we, in the future, find some multi-column thingie with more than two areas, then we will add an index instead.
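
For illustration, the shape of it is something like this (the actual signature and names in x3f_image.c may differ; these are placeholders):

```c
#include <stdio.h>

typedef enum { SHIELD_COL_LEFT, SHIELD_COL_RIGHT } shield_col_t;

typedef struct { unsigned first_col, last_col; } col_range_t;

/* Pick the left or right dark-shield column range.  An enum keeps the call
   sites readable; should a format with more than two column shields appear,
   this parameter would become an index instead. */
static col_range_t get_shield_columns(shield_col_t which,
                                      col_range_t left, col_range_t right)
{
  return which == SHIELD_COL_LEFT ? left : right;
}

int main(void)
{
  col_range_t left = {0, 15}, right = {6000, 6015};
  col_range_t r = get_shield_columns(SHIELD_COL_RIGHT, left, right);
  printf("right shield: cols %u-%u\n", r.first_col, r.last_col);
  return 0;
}
```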

I have fixed the printout so that it no longer writes the accumulated value, and it also writes the deviation. To do so, I could no longer accumulate values along the way, so now each call calculates its own.

Sometimes, though, one of the areas is dead wrong. Then the summed black level is way off, and the deviation is also way off, as it is relative to that black level. The deviation should really be calculated per call.
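
A small illustrative example of why that matters (made-up numbers, not the code in x3f_process.c): measuring a healthy area's spread against a global black level that a broken area has dragged up gives a huge "deviation", while measuring it against the area's own mean does not.

```c
#include <math.h>
#include <stdio.h>

/* Sum of squared deviations of n values from a reference level. */
static double sqdev_sum(const double *v, int n, double ref)
{
  double s = 0.0;
  int i;
  for (i = 0; i < n; i++)
    s += (v[i] - ref) * (v[i] - ref);
  return s;
}

int main(void)
{
  /* a healthy shielded area around 7 */
  double area[5] = { 6.0, 7.0, 8.0, 7.0, 7.0 };
  double own_mean = 7.0;
  double global_black = 255.0;   /* dragged up by a broken area elsewhere */

  printf("dev vs own mean:     %g\n", sqrt(sqdev_sum(area, 5, own_mean) / 5));
  printf("dev vs global level: %g\n", sqrt(sqdev_sum(area, 5, global_black) / 5));
  return 0;
}
```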

I think it would be nice to close this issue now anyhow. I think fixing a more elaborate choice of black level and all that comes with that is another story.

NOTE - in the last commit, the black level printouts are still written at INFO. Much easier to debug that way. I will, of course, move them to DEBUG before any merge.

rolkar commented 7 years ago

And now I have changed the printouts to DEBUG.

erikrk commented 7 years ago

Otherwise, I think it looks good now. One question though: Have you run "make check"? Don't the hashes need updating since you get somewhat different results when all four areas are used for calculating black level?

rolkar commented 7 years ago

Ouch!

Yes, the trillions of hashes are incorrect.

Any way of updating them automatically?

/Roland


mmroden commented 7 years ago

I can update the hashes if we all agree that the results are what they should be; if the black level is off enough to make the conversion bad and make the hash wrong, then the hash has done its job of making us check the output.

On May 6, 2017, 12:26 PM -0700, Erik Karlsson notifications@github.com, wrote:

@erikrk commented on this pull request.

In src/x3f_process.c (https://github.com/Kalpanika/x3f/pull/99#discussion_r115127193):

```diff
+  black_sqdev_sum = alloca(colors*sizeof(double));
+  for (i=0; i<colors; i++) black_sqdev_sum[i] = 0.0;
+
+  for (i=0; i<4; i++)
+    if (use[i]) {
+      int color;
+      int pixels = sum_area_sqdev(area[i], colors, black_level, black_sqdev);
+
+      pixels_sum += pixels;
+
+      x3f_printf(DEBUG, " %s (%d)\n", name[i], pixels);
+
+      for (color = 0; color < colors; color++) {
+        x3f_printf(DEBUG, " dev[%d] = %g\n",
+                   color,
+                   sqrt(black_sqdev[color]/pixels));
```

As mentioned, this printout is incorrect, since it uses the overall mean to calculate the standard deviation for individual areas. This needs to either be fixed or removed; in its current state it is more confusing than useful. To fix it, you would have to do an additional call to sum_area_sqdev in the first loop that calculates the means. The result of this call would be used only for printouts.


rolkar commented 7 years ago

Thanks Marc.

I will fix the "last" comment by Erik :) It is easy, but it is late and I am tired. So, I will probably do it tomorrow morning.

One thought: I have been looking at several pictures, and they look fine. And we will, hopefully soon, have a new update that makes a better decision on the black level. It might then be a waste of time to update all the hashes now. Maybe wait until after that fix?

rolkar commented 7 years ago

I fixed the output now.

So, if we choose not to update the image hashes, then this is a merge candidate.

rolkar commented 7 years ago

As this change only affects the black level, and we are going to refine the black level computation anyhow, I do not think it is a bad thing to merge this. So, I will do it.