OpenPIV / openpiv-python

OpenPIV is an open source Particle Image Velocimetry analysis software written in Python and Cython
http://www.openpiv.net
GNU General Public License v3.0

3rd order image calibration #220

Open · ErichZimmer opened this issue 2 years ago

ErichZimmer commented 2 years ago

Is your feature request related to a problem? Please describe.

During some experiments, lens distortion or the camera not being perpendicular to the laser sheet makes measurements inaccurate.

Describe the solution you'd like

Implement the method described in the article "Distortion correction of two-component - two-dimensional PIV using a large imaging sensor with application to measurements of a turbulent boundary layer flow at Re = 2386". The calibration script should be placed in its own file or in OpenPIV.tools.
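For reference, a 3rd-order mapping of this kind can be fitted by ordinary least squares over all monomials up to degree three. The sketch below is an illustration under that assumption, not the article's or OpenPIV's implementation; all names are hypothetical:

```python
import numpy as np

def poly3_terms(x, y):
    # All 10 monomials up to 3rd order in x and y.
    return np.column_stack([
        np.ones_like(x), x, y,
        x * x, x * y, y * y,
        x**3, x * x * y, x * y * y, y**3,
    ])

def fit_poly3(img_pts, world_pts):
    """Least-squares fit of a 3rd-order mapping image -> world.

    img_pts, world_pts : (n, 2) marker coordinates, n >= 10.
    Returns a (10, 2) coefficient array, one column per world axis.
    """
    A = poly3_terms(img_pts[:, 0], img_pts[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, world_pts, rcond=None)
    return coeffs

def apply_poly3(coeffs, img_pts):
    # Evaluate the fitted mapping at new image points.
    return poly3_terms(img_pts[:, 0], img_pts[:, 1]) @ coeffs
```

With at least 10 well-spread markers the fit is fully determined; evaluating apply_poly3 on a grid of vector positions would then give a dewarped field.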

alexlib commented 2 years ago

@ErichZimmer - please add here a link to the repo with your tests or the algorithm. Thanks

TKaeufer commented 2 years ago

Hey Erich,

I have already implemented some kind of mapping based on polynomials. I'll add you to the repo.

ErichZimmer commented 2 years ago

I have access to Alex's fork, and I like your script. It is somewhat similar to the aforementioned article and a lot faster than my implementation, which uses lots of for-loops.

ErichZimmer commented 2 years ago

I used your script on an experiment, and it worked pretty well at dewarping non-optimal calibration images; it worked on skewed images too. The interface (v2 file) was easy for me to use, and I think it's simple enough to include in OpenPIV as a pre/post-processing method.

ErichZimmer commented 2 years ago

I wasn't able to make a "nicer" version of the script without sacrificing speed or accuracy. I think it is good as is. What do you think, @alexlib?

ErichZimmer commented 2 years ago

Is there a reason why the output dewarped image appears to be inverted along the y axis? With the image coordinate system (origin at top left), the image has to be flipped along the y axis for some reason. It could be user error; I'll check in a few days.
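For what it's worth, such a flip is often just the coordinate convention: image arrays put row 0 at the top (y pointing down), while physical coordinates usually have y pointing up. A minimal illustration (the actual script's convention is an assumption here):

```python
import numpy as np

def image_to_physical_y(row, image_height):
    # Row index counted from the top -> y coordinate counted from the bottom.
    return (image_height - 1) - row

img = np.arange(12).reshape(3, 4)  # 3 rows, 4 columns
flipped = np.flipud(img)           # flip along the y (row) axis
```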

alexlib commented 2 years ago

I didn't notice it. But you know, the best thing is to show this in an example, and we'll try to take a look.

ErichZimmer commented 2 years ago

It was user error, nothing wrong so far...

ErichZimmer commented 2 years ago

Is there any way to simplify the code for _calculate_scale by using the aforementioned article?

ErichZimmer commented 2 years ago

Additionally, we should add a plot like the following so a user can observe the deformation pattern. Note that the plot is centralized, not normalized. [attached image: norm_distort]

alexlib commented 2 years ago

> _calculate_scale

What do you mean by simplify? https://github.com/TKaeufer/Open_PIV_mapping/blob/7fd62ed05e35030d3e98ce1b64cfa55d4357c435/calibration_functions_cleaned_v2.py#L152

TKaeufer commented 2 years ago

I had a look at the paper you mentioned in the first post. From my understanding, the paper applies approaches developed around 2000 to a camera with a large sensor. But this correction is not exclusively required for cameras with large sensors; it is also needed for cameras with a wide field of view and fish-eye optics, e.g. action cameras. In those cases too, the distortion caused by the optics can no longer be neglected. I initially coded the polynomial dewarping as a step towards stereo-PIV, but due to time constraints I had to pause. The code can be used to correct distortion and to calculate the scaling from the image of the calibration target for a single camera. The final steps, from my point of view, are some more testing, maybe a GUI, and finally the integration into the main process.

Best regards

Theo

ErichZimmer commented 2 years ago

@TKaeufer Have you thought about applying the calibration function to the vector deformation field/process? Fluere uses this method, and it is quite fast while having a low RMS error.

TKaeufer commented 2 years ago

My idea was to map the images directly after loading them. Combining it with the deformation process might be more efficient, but I have not tested it.

ErichZimmer commented 2 years ago

@TKaeufer Is there a reason why you don't solve the system of equations to find the variables and then compute a deformation field for the image (or vectors)? I don't quite understand how you solved for the coefficients to deform the images.

ErichZimmer commented 2 years ago

I got my version of the 3rd-order image calibrator, based on the aforementioned article, working. I will compare Theo's with mine, but the underlying code seems to be the same, minus the ability to calculate the scale.

ErichZimmer commented 2 years ago

It works by applying the calibration to the deformation field or vector field for multi-pass evaluations, or by applying it directly to the image for single-pass evaluations.
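A sketch of the single-pass option, dewarping the image directly: assuming the calibration yields a mapping from output pixels to distorted source coordinates, the resampling can be done with scipy.ndimage.map_coordinates. `mapping` below is a hypothetical callable, not the script's actual API:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dewarp_image(img, mapping):
    """Resample `img` so each output pixel (r, c) takes its value from the
    distorted source location returned by `mapping` (hypothetical callable
    returning (src_rows, src_cols) arrays)."""
    rows, cols = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    src_r, src_c = mapping(rows, cols)
    # Bilinear interpolation; out-of-range samples clamp to the edge.
    return map_coordinates(img, [src_r, src_c], order=1, mode='nearest')
```

For the multi-pass option, the same mapping would instead be evaluated at the vector grid positions, avoiding a second image interpolation.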

alexlib commented 2 years ago

Where is it?

ErichZimmer commented 2 years ago

When I get home, I'll make a fork of Theo's repository and merge it with my script.

ErichZimmer commented 1 year ago

@alexlib @TKaeufer On image calibration, do we have any plans on implementing Multiplicative Line of Sight (MLOS), Multiplicative Algebraic Reconstruction Technique (MART), or advanced reconstruction techniques like MLOS-MART? I am very interested in these algorithms and was wondering what your opinions are. Should we start a new feature issue?
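For context, the classic MART update multiplies each voxel intensity by the ratio of measured to projected ray intensity, raised to a weighted relaxation exponent. A toy sketch of that textbook update, not an OpenPIV implementation; `W` is a dense ray-weight matrix here, whereas real tomo-PIV codes store it sparsely:

```python
import numpy as np

def mart(W, p, n_iter=20, mu=1.0):
    """Toy MART reconstruction.

    W : (n_rays, n_voxels) ray weights; p : (n_rays,) measured projections.
    Each sweep rescales voxel intensities E multiplicatively so that the
    projection W @ E moves toward the measurements p.
    """
    E = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(len(p)):
            proj = W[i] @ E
            if proj > 0.0 and p[i] > 0.0:  # multiplicative update cannot fix zero rays
                E *= (p[i] / proj) ** (mu * W[i])
    return E
```

MLOS would be the simpler non-iterative relative: each voxel gets the product of the (normalized) pixel intensities its lines of sight pass through.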

alexlib commented 1 year ago

@ErichZimmer, no plans from my side. Please feel free to start a new feature.

TKaeufer commented 1 year ago

@ErichZimmer Also no plans from my side. It indeed sounds very interesting, but unfortunately I am currently fully occupied by my work :(

timdewhirst commented 1 year ago

I don't know if this exists already, but for any new features would it be possible to have a set of links to source papers/texts to ensure all contributors have access to the same information?

alexlib commented 1 year ago

> I don't know if this exists already, but for any new features would it be possible to have a set of links to source papers/texts to ensure all contributors have access to the same information?

You're right. @ErichZimmer opened this issue https://github.com/OpenPIV/openpiv-python/issues/287 with the relevant information

ErichZimmer commented 1 year ago

I implemented a pinhole camera model w/ 2nd-order distortion correction (I did something wrong since the calibration error is quite high, but the preliminary implementation at least works) and the MLOS reconstruction technique as a proof of concept.

For volume grid generation, two methods were implemented: generate a grid from limits, and generate a grid from volume dimensions and voxel resolution. The first method operates by stating the limits of a volume in voxels with the origin at 0 (e.g. x_limits = (-20, 20)). The second takes the volume size and origin in mm and a voxel/mm resolution to calculate a grid (e.g. volume = (300, 300, 200), origin = (150, 150, 100), voxel_resolution = 0.133). Memory taken by the grid is 3*(X*Y*Z) coordinate values, so a grid could take over 5 GB of space.

The MLOS algorithm was implemented by a projection algorithm using linear interpolation. I'll refine things when I get home and have time, and create a branch so contributors can help with the implementations, as I am currently having difficulty implementing methods that use a decent degree of math.
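The two grid generators described above might look roughly like this (function names and signatures are my own guesses, not the branch's API):

```python
import numpy as np

def grid_from_limits(x_lim, y_lim, z_lim):
    """Voxel grid from per-axis limits (in voxels, origin at 0)."""
    return np.meshgrid(
        np.arange(x_lim[0], x_lim[1]),
        np.arange(y_lim[0], y_lim[1]),
        np.arange(z_lim[0], z_lim[1]),
        indexing='ij',
    )

def grid_from_volume(volume_mm, origin_mm, voxels_per_mm):
    """Voxel grid from a physical volume size, origin (mm) and resolution."""
    axes = [
        (np.arange(int(v * voxels_per_mm)) / voxels_per_mm) - o
        for v, o in zip(volume_mm, origin_mm)
    ]
    return np.meshgrid(*axes, indexing='ij')

def grid_bytes(shape):
    # Three float64 coordinate arrays of shape (X, Y, Z).
    return 3 * int(np.prod(shape)) * 8
```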

alexlib commented 1 year ago

> I implemented a pinhole camera model w/ 2nd order distortion correction [...] and MLOS reconstruction technique as a proof of concept. [...]

Which repository or branch are you working on?

ErichZimmer commented 1 year ago

I haven't created a repository or branch yet since I don't currently have access to my laptop and haven't revised the tomo-PIV module (there is a lot of random stuff floating around that may make things hard to understand). However, I plan on creating a repository for testing camera models and a branch of OpenPIV-Python for the full tomographic PIV module.

P.S., I meant to put my comment on the Tomographic PIV feature request, but my phone put the comment here. sigh

ErichZimmer commented 1 year ago

Here is the preliminary calibration branch. I left out a few things as I am still working on them. Once I finalize the calibration module, I'll move on to refining the Tomo-PIV module.

alexlib commented 1 year ago

Thanks @ErichZimmer - looking forward

ErichZimmer commented 1 year ago

I played around with the calibration model and I am quite pleased with it. I must say that the marker detection algorithm could be better, along with true center-of-mass subpixel estimation, but I tried to keep everything simple with minimal dependencies. Nonetheless, I jury-rigged a calibration plane and "calibrated" my smartphone to around 0.8 pixels root mean square error (RMSE). On PIV Challenge 2014 data, the RMSE was around 0.2 pixels; on a wall jet experiment, around 0.6 pixels; and on example calibration data from the MyPTV project, around 0.27 pixels. With RMSE between 0.2 and 0.8 pixels, it can be inferred that I implemented the models properly. My focus now is marker detection, Zhang's auto-calibration method for the pinhole model, and Wieneke's volume self-calibration.
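For clarity, RMSE figures like those quoted here are typically computed as the root-mean-square distance between marker positions reprojected through the calibrated model and their detected image locations; a minimal sketch (not the branch's actual function):

```python
import numpy as np

def reprojection_rmse(projected_px, detected_px):
    """Root-mean-square distance, in pixels, between markers projected
    through the calibrated camera model and their detected locations."""
    err = np.asarray(projected_px) - np.asarray(detected_px)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))
```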

alexlib commented 1 year ago

Sounds great. Shall we start the pull request?

ErichZimmer commented 1 year ago

I believe we could, but there is still a lot to be done (unit testing, test data, etc.).