Closed juvinski closed 5 years ago
Try re-running using the code in the altum-support branch. If that's not any better, try an image with more structure.
Hi @poynting I changed the branch but still get bad alignment. I made some tests with OpenCV to align images, and using homography the results were great, but when I try to use MOTION_HOMOGRAPHY in the micasense lib I get this error in the crop function:

```
new_pts = cv2.transform(new_pts, cv2.invertAffineTransform(affine))
cv2.error: OpenCV(4.0.0) /io/opencv/modules/imgproc/src/imgwarp.cpp:3116: error: (-215:Assertion failed) matM.rows == 2 && matM.cols == 3 in function 'invertAffineTransform'
```

Any idea? Thanks
The default for the library should be Homography. Please double check that your code is calling all of the functions with the match type set consistently. Try running from the Alignment notebook directly.
To help any further I would need the actual code and the images.
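For what it's worth, the assertion itself comes from the matrix shapes involved: `cv2.invertAffineTransform` only accepts a 2x3 affine matrix, while MOTION_HOMOGRAPHY produces a full 3x3 matrix, which has to be inverted as a regular matrix instead. A minimal numpy sketch of the distinction (illustrative only, not the library's crop code):

```python
import numpy as np

# An affine warp is a 2x3 matrix; a homography is a full 3x3 matrix.
affine = np.array([[1.0, 0.0, 5.0],
                   [0.0, 1.0, -3.0]])              # shape (2, 3)
homography = np.vstack([affine, [0.0, 0.0, 1.0]])  # shape (3, 3)

# cv2.invertAffineTransform asserts rows == 2 and cols == 3, so it
# rejects the 3x3 homography. A homography is inverted with an
# ordinary matrix inverse instead:
homography_inv = np.linalg.inv(homography)

# Sanity check: composing the warp with its inverse gives the identity.
assert np.allclose(homography @ homography_inv, np.eye(3))
```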
Hi @poynting

I was looking at imageutils and I'm not a Python specialist, but I think the default method is affine:

```python
def align_capture(capture, ref_index=4, warp_mode=cv2.MOTION_AFFINE, max_iterations=2500, epsilon_threshold=1e-9):
```

I'm using the Jupyter notebook. On the test images it works perfectly, but with my own images (I tried several) I always get bad alignment results.
It appears that you might still be on the master branch.
https://github.com/micasense/imageprocessing/blob/altum-support/micasense/imageutils.py#L174
Did you do a

```
git fetch
git pull altum-support
```
Hi @juvinski - were you able to check your current branch and verify if this is working for you now?
Sorry for my question, but how can I switch to the altum-support branch?
As of yesterday it has all been merged to master. You can do a

```
git pull origin master
```
Or delete the directory and do a fresh git clone.
Thanks @poynting for your answer.
Hi @Mattuc - does this resolve the issue for you?
@poynting The issue decreased, but the alignment is still not complete.
@Mattuc can you be more specific about what your issue is? If the alignment code isn't completing, this is probably an issue with the multiprocessing support on your platform/OS (others have had this problem, but I can't reproduce it). See this post for a workaround: https://github.com/micasense/imageprocessing/issues/51#issuecomment-480941497
I have the same issue; I can't obtain a good alignment. I'm using these alignment settings:

```python
match_index = 1  # Index of the band
max_alignment_iterations = 1000
warp_mode = cv2.MOTION_HOMOGRAPHY
pyramid_levels = 0
```

The result is attached.
That's quite a poor image to try to align, as it has very few features. Please try an image with more texture, as described in the tutorial.
(edit: the changed/updated image above looks much better and should be alignable with the appropriate settings)
I got the following error message when trying to run the 'Alignment' code in the tutorial, in both Spyder and Jupyter. Any help would be appreciated.

```
RemoteTraceback                           Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\micasense\lib\multiprocessing\pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "G:\My Drive\Davis\Research\Python\MicaSense\imageprocessing-master\micasense\imageutils.py", line 161, in align
    warp_matrix, warp_mode, criteria)
TypeError: findTransformECC() missing required argument 'inputMask' (pos 6)
"""

The above exception was the direct cause of the following exception:

TypeError                                 Traceback (most recent call last)
```
@moghi005 I had the same error, the solution was to download the repository again and recreate the anaconda environment with the updated repository.
That's correct. That issue was introduced by the April OpenCV 4 release, so I've pinned to an older version, but the best approach is to recreate the conda environment.
@poynting @jhonjj93 Thanks for your reply. I recreated the conda environment and it seems that solved the issue. Do you have any idea how long this part of the code might take?

```python
warp_matrices, alignment_pairs = imageutils.align_capture(capture, ref_index=match_index, max_iterations=max_alignment_iterations, warp_mode=warp_mode, pyramid_levels=pyramid_levels_1)
```

I let it run for 12 hours, then stopped it and ran it again (on an i7 with 32 GB RAM). I am afraid there is still a problem.

Thanks.
On my i7 laptop with the example data it finishes in under 30 seconds.
You probably have the same problem mentioned in issue #51. Try setting multiprocessing to False.
https://github.com/micasense/imageprocessing/issues/51#issuecomment-480941497
Also try setting the pyramid levels parameter to None. It only works if you have good RigRelatives.
```python
pyramid_levels = 0  # for images with RigRelatives, setting this to 0 or 1 may improve alignment
```
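For context, pyramiding does the alignment coarse-to-fine: the images are repeatedly downsampled, ECC is solved on the small images first, and each result seeds the next finer level. A rough numpy sketch of building such a pyramid (illustrative only, assuming even image dimensions; not the library's implementation):

```python
import numpy as np

def build_pyramid(img, levels):
    """Downsample by 2 at each level via 2x2 block averaging."""
    pyramid = [img]
    for _ in range(levels):
        h, w = pyramid[-1].shape
        small = pyramid[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(small)
    return pyramid  # pyramid[0] is full resolution, pyramid[-1] coarsest

img = np.random.rand(64, 64)
pyr = build_pyramid(img, levels=3)
# Coarse-to-fine: an alignment found on the 8x8 image is scaled up and
# refined at 16x16, 32x32, and finally 64x64, which is why only a few
# iterations are needed at each level.
```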
@poynting Thanks for your comment.
I changed the code to:

```python
warp_matrices, alignment_pairs = imageutils.align_capture(capture, ref_index=match_index, max_iterations=max_alignment_iterations, warp_mode=warp_mode, pyramid_levels=pyramid_levels_1, multithreaded=False)
```
Then I got the same error as before:

```
File "
File "G:\My Drive\Davis\Research\Python\MicaSense\imageprocessing-master\micasense\imageutils.py", line 228, in align_capture
    mat = align(pair)
File "G:\My Drive\Davis\Research\Python\MicaSense\imageprocessing-master\micasense\imageutils.py", line 161, in align
    warp_matrix, warp_mode, criteria)
TypeError: findTransformECC() missing required argument 'inputMask' (pos 6)
```
It seems there is an issue here that is not related to the multithreaded option. What do you recommend I do?
Thanks!
It worked eventually! I removed the env, reinstalled Anaconda, and downloaded the repository again.
@jhonjj93 is alignment working for you now as well, or are you still having troubles? If you are still having trouble can you send a set of images to me?
I got a poor alignment, although there are several distinct features in the image. This is the result I got, from running this:

```python
match_index = 2  # Index of the band
max_alignment_iterations = 20
warp_mode = cv2.MOTION_HOMOGRAPHY  # MOTION_HOMOGRAPHY or MOTION_AFFINE. For Altum images only use HOMOGRAPHY
pyramid_levels_1 = 0  # for images with RigRelatives, setting this to 0 or 1 may improve alignment

print("Aligning images. Depending on settings this can take from a few seconds to many minutes")

warp_matrices, alignment_pairs = imageutils.align_capture(capture, ref_index=match_index, max_iterations=max_alignment_iterations, warp_mode=warp_mode, pyramid_levels=pyramid_levels_1, multithreaded=True)
```

Does anyone have a suggestion? Thanks.
@poynting The alignment is now working for me, thanks, but only with some images. I have a set of images of rice crops that don't have distinctive textures. The main idea is to use them with third-party software to generate orthomosaics. I know the purpose of this tutorial is to process a capture with some features. Is this set of images only useful for generating an orthomosaic? Is there no way to process them separately? I am new at this, thanks!
@moghi005 what other settings for the parameters have you tried? There are many options to tune the algorithm. For example, try setting pyramid levels to None.
The algorithm used needs edges that appear in all of the bands.
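A quick way to check whether a band has enough structure for the alignment to lock onto is to look at its gradient magnitude; a nearly flat band gives the optimizer nothing to work with. A hedged numpy sketch (my own check, not part of the library):

```python
import numpy as np

def edge_energy(band):
    """Mean gradient magnitude; a rough proxy for alignable structure."""
    gy, gx = np.gradient(band.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

flat = np.full((32, 32), 0.5)   # featureless band: nothing to align on
textured = np.zeros((32, 32))
textured[:, 16:] = 1.0          # one strong vertical edge

# The flat band has zero edge energy; the textured one does not.
```

If one band (e.g. a very uniform NIR scene) scores near zero while the others don't, that band is likely the one dragging the alignment down.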
As a reminder: I want to help where I can but this code is provided only as an example. This isn't production code. It will require your work to understand, modify, and tune for your own purposes. It will fail on a lot of different types of images, as is described in the documentation. I won't be able to help you without being provided the original images, and it may not work for your images.
@jhonjj93 have you seen the Batch Processing notebook? Your requested workflow is exactly what I use it for. The resulting files have the original images' GPS information and can be processed with a photogrammetry suite or processed on their own.
As is described in the alignment tutorial, you only need one good alignment. Once you have it you can reuse it for every picture in the flight and possibly in multiple flights.
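Since one good set of warp matrices can be reused across a flight, it is worth persisting them. A sketch of saving and reloading them with numpy (the file name is my own choice; the matrices themselves would come from `align_capture`):

```python
import os
import tempfile
import numpy as np

# Suppose these came from imageutils.align_capture on one good capture
# (3x3 matrices because warp_mode was MOTION_HOMOGRAPHY; 5 bands).
warp_matrices = [np.eye(3) for _ in range(5)]

path = os.path.join(tempfile.gettempdir(), "warp_matrices.npy")
np.save(path, np.stack(warp_matrices))

# Later (e.g. in a batch run over the whole flight), reload and reuse:
loaded = list(np.load(path))
```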
https://micasense.github.io/imageprocessing/Batch%20Processing.html
@poynting Thanks for your response. I am currently working on radiometric calibration and will get back to alignment afterwards.
A question about radiometric calibration: why do we need 'row gradient correction'? Why are the top rows in an image darker (so that we have to multiply by a number greater than 1)?
Best,
I don't want to hijack this issue to discuss other things, but generally the derivation of our radiometric model is proprietary. However I can say that it's the model we calibrate to and provides better results than simpler models.
@poynting Thanks for the reply. Is there any page (forum) about this?
I understand it is proprietary; I just want to know why we should multiply the first rows of pixels by a larger value. I believe this is related to the sensor, and we do it to compensate for non-uniformity in the sensor/lens.
To recap the best settings:

- If your camera has RigRelatives (any RedEdge-MX or Altum, most RedEdge-M), set `pyramid_levels` to 0 or 1.
- If your camera doesn't have RigRelatives, or your alignment isn't working, set `pyramid_levels` to None. Also contact support to get an updated calibration file for your camera that adds RigRelatives.
- Now that pyramiding is working well, I haven't seen a case where I need to set `max_alignment_iterations` to more than about 10, but YMMV.
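As notebook settings, that recap amounts to something like this (illustrative starting values, using the Alignment notebook's parameter names):

```python
# Camera with RigRelatives (RedEdge-MX, Altum, most RedEdge-M):
pyramid_levels = 0             # 0 or 1
max_alignment_iterations = 10  # rarely needs more with pyramiding

# Camera without RigRelatives, or if alignment still fails:
# pyramid_levels = None        # and contact support for an updated
#                              # calibration file that adds RigRelatives
```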
I'm going to close this ticket -- I've been using this code to process data from a lot of different cameras, both RedEdge and Altum, over the past couple of weeks and I haven't run into a single alignment issue that wasn't resolved by changing settings.
If someone has a new alignment issue or a settings question, please first make sure you're running the most recent master and using the stock Alignment notebook without modifications (other than changing the paths to your images). If you still have problems, please open a new ticket and make sure to include the images you're having a problem with. Including a flight capture and a panel capture makes things easiest. I won't be able to help (or even have the time to try) without being able to run the stock code on the images you are having trouble with.
I have very much been struggling with the previously mentioned error:

```
TypeError: findTransformECC() missing required argument 'inputMask' (pos 6)
```
I have tracked it down to the wrong opencv version, as per this pull request:
https://github.com/opencv/opencv/pull/13837
However, when I run:

```
opencv_version
```

it returns:

```
3.4.7
```
so I am left somewhat confused. It was also mentioned in that pull request that perhaps my Python is requiring optional arguments for some reason. However, my Python version is 3.7.3, downloaded/included via Miniconda, so I'm not sure how it could be the fault of my Python. Reinstalling everything, as mentioned by @moghi005, did not work for me.
For anybody else with this same issue, I have managed to find a hack. Open up the imageutils.py file (in the micasense source), search for `cv2.findTransformECC`, and change the line to this:

```python
cc, warp_matrix = cv2.findTransformECC(grad1, grad2, warp_matrix, warp_mode, criteria, None, 5)
```
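As I understand it, the two extra arguments are `inputMask` (None disables masking) and a Gaussian filter size that newer OpenCV 4 Python bindings made mandatory. A version-tolerant wrapper can try the new signature and fall back to the old one; this is a sketch using a stand-in parameter so it runs without OpenCV (`find_ecc` would be `cv2.findTransformECC` in practice):

```python
def call_find_transform_ecc(find_ecc, template, image, warp_matrix,
                            warp_mode, criteria):
    """Call findTransformECC across OpenCV versions.

    Newer OpenCV 4 bindings require two extra arguments, inputMask
    (None = no mask) and gaussFiltSize (Gaussian blur applied before
    ECC; 5 here). Older bindings reject the extra argument with a
    TypeError, so we fall back to the original 5-argument call.
    """
    try:
        return find_ecc(template, image, warp_matrix, warp_mode,
                        criteria, None, 5)
    except TypeError:
        return find_ecc(template, image, warp_matrix, warp_mode,
                        criteria)
```

This avoids hard-coding one signature into imageutils.py, at the cost of masking any unrelated TypeError from inside the call.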
@rrowlands Hi, I also faced the same error, and it seems it's solved by adding the two parameters: `cc, warp_matrix = cv2.findTransformECC(grad1, grad2, warp_matrix, warp_mode, criteria, None, 5)`. I just wonder what the `None` and `5` represent. Thanks!
Hi, I'm trying to align images following the alignment tutorial, but the product of the alignment was poor. Does anyone have any idea how I can improve the alignment? I tried several values for max_iterations (from 100 to 100000) and couldn't see any difference. This is the result: http://i63.tinypic.com/2hqb7md.png