micasense / imageprocessing

MicaSense RedEdge and Altum image processing tutorials
https://www.micasense.com
MIT License
257 stars 152 forks

Error in alignment of images in new example. #45

Closed andbrs closed 5 years ago

andbrs commented 5 years ago

I was looking at the recent updates to the repository and testing the tutorials with different images. In the Alignment-RigRelatives notebook, if I swap in my own images it fails with the error "Panels not detected in all images". It may be because the code expects to receive 6 bands, but a RedEdge capture only has 5. A strange thing is that if I try to find the panel with the panel.py example, each image works fine on its own. Images: https://drive.google.com/drive/folders/18uh_zQo6hwKIocRQ9x0khYnBqrfycuyI?usp=sharing
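For reference, this is roughly the per-image check I ran, following the Panels example (a minimal sketch; the file pattern is a placeholder for my own capture):

    import glob

    from micasense.image import Image
    from micasense.panel import Panel

    # Placeholder pattern: the five band TIFFs of a single panel capture
    for fname in sorted(glob.glob('panel_capture/IMG_0000_*.tif')):
        img = Image(fname)
        panel = Panel(img)
        print(fname, 'panel detected:', panel.panel_detected())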

poynting commented 5 years ago

Hi @andbrs, thanks for doing that testing.

It may be that I had not updated the notebooks as of your last pull. I think the latest code should handle both Altum and RedEdge. Can you do a

git pull origin altum-support

in your checkout directory and see if that changes the behavior?

However, the images you provided don't have the required RigRelatives metadata tags set, so that alignment method can't be used with them. The Alignment.ipynb tutorial should still work, though. I added some links at the top of the Alignment-RigRelatives notebook that describe both how to update your camera to get the RigRelatives tags and how to update older datasets. This feature was originally added to support Pix4DFields, so if you're not a Fields user it's likely you haven't run across this update.
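If you want to check whether a capture already carries those tags, something along these lines may help (a rough sketch; I'm assuming the rig_relatives attribute exposed by micasense.image.Image, so treat that name as an assumption):

    from micasense.image import Image

    # Placeholder filename: one band of the capture in question
    img = Image('IMG_0000_1.tif')

    # Expected to be empty/None on images captured before the RigRelatives
    # firmware update
    print('RigRelatives:', img.rig_relatives)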

poynting commented 5 years ago

Also @andbrs;

I tried the Alignment.ipynb methods on your images. Because the images were taken very close to the subject, the rig translations (distance between the images) matter significantly. The result is that the default matching arguments don't work very well.

To get a good result, I had to:

  • Change the warp_mode to cv2.MOTION_AFFINE (two places in the notebook)
  • Change the number of max iterations to 30
  • Change the reference index to 4, since it's in the center and thus geometrically closest in translation to the other bands
  • Increase the number of pyramid levels in the matcher by 2 by adding nol += 2 on line 114 of imageutils.py; this allows the matcher to move farther by starting the match at a lower resolution

The results are good, but will only work for that specific distance from the camera.

Here's the whole notebook. Note the one change required above to imageutils.py. https://drive.google.com/file/d/1w3MNcu5l8xtjtzxhLMgBhFx0MZyhTJXN/view?usp=sharing
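For reference, here is a minimal sketch of how those settings plug into the alignment call. The align_capture() parameter names are assumed from the Alignment.ipynb tutorial, and the pyramid-level change remains the manual edit to imageutils.py noted above:

    import glob

    import cv2
    import micasense.capture as capture
    import micasense.imageutils as imageutils

    # Placeholder paths: the five band TIFFs of one capture
    image_names = sorted(glob.glob('capture/IMG_0001_*.tif'))
    cap = capture.Capture.from_filelist(image_names)

    warp_mode = cv2.MOTION_AFFINE   # instead of the default MOTION_HOMOGRAPHY
    match_index = 4                 # reference band, geometrically central on the rig
    max_alignment_iterations = 30

    # The extra pyramid levels (nol += 2) are the manual edit to imageutils.py
    warp_matrices, alignment_pairs = imageutils.align_capture(
        cap,
        ref_index=match_index,
        warp_mode=warp_mode,
        max_iterations=max_alignment_iterations)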

andbrs commented 5 years ago

Hello, first of all thank you for the help. I have tested the file and it worked very well. A more technical question: when calculating vegetation indices I need to know how the bands are ordered. For the RedEdge MX camera I can see that NIR is band 3, Red is band 2, and RedEdge is band 4; what would be the indices for Blue and Green? Again, thank you so much for the help, it has been great.

poynting commented 5 years ago

All MicaSense cameras are indexed the same, with LWIR (Thermal) only applicable to Altum.

Blue, Green, Red, NIR, Red edge, (LWIR)

See for example https://micasense.github.io/imageprocessing/Captures.html
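For example, with that ordering an NDVI from the aligned stack looks roughly like this (a sketch; im_aligned stands in for the rows x columns x bands array produced by the Alignment notebook):

    import numpy as np

    # Band indices, the same on all MicaSense cameras (LWIR is Altum-only):
    # 0: Blue, 1: Green, 2: Red, 3: NIR, 4: Red edge, 5: LWIR
    BLUE, GREEN, RED, NIR, RED_EDGE = 0, 1, 2, 3, 4

    # Placeholder standing in for the aligned stack from the Alignment notebook
    im_aligned = np.random.rand(960, 1280, 5)

    red = im_aligned[:, :, RED]
    nir = im_aligned[:, :, NIR]
    ndvi = (nir - red) / (nir + red)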


andbrs commented 5 years ago

I understand, and again thank you so much for all the help. Capturing and processing the multispectral images is part of a larger research project that we expect to publish as a research article. Would you prefer that we cite the GitHub repository, or someone specific? I find it an excellent tool for learning about multispectral imagery.

Denlar2 commented 5 years ago

Hey @poynting, I am really interested in this notebook, since I am also trying to get this to work at close range. I just can't use the link to the notebook. If you are able to send it again, I will be really thankful.

poynting commented 5 years ago

The Google Drive link above works for me both when not logged in and from an incognito window. If it doesn't work, the instructions are also provided above.

Denlar2 commented 5 years ago

Thank you poynting for your reply, I got it to work now.

fernanda0823 commented 4 years ago

Hello @poynting, I have been working with multispectral images recently and I have the same problem with alignment. The distance I am working at is approximately 1.8 m from the camera lens to the target. I have tried the parameters above and they did not work in my case; what parameters would you recommend for this distance? I really appreciate your help.

poynting commented 4 years ago

Hi, I can't help without access to the data. It's possible this method isn't applicable to your data, which will be true if what you are imaging is not flat. If you can't share the data publicly, please email support@micasense.com with the files.