Closed: ivopavlik closed this issue 4 years ago.
This is related to a pending feature request from @newcomb-d
Hello, I'd like to share my experience with datasets from two different cameras: a Survey 2 RGB and a Survey 2 NIR. The approach is to run the OpenSfM and mesh processes with both datasets together, then run mvs_texturing separately for each dataset. The attached file shows the steps for doing this. Steps.pdf
Hello everyone. I wonder whether it would be possible to introduce a correction into the photo georeferencing that takes into account the offset of the optics. For processing the photos, the best solution for me is to have a mosaic of photos per band and to perform the processing (RGB, NDVI, NDRE) with QGIS.
Hello, could anyone test this workflow: assemble the 5 layers of the same images (MicaSense RedEdge) with QGIS, then make a mosaic of several images of the same type with ODM?
The solution I arrived at involved using OpenCV with findTransformECC + warpAffine to align the images, although I was only comparing individual images, not mosaics. Has anyone found a better solution?
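For anyone who wants to try this, a minimal sketch of the findTransformECC + warpAffine approach described above; the filenames and convergence parameters here are illustrative assumptions, not the exact code used:

```python
import cv2
import numpy as np

# Load the reference band and the band to align, as grayscale.
ref = cv2.imread("RGB_0001.JPG", cv2.IMREAD_GRAYSCALE)
mov = cv2.imread("NIR_0001.JPG", cv2.IMREAD_GRAYSCALE)

# Start from the identity transform and let ECC refine it.
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
_, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_AFFINE, criteria)

# Resample the moving band onto the reference band's pixel grid.
aligned = cv2.warpAffine(mov, warp, (ref.shape[1], ref.shape[0]),
                         flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
cv2.imwrite("NIR_0001_aligned.JPG", aligned)
```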
Very interesting, @bcjarrett -- any code you can share? Maybe that could be a contributed script to prep the data before feeding it through ODM.
@smathermather Unfortunately, the process is pretty destructive and strips any existing EXIF data, making further processing challenging. I'll update here if I ever end up with a more fully realized solution.
The solution is to let the photogrammetry align the images. If you make different images with different bands, the photogrammetry finds the camera positions and orientations for each image regardless of which band(s) the image has. Then, to obtain the ortho for each band, you have to run the texturing algorithm separately for each image group (that is, for each band or group of bands; each JPG image can hold 3 bands). I tested this successfully with 4 bands coming from 2 Mapir Survey 2 cameras: one camera is RGB (3 bands, already aligned) and the other is NIR (it has 3 bands, but only the first band is useful in the end). See attached file. Sorry about my bad English. Regards, Lucas
That makes complete sense, and is what I do for orthomosaic projects. With accurate GCPs everything ends up very well positioned. But if, for example, there is tree cover that causes blurs in the mosaic and I need to look at individual images to see the forest floor clearly, I need to be able to line the images up to generate an NDVI.
I am not sure we are speaking of the same thing. The process I am describing doesn't need GCPs. If you are trying to align individual images, OK, that is another matter.
@LucasMM86 -- this makes perfect sense. Now we just need to script this up... .
Hello, I wrote small Python scripts to do the file renaming and to change the reconstruction.nvm file, but I didn't make a script for the whole process. I will try to find them. But anyway, I am not sure they are well written; my programming skills are limited... Regards, Lucas
Poorly written is fine, @LucasMM86. We can always polish them up as needed. Cheers!
Hello everyone. To assemble a photo mosaic I found the software MicMac; I have not been able to test it yet. First case: have it assemble an image with several bands and then cross the images with QGIS. Second case: use it to assemble mono-band photo mosaics that we would overlay in QGIS.
MicMac software link: https://micmac.ensg.eu/index.php/Home
MicMac forum discussion on multispectral imagery: http://forum-micmac.forumprod.com/photogrammetry-with-sequoia-multispectral-imaging-t1510.html
@BORDENEUVE -- MicMac is an awesome project, and I love how focused they are on research questions. Definitely a lot to learn from their work. Also, AFAIK, their license is compatible with ours, so mixing is a possibility... .
Hello, here are the scripts I mentioned earlier in this thread. First I use this Python script to rename each set of images so the band name appears in every filename (all the RGB images get a `_RGB` tag in their name and all the NIR images get `_NIR`):

```python
import os

path = './imagesRGB'   # folder containing one band's images
NIR_or_RGB = '_RGB'    # band tag; set to '_NIR' for the NIR folder

for filename in os.listdir(path):
    print(filename)
    base, ext = os.path.splitext(filename)
    os.rename(os.path.join(path, filename),
              os.path.join(path, base + NIR_or_RGB + ext))
```
Then put all the NIR and RGB images in the same folder, called "images", and run ODM. Now we keep only the RGB images in the opensfm/reconstruction.nvm file. I use this script to do that:

```python
band = 'RGB'        # RGB or NIR
path = './opensfm'

file_in = open(path + '/reconstruction.nvm', 'r')
file_mod = open('./reconstruction' + band + '.nvm', 'w')

photo_count = 0
for line in file_in:
    # Lines containing 'undistorted' reference an image; keep only the
    # image lines of the selected band, and keep all other lines as-is.
    if 'undistorted' in line:
        if band in line:
            photo_count += 1
            file_mod.write(line)
    else:
        file_mod.write(line)

file_in.close()
file_mod.close()
print('number of images: ' + str(photo_count))
```

Now, running ODM again, the texturing and orthophoto steps use only the RGB images, and we obtain the RGB orthomosaic. Then we do the same process (using the script above) to generate a reconstruction.nvm file containing only the NIR images, run ODM again, and obtain the NIR orthomosaic, which is perfectly aligned with the RGB ortho...
Note: this works only when OpenSfM produces the sparse point cloud, not PMVS, since PMVS changes the image names in reconstruction.nvm. (I think OpenSfM is the default now.)
I believe this could be scripted better, with a single script running "on top" of ODM.
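To make that concrete, a hypothetical sketch of such a wrapper, folding the NVM filtering above into a function; the ODM invocation, paths, and re-run mechanics are assumptions and would need adapting to the real pipeline:

```python
import subprocess

def filter_nvm(src, dst, band):
    """Keep only the selected band's image lines from an NVM file."""
    with open(src) as fin, open(dst, 'w') as fout:
        for line in fin:
            if 'undistorted' not in line or band in line:
                fout.write(line)

# 1. Joint run: SfM + meshing over all bands together (invocation is
#    an assumption, not ODM's actual CLI).
subprocess.run(['python', 'run.py', '--project-path', '/projects/field'],
               check=True)

# 2. Per-band texturing: filter the reconstruction, then re-run the
#    texturing/orthophoto stages against the filtered NVM.
for band in ('RGB', 'NIR'):
    filter_nvm('/projects/field/opensfm/reconstruction.nvm',
               '/projects/field/reconstruction_%s.nvm' % band, band)
    # ...swap the filtered file into place and re-run ODM from texturing...
```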
It seems this takes a very similar approach to split-merge and could be integrated when that process gets rewritten. Thoughts @pierotofy @dakotabenjamin?
Somewhat similar; it involves processing two datasets separately from the beginning (split-merge does the split after OpenSfM's reconstruction) and then merging the results.
Correct. The underlying problem is essentially the same; the current approach to addressing it differs in some details.
What I find particularly exciting and compelling about these related solutions is that we need to make split-merge more core to the pipeline, and we have this broader set of multispectral problems to solve that overlaps the split-merge problem in substance and likely in implementation.
I wonder too if this points at the solution for >8-bit datasets -- we probably don't need to do matching in 16-bit, but we do need to use the output from an SfM solution to help us produce appropriate reconstructions.
Unless I'm missing something, this means a full 16-bit solution simply (ha!) requires a 16-bit texturing solution. For those interested in quantitative outputs from these data, e.g. when flying a farm field or similar, a lumitexel approach might be necessary.
As normal JPG files support only 3 bands, have you considered using a simple band-to-band registration script to merge the four individual band images into one JPEG 2000 file and then stitching those files?
The issue is that the bands don't really align with each other, as the cameras are physically offset. So it's less a question of how many bands and more a question of the alignment of the bands.
But if the bands are properly aligned, will ODM stitch JPEG 2000?
I suspect not. I don't think OpenSfM, the underlying library, can use JPEG 2000, but I haven't investigated or tested it myself.
I know the initial execution in Docker doesn't recognize the .jp2 or .j2k extensions.
Hello, by measuring the positions of the different lenses along the axes (front-back and left-right), couldn't you correct the shift between the different bands?
Correct, and I believe this is how it's handled in Agisoft.
For Agisoft I don't know, but it exists for Pix4D.
Even if you had such code for band-to-band registration, you would still have the underlying issue of OpenSfM. Is JPG the only format it takes? PNG would also be a good alternative.
It takes PNG as well, but AFAIK only 8-bit PNG. From a structure-from-motion perspective, this isn't a major issue -- we can do all the matching and SfM steps in 8-bit under most conditions without any real loss of quality, but the subsequent texturing steps at 8-bit are a problem that needs to be solved.
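As an illustration of that first point, a minimal sketch of reducing a 16-bit band to 8-bit for matching, using a simple percentile stretch (the stretch choice and filename are assumptions):

```python
import cv2
import numpy as np

# Read the 16-bit image unchanged, stretch the 1st-99th percentile
# range to 0-255, and save an 8-bit copy for feature matching.
img16 = cv2.imread('band.tif', cv2.IMREAD_UNCHANGED).astype(np.float32)
lo, hi = np.percentile(img16, (1, 99))
img8 = np.clip((img16 - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite('band_8bit.png', img8)
```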
Watch Pix4D Fields: it's not SfM, it's a basic old photogrammetric process with stitching. Just watch what they use (it runs with Wine):
Pretty sure it uses GDAL + Python to place each image on the ground (we have omega/phi/kappa from EXIF) and then stitches them with OpenCV => https://docs.opencv.org/3.4.2/d8/d19/tutorial_stitcher.html
Watch at 34:30, it's on the fly: https://youtu.be/nFOSZyp4sw0?t=2098
And SlantRange does the same with their software: no SfM for the multispectral sensor.
Correction to my previous input: OpenCV is used to defisheye the pictures.
Pix4dfields is similar to COLMAP + Aerial mapper ( https://github.com/ethz-asl/aerial_mapper )
It's mostly based on Eigen, the AKAZE feature tracking algorithm ( http://metrology.survice.com/sites/metrology.testing.survice.com/files/cmsc-16-initial.pdf ) and Ceres Solver (Dogleg trust-region method and SPARSE_NORMAL_CHOLESKY with the Tukey biweight loss function, which aggressively suppresses large errors).
About the position of the sensors: green is the master sensor, and each other sensor's position is shifted relative to it by fixed offsets.
@kikislater -- for COLMAP + Aerial Mapper, do you mean the homography approach in aerial_mapper?
Yes! The whole process, except the reflectance computation and the pixel shifting from the main sensor.
Hello, I am thinking about scripting the method I mentioned above a bit more cleanly. I want to do it in such a way that it is compatible with split-merge. Do you think this makes sense? Is someone already rewriting split-merge or working on that?
No one, AFAIK, is working on this yet, and contributions are most welcome. It'd be beneficial to discuss and collaborate with @pierotofy.
Sorry to cross-post from related https://github.com/OpenDroneMap/OpenDroneMap/issues/190, but I'll drop this here as it seems this related thread is more active.
For RedEdge data, MicaSense offers some Python code to get users started on radiometric processing, metadata extraction, band alignment, etc. There are examples there for automatically detecting reflectance-panel images and using that calibration to convert flight images to reflectance, as well as a general example describing the process of undistortion, alignment, and export of the aligned images to RGB, CIR, or a multi-layer TIFF while removing the unmatched overlap. The same concepts should be applicable to most multispectral cameras.
https://github.com/micasense/imageprocessing https://micasense.github.io/imageprocessing/Alignment.html
Full disclosure: I'm one of the authors and I work at MicaSense.
Thanks to @poynting for this information. For processing the images I prefer the single bands, because with software like QGIS one has a multitude of possibilities: one can compute several indices (ENDVI, GNDVI, SAVI, and others), and also, after cutting out the plots and vectorizing, one can make maps for modulating nutrient inputs.
Thanks @poynting -- this will be a great addition.
Right @BORDENEUVE, I wasn't suggesting that the rendered images be used (although with the example code that would be easy to do, since ODM only supports JPEG).
One approach is to use a band-alignment method similar to the one outlined in our example code and then output a multi-band TIFF file with the radiometrically corrected, aligned images. There's code in the example (using GDAL) to create that output. Next, ODM would read those TIFF files (issue #190) and perform image-to-image alignment. Some users of our code examples take this approach for research purposes where black-box photogrammetry products aren't an option.
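For readers who haven't seen the example code, here is a minimal sketch (not MicaSense's actual code) of writing a stack of aligned band arrays to one multi-band TIFF with GDAL; the array shapes and filename are placeholder assumptions:

```python
import numpy as np
from osgeo import gdal

# Placeholder: five aligned, radiometrically corrected band arrays.
aligned_bands = [np.zeros((960, 1280), dtype=np.float32) for _ in range(5)]

rows, cols = aligned_bands[0].shape
driver = gdal.GetDriverByName('GTiff')
ds = driver.Create('aligned_stack.tif', cols, rows,
                   len(aligned_bands), gdal.GDT_Float32)
for i, band in enumerate(aligned_bands, start=1):
    ds.GetRasterBand(i).WriteArray(band)  # bands are 1-indexed in GDAL
ds.FlushCache()
ds = None  # closing the dataset flushes it to disk
```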
The ultimate output that I think most users would be interested in is a multi-band GeoTIFF with all of the bands included, representing radiometrically corrected data (either radiance or reflectance).
It's certain that a multi-band GeoTIFF photo format would be very good; I see it with satellite images and software like SNAP. So you're advocating that ODM also take on the calculation of the different indices. I think software like QGIS offers more possibilities for that work, but it would be harder to manage this type of image; it works very well band by band (each band used as a raster layer), the problem being to know whether the superposition, and perhaps also the assembly, of the pictures is perfect.
I haven't read the entire discussion, but here are some thoughts:
Pix4D calculates different vegetation indices such as NDVI at the end of the process. However, perfectly aligned / co-registered bands are necessary.
The final calculations could leverage GRASS GIS, specifically r.mapcalc (see the sketch after this comment): https://grass.osgeo.org/grass74/manuals/imageryintro.html https://grass.osgeo.org/grass74/manuals/r.mapcalc.html
From my outside perspective, I would suggest that this is probably best served as a plugin, similar to the contour plugin. The plugin could also include a module for radiometric calibration from reflectance targets.
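To make the index step concrete, a minimal Python/GDAL sketch of what the r.mapcalc computation above would do for NDVI, assuming a multi-band GeoTIFF where red is band 3 and NIR is band 4 (the filename and band order are assumptions):

```python
import numpy as np
from osgeo import gdal

ds = gdal.Open('aligned_stack.tif')
red = ds.GetRasterBand(3).ReadAsArray().astype(np.float32)
nir = ds.GetRasterBand(4).ReadAsArray().astype(np.float32)

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
denom = nir + red
ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)
print('NDVI range: %.3f to %.3f' % (ndvi.min(), ndvi.max()))
```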
See also GRASS GIS i.vi for more vegetation indices
https://grass.osgeo.org/grass76/manuals/i.vi.html
By doing research on the web, I was able to find formulas for the calculation of 8 crop indices (NDVI, ENDVI, CVI, EVI, MSR, OSAVI, LAI, GNDVI). If it interests anyone, I could write them here.
https://www.indexdatabase.de/db/i.php
Thanks for your list.
Here is a paper that deals with the problem of multispectral images taken by drone: mo2015-pub00047644.pdf
There's a possibility of funding for this feature.
Our friends from the Polish State Forestry are looking for multispectral imagery and vegetation indices in ODM/WebODM, as they have a Sequoia camera, and they have declared financial support for a developer, in a crowdfunding model or directly.
@pierotofy @smathermather @dakotabenjamin -- the first thing we must know is "how much?" Can you help?
Hey @merkato :hand: that's great to hear!
I could have availability to help with this around the first weeks of September. Get in touch if you want to get the process started https://masseranolabs.com/contact/
Otherwise, if this needs to be implemented sooner, we would welcome a PR for it from somebody else.
Hello, I don't know if this idea is good or not, but here it is: rather than assembling several color bands to make an image, why not do the opposite? We assemble the RGB photos, and afterwards we decompose the result by image processing into several images: red, green, blue, and infrared.
Each band of a multispectral sensor (e.g. MicaSense RedEdge, Sequoia) is taken with a separate lens, with a different angle for each band (each band image is in a separate file). When the source images are merged into a multi-band image, the bands are significantly shifted relative to each other, so each band must be processed separately when mosaicking. Unfortunately, when merging the resulting mosaics, a similar shift is present. The possibility of aligning the resulting mosaics would be appreciated.
Thanks.
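One way to at least get the per-band orthomosaics onto a common pixel grid is to resample each band's mosaic to the reference mosaic's extent and resolution with GDAL. A minimal sketch, where the filenames are assumptions (this fixes grid mismatch, not any residual misregistration):

```python
from osgeo import gdal

ref = gdal.Open('odm_orthophoto_RGB.tif')
gt = ref.GetGeoTransform()
# Compute the reference extent as (minx, miny, maxx, maxy); note that
# gt[5], the pixel height, is negative for north-up rasters.
bounds = (gt[0],
          gt[3] + gt[5] * ref.RasterYSize,
          gt[0] + gt[1] * ref.RasterXSize,
          gt[3])

# Warp the NIR mosaic onto the exact grid of the RGB mosaic.
gdal.Warp('odm_orthophoto_NIR_aligned.tif', 'odm_orthophoto_NIR.tif',
          outputBounds=bounds, width=ref.RasterXSize,
          height=ref.RasterYSize, resampleAlg='bilinear')
```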