openMVG / awesome_3DReconstruction_list

A curated list of papers & resources linked to 3D reconstruction from images.

Use of these algorithms #14

Closed MesHeritage closed 6 years ago

MesHeritage commented 6 years ago

Hi Pierre,

First of all, thanks for putting this list up; it gives a good summary of what is going on in this field. I am a physicist working in cultural heritage, and I got interested in the use of 3D reconstruction on the side of my "job/PhD".

However, in my field there is a trend of "over-trust" toward software and a lack of assessment. There is not much work on how accurate the measurements are (in cultural heritage, I mean, not in general). Most people are using Agisoft PhotoScan, which does not seem to give the best results.

On the other hand, I see that a lot of work has been done, in particular on cultural heritage (e.g., Building Rome in a Day, or Bundler), and many examples of the algorithms listed here use cultural heritage assets.

But this work rarely reaches people in my field. I think one of the reasons is access. Therefore, I was wondering how hard it is to turn these algorithms into a "basic software" package like VisualSFM, for example. I saw that you are behind many executables such as openMVG or Bundler, so you might be the best person to ask!

Are there any technical difficulties behind this, or is it just a matter of time/interest? I know it is very easy to wrap a command line in a GUI and use the executables you compiled, for example, but I would not be that confident about getting a clean compilation and the dependencies right.

Additionally, how do the available software packages like PhotoScan or RealityCapture perform compared to the "state of the art" developed in the research mentioned here?

I would be very interested in pushing some of these tools so that we have a more reliable approach to 3D in heritage.

Thanks!

pmoulon commented 6 years ago

Thank you for your detailed message. Appreciated.

FYI, it's true that there is a difference between the commercial software and the open-source alternatives. It depends on the degree of control you want over the solution (black box vs. customizable box).

The other thing is that research papers often do not bring any contribution back to the community. As you said, there is a barrier between research and real applications or the availability of the technology.

You will find plenty of open-source solutions for SfM/MVS (COLMAP, MicMac, MVE, OpenMVG, OpenMVS, PMVS, TheiaSfM). Most of those frameworks are easy to run (command line or Python pipeline).
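For instance, driving the OpenMVG chain from a small Python script could look roughly like the sketch below. The paths are placeholders and the exact tool flags vary between OpenMVG releases, so treat the SfM_SequentialPipeline.py tutorial that ships with OpenMVG as the canonical reference:

```python
# Minimal sketch of an OpenMVG SfM pipeline driven from Python.
# IMAGE_DIR, OUT_DIR and CAMERA_DB are placeholders; flags vary by release.
import subprocess
from pathlib import Path

IMAGE_DIR = Path("images")            # input photos
OUT_DIR = Path("reconstruction")      # output directory
MATCHES_DIR = OUT_DIR / "matches"
CAMERA_DB = Path("sensor_width_camera_database.txt")  # ships with OpenMVG

MATCHES_DIR.mkdir(parents=True, exist_ok=True)

def run(*args):
    """Run one OpenMVG tool, failing loudly on a non-zero exit code."""
    subprocess.run([str(a) for a in args], check=True)

# 1. List the images and read EXIF data to initialize the intrinsics.
run("openMVG_main_SfMInit_ImageListing",
    "-i", IMAGE_DIR, "-o", MATCHES_DIR, "-d", CAMERA_DB)
# 2. Detect and describe features in every image (SIFT by default).
run("openMVG_main_ComputeFeatures",
    "-i", MATCHES_DIR / "sfm_data.json", "-o", MATCHES_DIR)
# 3. Match features between image pairs.
run("openMVG_main_ComputeMatches",
    "-i", MATCHES_DIR / "sfm_data.json", "-o", MATCHES_DIR)
# 4. Run the incremental structure-from-motion engine.
run("openMVG_main_IncrementalSfM",
    "-i", MATCHES_DIR / "sfm_data.json",
    "-m", MATCHES_DIR, "-o", OUT_DIR)
```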

A GUI is one thing (you seem to have more control and visual feedback), but in fact the command line often brings more flexibility, and people should not be afraid of it.

As you will see here, the OpenMVG community often works in a cultural heritage context (buildings) https://plus.google.com/u/0/s/openMVG/top and it works great.

Past efforts to make the technology more available:

I still think that open-source projects bring more flexibility to the user, and they bring knowledge about the technology for free ;-)

Acting as a community we can make great things!

MesHeritage commented 6 years ago

Thanks for this info. Indeed, I also think open-source projects are better, since we can adjust them and see what is happening to the data. People often mention using TIFF to avoid any "modification of the picture", but in the end they still use a black-box solution like Agisoft.

About the command line: I use it without problems, but it is not that simple within the heritage sector, even within academia. Among most of the 3D research (for digital heritage) I have encountered, very few people would go for a "computer vision research algorithm". The same holds for data processing in general: many scientists would rather use software like Origin instead of MATLAB/R/Python, even though the latter are not that hard to start with for basic analysis.

But even if we can solve the "command line" part, the build step is not that straightforward. I have tried to build openMVG three times without success (trying to build with VS; I guess I might have overlooked one parameter!). On the other hand, I never had to compile anything for VisualSFM; it works pretty much "directly out of the box". The same with MicMac: a small installation that went without trouble. Same for COLMAP, even though it never gave results that good, or was too slow for my setup.

OpenMVG looks great, mostly because you care about keeping it updated, and there is everything needed for SfM. How hard would it be to make openMVG like MicMac or VSFM just in terms of installation?

Of course, there is the argument that if we want to use open source, we should expect to get our hands dirty with code, but on the other hand it would be nice to bring great open-source tools to a wider community, a bit like Arduino did for hardware.

pmoulon commented 6 years ago

I like your way of thinking!

Building OpenMVG should be easy in Visual Studio (run CMake, enable the Release configuration, and build the ALL_BUILD target). Feel free to ask for help on the GitHub issue channel. The step-by-step process is explained here: https://github.com/openMVG/openMVG/blob/master/BUILD.md#windows

How hard would it be to make openMVG like MicMac or VSFM just in terms of installation?

OpenMVG is easy to build and set up. For Linux we deliver an easy-to-build Docker image and a snap package. We believe the community can build the code so that they become familiar with it. On the other hand, if we spend time delivering ready-to-use packages, we will have less time to code. But any help on this is welcome ;-)

pmoulon commented 6 years ago

@MesHeritage Do you want to continue the discussion here, or can you close the issue if you have the answers to your questions?

MesHeritage commented 6 years ago

Hi Pierre,

Sorry for the late reply! Indeed, building openMVG is very easy in the end; after another "clean compile" it did work. Apparently there was a conflict with VS that was not obvious. My computer skills are more in calculation than in "interfaces", hence I am quite curious how hard it is to make something that works well. But as you said, I can help out on that and figure it out!

My last question was about the comparison of all these packages, including the commercial software. I found some articles that compare the camera positions, or the topology of a terrain, against LiDAR. But I didn't find "how accurate the end 3D model is" for a given algorithm, parameters, and image properties.

I think it would be possible to make a scene of squares and spheres of known dimensions, take pictures, feed them into different algorithms, and compare the resulting camera positions, dense points, and meshes with the known ones. Then we could see which one works best for that situation, and from there we could check other situations.
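As a minimal sketch (assuming the reconstructed cloud has already been scaled and aligned to the ground truth, e.g. with a similarity or ICP registration, and with hypothetical file names), the point-cloud part of that comparison could be as simple as nearest-neighbor distances:

```python
# Minimal sketch: compare a reconstructed point cloud against a known
# ground-truth cloud. Assumes both clouds are already in the same metric
# frame (after similarity/ICP alignment); file names are placeholders.
import numpy as np
from scipy.spatial import cKDTree

recon = np.loadtxt("recon_points.xyz")   # N x 3 reconstructed points
truth = np.loadtxt("truth_points.xyz")   # M x 3 ground-truth points

# Distance from each reconstructed point to its nearest ground-truth point.
dist, _ = cKDTree(truth).query(recon)

# Report robust statistics, not just the mean: large flat areas dominate
# the mean, while the tail of the distribution reflects the fine details.
print(f"mean error   : {dist.mean():.4f}")
print(f"median error : {np.median(dist):.4f}")
print(f"95th pct     : {np.percentile(dist, 95):.4f}")
print(f"max error    : {dist.max():.4f}")
```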

I have the feeling that "for a particular context" a particular software package might work better, but all the data I have is based on "people trying things". There are also different interests: in computer vision you seem to work a lot on the first steps, with accurate camera positions, while many applications mostly care about the mesh model and rely on PhotoScan.

Do you know of any work that tries to answer that question?

pmoulon commented 6 years ago

Hi,

Here are some thoughts:

A comparison of open-source photogrammetry packages: comparing-7-photogrammetry-systems

Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction, A. Knapitsch, J. Park, Q.Y. Zhou and V. Koltun. SIGGRAPH 2017.

Since MVS is becoming accessible to more and more people, more studies, datasets, and benchmarks are appearing at many conferences; see here: https://github.com/openMVG/awesome_3DReconstruction_list#mvs---point-cloud---surface-accuracy

Note: thin objects, tiny details, and hard edges are still hard to recover for most MVS frameworks, since doing so involves a lot of processing power (mesh refinement).

MesHeritage commented 6 years ago

Hi,

Thanks for these. I knew most of them (at least those in the list here), but I think I will contact the different authors directly for more details on what I am looking for.

In many articles there is mention of "sub-millimeter accuracy", while several of the models used show large visual differences. The overall measured error might be strongly affected by the flat/large areas, which often come out OK. However, when doing 3D scans, we often look at details. If the process is sub-mm accurate, we should be able to measure sub-mm features with some trust in the system. If we consider mostly the core of an object, we will drive down the global error. With a more controlled setup we could differentiate shapes/contexts: for example, report the error versus the sharpness of edges. Then we might see that most software works correctly for standard objects, but some packages will perform better in a given context.
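For instance, assuming the per-point errors from a comparison like the one above, plus a hypothetical boolean mask marking points that lie on sharp edges, the split could be reported like this:

```python
# Minimal sketch: split per-point errors by region. Both input files are
# hypothetical: "per_point_errors.npy" holds one error value per point,
# "edge_mask.npy" is True where the ground-truth surface is a sharp edge.
import numpy as np

dist = np.load("per_point_errors.npy")
is_edge = np.load("edge_mask.npy")

for name, mask in [("edges", is_edge), ("flat areas", ~is_edge)]:
    print(f"{name}: mean {dist[mask].mean():.4f}, "
          f"95th pct {np.percentile(dist[mask], 95):.4f}")
```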

It would also be nice to relate the output to the input. That way we could evaluate the image dataset before the computation: for example, feature density, distribution, and type, or more basic measures (image dimensions, number of images, coverage).
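A minimal sketch of such a pre-check, using OpenCV's ORB detector as a stand-in for whatever detector the actual pipeline uses (the image folder name is a placeholder):

```python
# Minimal sketch of a dataset pre-check: count detected features per image
# as a rough proxy for how well matching will work. The "images" folder is
# a placeholder; ORB stands in for the pipeline's real detector (e.g. SIFT).
import cv2
from pathlib import Path

orb = cv2.ORB_create(nfeatures=5000)

for path in sorted(Path("images").glob("*.jpg")):
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue  # skip unreadable files
    keypoints = orb.detect(img, None)
    h, w = img.shape
    density = len(keypoints) / (w * h / 1e6)  # keypoints per megapixel
    print(f"{path.name}: {w}x{h}, {len(keypoints)} keypoints "
          f"({density:.0f}/Mpx)")
```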

I think the tools are already amazing, and we know they can work very well, but we don't know when they do. Anyway, this would lead to another discussion around the validation process and would require a broader discussion between the people making the software, the people willing to use it, and those who study morphology. So let's close this issue.

Thanks again for the chat and for keeping openMVG up to date.

QuanXn commented 5 years ago

Hello @MesHeritage,

May I ask:

- Have you found any good software package that can consistently produce accurate output (down to sub-millimeter resolution)?

- Are you concerned about the tradeoff between the speed (of reconstruction) and the accuracy of the output?

Unfortunately, the sensor manufacturers are not too bright :( I do have an idea for a much better, more effective 3D reconstruction approach that could potentially be the death of photogrammetry.

Thanks in advance

Quang