flanggut / smvs

Shading-aware Multi-view Stereo
BSD 3-Clause "New" or "Revised" License

Question about parameters for Middlebury benchmark #29

Closed · kristinpro closed this issue 2 years ago

kristinpro commented 5 years ago

Hello, I was wondering if you could share the set of parameters that was used for the results reported on the Middlebury benchmark?

I am trying to reproduce the results for the Dino and Temple sequences to better understand what I can tune to obtain better results for my own data, which has characteristics similar to the Middlebury ones.

Thank you.

mordka commented 5 years ago

Hi @kristinpro, I'm not sure if this is what you are asking for, but I noticed the parameter files are included in the archives. Also, useful instructions are available here: https://github.com/simonfuhrmann/mve/wiki/Middlebury-Datasets
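For reference, importing the images into an MVE scene looks roughly like this (just a sketch; `makescene` is MVE's import app, and the ground-truth cameras from the dataset's `*_par.txt` file still have to be added to the scene, see that wiki page for the details):

```sh
# Convert the plain Middlebury images into an MVE scene directory
# (-i imports images only, no camera parameters yet).
makescene -i dinoRing/ dino-scene/
```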

kristinpro commented 5 years ago

Hi @mordka, thanks for pointing out the instructions page. Also, which archives are you talking about? Archives for SMVS, or for the Middlebury submission of SMVS?

mordka commented 5 years ago

@kristinpro I meant the Middlebury dataset archive files.

kristinpro commented 5 years ago

@mordka I was actually talking about the parameters of the algorithm that were set to obtain the reconstruction reported on the Middlebury benchmark website. Their archives, on the other hand, provide the camera parameters for each view, i.e. the ground-truth poses required to initialize the dense reconstruction.

So I wonder whether the reconstruction reported on the website was obtained using the default parameters as they are currently set when you launch the algorithm, or whether, as I suspect, one has to adjust a few things. If so, how?

It is important for me to reproduce the ranking of SMVS with respect to other methods, because when I evaluate on my own data (with similar characteristics) the ranking is different. Specifically, in my evaluation OpenMVS ranks above SMVS, while in the benchmark it is the other way around.

So, I would like to understand why this happens. Is the data I work with really challenging for SMVS? Can the parameters be tuned to achieve better results?

Thus, I thought that if I could reproduce the reconstructions of the benchmark sequences and compare them visually with what I see on Middlebury (and ETH3D), then I could at least draw some conclusions about the parameters used for that type of data.

In my work I deal with the reconstruction of a human baby mannequin, a weakly textured object.

flanggut commented 5 years ago

Hi! The standard parameters do not activate the shading optimization. You should definitely do so by using the `-S` option. If you want the highest quality you should also optimize on the finest level using the `-o1` option.

Keep in mind that the shading term does not work well if your scene has a lot of local lighting variation (self-shadowing and interreflections).
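For example, an invocation would look roughly like this (a sketch; `scene_dir` is your MVE scene, and the point set file name below is a placeholder, check the actual output name in the scene folder):

```sh
# Reconstruct with shading-based optimization (-S) on the
# finest pyramid level (-o1) for maximum quality.
smvsrecon -S -o1 scene_dir/

# Optionally turn the resulting point set into a surface mesh
# using MVE's fssrecon and meshclean.
fssrecon scene_dir/smvs-points.ply scene_dir/smvs-surface.ply
meshclean scene_dir/smvs-surface.ply scene_dir/smvs-clean.ply
```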
