google / mannequinchallenge

Inference code and trained models for "Learning the Depths of Moving People by Watching Frozen People."
https://google.github.io/mannequinchallenge
Apache License 2.0

COLMAP patch matching settings #14

Closed mbaradad closed 5 years ago

mbaradad commented 5 years ago

Hi, I am trying to replicate your training method, but my results from COLMAP look denser (though also noisier) than yours (example at the end). I assume this comes from different patch-matching settings (I am just using the defaults) rather than only from the MegaDepth post-processing steps. Could you provide further details about the hyperparameters you used? Thanks!

IggIqNXfu_U_119519400: [image]
Yours: [image]
Mine: [image]
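For context, the COLMAP PatchMatchStereo options below are the ones that most directly trade depth-map density for noise. This is only a sketch: the workspace path is hypothetical and the threshold values are illustrative, not the settings used for the paper.

```python
import subprocess

# Hypothetical dense-reconstruction workspace produced by image_undistorter.
workspace = "/path/to/dense"

# Geometric consistency plus tighter filtering thresholds prune unreliable
# pixels, yielding sparser but cleaner depth maps than the COLMAP defaults.
subprocess.run([
    "colmap", "patch_match_stereo",
    "--workspace_path", workspace,
    "--workspace_format", "COLMAP",
    "--PatchMatchStereo.geom_consistency", "true",
    # Illustrative values, stricter than the defaults:
    "--PatchMatchStereo.filter_min_ncc", "0.15",
    "--PatchMatchStereo.filter_min_triangulation_angle", "3.0",
    "--PatchMatchStereo.filter_min_num_consistent", "3",
], check=True)

# Fusion applies a second round of cross-view consistency checks.
subprocess.run([
    "colmap", "stereo_fusion",
    "--workspace_path", workspace,
    "--workspace_format", "COLMAP",
    "--input_type", "geometric",
    "--output_path", f"{workspace}/fused.ply",
], check=True)
```

Raising these filtering thresholds prunes low-confidence pixels, which is one plausible reason default-setting output looks denser and noisier than the released depth maps.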

zhengqili commented 5 years ago

Hi, you could take a look at the documentation I wrote on reproducing the SfM+MVS results for our MC dataset: https://docs.google.com/document/d/1lWOcbLIeGGVVpjkGiMaq0zRVZvaBJnewRUPN2mdAD_A/edit?usp=sharing
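For readers without access to the document, here is a rough sketch of the standard COLMAP SfM+MVS sequence that such settings plug into; the paths, matcher choice, and model index are assumptions, not the configuration from the linked document.

```python
import os
import subprocess

# Hypothetical paths for extracted video frames and COLMAP outputs.
images = "/path/to/frames"
sparse = "/path/to/sparse"
dense = "/path/to/dense"
database = "/path/to/colmap.db"

os.makedirs(sparse, exist_ok=True)
os.makedirs(dense, exist_ok=True)

def colmap(*args):
    """Run one COLMAP subcommand, raising if it fails."""
    subprocess.run(["colmap", *args], check=True)

colmap("feature_extractor", "--database_path", database, "--image_path", images)
# Frames come from an ordered video, so sequential matching is a natural choice.
colmap("sequential_matcher", "--database_path", database)
colmap("mapper", "--database_path", database, "--image_path", images,
       "--output_path", sparse)
# Undistort using the first reconstructed model, preparing the MVS workspace.
colmap("image_undistorter", "--image_path", images,
       "--input_path", f"{sparse}/0", "--output_path", dense)
# patch_match_stereo and stereo_fusion then run on `dense`, with whatever
# filtering settings the linked document specifies.
```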

mbaradad commented 5 years ago

Thanks, this is really useful!