google / mannequinchallenge

Inference code and trained models for "Learning the Depths of Moving People by Watching Frozen People."
https://google.github.io/mannequinchallenge
Apache License 2.0

Will you share the MannequinChallenge Dataset? #3

Closed. Lvhhhh closed this issue 5 years ago.

Lvhhhh commented 5 years ago

Will you share the MannequinChallenge Dataset? I also wonder about the details of training the 3-input model.

fcole commented 5 years ago

We've made the list of video ids available now at: google.github.io/mannequinchallenge. We don't have plans to release any MVS depth data, unfortunately.
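For reference, each released text file pairs a YouTube URL with per-frame camera data. Below is a minimal loader sketch, assuming the first line is the URL and each remaining line is a whitespace-separated numeric row whose first field is a frame timestamp in microseconds; the exact column order for the camera parameters is documented on the dataset page, and the file name in the usage line is hypothetical:

```python
from pathlib import Path

def load_clip(txt_path):
    """Parse one MannequinChallenge annotation file.

    Assumes the first line is the YouTube URL and each remaining line
    is a whitespace-separated row of numbers whose first field is a
    frame timestamp in microseconds; the remaining fields hold camera
    intrinsics and pose (see the dataset page for the exact column
    order).
    """
    lines = Path(txt_path).read_text().strip().splitlines()
    url = lines[0].strip()
    frames = []
    for line in lines[1:]:
        fields = line.split()
        frames.append({
            "timestamp_us": int(float(fields[0])),
            "camera_params": [float(x) for x in fields[1:]],
        })
    return url, frames

# Hypothetical file name; use a file from the downloaded train/val/test split.
url, frames = load_clip("train/00000000.txt")
print(url, len(frames))
```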

Lvhhhh commented 5 years ago

> We've made the list of video ids available now at: google.github.io/mannequinchallenge. We don't have plans to release any MVS depth data, unfortunately.

Fine. Besides the training data, what is the difference between your 3-input monocular model and the network in "Single-Image Depth Perception in the Wild" that you referenced? Your results are better than the latter's. Do you have some magic code? I want to learn more about the training details of just the 3-input monocular model. Can you give me some details?
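To make my question concrete: is the 3-input model essentially the single-image network with extra per-pixel channels concatenated at the input, something like the sketch below? This is only my guess at the pattern, not your code; the choice of extra channels (a human mask and an initial depth estimate) and all layer sizes are placeholders, not the paper's hourglass architecture.

```python
import torch
import torch.nn as nn

class ThreeInputDepthNet(nn.Module):
    """Toy stand-in for a 3-input monocular depth network.

    Sketch only: assumes the extra inputs (here a human mask and an
    initial depth channel, both assumptions) are concatenated with the
    RGB image before the first convolution. Layer sizes are
    placeholders, not the actual architecture.
    """

    def __init__(self):
        super().__init__()
        in_channels = 3 + 1 + 1  # RGB + mask + initial depth (assumed)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # single-channel depth output
        )

    def forward(self, rgb, mask, init_depth):
        # Channel-wise concatenation of all three inputs.
        x = torch.cat([rgb, mask, init_depth], dim=1)
        return self.decoder(self.encoder(x))

net = ThreeInputDepthNet()
rgb = torch.randn(1, 3, 128, 128)
mask = torch.randn(1, 1, 128, 128)
init_depth = torch.randn(1, 1, 128, 128)
print(net(rgb, mask, init_depth).shape)  # torch.Size([1, 1, 128, 128])
```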