Closed: abcddcbaabcddcba closed this issue 2 years ago
Do you mean MAD or MGD?
For the MC and MGD metrics, we follow the classical Python package igl (libigl's Python bindings). Please refer to this tutorial page https://libigl.github.io/libigl-python-bindings/tut-chapter1/ for computing the mean/Gaussian curvature or geodesic distance of the recovered meshes. For SIDE and MAD, we follow the code of unsup3d; please refer to https://github.com/elliottwu/unsup3d/blob/dc961410d61684561f19525c2f7e9ee6f4dacb91/unsup3d/model.py#L216
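For anyone who wants a self-contained reference, the Gaussian-curvature part can be sketched without installing libigl, using the standard angle-deficit formula. This is the same per-vertex quantity `igl.gaussian_curvature` returns (up to area normalization); the function below is my own sketch, not the paper's code:

```python
import numpy as np

def gaussian_curvature(v, f):
    """Discrete Gaussian curvature via angle deficit:
    k_i = 2*pi - sum of triangle angles incident at vertex i."""
    k = np.full(len(v), 2.0 * np.pi)
    for tri in f:
        for i in range(3):
            a, b, c = v[tri[i]], v[tri[(i + 1) % 3]], v[tri[(i + 2) % 3]]
            u1, u2 = b - a, c - a
            cos = (u1 @ u2) / (np.linalg.norm(u1) * np.linalg.norm(u2))
            k[tri[i]] -= np.arccos(np.clip(cos, -1.0, 1.0))
    return k

# Sanity check on a regular tetrahedron: by Gauss-Bonnet,
# the angle deficits of any closed genus-0 mesh sum to 4*pi.
v = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
f = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])
```

On a real recovered mesh you would load `v, f` with `igl.read_triangle_mesh` and average the per-vertex values.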
Very sorry, I typed the wrong question. In your paper, a depth-estimation CNN was trained for each generative model in order to compute SIDE and MAD. If it is convenient for you, could you share the training code?
I have found the structure of the depth-estimation CNN in ShadeGAN. Did you use the same model structure as ShadeGAN? And what data pre-processing methods and loss functions did you use to train this CNN?
Thank you very much.
Yes, we use the same model structure as ShadeGAN. For the details, you can refer to the Python script model.zip in ShadeGAN (quite similar to the one used here). By the way, our model is modified from https://github.com/elliottwu/unsup3d/blob/master/unsup3d/model.py
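For readers without access to model.zip, an encoder-decoder depth network in the spirit of unsup3d's `EDDeconv` looks roughly like this. The layer counts and widths below are placeholders of my own, not the exact ShadeGAN/unsup3d configuration:

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Illustrative image-to-depth encoder-decoder: strided convs
    downsample to a bottleneck, transposed convs upsample back to
    a single-channel depth map."""
    def __init__(self, cin=3, zdim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(cin, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, zdim, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(zdim, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # e.g. (B, 3, 64, 64) RGB in -> (B, 1, 64, 64) depth out
        return self.decoder(self.encoder(x))
```

The real network is deeper and uses group norm; check the script from the authors for the exact layers and losses.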
Thank you for your reply! Could you please check whether the following steps are correct?
1. Replace `model.py` in unsup3d with the one you supply.
2. Use the `staged_forward` function in GOF to export both the image and the depth.
3. Make a synface-like dataset and modify the path in the unsup3d config file.
4. Train and test.
And I have one more question: does the depth exported by the `staged_forward` function require some numerical processing?
Looking forward to your reply. Thank you!
Yes, these steps are correct. The depth processing is: `depth = 1 - (depth_map.clamp(min=0.9, max=1.1) - 0.9) / 0.2`
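In other words, the raw depth is clamped to [0.9, 1.1] and mapped to an inverted [0, 1] range (near surfaces become 1, far ones 0). A minimal NumPy sketch of that one-liner (the function name and keyword defaults are mine):

```python
import numpy as np

def normalize_depth(depth_map, near=0.9, far=1.1):
    """Clamp raw GOF depth to [near, far] and map it to an
    inverted [0, 1] range, matching:
    1 - (depth_map.clamp(min=0.9, max=1.1) - 0.9) / 0.2"""
    d = np.clip(depth_map, near, far)
    return 1.0 - (d - near) / (far - near)
```

For example, raw depths 0.9, 1.0, 1.1 map to 1.0, 0.5, 0.0.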
Thank you for your reply.
I noticed another issue. Although you have provided the early-training model for Carla and the trained model in another issue, you have not provided the corresponding curriculum; could you please add it?
Thank you!
Could you please release the code for calculating the scale-invariant depth error (SIDE) and mean geodesic distance (MAD)?
Thank you very much.