Weipeilang closed this issue 3 years ago
Hi! Cars are a bit tricky with the canonical depth maps we use to represent shapes in the current model. Depth maps cannot capture the full 3D shape of a car due to occlusion. In our car experiments, we render ShapeNet cars from random top views. The model then automatically discovers the canonical viewpoint to be the top view of the cars, and those top-view depth maps can therefore capture a fairly large portion of the car surface. For real car datasets and many other objects, this canonical depth map representation will not work well and will need to be replaced with other 3D representations.
Although our synthetic car results may not be practically useful, they can be helpful for others to better understand the method and its limitations. So I plan to release our rendered car images and the trained model as well, hopefully next week. Thanks!
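To illustrate why a single canonical depth map only covers the visible surface, here is a minimal sketch (not the repo's actual code; it assumes a simple pinhole camera with a hypothetical field of view) that unprojects a depth map into a point cloud. Each pixel yields exactly one 3D point, so anything occluded from the canonical viewpoint, e.g. the underside of a car seen from the top, is simply absent:

```python
import numpy as np

def unproject_depth(depth, fov_deg=60.0):
    """Unproject an HxW depth map into an (H*W, 3) point cloud.

    One point per pixel: only the surface visible from this
    viewpoint is recovered, which is why a single canonical
    depth map cannot represent the full 3D shape of a car.
    """
    h, w = depth.shape
    # focal length in pixels from the (assumed) horizontal field of view
    f = 0.5 * w / np.tan(0.5 * np.deg2rad(fov_deg))
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = (xs - 0.5 * w) / f * depth
    y = (ys - 0.5 * h) / f * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# toy 4x4 depth map at constant depth 2.0
pts = unproject_depth(np.full((4, 4), 2.0))
print(pts.shape)  # (16, 3)
```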
Any updates on when the trained car model and rendered images will be available?
@elliottwu Hi, thanks for open-sourcing your code. Are there plans to release the car training dataset and pretrained car models?
Finally had the chance to look into this again. The dataset and experiment config for the synthetic cars are now released.
Hello, thank you very much for open-sourcing this. I noticed that the code only covers faces and cats, but the paper also includes cars. Could you explain how to train on cars? Thank you very much! @elliottwu