Closed mmzsolt closed 4 years ago
The released code and models are optimized for single-view training and inference. Currently we don't plan to release a multi-view version separately. However, you could modify the data loader so that it loads the same subject from different viewpoints and consolidates the 3D embeddings in SurfaceClassifier (see the argument called num_views in SurfaceClassifier). Some of the comments are actually outdated. I will update them later to be more comprehensive.
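The data-loader change described above can be sketched as follows. This is a hedged illustration, not the repository's code: `collate_multiview` is a hypothetical helper, and the dictionary keys (`'img'`, `'calib'`) and tensor shapes are assumptions about what a per-view sample might contain. The idea is simply to stack the views of one subject along the batch axis so that downstream modules can treat the view axis as part of the batch.

```python
import torch

def collate_multiview(views):
    # views: a list of num_views sample dicts, each holding the same
    # subject rendered from a different viewpoint (hypothetical loader
    # output; key names are assumptions).
    # Stacking along a new batch axis yields [V, 3, H, W] images and
    # matching per-view calibration matrices [V, 4, 4].
    images = torch.stack([v['img'] for v in views], dim=0)
    calibs = torch.stack([v['calib'] for v in views], dim=0)
    return {'img': images, 'calib': calibs}

# Toy example: three random "views" of one subject.
views = [{'img': torch.randn(3, 512, 512), 'calib': torch.eye(4)}
         for _ in range(3)]
batch = collate_multiview(views)
print(batch['img'].shape)   # torch.Size([3, 3, 512, 512])
print(batch['calib'].shape)  # torch.Size([3, 4, 4])
```

With this layout, a batch of B subjects with V views each becomes a tensor of B*V entries, which matches how a `num_views`-aware classifier can later regroup and fuse them.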
Thanks for clarifying.
@mmzsolt did you figure out how to apply the multi-view setting?
@caxapexac No, I did not try to do this. I experimented a bit with generating meshes from viewpoints other than the basic frontal image with arms next to the body, but the results were not convincing. I did not dig deep enough to determine whether this is more of an encoding or a decoding issue.
Ultimately, I think I will solve my scanning problems with an active sensing solution, but it was good fun to try this method.
@shunsukesaito can you give me an example of using multi-view?
@shunsukesaito
> The released code and models are optimized for single-view training and inference. Currently we don't plan to release a multi-view version separately. However, you could modify the data loader so that it loads the same subject from different viewpoints and consolidates the 3D embeddings in SurfaceClassifier (see the argument called num_views in SurfaceClassifier). Some of the comments are actually outdated. I will update them later to be more comprehensive.
Do you mean the comments as follows?
```python
def forward(self, feature):
    '''
    :param feature: list of [BxC_inxHxW] tensors of image features
    :param xy: [Bx3xN] tensor of (x,y) coodinates in the image plane
    :return: [BxC_outxN] tensor of features extracted at the coordinates
    '''
```
in the script SurfaceClassifier.py.
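For reference, the multi-view consolidation the author mentions can be sketched as pooling per-view point features across the view axis. This is a hedged sketch, not the repository's implementation: `consolidate_views` is a hypothetical function, mean-pooling is an assumption about the fusion operator, and the released SurfaceClassifier may apply its `num_views` fusion at a specific intermediate layer of the MLP.

```python
import torch

def consolidate_views(feat, num_views):
    # feat: [B * num_views, C, N] per-view point features, where the
    # views of each subject are contiguous along the batch axis.
    # Average the embeddings across views so every subject ends up with
    # a single fused [C, N] feature. (Mean-pooling here is an assumed
    # fusion choice; the repository code may differ in where and how
    # it fuses.)
    B = feat.shape[0] // num_views
    feat = feat.view(B, num_views, *feat.shape[1:])  # [B, V, C, N]
    return feat.mean(dim=1)                          # [B, C, N]

# Six batch entries = 2 subjects x 3 views.
fused = consolidate_views(torch.randn(6, 256, 5000), num_views=3)
print(fused.shape)  # torch.Size([2, 256, 5000])
```

Mean-pooling is permutation-invariant over views, which is why a fixed per-subject view ordering in the loader is enough; no view index needs to be passed to the classifier.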
Hi, it would be great if you could document how to use this method in the multi-view setting and also provide the model needed. :)
Thank you, amazing work.