This repository contains the source code for the paper "AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation". The network is able to synthesize a mesh (point cloud + connectivity) from a low-resolution point cloud or from an image.
When trying to do 3D-3D autoencoding using your pretrained model, I get very strange results. Because your code base is a little cluttered and confusing, I extracted the code for the AtlasNet model and put it into a single file (atlas.txt).
Then I create a simple sanity-check example:
import numpy as np
import torch
import matplotlib.pyplot as plt
from atlas import get_trained_atlas
atlas = get_trained_atlas()  # using the 25_squares pretrained model

# Load the point cloud and randomly subsample 2500 points
shape = np.load("1016f4debe988507589aae130c1f06fb.points.ply.npy")
shape = torch.from_numpy(shape[np.random.choice(shape.shape[0], 2500), :]).float()

# Build a batch of two identical clouds: [batch size, channels, points] -> [2, 3, 2500]
points = shape.unsqueeze(0).permute(0, 2, 1)
input = torch.cat((points, points)).cuda()

# Reconstruct and flatten the first item of the batch back to [points, 3]
shape_decoding = atlas(input)[0].reshape(3, -1).T
I construct a batch of size 2 by duplicating the same input because AtlasNet cannot handle a batch size of 1 in training mode, and I use training mode so that I get the point cloud and not the mesh.
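The same duplication can also be written with expand() instead of cat(); this is just a rough sketch of the workaround (it assumes atlas takes the same [2, 3, 2500] input and that only the first reconstruction of the batch is kept):

# Variant of the batch-of-2 workaround using expand() instead of cat()
single = shape.unsqueeze(0).permute(0, 2, 1)            # [1, 3, 2500]
batched = single.expand(2, -1, -1).contiguous().cuda()  # [2, 3, 2500], both entries identical
shape_decoding = atlas(batched)[0].reshape(3, -1).T     # keep only the first reconstruction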
The point cloud that I used: 1016f4debe988507589aae130c1f06fb.points.ply.npy.zip
And here is the result I got (Original vs. Reconstructed):
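For reference, this is roughly how I plot the two clouds next to each other (just a sketch; it assumes shape and shape_decoding hold [2500, 3] points as above):

from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (only needed on older matplotlib versions)

orig = shape.numpy()
recon = shape_decoding.detach().cpu().numpy()

# One 3D scatter plot per cloud, side by side
fig = plt.figure(figsize=(10, 5))
for i, (title, pts) in enumerate([("Original", orig), ("Reconstructed", recon)]):
    ax = fig.add_subplot(1, 2, i + 1, projection="3d")
    ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=1)
    ax.set_title(title)
plt.show()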
Do you have any idea why this problem occurs? I also tried normalizing the input point cloud, but it had no effect.
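For completeness, by "normalizing" I mean something roughly like the following (a minimal sketch: centering at the origin and scaling into the unit sphere, applied before building the batch; whether this matches the normalization of the training data is only my assumption):

# Center the cloud at the origin and scale it into the unit sphere
shape = shape - shape.mean(dim=0, keepdim=True)
shape = shape / shape.norm(dim=1).max()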