vishwa91 / wire

wavelet implicit neural representations
MIT License

The data for Representing pointclouds with WIRE #11

Open Aiolus-X opened 11 months ago

Aiolus-X commented 11 months ago

Hello! I want to ask how to convert the .xyz data from The Stanford 3D Scanning Repository to the .mat data used for the experiments in this paper. Could you share the conversion code? Thank you very much!

danivelikova commented 10 months ago

Hello, I am also interested in running the 3D experiment. Can you please provide the .mat data or the conversion code?

vishwa91 commented 10 months ago

@danivelikova please check the data folder; we included a 512x512x512 occupancy volume of the Thai statue.

@Aiolus-X please see the simple script below that generates .mat files from .ply files. Hope that helps!

#!/usr/bin/env python

import sys
import numpy as np
from scipy import io

# Please add the path to ACORN github repository:
# https://github.com/computational-imaging/ACORN
sys.path.append('/path/to/ACORN/repository')

# ACORN imports
import dataio

def main(expname='thai_statue', res=512):
    # expname: name of the ply file (without extension)
    # res: size of the occupancy volume (res, res, res)

    print('Loading mesh dataset')
    # NOTE: this call assumes a modified OccupancyDataset (see later in this
    # thread); the stock ACORN version accepts only the filename.
    ocd = dataio.OccupancyDataset('../data/ply/%s.ply'%expname,
                                  nsamples=0, res=res)

    axis_coords = np.linspace(-1, 1, res)
    X, Y, Z = np.meshgrid(axis_coords, axis_coords, axis_coords)

    coords = np.hstack((X.reshape(-1, 1),
                        Y.reshape(-1, 1),
                        Z.reshape(-1, 1)))

    print('Evaluating occupancy')
    occupancy = ocd.evaluate_occupancy(coords)

    occupancy = occupancy.reshape(res, res, res)

    print('Saving data')
    mdict = {'hypercube': (occupancy*255).astype(np.uint8)}

    io.savemat('%s_%d.mat'%(expname, res), mdict)

if __name__ == '__main__':
    main()
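For reference, the saved volume can be read back like this (a minimal sketch; the filename assumes the defaults above, and the 'hypercube' key matches the script):

import numpy as np
from scipy import io

# load the occupancy volume written by the script above
hypercube = io.loadmat('thai_statue_512.mat')['hypercube']
occupancy = hypercube.astype(np.float32) / 255.0  # back to [0, 1]
print(occupancy.shape)  # (512, 512, 512)
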
BowenLyu commented 9 months ago

hello!
I want to know whether you have ever tried to reconstruct a surface directly from point clouds, as SIREN does.
I changed your code to follow SIREN's sampling approach, i.e., F(x) = 0 if x is a point of the input point cloud and |F(x)| > 0 if x is an arbitrary spatial point, together with the eikonal loss (the same way as SIREN). However, I found it did not converge. I followed your settings in wire_occupancy.py: omega0 = 10.0, sigma0 = 40.0.

Can you give some advice? Thanks!
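For concreteness, here is a minimal sketch of the losses described above (the helper name, loss weights, and alpha are placeholders, not from either repo):

import torch

def siren_style_loss(model, surf_pts, free_pts, alpha=100.0):
    # Constraints described above: F(x) = 0 on the input point cloud,
    # |F(x)| > 0 off the surface, plus the eikonal term |grad F| = 1.
    pts = torch.cat((surf_pts, free_pts), dim=0).detach().requires_grad_(True)
    f = model(pts)
    n_surf = surf_pts.shape[0]
    loss_surf = f[:n_surf].abs().mean()                      # F(x) = 0 on surface
    loss_free = torch.exp(-alpha * f[n_surf:].abs()).mean()  # push |F(x)| > 0 elsewhere
    grad = torch.autograd.grad(f.sum(), pts, create_graph=True)[0]
    loss_eik = (grad.norm(dim=-1) - 1.0).abs().mean()        # eikonal loss
    return loss_surf + loss_free + 0.1 * loss_eik

Note that SIREN's full point-cloud loss additionally aligns grad F with the point normals on the surface, which may matter for convergence.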

csyhping commented 5 months ago

@vishwa91, thanks for sharing your preprocessing code. I guess there is a typo in ocd = dataio.OccupancyDataset('../data/ply/%s.ply'%expname, nsamples=0, res=res), because it seems dataio.OccupancyDataset only accepts one parameter, the filename.

I also want to ask: ACORN samples 20,000,000 points; do you use the same sample count for MINER and WIRE? Sampling 20,000,000 points takes a very long time (several hours for one shape).

vishwa91 commented 5 months ago

@csyhping thank you for catching that. I did modify the OccupancyDataset class as follows.

# Imports assumed to be present in ACORN's dataio.py, where this class lives:
import numpy as np
import trimesh
from tqdm import tqdm
from scipy.spatial import cKDTree as spKDTree

import inside_mesh  # ships with the ACORN repository

class OccupancyDataset():
    def __init__(self, pc_or_mesh_filename, nsamples=20000000, res=2048):
        self.intersector = None
        self.kd_tree = None
        self.kd_tree_sp = None
        self.mode = None

        if not pc_or_mesh_filename:
            return

        print("Dataset: loading mesh")
        self.mesh = trimesh.load(pc_or_mesh_filename, process=False, force='mesh', skip_materials=True)

        def normalize_mesh(mesh):
            print("Dataset: scaling parameters: ", mesh.bounding_box.extents)
            mesh.vertices -= mesh.bounding_box.centroid
            mesh.vertices /= np.max(mesh.bounding_box.extents / 2)

        normalize_mesh(self.mesh)

        self.intersector = inside_mesh.MeshIntersector(self.mesh, res)
        self.mode = 'volume'

        print('Dataset: sampling points on mesh')
        if nsamples > 0:
            samples = trimesh.sample.sample_surface(self.mesh, nsamples)[0]

            self.kd_tree_sp = spKDTree(samples)

    def __len__(self):
        return 1

    def evaluate_occupancy(self, pts):
        # Batch the queries to avoid out-of-memory errors
        batchsize = int(1e6)
        npts = pts.shape[0]
        occupancy_list = []
        for b_idx in tqdm(range(0, npts, batchsize)):
            b_idx2 = min(npts, batchsize+b_idx)
            pts_sub = pts[b_idx:b_idx2, :]
            query_sub = self.intersector.query(pts_sub).astype(int).reshape(-1, 1)
            occupancy_list.append(query_sub)

        occupancy = np.vstack(occupancy_list)
        return occupancy

I sampled a total of 1024x1024x1024 points. It does take a lot of time, but fortunately it is a one-time operation. Hope that helps!
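As a concrete usage sketch with the driver from earlier in the thread (res = 1024 means 1024^3 query points, so the coordinate array alone takes roughly 13 GB even in float32):

import numpy as np

res = 1024
ocd = OccupancyDataset('../data/ply/thai_statue.ply', nsamples=0, res=res)

axis_coords = np.linspace(-1, 1, res, dtype=np.float32)
X, Y, Z = np.meshgrid(axis_coords, axis_coords, axis_coords)
coords = np.hstack((X.reshape(-1, 1), Y.reshape(-1, 1), Z.reshape(-1, 1)))

occupancy = ocd.evaluate_occupancy(coords).reshape(res, res, res)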

csyhping commented 5 months ago

@vishwa91 Thanks for getting back to me so quickly. Do you observe any apparent difference between resolutions of 512 and 1024? I remember the resolution in the paper was 1024 (the same as in your reply), while the Lucy and Thai examples were 512.

vishwa91 commented 5 months ago

For obtaining metrics like IoU, I think 512 and 1024 are similar. However, I find that a 1024 cube visually has a lot more detail than a 512 one. That also explains why I used 1024 for all visual results and 512 for metric comparisons. Hope that helps!
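For reference, the IoU between a predicted and a ground-truth occupancy cube can be computed as below (a minimal sketch; the function name and threshold are mine, not from the repo):

import numpy as np

def occupancy_iou(pred, gt, thresh=0.5):
    # intersection over union of two (res, res, res) occupancy volumes
    p = pred > thresh
    g = gt > thresh
    return np.logical_and(p, g).sum() / np.logical_or(p, g).sum()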

csyhping commented 5 months ago

@vishwa91 Thank you very much!