Closed jatentaki closed 1 year ago
> I randomly sample 1000 shapes from the full validation set and run the evaluation on the subset.
May I ask what that subset is? I'm looking to compare my method with LION as reliably as possible, in the absence of model weights to run the evaluation myself.
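For reproducibility on my end, even knowing the seed and sampling procedure would help. A minimal sketch of what I have in mind (hypothetical: `val_ids` stands in for the real list of validation model IDs, and the seed is a placeholder):

```python
import numpy as np

# Hypothetical sketch: draw a reproducible 1000-shape subset from a
# validation split. `val_ids` is a placeholder for the real ID list.
val_ids = [f"shape_{i:05d}" for i in range(8000)]

rng = np.random.default_rng(seed=0)  # fixed seed so the subset is reproducible
subset = rng.choice(val_ids, size=1000, replace=False)
```

If the subset was drawn this way with a known seed, anyone could reconstruct it exactly without the model weights.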
Another question: the dataset, as downloaded from https://github.com/autonomousvision/convolutional_occupancy_networks#shapenet, includes "scale" and "loc" parameters for each point cloud. Do you apply those in your data loaders, or do you model the "raw" data?
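To make the question concrete, these are the two conventions I could imagine for those fields (an assumption on my part, not something confirmed by either repo): the stored points are either already normalized, with `points * scale + loc` recovering the original coordinates, or the inverse.

```python
import numpy as np

# Hypothetical sketch of the loc/scale conventions in question. Which
# one (if either) the data loaders apply is exactly what I'm asking.
rng = np.random.default_rng(1)
points = rng.uniform(-0.5, 0.5, size=(2048, 3)).astype(np.float32)  # stored cloud
loc = np.array([0.1, -0.2, 0.05], dtype=np.float32)  # per-shape translation
scale = np.float32(1.7)                              # per-shape scale

raw = points * scale + loc        # de-normalize back to original coordinates
normalized = (raw - loc) / scale  # inverse mapping recovers the stored points
```

Knowing which side of this mapping the model sees matters for a fair comparison, since the metrics are not scale-invariant.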
I would like to ask how you evaluated your code on ShapeNet-vol. Since the benchmark's complexity is quadratic in the number of examples, and the evaluation already takes quite a while with just the airplane category of ShapeNet-pointflow, I am wondering how you computed the metrics for all categories combined.
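For context on where the quadratic cost comes from: MMD/COV-style metrics need a distance between every (generated, reference) pair, so the work grows as N_gen × N_ref pairwise distance evaluations. A toy illustration, with a cheap mean-squared distance standing in for the much costlier Chamfer/EMD distance actually used:

```python
import numpy as np

# Toy illustration of why the evaluation is quadratic: every
# (generated, reference) pair needs a distance. A cheap MSE stands in
# for the Chamfer/EMD distances used by the real benchmark.
rng = np.random.default_rng(0)
gen = rng.normal(size=(50, 2048, 3))  # 50 generated clouds
ref = rng.normal(size=(40, 2048, 3))  # 40 reference clouds

dists = np.empty((len(gen), len(ref)))
for i, g in enumerate(gen):           # N_gen * N_ref distance evaluations
    for j, r in enumerate(ref):
        dists[i, j] = np.mean((g - r) ** 2)

mmd = dists.min(axis=0).mean()  # MMD: best generated match per reference, averaged
cov = len(np.unique(dists.argmin(axis=1))) / len(ref)  # COV: fraction of refs matched
```

With thousands of shapes per side and Chamfer distance per pair, this matrix is what dominates the runtime, which is why I suspect some subsampling or per-category split must be involved.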