Hi, I'm very interested in your work and I'm trying to retrain the chair parsing network. However, after following all the instructions in the README and using the same arguments, my converged model performs much worse on ShapeNet v2. The visualization of the input X (the voxels fed into the model) looks like this:
while the ground-truth mesh looks like this:
Is this input sparsity intended, or is there a version problem with the release? Reading the dataloader, I found that the sparsity comes from the occupancy-grid voxelizer, which only marks cells that contain mesh vertices (faces are never filled in). Is this by design?
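To illustrate what I mean, here is a minimal NumPy sketch (my own hypothetical re-implementation, not the repo's actual code): `voxelize_vertices` mimics the behavior I see in the dataloader, where only cells containing a vertex become occupied, while `voxelize_surface` first samples points on each triangle so the faces are covered too, giving a much denser grid:

```python
import numpy as np

def voxelize_vertices(points, res=32):
    """Occupancy grid that marks only the cells containing the given
    points -- mimics voxelizing mesh vertices alone (hypothetical)."""
    grid = np.zeros((res, res, res), dtype=bool)
    # Normalize points into [0, 1) with a uniform scale, then bin.
    lo = points.min(axis=0)
    scale = np.ptp(points, axis=0).max() + 1e-9
    idx = np.clip(((points - lo) / scale * res).astype(int), 0, res - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def voxelize_surface(vertices, faces, res=32, samples_per_face=50):
    """Denser occupancy grid: sample points uniformly on each triangle
    (barycentric sampling) before binning, so faces are covered."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    r1 = np.sqrt(np.random.rand(len(faces), samples_per_face, 1))
    r2 = np.random.rand(len(faces), samples_per_face, 1)
    pts = ((1 - r1) * v0[:, None]
           + r1 * (1 - r2) * v1[:, None]
           + r1 * r2 * v2[:, None]).reshape(-1, 3)
    return voxelize_vertices(np.vstack([vertices, pts]), res)

# A single large triangle: vertex-only voxelization occupies at most
# 3 cells, while surface sampling fills many cells across the face.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])
sparse = voxelize_vertices(verts).sum()
dense = voxelize_surface(verts, tris).sum()
```

With only vertices binned, a large flat chair seat contributes just a handful of occupied cells, which matches the sparse input I'm seeing.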