Open xbq1994 opened 1 year ago
We will push another version to fix these path mismatches.
1. The code in this repo uses Flan-T5. The weights should be downloaded automatically by the scripts.
2. The self.feature dir should be "features", and the self.voxel dir should be "points" from the scene data.
3. Replace the files with the JSON files in the Google Drive.
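If it helps, the corrected directory names from item 2 can be sketched like this (the "scene_data" root here is an assumption, not a path from the repo; adjust to your local layout):

```python
import os

# Sketch of the directory names described above; "scene_data" is an
# assumed root directory, not a path taken from the repo.
scene_root = "scene_data"
feature_dir = os.path.join(scene_root, "features")  # self.feature dir
voxel_dir = os.path.join(scene_root, "points")      # self.voxel dir
```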
For "3. replace the files with the json file in the google drive": the JSON files in the Google Drive are 'data_part1_all_objaverse.json' and 'data_part2_scene.json'. In 3dvqa_ft.yaml there are train, val, and test splits. Do I need to split 'data_part1_all_objaverse.json' and 'data_part2_scene.json' into train/val/test sets in an 8:1:1 ratio?
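In case an explicit 8:1:1 split turns out to be needed, a minimal sketch could look like this (split_811 is a hypothetical helper, not part of the repo, and it assumes each annotation JSON holds a top-level list of samples):

```python
import json
import random

def split_811(items, seed=0):
    # Hypothetical helper: shuffle and cut a list into 80/10/10 train/val/test.
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_train = int(len(items) * 0.8)
    n_val = int(len(items) * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Example usage (assumes the annotation JSON is a list of samples):
# with open("data_part2_scene.json") as f:
#     train, val, test = split_811(json.load(f))
```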
Thanks! Do I need to split 'data_part1_all_objaverse.json' and 'data_part2_scene.json' into train/val/test sets in an 8:1:1 ratio? I also found that you have uploaded "voxelized_features_sam_nonzero_preprocess.zip" and "voxelized_voxels_sam_nonzero_preprocess.zip". What are these files for?
Are you using the voxelized point cloud as input? I thought the paper used a continuous representation. For the voxels, how is the discretization done?
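For reference, a common way to discretize continuous 3D points into voxel indices is to quantize each coordinate by a fixed voxel size. This is only a sketch of standard voxelization, not necessarily what this repo or the paper actually does:

```python
import math

def voxelize(points, voxel_size=0.05):
    # Generic voxelization sketch: map each continuous (x, y, z) point to
    # integer voxel indices by flooring coordinate / voxel_size.
    return [tuple(math.floor(c / voxel_size) for c in p) for p in points]
```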
Hi, when I run the code I have a few questions:
1. I downloaded the weights locally from "https://huggingface.co/facebook/opt-2.7b" and replaced 'opt_model' in the code with the local weight path, but it reports that the weight and model sizes don't match.
2. Which directory should I place the downloaded dataset in?
3. I found that the three annotation files referenced in 3dvqa_ft.yaml do not exist. How can I obtain them?
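On question 2, a small sanity check on the dataset root can confirm the layout. The subdirectory names "features" and "points" follow the maintainer's reply in this thread, and check_scene_layout is a hypothetical helper, not repo code:

```python
import os

def check_scene_layout(root):
    # Hypothetical sanity check: verify the scene data root contains the
    # "features" and "points" subdirectories mentioned in the reply.
    return {d: os.path.isdir(os.path.join(root, d))
            for d in ("features", "points")}
```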