Closed lxzyuan closed 2 months ago
Hi,
Which boxes.npz files are you using?
Here it reads the num_classes from the boxes.npz dataset: https://github.com/sherwinbahmani/cc3d/blob/928376b398eea892fb878e0615ab15e0aad5a9d7/training/dataset.py#L229
So if you use the bedrooms npz files with the wrong num_classes, it will not match.
If this is not the case, what happens if you set add_floor_class and add_none_class to False?
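To illustrate the suggestion above, here is a minimal sketch of the kind of sanity check you can run on a boxes.npz file before training or generation. The key name `class_labels` and the one-extra-class-per-flag convention are assumptions for illustration; check training/dataset.py for the keys and logic cc3d actually uses.

```python
import io
import numpy as np

def check_num_classes(npz_bytes, expected_num_classes,
                      add_floor_class=True, add_none_class=True):
    """Return True if the npz's class count (plus optional floor/none
    classes) matches the model's expected num_classes. The 'class_labels'
    key is an assumed name, not necessarily the one cc3d uses."""
    data = np.load(io.BytesIO(npz_bytes))
    base = data["class_labels"].shape[-1]  # one-hot class dimension
    total = base + int(add_floor_class) + int(add_none_class)
    return total == expected_num_classes

# Build a tiny synthetic boxes.npz in memory: 4 boxes, 21 base classes.
buf = io.BytesIO()
np.savez(buf, class_labels=np.eye(21)[np.random.randint(0, 21, size=4)])
print(check_num_classes(buf.getvalue(), expected_num_classes=23))  # 21 + 2
```

Running this against your actual bedrooms vs. living-rooms npz files would show immediately whether the class counts line up with the checkpoint.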
Thank you for your reply. It may be that there is something wrong with my 3D-FRONT dataset preprocessing, so the error occurred.
I have re-run the bedroom data processing program, and it has now successfully run the generate.sh script. I am currently re-running the living room preprocessing program, and after it's finished, I will test it again.
Thank you again for your reply.
I can also successfully run generate.sh for the living room dataset. Thanks.
I have another question. In evaluate.sh, why is num_layout_indices 5515 for the Bedrooms dataset and 2613 for the Living rooms dataset, and why is num_images 50000? Are the metrics computed by this script the same as those in the paper?
Yes, these are the same settings used for the numbers in the paper. FID is usually calculated over 50000 images, as FID can be noisy for small numbers of images. 5515 and 2613 are the respective numbers of available scenes for bedrooms and living rooms. Since we want to generate 50000 diverse images, we sample all scenes with random seeds and random camera poses. FID measures the similarity between the training dataset images and the generated images, hence we use training poses and training scene layouts, following common practice in the GAN literature.
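The sampling scheme described above can be sketched as follows. This is illustrative, not the actual cc3d evaluation code: it cycles through all scene layouts so every scene is covered, and pairs each pick with a fresh random seed (standing in for a random camera pose) to reach 50000 diverse samples.

```python
import numpy as np

def sample_eval_pairs(num_scenes, num_images, seed=0):
    """Return (scene_id, seed) pairs: cycle through every scene so all
    layouts are covered, then attach an independent random seed per image."""
    rng = np.random.default_rng(seed)
    scene_ids = np.arange(num_images) % num_scenes   # covers every scene
    seeds = rng.integers(0, 2**31 - 1, size=num_images)
    return list(zip(scene_ids.tolist(), seeds.tolist()))

pairs = sample_eval_pairs(num_scenes=2613, num_images=50000)
print(len(pairs), len({s for s, _ in pairs}))  # 50000 2613
```

With 2613 living-room scenes and 50000 images, each scene is rendered roughly 19 times under different seeds/poses.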
Thanks.
When I run evaluate.sh to compute FID on the Living rooms dataset, I get the following error:
Setting up PyTorch plugin "bias_act_plugin"... Done.
Setting up PyTorch plugin "upfirdn2d_plugin"... Done.
Traceback (most recent call last):
  File "/data/cedar/mycode/cc3d/generate_dataset.py", line 112, in <module>
    generate_sample_videos(**vars(args))
  File "/data/cedar/mycode/cc3d/generate_dataset.py", line 81, in generate_sample_videos
    c, _, seq_name, = get_eval_labels(training_set, layout_idx=seq_idx, coords_idx=coords_idx, num_eval_seeds=1, device=device, out_image=False)
  File "/data/cedar/mycode/cc3d/generate.py", line 41, in get_eval_labels
    eval_c = [[training_set.get_label(np.floor(coords[0][0]/training_set.img_per_scene_ratio).astype(int), j) for (i, j) in coords] for coords in eval_indices]
  File "/data/cedar/mycode/cc3d/generate.py", line 41, in <listcomp>
    eval_c = [[training_set.get_label(np.floor(coords[0][0]/training_set.img_per_scene_ratio).astype(int), j) for (i, j) in coords] for coords in eval_indices]
  File "/data/cedar/mycode/cc3d/generate.py", line 41, in <listcomp>
    eval_c = [[training_set.get_label(np.floor(coords[0][0]/training_set.img_per_scene_ratio).astype(int), j) for (i, j) in coords] for coords in eval_indices]
  File "/data/cedar/mycode/cc3d/training/dataset.py", line 287, in get_label
    label = self._get_raw_labels(self._raw_idx[idx], coords_idx, traj=traj)
  File "/data/cedar/mycode/cc3d/training/dataset.py", line 259, in _get_raw_labels
    self._raw_labels = self._load_raw_labels(raw_idx, coords_idx, traj=traj) if self._use_labels else None
  File "/data/cedar/mycode/cc3d/training/dataset.py", line 376, in _load_raw_labels
    fname = self._label_fnames[raw_idx]
IndexError: list index out of range
I used the living_rooms.pkl you provided. When I lower the value of num_layout_indices, it works fine. Is something wrong, perhaps with my living room dataset preprocessing?
Follow-up: I rendered the living room dataset using the labels generated by add_vertices_calc.py, but used the labels generated by normalize_dataset.py in cc3d. Do the two sets of labels have to be the same? Could that mismatch be causing this error? I will verify it soon.
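For reference, the IndexError above fires because num_layout_indices indexes into the list of preprocessed label files, so it must not exceed the number of scenes that actually exist on disk. A minimal guard sketch (the function name is hypothetical, not part of cc3d):

```python
def check_layout_indices(num_label_files, num_layout_indices):
    """Raise early if num_layout_indices exceeds the number of
    preprocessed scenes (len(self._label_fnames) in training/dataset.py),
    instead of failing later with an opaque IndexError."""
    if num_layout_indices > num_label_files:
        raise ValueError(
            f"num_layout_indices={num_layout_indices} exceeds the "
            f"{num_label_files} preprocessed scenes; re-check preprocessing."
        )

check_layout_indices(813, 813)       # fine
try:
    check_layout_indices(813, 2613)  # the mismatch seen in this thread
except ValueError as e:
    print("caught:", e)
```

With only 813 preprocessed living rooms, any num_layout_indices above 813 reproduces the error, which matches the observation that lowering the value makes the script run.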
How many scenes did it generate for you after preprocessing? Also, did you render the dataset? I think the script currently assumes that the whole dataset exists and has been rendered.
After preprocessing, there are 813 living rooms in total. The rendering I mentioned is step 4 of create_dataset.sh in 3D-FRONT.
It should be 2613 scenes. How many scenes are there after the first step (# 1. Pre-process dataset following ATISS)? Also check whether any errors occurred during the run.
The first step generated 813 scenes. How many scenes should be generated?
2613
Ok, thanks for your reply, I will check the scene generation process.
Please reopen if this issue is not solved.
When I run bash generate.sh using the living_rooms.pkl pre-trained model from the Google Drive you provided, I get the following error:
After checking, I found that extra_class = -2. Because add_floor_class and add_none_class are both True, the dimension of x does not match num_classes, so the error occurred. Is there a solution?
Thanks!
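The arithmetic behind the extra_class = -2 observation above can be sketched as follows. This is illustrative, not the exact cc3d formula: if each enabled flag is expected to add one class on top of the dataset's base count, but the npz already includes those classes, the difference goes negative.

```python
def extra_classes(model_num_classes, dataset_num_classes,
                  add_floor_class, add_none_class):
    """Illustrative: how many classes the model has beyond what the
    dataset plus the enabled floor/none flags account for. A negative
    result means the dataset already includes the extra classes."""
    expected = dataset_num_classes + int(add_floor_class) + int(add_none_class)
    return model_num_classes - expected

# A checkpoint with 23 classes against an npz that already bakes in the
# two extra classes (23 + 2 after flags) yields -2, as observed above.
print(extra_classes(23, 23, True, True))  # -2
```

This is consistent with the earlier advice in the thread: either regenerate the npz files with the correct base class count, or set add_floor_class and add_none_class to False so the flags are not applied twice.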