sherwinbahmani / cc3d

CC3D: Layout-Conditioned Generation of Compositional 3D Scenes
https://sherwinbahmani.github.io/cc3d

Error running bash generate.sh with pre-trained checkpoints #13

Closed: lxzyuan closed this issue 2 weeks ago

lxzyuan commented 1 month ago

When I run bash generate.sh with the living_rooms.pkl pre-trained model from the Google Drive link you provided, I get the following error:

  File "/data/cedar/mycode/cc3d/generate.py", line 156, in <module>
    generate_sample_videos(**vars(args))
  File "/data/cedar/mycode/cc3d/generate.py", line 135, in generate_sample_videos
    G2(z=z, c=c, layout_idx=layout_idx, out_path=out_dir_program, img=imgs, max_coords=max_coords, training_set=training_set, label_names=label_names,z_seed=z_seed, out_video=out_video)
  File "/data/cedar/mycode/cc3d/renderer.py", line 31, in __call__
    outputs = getattr(self, f"render_{self.program}")(*args, **kwargs)
  File "/data/cedar/mycode/cc3d/renderer.py", line 155, in render_fake_single
    self.render_fake(*args, **kwargs)
  File "/data/cedar/mycode/cc3d/renderer.py", line 130, in render_fake
    img = self.generator(z, c_i, noise_mode='none', rand_render=False, norm_depth=True,
  File "/home/cedar/miniconda3/envs/cc3d/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/cedar/mycode/cc3d/training/generator.py", line 392, in forward
    features, masks, semantics, occupancy_grids = self.create_feature_grid(random_latents=random_latents, **boxes)
  File "/data/cedar/mycode/cc3d/training/generator.py", line 826, in create_feature_grid
    class_label_embed = self.class_embedding(class_labels[i,j].unsqueeze(0)).squeeze(0)
  File "/home/cedar/miniconda3/envs/cc3d/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/cedar/mycode/cc3d/training/generator.py", line 1072, in forward
    x = self.net(x)
  File "/home/cedar/miniconda3/envs/cc3d/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/cedar/miniconda3/envs/cc3d/lib/python3.9/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/cedar/miniconda3/envs/cc3d/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/cedar/mycode/cc3d/training/networks.py", line 123, in forward
    x = torch.addmm(b.unsqueeze(0), x, w.t())
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x26 and 24x32)

After some checking, I found that extra_class = -2. Because add_floor_class and add_none_class are both True, the input dimension of x no longer matches num_classes, which causes the error. Is there a solution?
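
For reference, here is a minimal sketch that reproduces the same shape error (my own simplification, not the actual embedding code in the repo):

import torch
import torch.nn as nn

# The checkpoint's class-embedding layer expects 24 input features
# (num_classes read from the boxes.npz dataset), but my labels have two
# extra classes appended (add_floor_class=True, add_none_class=True) -> 26.
embed = nn.Linear(24, 32)  # weight is 32x24, so w.t() in addmm is 24x32
x = torch.zeros(1, 26)     # one-hot class label with 2 extra dimensions
embed(x)                   # RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x26 and 24x32)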

Thanks!

sherwinbahmani commented 1 month ago

Hi,

Which boxes.npz files are you using?

Here it reads the num_classes from the boxes.npz dataset: https://github.com/sherwinbahmani/cc3d/blob/928376b398eea892fb878e0615ab15e0aad5a9d7/training/dataset.py#L229

So if you use the bedrooms .npz files, which have a different num_classes, the dimensions will not match.

If this is not the case, what happens if you set add_floor_class and add_none_class to False?
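
You can also inspect the labels stored in your boxes.npz directly; a quick check like the one below should work (the key name "class_labels" is a guess on my side, print boxes.files to see the actual keys):

import numpy as np

# Verify num_classes in a preprocessed scene file.
boxes = np.load("/path/to/your/dataset/scene_0/boxes.npz")  # adjust path
print(boxes.files)              # list the stored arrays
labels = boxes["class_labels"]  # key name is a guess, see boxes.files
print(labels.shape)             # last dim should match the checkpoint's num_classes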

lxzyuan commented 1 month ago

Thank you for your reply. The error was probably caused by a mistake in my 3D-FRONT dataset preprocessing.

I have re-run the bedroom data preprocessing, and the generate.sh script now runs successfully. I am currently re-running the living room preprocessing and will test again once it finishes.

Thank you again for your reply.

lxzyuan commented 1 month ago

I can now also successfully run generate.sh on the living room dataset. Thanks.

I have another question. In evaluate.sh, why is num_layout_indices set to 5515 for bedrooms and 2613 for living rooms, and why is num_images 50000? Are the metrics computed by this script the same as those reported in the paper?

sherwinbahmani commented 1 month ago

Yes, these are the same settings used for the numbers in the paper. FID is usually computed over 50000 images, since it can be noisy for a small number of images. 5515 and 2613 are the numbers of available scenes for bedrooms and living rooms, respectively. Since we want to generate 50000 diverse images, we sample all scenes with random seeds and random camera poses. FID measures the similarity between the training dataset images and the generated images, hence we use training poses and training scene layouts, following common practice in the GAN literature.
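
Schematically, the sampling works like this (a rough sketch, not the actual evaluation code):

import numpy as np

# Reuse every scene layout many times with fresh latent seeds (and, in the
# real pipeline, random camera poses) until 50000 images are generated.
num_images = 50000
num_layout_indices = 2613  # living rooms; 5515 for bedrooms

rng = np.random.default_rng()
layout_indices = np.arange(num_images) % num_layout_indices  # cover all scenes
z_seeds = rng.integers(0, 2**31, size=num_images)            # one seed per image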

lxzyuan commented 1 month ago

Thanks.

When I run evaluate.sh to compute FID on the living rooms dataset, I get the following error:

Setting up PyTorch plugin "bias_act_plugin"... Done.
Setting up PyTorch plugin "upfirdn2d_plugin"... Done.
Traceback (most recent call last):
  File "/data/cedar/mycode/cc3d/generate_dataset.py", line 112, in <module>
    generate_sample_videos(**vars(args))
  File "/data/cedar/mycode/cc3d/generate_dataset.py", line 81, in generate_sample_videos
    c, _, seq_name, = get_eval_labels(training_set, layout_idx=seq_idx, coords_idx=coords_idx, num_eval_seeds=1, device=device, out_image=False)
  File "/data/cedar/mycode/cc3d/generate.py", line 41, in get_eval_labels
    eval_c = [[training_set.get_label(np.floor(coords[0][0]/training_set.img_per_scene_ratio).astype(int), j) for (i, j) in coords] for coords in eval_indices]
  File "/data/cedar/mycode/cc3d/generate.py", line 41, in <listcomp>
    eval_c = [[training_set.get_label(np.floor(coords[0][0]/training_set.img_per_scene_ratio).astype(int), j) for (i, j) in coords] for coords in eval_indices]
  File "/data/cedar/mycode/cc3d/generate.py", line 41, in <listcomp>
    eval_c = [[training_set.get_label(np.floor(coords[0][0]/training_set.img_per_scene_ratio).astype(int), j) for (i, j) in coords] for coords in eval_indices]
  File "/data/cedar/mycode/cc3d/training/dataset.py", line 287, in get_label
    label = self._get_raw_labels(self._raw_idx[idx], coords_idx, traj=traj)
  File "/data/cedar/mycode/cc3d/training/dataset.py", line 259, in _get_raw_labels
    self._raw_labels = self._load_raw_labels(raw_idx, coords_idx, traj=traj) if self._use_labels else None
  File "/data/cedar/mycode/cc3d/training/dataset.py", line 376, in _load_raw_labels
    fname = self._label_fnames[raw_idx]
IndexError: list index out of range

I used the living_rooms.pkl you provided. When I lower the value of num_layout_indices, it works fine. Is there something wrong? Is it a problem with my living room dataset preprocessing?

Update: I rendered the living room dataset using the labels generated by add_vertices_calc.py, but cc3d uses the labels generated by normalize_dataset.py. Do the two sets of labels have to be the same? Could this mismatch be the cause of the error? I will verify soon.

sherwinbahmani commented 1 month ago

How many scenes did preprocessing generate for you? Also, did you render the dataset? I think the script currently assumes that the whole dataset exists and has been rendered.
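
One quick way to check is to count the preprocessed scene folders on disk (a hypothetical snippet, adjust the path to your output directory):

from pathlib import Path

# num_layout_indices in evaluate.sh must not exceed the number of scenes.
dataset_root = Path("/path/to/processed/living_rooms")  # adjust to your setup
num_scenes = sum(1 for p in dataset_root.iterdir() if p.is_dir())
print(num_scenes)  # should be 2613 for living rooms (5515 for bedrooms)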

lxzyuan commented 1 month ago

After preprocessing, there are 813 living rooms in total. The rendering I mentioned is step 4 of create_dataset.sh in 3D-FRONT.

sherwinbahmani commented 1 month ago

It should be 2613 scenes. How many scenes are there after the first step (# 1. Pre-process dataset following ATISS)? Also check that the run completed without errors.

lxzyuan commented 1 month ago

The first step generated 813 scenes. How many scenes should be generated?

sherwinbahmani commented 1 month ago

2613

lxzyuan commented 1 month ago

Ok, thanks for your reply, I will check the scene generation process.

sherwinbahmani commented 2 weeks ago

Please reopen if this issue is not solved.