nv-tlabs / GET3D


unsupported operand type(s) for /: 'NoneType' and 'int' #157

Closed mahmud30tibn closed 7 months ago

mahmud30tibn commented 7 months ago

I am trying to train the model on a selection of chair objects from ShapeNet. The command I use is:

python train_3d.py --outdir='log.txt' --data='./Chair_image/img/03001627/' --camera_path ./Chair_image/camera/ --gpus=1 --batch=4 --gamma=400 --data_camera_mode shapenet_chair --dmtet_scale 0.8 --use_shapenet_split 1 --one_3d_generator 1 --fp32 0
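
Before launching, this is roughly how I sanity-check what those two paths contain (a minimal sketch of my own, assuming the usual layout of one subfolder of rendered PNG views per ShapeNet object); the counts should match what the loader prints below:

import os

img_root = './Chair_image/img/03001627/'   # from the --data argument above
cam_root = './Chair_image/camera/'         # from the --camera_path argument above

# Each subfolder is assumed to hold the rendered views of one object.
folders = [d for d in sorted(os.listdir(img_root))
           if os.path.isdir(os.path.join(img_root, d))]
n_images = sum(
    len([f for f in os.listdir(os.path.join(img_root, d)) if f.endswith('.png')])
    for d in folders)

print('object folders:', len(folders))     # the log below reports 5
print('rendered images:', n_images)        # the log below reports 120
print('camera path exists:', os.path.isdir(cam_root))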

I get an error like this:

==> start
==> use shapenet dataset
==> use shapenet folder number 5
==> use image path: ./Chair_image/img/03001627/, num images: 120
==> launch training

Training options:
{
  "G_kwargs": { "class_name": "training.networks_get3d.GeneratorDMTETMesh", "z_dim": 512, "w_dim": 512, "mapping_kwargs": { "num_layers": 8 }, "iso_surface": "dmtet", "one_3d_generator": true, "n_implicit_layer": 1, "deformation_multiplier": 1.0, "use_style_mixing": true, "dmtet_scale": 0.8, "feat_channel": 16, "mlp_latent_channel": 32, "tri_plane_resolution": 256, "n_views": 1, "render_type": "neural_render", "use_tri_plane": true, "tet_res": 90, "geometry_type": "conv3d", "data_camera_mode": "shapenet_chair", "channel_base": 32768, "channel_max": 512, "fused_modconv_default": "inference_only" },
  "D_kwargs": { "class_name": "training.networks_get3d.Discriminator", "block_kwargs": { "freeze_layers": 0 }, "mapping_kwargs": {}, "epilogue_kwargs": { "mbstd_group_size": 4 }, "data_camera_mode": "shapenet_chair", "add_camera_cond": true, "channel_base": 32768, "channel_max": 512, "architecture": "skip" },
  "G_opt_kwargs": { "class_name": "torch.optim.Adam", "betas": [ 0, 0.99 ], "eps": 1e-08, "lr": 0.002 },
  "D_opt_kwargs": { "class_name": "torch.optim.Adam", "betas": [ 0, 0.99 ], "eps": 1e-08, "lr": 0.002 },
  "loss_kwargs": { "class_name": "training.loss.StyleGAN2Loss", "gamma_mask": 400.0, "r1_gamma": 400.0, "lambda_flexicubes_surface_reg": 0.5, "lambda_flexicubes_weights_reg": 0.1, "style_mixing_prob": 0.9, "pl_weight": 0.0 },
  "data_loader_kwargs": { "pin_memory": true, "prefetch_factor": 2, "num_workers": 3 },
  "inference_vis": false,
  "training_set_kwargs": { "class_name": "training.dataset.ImageFolderDataset", "path": "./Chair_image/img/03001627/", "use_labels": false, "max_size": 120, "xflip": false, "resolution": 1024, "data_camera_mode": "shapenet_chair", "add_camera_cond": true, "camera_path": "./Chair_image/camera/", "split": "train", "random_seed": 0 },
  "resume_pretrain": null,
  "D_reg_interval": 16,
  "num_gpus": 1,
  "batch_size": 4,
  "batch_gpu": 4,
  "metrics": [ "fid50k" ],
  "total_kimg": 20000,
  "kimg_per_tick": 1,
  "image_snapshot_ticks": 50,
  "network_snapshot_ticks": 200,
  "random_seed": 0,
  "ema_kimg": 1.25,
  "G_reg_interval": 4,
  "run_dir": "log.txt/00020-stylegan2--gpus1-batch4-gamma400"
}

Output directory: log.txt/00020-stylegan2--gpus1-batch4-gamma400
Number of GPUs: 1
Batch size: 4 images
Training duration: 20000 kimg
Dataset path: ./Chair_image/img/03001627/
Dataset size: 120 images
Dataset resolution: 1024
Dataset labels: False
Dataset x-flips: False

Creating output directory...
Launching processes...
Setting up PyTorch plugin "upfirdn2d_plugin"... Done.
Setting up PyTorch plugin "bias_act_plugin"... Done.
Setting up PyTorch plugin "filtered_lrelu_plugin"... Done.
Loading training set...
==> use shapenet dataset
==> use shapenet folder number 5
==> use image path: ./Chair_image/img/03001627/, num images: 120

Num images: 120
Image shape: [3, 1024, 1024]
Label shape: [0]

Constructing networks...
Setting up augmentation...
Distributing across 1 GPUs...
Setting up training phases...
Exporting sample images...
Initializing logs...
Skipping tfevents export: No module named 'tensorboard'
Training for 20000 kimg...

tick 0 kimg 0.0 time 23s sec/tick 11.2 sec/kimg 2808.36 maintenance 12.2
==> start visualization
/home/tibnmahm/3d/GET3D-master/training/networks_get3d.py:467: UserWarning: torch.range is deprecated and will be removed in a future release because its behavior is inconsistent with Python's range builtin. Instead, use torch.arange, which produces values in [start, end).
  camera_theta = torch.range(0, n_camera - 1, device=self.device).unsqueeze(dim=-1) / n_camera * math.pi * 2.0
==> saved visualization
Evaluating metrics...
====> use validation set
==> use shapenet dataset
==> use shapenet folder number 0
==> use image path: ./Chair_image/img/03001627/, num images: 0
Traceback (most recent call last):
  File "train_3d.py", line 337, in <module>
    main()  # pylint: disable=no-value-for-parameter
  File "/home/tibnmahm/anaconda3/envs/get3d_2/lib/python3.8/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/tibnmahm/anaconda3/envs/get3d_2/lib/python3.8/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/tibnmahm/anaconda3/envs/get3d_2/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/tibnmahm/anaconda3/envs/get3d_2/lib/python3.8/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "train_3d.py", line 331, in main
    launch_training(c=c, desc=desc, outdir=opts.outdir, dry_run=opts.dry_run)
  File "train_3d.py", line 103, in launch_training
    subprocess_fn(rank=0, c=c, temp_dir=temp_dir)
  File "train_3d.py", line 49, in subprocess_fn
    training_loop_3d.training_loop(rank=rank, **c)
  File "/home/tibnmahm/3d/GET3D-master/training/training_loop_3d.py", line 407, in training_loop
    result_dict = metric_main.calc_metric(
  File "/home/tibnmahm/3d/GET3D-master/metrics/metric_main.py", line 52, in calc_metric
    results = _metric_dict[metric](opts)
  File "/home/tibnmahm/3d/GET3D-master/metrics/metric_main.py", line 145, in fid50k
    fid = frechet_inception_distance.compute_fid(opts, max_real=50000, num_gen=50000)
  File "/home/tibnmahm/3d/GET3D-master/metrics/frechet_inception_distance.py", line 30, in compute_fid
    mu_real, sigma_real = metric_utils.compute_feature_stats_for_dataset(
  File "/home/tibnmahm/3d/GET3D-master/metrics/metric_utils.py", line 172, in get_mean_cov
    mean = self.raw_mean / self.num_items
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
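
The lines just before the traceback show the root cause: for the FID metric the code switches to the validation set and finds nothing there (==> use shapenet folder number 0, num images: 0). The snippet below is a minimal sketch of my own (not the actual code in metrics/metric_utils.py) of a lazily allocated feature accumulator; it reproduces the exact TypeError, because raw_mean is only created when the first batch of features arrives, and an empty split leaves it as None:

import numpy as np

class FeatureStats:
    def __init__(self):
        self.raw_mean = None   # allocated on first append
        self.num_items = 0

    def append(self, feats):
        feats = np.asarray(feats, dtype=np.float64)
        if self.raw_mean is None:
            self.raw_mean = np.zeros(feats.shape[1], dtype=np.float64)
        self.raw_mean += feats.sum(axis=0)
        self.num_items += feats.shape[0]

    def get_mean(self):
        return self.raw_mean / self.num_items

stats = FeatureStats()
# An empty validation split means append() is never called, so:
stats.get_mean()
# TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'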

I cleared the metrics cache in the project folder (cd cache; rm -r gan-metrics) and reran the command, but that did not solve the problem. Any idea how to resolve this?
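
One way to confirm that the predefined ShapeNet split is the culprit is to check whether any of the object folders under the image path appear in the validation split list at all. This is a rough check of my own; I am assuming the split is a plain-text list of model IDs, and the split-file path below is only a placeholder to adjust for your checkout:

import os

img_root = './Chair_image/img/03001627/'
# Placeholder path: point this at the chair validation split list in your GET3D checkout.
split_file = './path/to/chair_val_split.txt'

my_ids = {d for d in os.listdir(img_root) if os.path.isdir(os.path.join(img_root, d))}
with open(split_file) as f:
    val_ids = {line.strip() for line in f if line.strip()}

print('objects also present in the val split:', len(my_ids & val_ids))
# If this prints 0, the FID "real" statistics have nothing to work with,
# which matches the "num images: 0" line in the log above.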

mahmud30tibn commented 7 months ago

Changing the argument to --use_shapenet_split 0 solved the issue. Closing this issue as I found the solution.
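
For anyone hitting the same error, the full command with the flag flipped (everything else unchanged from the command above) is:

python train_3d.py --outdir='log.txt' --data='./Chair_image/img/03001627/' --camera_path ./Chair_image/camera/ --gpus=1 --batch=4 --gamma=400 --data_camera_mode shapenet_chair --dmtet_scale 0.8 --use_shapenet_split 0 --one_3d_generator 1 --fp32 0

Presumably this works because, without the predefined split, the metric pass no longer ends up with an empty validation set (the log above showed num images: 0 for it), so the FID statistics for real images actually get accumulated.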