Closed: soheilAppear closed this issue 3 years ago.
Seems like you are not finding any images for training. We are planning to deprecate ImageDataset soon in favor of something more generic and flexible, but in the meantime, can you send me the folder structure you are using and pointing the ImageDataset to?
Sure, I'll send you the images that I used for training. They are just 5 simple images from the Cityscapes dataset. I just put them in a directory, together with the .yaml file that I created for them, in order to use this ImageDataset.
Sorry, I'm not sure if I understood you. What is the folder structure that you are using for the ImageDataset, to which you point when you set the `path:` field of your dataset?
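As a point of reference, here is a minimal sketch of the kind of layout being discussed: a flat folder of images plus a split .txt file listing one image per line. The `CITY_tiny` folder and `city_tiny.txt` split names come from the config posted later in this thread; the flat layout and zero-padded file names are assumptions for illustration, not the official ImageDataset spec.

```python
# Sketch of a hypothetical ImageDataset layout (assumption, not the official spec):
# a flat image folder matching datasets.train.path, plus a split file
# (datasets.train.split) listing one relative image path per line.
import os

root = "./data/datasets/CITY_tiny"                # datasets.train.path
images = [f"{i:010d}.png" for i in range(5)]      # 5 frames; the names are made up

os.makedirs(root, exist_ok=True)
for name in images:
    # empty placeholder files, just to materialize the layout for this sketch
    open(os.path.join(root, name), "wb").close()

# Write the split file: one relative image path per line.
with open(os.path.join(root, "city_tiny.txt"), "w") as f:
    f.write("\n".join(images) + "\n")

print(open(os.path.join(root, "city_tiny.txt")).read().splitlines())
```

The key point either way: every entry in the split file has to resolve to an actual image file relative to the dataset `path:`, otherwise the loader finds nothing.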
It only contains images for the training process, without any point clouds or anything else. It also contains the list of the images.
I am having the same issue. @VitorGuizilini-TRI can you elaborate on the correct folder structure? @soheilAppear were you able to figure this out?
Also, @soheilAppear why do you use this:
`---- monitor: CITY_tiny-city_tiny-abs_rel_pp_gt`
as your checkpoint monitoring metric?
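For context, this is the checkpoint section from the config dump posted below in this thread; the comments are my reading of it (interpreting the metric name as `<dataset>-<split>-<metric>` is an assumption on my part, not something confirmed in this thread):

```yaml
checkpoint:
  save_top_k: 5
  monitor: CITY_tiny-city_tiny-abs_rel_pp_gt  # appears to be <dataset>-<split>-<metric>
  monitor_index: 0
  mode: min  # abs_rel is an error metric, so lower is better
```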
I was trying to use the image_dataset to train on my own dataset; however, it did not work and could not detect the dataset that I provided. It gives me this error:
```
### Preparing Model
Model: SelfSupModel
DepthNet: PackNet01
PoseNet: PoseNet
Preparing Datasets
Setup train datasets
#########   0 (x1): ./data/datasets/CITY_tiny/city_tiny.txt
Setup validation datasets
#########   0: ./data/datasets/CITY_tiny/city_tiny.txt
#########   0: ./data/datasets/CITY_tiny/city_tiny.txt
Setup test datasets
#########   0: ./data/datasets/CITY_tiny/city_tiny.txt
########################################################################################################################
Config: configs.default_config -> ..configs.train_city_tiny.yaml
Name: default_config-train_city_tiny-2020.07.27-19h11m01s
########################################################################################################################
config:
-- name: default_config-train_city_tiny-2020.07.27-19h11m01s
-- debug: False
-- arch:
---- seed: 42
---- min_epochs: 1
---- max_epochs: 50
-- checkpoint:
---- filepath: ./data/experiments_new/default_config-train_city_tiny-2020.07.27-19h11m01s/{epoch:02d}_{CITY_tiny-city_tiny-abs_rel_pp_gt:.3f}
---- save_top_k: 5
---- monitor: CITY_tiny-city_tiny-abs_rel_pp_gt
---- monitor_index: 0
---- mode: min
---- s3_path:
---- s3_frequency: 1
---- s3_url:
-- save:
---- folder:
---- depth:
------ rgb: True
------ viz: True
------ npz: True
------ png: True
---- pretrained:
-- wandb:
---- dry_run: True
---- name:
---- project:
---- entity:
---- tags: []
---- dir:
---- url:
-- model:
---- name: SelfSupModel
---- checkpoint_path:
---- optimizer:
------ name: Adam
------ depth:
-------- lr: 0.0002
-------- weight_decay: 0.0
------ pose:
-------- lr: 0.0002
-------- weight_decay: 0.0
---- scheduler:
------ name: StepLR
------ step_size: 30
------ gamma: 0.5
------ T_max: 20
---- params:
------ crop: garg
------ min_depth: 0.0
------ max_depth: 80.0
---- loss:
------ num_scales: 4
------ progressive_scaling: 0.0
------ flip_lr_prob: 0.5
------ rotation_mode: euler
------ upsample_depth_maps: True
------ ssim_loss_weight: 0.85
------ occ_reg_weight: 0.1
------ smooth_loss_weight: 0.001
------ C1: 0.0001
------ C2: 0.0009
------ photometric_reduce_op: min
------ disp_norm: True
------ clip_loss: 0.0
------ padding_mode: zeros
------ automask_loss: True
------ velocity_loss_weight: 0.1
------ supervised_method: sparse-l1
------ supervised_num_scales: 4
------ supervised_loss_weight: 0.9
---- depth_net:
------ name: PackNet01
------ checkpoint_path:
------ version: 1A
------ dropout: 0.0
---- pose_net:
------ name: PoseNet
------ checkpoint_path:
------ version:
------ dropout: 0.0
-- datasets:
---- augmentation:
------ image_shape: (192, 640)
------ jittering: (0.2, 0.2, 0.2, 0.05)
---- train:
------ batch_size: 1
------ num_workers: 16
------ back_context: 1
------ forward_context: 1
------ dataset: ['Image']
------ path: ['./data/datasets/CITY_tiny']
------ split: ['city_tiny.txt']
------ depth_type: ['']
------ cameras: [[]]
------ repeat: [1]
------ num_logs: 5
---- validation:
------ batch_size: 1
------ num_workers: 8
------ back_context: 0
------ forward_context: 0
------ dataset: ['Image', 'Image']
------ path: ['./data/datasets/CITY_tiny', './data/datasets/CITY_tiny']
------ split: ['city_tiny.txt', 'city_tiny.txt']
------ depth_type: ['', '']
------ cameras: [[], []]
------ num_logs: 5
---- test:
------ batch_size: 1
------ num_workers: 8
------ back_context: 0
------ forward_context: 0
------ dataset: ['Image']
------ path: ['./data/datasets/CITY_tiny']
------ split: ['city_tiny.txt']
------ depth_type: ['']
------ cameras: [[]]
------ num_logs: 5
-- config: ./configs/train_city_tiny.yaml
-- default: configs/default_config
-- prepared: True
########################################################################################################################
Config: configs.default_config -> ..configs.train_city_tiny.yaml
Name: default_config-train_city_tiny-2020.07.27-19h11m01s
########################################################################################################################
0.00 images [00:00, ? images/s]
Traceback (most recent call last):
  File "scripts/train.py", line 64, in <module>
    train(args.file)
  File "scripts/train.py", line 59, in train
    trainer.fit(model_wrapper)
  File "/workspace/packnet-sfm/packnet_sfm/trainers/horovod_trainer.py", line 58, in fit
    self.train(train_dataloader, module, optimizer)
  File "/workspace/packnet-sfm/packnet_sfm/trainers/horovod_trainer.py", line 97, in train
    return module.training_epoch_end(outputs)
  File "/workspace/packnet-sfm/packnet_sfm/models/model_wrapper.py", line 219, in training_epoch_end
    loss_and_metrics = average_loss_and_metrics(output_batch, 'avg_train')
```
I'm afraid the way I set up my own dataset is wrong, or something similar. I would appreciate it if you could help me and tell me how to use Image as the dataset.
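The `0.00 images` counter in the log above suggests the dataset enumeration found nothing before training even started. A minimal sketch of how a split-file-driven loader can silently end up with zero samples when the split entries don't resolve relative to the dataset root (illustrative only, not the actual packnet-sfm ImageDataset code):

```python
import os
import tempfile

def load_split(root, split_file):
    """Illustrative loader: keep only split entries that resolve to files under root."""
    with open(os.path.join(root, split_file)) as f:
        names = [line.strip() for line in f if line.strip()]
    found = [n for n in names if os.path.isfile(os.path.join(root, n))]
    missing = [n for n in names if n not in found]
    return found, missing

# Demo: the split lists images under a subfolder, but the files live flat in root.
root = tempfile.mkdtemp()
open(os.path.join(root, "img0.png"), "wb").close()
with open(os.path.join(root, "split.txt"), "w") as f:
    f.write("left/img0.png\n")  # wrong relative prefix

found, missing = load_split(root, "split.txt")
print(found, missing)  # [] ['left/img0.png'] -> the dataset sees 0 images
```

If packnet-sfm's ImageDataset works along these lines, a mismatch between the split entries and the actual file locations under `path:` would explain the `0.00 images` output; checking that each split line resolves to a real file is a cheap first diagnostic.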