Thank you for sharing this wonderful work! I am trying to run the S3DIS training from scratch.
I am using Google Cloud Platform to meet the GPU requirements (a 24 GB L4 GPU), and the commands have been executed through the SSH terminal.
Thanks to the detailed instructions and the previously resolved issues, I was able to get as far as preprocessing the dataset, as shown.
But when I try to train with the following command, as described in the scripts:

```
python main_instance_segmentation.py \
general.project_name="s3dis" \
general.experiment_name="area${CURR_AREA}_from_scratch" \
data.batch_size=4 \
data/datasets=s3dis \
general.num_targets=14 \
data.num_labels=13 \
trainer.max_epochs=1001 \
general.area=${CURR_AREA} \
trainer.check_val_every_n_epoch=10 \
data.voxel_size=0.05
```
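One thing worth checking: the log below shows `'general_area': ''` and the experiment name resolving to `area_from_scratch`, which suggests `${CURR_AREA}` was empty in the shell when the command ran. A minimal sketch of how the substitution behaves (the value `5` is a hypothetical choice, not taken from the original scripts):

```shell
# CURR_AREA must be set in the same shell session (or exported) before
# the python command runs; otherwise Hydra receives an empty string.
CURR_AREA=5   # hypothetical choice of S3DIS area

# With the variable set, the overrides expand as intended:
echo "general.experiment_name=area${CURR_AREA}_from_scratch"
echo "general.area=${CURR_AREA}"

# Without it, both expand to the empty string:
unset CURR_AREA
echo "general.area=${CURR_AREA}"
```

Running the snippet prints `area5_from_scratch` and `general.area=5` first, then `general.area=` after the variable is unset, which matches the empty `general_area` in my log.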
I am facing the following error:
```
[2024-01-05 10:41:11,321][__main__][INFO] - {'general_train_mode': True, 'general_task': 'instance_segmentation', 'general_seed': None, 'general_checkpoint': None, 'general_backbone_checkpoint': None, 'general_freeze_backbone': False, 'general_linear_probing_backbone': False, 'general_train_on_segments': False, 'general_eval_on_segments': False, 'general_filter_out_instances': False, 'general_save_visualizations': True, 'general_visualization_point_size': 20, 'general_decoder_id': -1, 'general_export': False, 'general_use_dbscan': False, 'general_ignore_class_threshold': 100, 'general_project_name': 's3dis', 'general_workspace': 'jonasschult', 'general_experiment_name': 'area_from_scratch', 'general_num_targets': 14, 'general_add_instance': True, 'general_dbscan_eps': 0.95, 'general_dbscan_min_points': 1, 'general_export_threshold': 0.0001, 'general_reps_per_epoch': 1, 'general_on_crops': False, 'general_scores_threshold': 0.0, 'general_iou_threshold': 1.0, 'general_area': '', 'general_eval_inner_core': -1, 'general_topk_per_image': 100, 'general_ignore_mask_idx': [], 'general_max_batch_size': 99999999, 'general_save_dir': 'saved/area_from_scratch', 'general_gpus': 1, 'data_train_mode': 'train', 'data_validation_mode': 'validation', 'data_test_mode': 'validation', 'data_ignore_label': 255, 'data_add_raw_coordinates': True, 'data_add_colors': True, 'data_add_normals': False, 'data_in_channels': 3, 'data_num_labels': 13, 'data_add_instance': True, 'data_task': 'instance_segmentation', 'data_pin_memory': False, 'data_num_workers': 4, 'data_batch_size': 4, 'data_test_batch_size': 1, 'data_cache_data': False, 'data_voxel_size': 0.05, 'data_reps_per_epoch': 1, 'data_cropping': False, 'data_cropping_args_min_points': 30000, 'data_cropping_args_aspect': 0.8, 'data_cropping_args_min_crop': 0.5, 'data_cropping_args_max_crop': 1.0, 'data_crop_min_size': 20000, 'data_crop_length': 6.0, 'data_cropping_v1': True, 'data_train_dataloader__target_': 'torch.utils.data.DataLoader', 'data_train_dataloader_shuffle': True, 'data_train_dataloader_pin_memory': False, 'data_train_dataloader_num_workers': 4, 'data_train_dataloader_batch_size': 4, 'data_validation_dataloader__target_': 'torch.utils.data.DataLoader', 'data_validation_dataloader_shuffle': False, 'data_validation_dataloader_pin_memory': False, 'data_validation_dataloader_num_workers': 4, 'data_validation_dataloader_batch_size': 1, 'data_test_dataloader__target_': 'torch.utils.data.DataLoader', 'data_test_dataloader_shuffle': False, 'data_test_dataloader_pin_memory': False, 'data_test_dataloader_num_workers': 4, 'data_test_dataloader_batch_size': 1, 'data_train_dataset__target_': 'datasets.semseg.SemanticSegmentationDataset', 'data_train_dataset_dataset_name': 's3dis', 'data_train_dataset_data_dir': 'data/processed/s3dis', 'data_train_dataset_image_augmentations_path': 'conf/augmentation/albumentations_aug.yaml', 'data_train_dataset_volume_augmentations_path': 'conf/augmentation/volumentations_aug.yaml', 'data_train_dataset_label_db_filepath': 'data/processed/s3dis/label_database.yaml', 'data_train_dataset_color_mean_std': 'data/processed/s3dis/color_mean_std.yaml', 'data_train_dataset_data_percent': 1.0, 'data_train_dataset_mode': 'train', 'data_train_dataset_ignore_label': 255, 'data_train_dataset_num_labels': 13, 'data_train_dataset_add_raw_coordinates': True, 'data_train_dataset_add_colors': True, 'data_train_dataset_add_normals': False, 'data_train_dataset_add_instance': True, 'data_train_dataset_cache_data': False, 'data_train_dataset_instance_oversampling': 0.0, 'data_train_dataset_place_around_existing': False, 'data_train_dataset_point_per_cut': 0, 'data_train_dataset_max_cut_region': 0, 'data_train_dataset_flip_in_center': False, 'data_train_dataset_noise_rate': 0, 'data_train_dataset_resample_points': 0, 'data_train_dataset_cropping': False, 'data_train_dataset_cropping_args_min_points': 30000, 'data_train_dataset_cropping_args_aspect': 0.8, 'data_train_dataset_cropping_args_min_crop': 0.5, 'data_train_dataset_cropping_args_max_crop': 1.0, 'data_train_dataset_is_tta': False, 'data_train_dataset_crop_min_size': 20000, 'data_train_dataset_crop_length': 6.0, 'data_train_dataset_cropping_v1': True, 'data_train_dataset_area': '', 'data_train_dataset_filter_out_classes': [], 'data_train_dataset_label_offset': 0, 'data_validation_dataset__target_': 'datasets.semseg.SemanticSegmentationDataset', 'data_validation_dataset_dataset_name': 's3dis', 'data_validation_dataset_data_dir': 'data/processed/s3dis', 'data_validation_dataset_image_augmentations_path': None, 'data_validation_dataset_volume_augmentations_path': None, 'data_validation_dataset_label_db_filepath': 'data/processed/s3dis/label_database.yaml', 'data_validation_dataset_color_mean_std': 'data/processed/s3dis/color_mean_std.yaml', 'data_validation_dataset_data_percent': 1.0, 'data_validation_dataset_mode': 'validation', 'data_validation_dataset_ignore_label': 255, 'data_validation_dataset_num_labels': 13, 'dat[wandb: long log line truncated]
/opt/conda/envs/mask3d_cuda113/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:446: LightningDeprecationWarning: Setting `Trainer(gpus=1)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=1)` instead.
  rank_zero_deprecation(
/opt/conda/envs/mask3d_cuda113/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/callback_connector.py:57: LightningDeprecationWarning: Setting `Trainer(weights_save_path=)` has been deprecated in v1.6 and will be removed in v1.8. Please pass `dirpath` directly to the `ModelCheckpoint` callback
  rank_zero_deprecation(
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
generate data/processed/s3dis/train_Areadatabase.yaml first
```

Could you please help me solve this issue?
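Since the final log line asks to generate a database YAML first, I also tried listing what the preprocessing step actually produced, to compare the file names against the one in the error message. A small sketch of that check (the directory path is taken from the log above; the `*database.yaml` pattern is an assumption about the naming):

```shell
# List whatever database files preprocessing wrote under the data dir,
# so their names can be compared against the one in the error message.
DATA_DIR="data/processed/s3dis"   # path as printed in the log
if [ -d "$DATA_DIR" ]; then
  find "$DATA_DIR" -maxdepth 1 -name '*database.yaml'
else
  echo "preprocessed directory $DATA_DIR not found"
fi
```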