IGNF / myria3d

Myria3D: Aerial Lidar HD Semantic Segmentation with Deep Learning
https://ignf.github.io/myria3d/
BSD 3-Clause "New" or "Revised" License

Training speed drops significantly after 15 epochs #74

Closed lqlnbcr closed 1 year ago

lqlnbcr commented 1 year ago

I want to train myria3d on my own dataset. Whether I train on CPU or GPU, each of the first 15 epochs takes about 20 minutes, which is good. After 15 epochs, however, training slows down dramatically and each epoch takes about 5 hours. I don't know why this happens.
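To pinpoint exactly when the slowdown starts, a minimal, framework-agnostic sketch (not part of myria3d; the `resource` module is Unix-only) that times each epoch and records the process's peak resident memory, since a growing footprint often precedes this kind of regression:

```python
import resource
import time

def timed_epoch(run_one_epoch):
    """Run one epoch callable; return (seconds elapsed, peak RSS in kB)."""
    start = time.perf_counter()
    run_one_epoch()
    elapsed = time.perf_counter() - start
    # ru_maxrss is the peak resident set size of this process so far (kB on Linux).
    peak_rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return elapsed, peak_rss_kb

if __name__ == "__main__":
    for epoch in range(3):
        # time.sleep stands in for the real training step
        seconds, rss = timed_epoch(lambda: time.sleep(0.01))
        print(f"epoch {epoch}: {seconds:.2f}s, peak RSS {rss} kB")
```

If peak RSS climbs steadily epoch after epoch and approaches the 32 GB ceiling, swapping would explain a 20-minute epoch turning into a 5-hour one.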

CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz
GPU: NVIDIA GeForce RTX 3070 Laptop
RAM: 32 GB available for training

The dataset is in LAS format: urban 3D point clouds from aerial lidar scans. The HDF5 file created from the training data is about 10 GB. The experiment config I use is RandLaNet_base_run_FR.yaml with task.task_name=fit, and dataset_description has been modified to match my LAS data.
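When adapting dataset_description to custom LAS data, one quick sanity check (illustrative, not part of myria3d; the dictionaries below are copied from the config printed further down) is that every raw classification code, after remapping through classification_preprocessing_dict, lands on a key of classification_dict; codes that end up outside it would not be handled by TargetTransform:

```python
# Copied from the dataset_description section of the config below.
preprocessing = {0: 1, 4: 3, 5: 3, 7: 1, 8: 1, 11: 1, 12: 1, 13: 1,
                 14: 1, 15: 1, 17: 16, 18: 1, 19: 1, 20: 1, 21: 1, 22: 1}
classification = {1: "default", 2: "ground", 3: "vegetation", 6: "building",
                  9: "water", 10: "bridge", 16: "seabed"}

def unmapped_codes(raw_codes):
    """Return the raw LAS codes that end up outside classification_dict."""
    remapped = (preprocessing.get(c, c) for c in raw_codes)
    return sorted({c for c in remapped if c not in classification})

# e.g. a stray code 30 in the LAS files would be flagged:
# unmapped_codes([1, 2, 30]) returns [30]
```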

CPU training log: (screenshot, 2023-05-03)

GPU training log: (screenshot)

lqlnbcr commented 1 year ago

Config of GPU training:

```yaml
task:
  task_name: fit
seed: 12345
logger:
  csv:
    _target_: pytorch_lightning.loggers.csv_logs.CSVLogger
    save_dir: ${hydra:run.dir}
    name: csv/
    prefix: ''
  comet:
    experiment_name: RandLaNet_base_run_FR-(BatchSize10xBudget(300pts-40000pts))
trainer:
  _target_: pytorch_lightning.Trainer
  gpus:
  - 0
  min_epochs: 100
  max_epochs: 150
  log_every_n_steps: 1
  weights_summary: null
  progress_bar_refresh_rate: 1
  auto_lr_find: false
  num_sanity_val_steps: 2
  accumulate_grad_batches: 3
model:
  optimizer:
    _target_: functools.partial
    _args_:
    - ${get_method:torch.optim.Adam}
    lr: ${model.lr}
  lr_scheduler:
    _target_: functools.partial
    _args_:
    - ${get_method:torch.optim.lr_scheduler.ReduceLROnPlateau}
    mode: min
    factor: 0.5
    patience: 20
    cooldown: 5
    verbose: true
  criterion:
    _target_: torch.nn.CrossEntropyLoss
    label_smoothing: 0.0
    ignore_index: 65
  _target_: myria3d.models.model.Model
  d_in: ${dataset_description.d_in}
  num_classes: ${dataset_description.num_classes}
  ckpt_path: null
  neural_net_class_name: PyGRandLANet
  neural_net_hparams:
    num_features: ${model.d_in}
    num_classes: ${model.num_classes}
    num_neighbors: 16
    decimation: 4
    return_logits: true
  interpolation_k: ${predict.interpolator.interpolation_k}
  num_workers: 4
  iou:
    _target_: functools.partial
    _args_:
    - ${get_method:torchmetrics.JaccardIndex}
    - ${model.num_classes}
    absent_score: 1.0
  momentum: 0.9
  monitor: val/loss_epoch
  lr: 0.003933709606504788
datamodule:
  transforms:
    preparations:
      train:
        TargetTransform:
          _target_: myria3d.pctl.transforms.transforms.TargetTransform
          _args_:
          - ${dataset_description.classification_preprocessing_dict}
          - ${dataset_description.classification_dict}
        DropPointsByClass:
          _target_: myria3d.pctl.transforms.transforms.DropPointsByClass
        GridSampling:
          _target_: torch_geometric.transforms.GridSampling
          _args_:
          - 0.25
        MinimumNumNodes:
          _target_: myria3d.pctl.transforms.transforms.MinimumNumNodes
          _args_:
          - 300
        MaximumNumNodes:
          _target_: myria3d.pctl.transforms.transforms.MaximumNumNodes
          _args_:
          - 40000
        Center:
          _target_: torch_geometric.transforms.Center
      eval:
        TargetTransform:
          _target_: myria3d.pctl.transforms.transforms.TargetTransform
          _args_:
          - ${dataset_description.classification_preprocessing_dict}
          - ${dataset_description.classification_dict}
        DropPointsByClass:
          _target_: myria3d.pctl.transforms.transforms.DropPointsByClass
        CopyFullPos:
          _target_: myria3d.pctl.transforms.transforms.CopyFullPos
        CopyFullPreparedTargets:
          _target_: myria3d.pctl.transforms.transforms.CopyFullPreparedTargets
        GridSampling:
          _target_: torch_geometric.transforms.GridSampling
          _args_:
          - 0.25
        MinimumNumNodes:
          _target_: myria3d.pctl.transforms.transforms.MinimumNumNodes
          _args_:
          - 300
        MaximumNumNodes:
          _target_: myria3d.pctl.transforms.transforms.MaximumNumNodes
          _args_:
          - 40000
        CopySampledPos:
          _target_: myria3d.pctl.transforms.transforms.CopySampledPos
        Center:
          _target_: torch_geometric.transforms.Center
      predict:
        DropPointsByClass:
          _target_: myria3d.pctl.transforms.transforms.DropPointsByClass
        CopyFullPos:
          _target_: myria3d.pctl.transforms.transforms.CopyFullPos
        GridSampling:
          _target_: torch_geometric.transforms.GridSampling
          _args_:
          - 0.25
        MinimumNumNodes:
          _target_: myria3d.pctl.transforms.transforms.MinimumNumNodes
          _args_:
          - 300
        MaximumNumNodes:
          _target_: myria3d.pctl.transforms.transforms.MaximumNumNodes
          _args_:
          - 40000
        CopySampledPos:
          _target_: myria3d.pctl.transforms.transforms.CopySampledPos
        Center:
          _target_: torch_geometric.transforms.Center
    augmentations:
      x_flip:
        _target_: torch_geometric.transforms.RandomFlip
        _args_:
        - 0
        p: 0.5
      y_flip:
        _target_: torch_geometric.transforms.RandomFlip
        _args_:
        - 1
        p: 0.5
    normalizations:
      NullifyLowestZ:
        _target_: myria3d.pctl.transforms.transforms.NullifyLowestZ
      NormalizePos:
        _target_: myria3d.pctl.transforms.transforms.NormalizePos
        subtile_width: ${datamodule.subtile_width}
      StandardizeRGBAndIntensity:
        _target_: myria3d.pctl.transforms.transforms.StandardizeRGBAndIntensity
    augmentations_list: '${oc.dict.values: datamodule.transforms.augmentations}'
    preparations_train_list: '${oc.dict.values: datamodule.transforms.preparations.train}'
    preparations_eval_list: '${oc.dict.values: datamodule.transforms.preparations.eval}'
    preparations_predict_list: '${oc.dict.values: datamodule.transforms.preparations.predict}'
    normalizations_list: '${oc.dict.values: datamodule.transforms.normalizations}'
  _target_: myria3d.pctl.datamodule.hdf5.HDF5LidarDataModule
  data_dir: /home/lqlnbcr/myria3d/tests/data/sydney_dataset/
  split_csv_path: /home/lqlnbcr/myria3d/tests/data/toy_dataset_src/toy_dataset_split.csv
  hdf5_file_path: /home/lqlnbcr/myria3d/tests/data/sydney_dataset/sydney_dataset_1.hdf5
  points_pre_transform:
    _target_: functools.partial
    _args_:
    - ${get_method:myria3d.pctl.points_pre_transform.lidar_hd.lidar_hd_pre_transform}
  pre_filter:
    _target_: functools.partial
    _args_:
    - ${get_method:myria3d.pctl.dataset.utils.pre_filter_below_n_points}
    min_num_nodes: 50
  tile_width: 2000
  subtile_width: 50
  subtile_shape: square
  subtile_overlap_train: 0
  subtile_overlap_predict: ${predict.subtile_overlap}
  batch_size: 10
  num_workers: 3
  prefetch_factor: 3
dataset_description:
  convert: all
  classification_preprocessing_dict:
    0: 1
    4: 3
    5: 3
    7: 1
    8: 1
    11: 1
    12: 1
    13: 1
    14: 1
    15: 1
    17: 16
    18: 1
    19: 1
    20: 1
    21: 1
    22: 1
  classification_dict:
    1: default
    2: ground
    3: vegetation
    6: building
    9: water
    10: bridge
    16: seabed
  class_weights:
  - 0.5
  - 1.5
  - 1.5
  - 1.5
  - 1.0
  - 0.5
  - 0.5
  d_in: 9
  num_classes: 7
callbacks:
  log_code:
    _target_: myria3d.callbacks.comet_callbacks.LogCode
    code_dir: ${work_dir}/myria3d
  log_logs_dir:
    _target_: myria3d.callbacks.comet_callbacks.LogLogsPath
  lr_monitor:
    _target_: pytorch_lightning.callbacks.LearningRateMonitor
    logging_interval: step
    log_momentum: true
  log_iou_by_class:
    _target_: myria3d.callbacks.logging_callbacks.LogIoUByClass
    classification_dict: ${dataset_description.classification_dict}
  model_checkpoint:
    _target_: pytorch_lightning.callbacks.ModelCheckpoint
    monitor: val/loss_epoch
    mode: min
    save_top_k: 1
    save_last: true
    verbose: true
    dirpath: checkpoints/
    filename: epoch_{epoch:03d}
    auto_insert_metric_name: false
  early_stopping:
    _target_: pytorch_lightning.callbacks.EarlyStopping
    monitor: val/loss_epoch
    mode: min
    patience: 6
    min_delta: 0
predict:
  src_las: /path/to/input.las
  output_dir: /path/to/output_dir/
  ckpt_path: /path/to/lightning_model.ckpt
  gpus: 0
  subtile_overlap: 0
  interpolator:
    _target_: myria3d.models.interpolation.Interpolator
    interpolation_k: 10
    classification_dict: ${dataset_description.classification_dict}
    probas_to_save: all
  predicted_classification_channel: PredictedClassification
  entropy_channel: entropy
```

Run log of GPU training:

```
Global seed set to 12345
[2023-05-03 02:42:37,584][myria3d.train][INFO] - Instantiating datamodule
[2023-05-03 02:42:38,009][myria3d.train][INFO] - Instantiating model
[2023-05-03 02:42:38,020][torch.distributed.nn.jit.instantiator][INFO] - Created a temporary directory at /tmp/tmp3yijeevi
[2023-05-03 02:42:38,021][torch.distributed.nn.jit.instantiator][INFO] - Writing /tmp/tmp3yijeevi/_remote_module_non_sriptable.py
[2023-05-03 02:42:38,044][myria3d.train][INFO] - Instantiating callback
[2023-05-03 02:42:38,047][myria3d.train][INFO] - Instantiating callback
[2023-05-03 02:42:38,047][myria3d.train][INFO] - Instantiating callback
[2023-05-03 02:42:38,047][myria3d.train][INFO] - Instantiating callback
[2023-05-03 02:42:38,049][myria3d.train][INFO] - Instantiating callback
[2023-05-03 02:42:38,050][myria3d.train][INFO] - Instantiating callback
[2023-05-03 02:42:38,051][myria3d.train][INFO] - Instantiating logger
[2023-05-03 02:42:38,053][myria3d.train][INFO] - Instantiating trainer
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
[2023-05-03 02:42:38,057][myria3d.train][INFO] - Logging hyperparameters!
[2023-05-03 02:42:38,059][myria3d.train][INFO] - Starting training and validating!
Preparing train set...: 100%|██████████| 16/16 [00:00<00:00, 1144.42it/s]
Preparing val set...: 100%|██████████| 1/1 [00:00<00:00, 1119.68it/s]
```

CharlesGaydon commented 1 year ago

Hi @lqlnbcr, this is not something we have experienced on our side, so I do not really know how to help you solve this issue. The number 15 does not seem to be related to any specific callback, so I would not look for the cause in PyTorch-specific behavior. This could have several causes, but memory pressure is the most likely. My trainings usually have a memory footprint slightly below 30 GB, which is not far from your maximum of 32 GB. Here are some logs from one of my trainings (it used two GPUs instead of one in this case): (screenshot)

You may want to monitor your memory during training, and perhaps try different configurations of batch_size / accumulate_grad_batches / the maximum "point budget" per cloud (currently 40000 points).
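A back-of-the-envelope for that trade-off (the function name is illustrative; the real knobs are datamodule.batch_size, trainer.accumulate_grad_batches, and the MaximumNumNodes "point budget" in the config above): halving the batch size while doubling gradient accumulation keeps the effective optimizer step the same but roughly halves peak memory per forward pass.

```python
def effective_batch_points(batch_size, accumulate_grad_batches, max_points_per_cloud):
    """Upper bound on the number of points contributing to one optimizer step."""
    return batch_size * accumulate_grad_batches * max_points_per_cloud

# Current run: 10 clouds/batch x 3 accumulation steps x 40,000 points each.
current = effective_batch_points(10, 3, 40000)
# Lighter alternative with the same effective step size:
lighter = effective_batch_points(5, 6, 40000)
assert current == lighter  # same gradients, roughly half the peak memory
```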

A more direct option could be to lower datamodule.num_workers from the current 3 to 1 or 2.
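This helps because each PyTorch DataLoader worker prefetches prefetch_factor batches in advance, so the number of sample clouds buffered in RAM scales with num_workers (a sketch; the function name is illustrative, the values come from the config above):

```python
def prefetched_clouds(num_workers, prefetch_factor, batch_size):
    """Upper bound on clouds held in the DataLoader prefetch queue:
    each worker loads prefetch_factor batches in advance."""
    return num_workers * prefetch_factor * batch_size

# Current config: 3 workers x 3 prefetched batches x 10 clouds = 90 clouds buffered.
# With a single worker, only 30 clouds sit in RAM at once.
print(prefetched_clouds(3, 3, 10), prefetched_clouds(1, 3, 10))
```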

CharlesGaydon commented 1 year ago

@lqlnbcr Any new element on this?

CharlesGaydon commented 1 year ago

Closing. Please reopen if needed.