SHI-Labs / OneFormer

OneFormer: One Transformer to Rule Universal Image Segmentation, arXiv 2022 / CVPR 2023
https://praeclarumjj3.github.io/oneformer
MIT License

Fine-tuning custom COCO instance segmentation dataset using DiNAT backbone #76

Closed TayJen closed 10 months ago

TayJen commented 1 year ago

Hello! I am trying to fine-tune the COCO-pretrained OneFormer model with the DiNAT backbone. My dataset is not fully in panoptic form, so I wrote my own custom data mapper, in which I also added augmentations from albumentations.

But when I try to run train_net.py with this command:

CUDA_VISIBLE_DEVICES=0, nohup python3 train_net.py --num-gpus 1 --config-file configs/train_fine_tuning_dinat_full/main_fine_tuning_dinat_all_classes.yaml MODEL.IS_TRAIN True MODEL.TEST.TASK instance MODEL.WEIGHTS weights/150_16_dinat_l_oneformer_coco_100ep.pth OUTPUT_DIR outputs/finetune_dinat_large WANDB.NAME full_config_dinat_large_finetune

I get the following error:

[07/03 17:10:23 d2.data.build]: Removed 54 images with no usable annotations. 11331 images left.
[07/03 17:10:23 d2.data.build]: Using training sampler TrainingSampler
[07/03 17:10:23 d2.data.common]: Serializing 11331 elements to byte tensors and concatenating them all ...
[07/03 17:10:23 d2.data.common]: Serialized dataset takes 27.92 MiB
[07/03 17:10:23 fvcore.common.checkpoint]: [Checkpointer] Loading from weights/150_16_dinat_l_oneformer_coco_100ep.pth ...
WARNING [07/03 17:10:23 fvcore.common.checkpoint]: Skip loading parameter 'sem_seg_head.predictor.class_embed.weight' to the model due to incompatible shapes: (134, 256) in the checkpoint but (14, 256) in the model! You might want to double check if this is expected.
WARNING [07/03 17:10:23 fvcore.common.checkpoint]: Skip loading parameter 'sem_seg_head.predictor.class_embed.bias' to the model due to incompatible shapes: (134,) in the checkpoint but (14,) in the model! You might want to double check if this is expected.
WARNING [07/03 17:10:23 fvcore.common.checkpoint]: Skip loading parameter 'criterion.empty_weight' to the model due to incompatible shapes: (134,) in the checkpoint but (14,) in the model! You might want to double check if this is expected.
WARNING [07/03 17:10:23 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
criterion.empty_weight
sem_seg_head.predictor.class_embed.{bias, weight}
Total Params: 240.701985 M
[07/03 17:10:26 d2.engine.train_loop]: Starting training from iteration 0
ERROR [07/03 17:10:28 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/detectron2/engine/train_loop.py", line 149, in train
    self.run_step()
  File "/usr/local/lib/python3.8/dist-packages/detectron2/engine/defaults.py", line 494, in run_step
    self._trainer.run_step()
  File "/usr/local/lib/python3.8/dist-packages/detectron2/engine/train_loop.py", line 273, in run_step
    loss_dict = self.model(data)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1185, in _call_impl
    return forward_call(*input, **kwargs)
  File "/app/work_folder/oneformer/oneformer_model.py", line 280, in forward
    outputs = self.sem_seg_head(features, tasks)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1185, in _call_impl
    return forward_call(*input, **kwargs)
  File "/app/work_folder/oneformer/modeling/meta_arch/oneformer_head.py", line 118, in forward
    return self.layers(features, tasks, mask)
  File "/app/work_folder/oneformer/modeling/meta_arch/oneformer_head.py", line 121, in layers
    mask_features, transformer_encoder_features, multi_scale_features, _, _ = self.pixel_decoder.forward_features(features)
  File "/usr/local/lib/python3.8/dist-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "/app/work_folder/oneformer/modeling/pixel_decoder/msdeformattn.py", line 328, in forward_features
    y, spatial_shapes, level_start_index, valid_ratios = self.transformer(srcs, pos)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1185, in _call_impl
    return forward_call(*input, **kwargs)
  File "/app/work_folder/oneformer/modeling/pixel_decoder/msdeformattn.py", line 87, in forward
    level_start_index = torch.cat((spatial_shapes.new_zeros((1, )), spatial_shapes.prod(1).cumsum(0)[:-1]))

If I understand correctly, this is mainly because the original COCO dataset has 133 classes while mine has only 13. Is there a way to handle this? For example, somehow re-initialize the final classification layers, as is done in general fine-tuning?
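
For reference, the kind of checkpoint surgery I have in mind is sketched below (a minimal sketch of my own, not from this repo; the output file name is made up). It drops the class-dependent tensors that the warnings above flag as shape-incompatible, so they get freshly initialized for the 13-class (+ no-object) head:

```python
import torch

# Load the COCO-pretrained checkpoint and remove the class-dependent tensors
# that the warnings above report as shape-incompatible (134 vs. 14 classes).
ckpt = torch.load("weights/150_16_dinat_l_oneformer_coco_100ep.pth", map_location="cpu")
state = ckpt.get("model", ckpt)  # detectron2 checkpoints usually nest weights under "model"

for key in [
    "sem_seg_head.predictor.class_embed.weight",
    "sem_seg_head.predictor.class_embed.bias",
    "criterion.empty_weight",
]:
    state.pop(key, None)

# Save under a new (made-up) name and point MODEL.WEIGHTS at it; the removed
# tensors are then re-initialized to match the smaller class head.
torch.save(ckpt, "weights/dinat_l_oneformer_coco_no_class_head.pth")
```

That said, the warnings above already show detectron2's checkpointer skipping those tensors on its own, so this step would mainly make the behaviour explicit.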

Thanks in advance!

praeclarumjj3 commented 1 year ago

Hi @TayJen, did you follow the recommended environment setup? Also, the error log you shared seems incomplete (it is missing the actual error statement). Could you share the complete error log? The error is not due to the difference in the number of classes between your custom dataset and the COCO dataset.

TayJen commented 1 year ago

@praeclarumjj3 No, I didn't follow the recommended setup; instead I use the nvcr.io/nvidia/tensorrt:21.06-py3 dev Docker container without conda. And yes, there are no errors, only those warnings, which confuse me. Right after them OneFormer starts training, and I don't really know whether it is being trained from scratch or fine-tuned.
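
One quick way to confirm it is fine-tuning rather than training from scratch (a sketch under my own assumptions; `model` stands for whatever model object exists right after the checkpointer has loaded MODEL.WEIGHTS, before any training step) is to count how many model tensors are still identical to the checkpoint:

```python
import torch

ckpt = torch.load("weights/150_16_dinat_l_oneformer_coco_100ep.pth", map_location="cpu")
ckpt_state = ckpt.get("model", ckpt)

# `model` is assumed to be the already-built model after weights were loaded.
model_state = {k: v.detach().cpu() for k, v in model.state_dict().items()}

matched = sum(
    1
    for k, v in model_state.items()
    if k in ckpt_state
    and v.shape == ckpt_state[k].shape
    and torch.equal(v, ckpt_state[k])
)
print(f"{matched}/{len(model_state)} tensors identical to the checkpoint")
```

If nearly everything except the three skipped tensors matches, the backbone and decoder are indeed being fine-tuned from the COCO weights, not trained from scratch.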

praeclarumjj3 commented 12 months ago

Hi @TayJen, I would suggest using the recommended environment setup. Is there a particular reason that you are not using it?

ZhouYC-X commented 11 months ago

I encountered a similar problem when using the Cityscapes pre-trained model and continuing training on a self-collected dataset.

[07/27 04:51:36] d2.engine.train_loop ERROR: Exception during training:
Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 150, in train
    self.after_step()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 180, in after_step
    h.after_step()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 552, in after_step
    self._do_eval()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 525, in _do_eval
    results = self._func()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 453, in test_and_save_results
    self._last_eval_results = self.test(self.cfg, self.model)
  File "/clever/volumes/gfs-35-31/zhouyuchen/code/segmentation/OneFormer-main/train_net.py", line 366, in test
    results_i = inference_on_dataset(model, data_loader, evaluator)
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/evaluation/evaluator.py", line 204, in inference_on_dataset
    results = evaluator.evaluate()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/evaluation/evaluator.py", line 93, in evaluate
    result = evaluator.evaluate()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/evaluation/panoptic_evaluation.py", line 144, in evaluate
    pq_res = pq_compute(
  File "/opt/anaconda3/lib/python3.8/site-packages/panopticapi-0.1-py3.8.egg/panopticapi/evaluation.py", line 221, in pq_compute
    pq_stat = pq_compute_multi_core(matched_annotations_list, gt_folder, pred_folder, categories)
  File "/opt/anaconda3/lib/python3.8/site-packages/panopticapi-0.1-py3.8.egg/panopticapi/evaluation.py", line 174, in pq_compute_multi_core
    workers = multiprocessing.Pool(processes=cpu_num)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 119, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild,
  File "/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 212, in __init__
    self._repopulate_pool()
  File "/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 303, in _repopulate_pool
    return self._repopulate_pool_static(self._ctx, self.Process,
  File "/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 326, in _repopulate_pool_static
    w.start()
  File "/opt/anaconda3/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 58, in _launch
    self.pid = util.spawnv_passfds(spawn.get_executable(),
  File "/opt/anaconda3/lib/python3.8/multiprocessing/util.py", line 452, in spawnv_passfds
    return _posixsubprocess.fork_exec(
BlockingIOError: [Errno 11] Resource temporarily unavailable

I did follow the recommended environment setup. I tried adding a try/except around train_net.py line 366, but it did not help. Is there a way to resolve this error?
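
One thing worth checking (my own reading of the traceback, not an official fix): the failure happens in `multiprocessing.Pool(processes=cpu_num)` while panopticapi spawns evaluation workers, and `BlockingIOError: [Errno 11]` at `fork_exec` usually means an OS resource limit (e.g. max user processes, `ulimit -u`) was hit. Besides raising that limit, capping the worker count before evaluation may help; a minimal monkey-patch sketch, assuming panopticapi sizes its pool from `multiprocessing.cpu_count()`:

```python
import multiprocessing

# Cap the CPU count that downstream code sees, so the panoptic evaluation pool
# creates at most 4 workers instead of one per core. Apply this before
# evaluation starts (e.g. near the top of train_net.py). The cap of 4 is an
# arbitrary example value.
_real_cpu_count = multiprocessing.cpu_count
multiprocessing.cpu_count = lambda: min(4, _real_cpu_count())
```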

Thanks in advance!

praeclarumjj3 commented 11 months ago

@ZhouYC-X, I believe the error is due to your machine losing the network connection while executing the code. Are you training on multiple nodes? I am unsure how I can help just by looking at the error. I would appreciate it if you could provide details about your machine setup. Thanks

praeclarumjj3 commented 10 months ago

I am closing this issue due to inactivity. Feel free to re-open.