Closed · TayJen closed this issue 10 months ago
Hi @TayJen, do you follow the recommended environment setup? Also, your shared error log seems incomplete (missing the actual error statement). Could you share the complete error log? The error is not due to the difference in number of classes in your custom dataset than COCO dataset.
@praeclarumjj3 No, I didn't follow the recommended setup; instead I use the nvcr.io/nvidia/tensorrt:21.06-py3 dev docker container without conda.
And yes, there are no errors, only those warnings, which confuse me. Right after them OneFormer starts training, and I don't really know whether it is being trained from scratch or fine-tuned.
Hi @TayJen, I would suggest using the recommended environment setup. Is there a particular reason that you are not using it?
I encountered a similar problem when using the Cityscapes pretrained model and continuing training on a self-collected dataset.
[07/27 04:51:36] d2.engine.train_loop ERROR: Exception during training:
Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 150, in train
    self.after_step()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/train_loop.py", line 180, in after_step
    h.after_step()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 552, in after_step
    self._do_eval()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/hooks.py", line 525, in _do_eval
    results = self._func()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/engine/defaults.py", line 453, in test_and_save_results
    self._last_eval_results = self.test(self.cfg, self.model)
  File "/clever/volumes/gfs-35-31/zhouyuchen/code/segmentation/OneFormer-main/train_net.py", line 366, in test
    results_i = inference_on_dataset(model, data_loader, evaluator)
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/evaluation/evaluator.py", line 204, in inference_on_dataset
    results = evaluator.evaluate()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/evaluation/evaluator.py", line 93, in evaluate
    result = evaluator.evaluate()
  File "/opt/anaconda3/lib/python3.8/site-packages/detectron2/evaluation/panoptic_evaluation.py", line 144, in evaluate
    pq_res = pq_compute(
  File "/opt/anaconda3/lib/python3.8/site-packages/panopticapi-0.1-py3.8.egg/panopticapi/evaluation.py", line 221, in pq_compute
    pq_stat = pq_compute_multi_core(matched_annotations_list, gt_folder, pred_folder, categories)
  File "/opt/anaconda3/lib/python3.8/site-packages/panopticapi-0.1-py3.8.egg/panopticapi/evaluation.py", line 174, in pq_compute_multi_core
    workers = multiprocessing.Pool(processes=cpu_num)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 119, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild,
  File "/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 212, in __init__
    self._repopulate_pool()
  File "/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 303, in _repopulate_pool
    return self._repopulate_pool_static(self._ctx, self.Process,
  File "/opt/anaconda3/lib/python3.8/multiprocessing/pool.py", line 326, in _repopulate_pool_static
    w.start()
  File "/opt/anaconda3/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/opt/anaconda3/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 58, in _launch
    self.pid = util.spawnv_passfds(spawn.get_executable(),
  File "/opt/anaconda3/lib/python3.8/multiprocessing/util.py", line 452, in spawnv_passfds
    return _posixsubprocess.fork_exec(
BlockingIOError: [Errno 11] Resource temporarily unavailable
I did follow the recommended environment setup. I tried adding a try/except around train_net.py line 366, but it does not work. I wonder if there is a solution to this error.
Thanks in advance!
@ZhouYC-X, I believe the error is due to your machine losing the network connection while executing the code. Are you training on multiple nodes? I am unsure how I can help just by looking at the error. I would appreciate it if you could provide me with details about your machine setup. Thanks!
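(A side note for anyone debugging this, and an assumption not confirmed in the thread: Errno 11 (EAGAIN) raised from fork_exec while creating a multiprocessing.Pool often points to a per-user resource limit rather than a network problem. The traceback shows panopticapi building its pool with processes=cpu_num, i.e. one worker per CPU core, which on large machines can bump into the process limit. A minimal stdlib sketch to inspect the relevant numbers on Linux:)

```python
import multiprocessing
import resource

# panopticapi's pq_compute_multi_core sizes its pool from the CPU count
# (visible in the traceback: Pool(processes=cpu_num)).
cpu_num = multiprocessing.cpu_count()

# RLIMIT_NPROC is the per-user limit on simultaneous processes; hitting it
# makes fork() fail with EAGAIN, i.e. "Resource temporarily unavailable".
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"cpu_count={cpu_num}, RLIMIT_NPROC soft={soft}, hard={hard}")
```

If the soft limit is low relative to what training plus an evaluation pool needs, raising it (e.g. `ulimit -u` in the shell before launching) is one thing to try.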
I am closing this issue due to inactivity. Feel free to re-open.
Hello! I am trying to fine-tune the COCO pre-trained OneFormer model with the DiNAT backbone. My dataset is not entirely in panoptic form, so I wrote my own custom data mapper, where I also added augmentations from albumentations.
But when I try to run train_net.py with this command:

CUDA_VISIBLE_DEVICES=0, nohup python3 train_net.py --num-gpus 1 --config-file configs/train_fine_tuning_dinat_full/main_fine_tuning_dinat_all_classes.yaml MODEL.IS_TRAIN True MODEL.TEST.TASK instance MODEL.WEIGHTS weights/150_16_dinat_l_oneformer_coco_100ep.pth OUTPUT_DIR outputs/finetune_dinat_large WANDB.NAME full_config_dinat_large_finetune
I get the following error:
If I understood correctly, it is mainly because the main COCO dataset has 133 classes while mine has only 13. Is there a way to change that? Somehow add empty final layers, like it's done in general fine-tuning?
Thanks in advance!
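(On the class-count mismatch: detectron2's checkpointer generally skips checkpoint tensors whose shapes don't match the model and reports them as warnings, so warnings about the classification head followed by normal training usually mean everything else was loaded and the run is fine-tuning, not training from scratch. If you want to do the filtering explicitly before loading, here is a minimal sketch; the helper name and the FakeTensor stand-in are hypothetical, not part of OneFormer:)

```python
def filter_mismatched(pretrained_state, model_state):
    """Split a checkpoint into tensors the model can load and tensors it cannot.

    Entries whose key is missing from the model, or whose shape differs
    (e.g. a 133-class COCO head vs. a 13-class custom head), are dropped so
    those layers are freshly initialised while everything else is fine-tuned.
    """
    kept, dropped = {}, []
    for key, value in pretrained_state.items():
        if key in model_state and tuple(value.shape) == tuple(model_state[key].shape):
            kept[key] = value
        else:
            dropped.append(key)
    return kept, dropped


class FakeTensor:
    """Stand-in for torch.Tensor so this sketch runs without PyTorch installed."""
    def __init__(self, *shape):
        self.shape = shape


# Toy demo: backbone shapes agree, the class head does not (133 vs 13 classes).
pretrained = {"backbone.w": FakeTensor(64, 3), "head.cls": FakeTensor(133, 256)}
model = {"backbone.w": FakeTensor(64, 3), "head.cls": FakeTensor(13, 256)}
kept, dropped = filter_mismatched(pretrained, model)
print(dropped)  # → ['head.cls']
```

With real weights this would typically look like `kept, dropped = filter_mismatched(torch.load(path, map_location="cpu")["model"], model.state_dict())` followed by `model.load_state_dict(kept, strict=False)`, where `strict=False` lets the freshly initialised head keys stay absent from the checkpoint.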