Closed (aymuos15 closed this issue 1 year ago)
If it matters:
https://github.com/carolinepacheco/convert-yolo-to-pascalvoc
I used this to convert my annotations. My original annotations were in the YOLO format (they work with YOLO-NAS).
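For reference, that kind of converter boils down to roughly the arithmetic below (a minimal sketch with illustrative names, not the actual code from that repo):

```python
# Rough sketch of YOLO -> Pascal VOC box conversion (illustrative only).
# YOLO stores normalized (x_center, y_center, width, height); Pascal VOC
# wants absolute pixel corners (xmin, ymin, xmax, ymax).
def yolo_to_voc(x_c, y_c, w, h, img_w, img_h):
    xmin = (x_c - w / 2.0) * img_w
    ymin = (y_c - h / 2.0) * img_h
    xmax = (x_c + w / 2.0) * img_w
    ymax = (y_c + h / 2.0) * img_h
    # Without clamping, a box that touches the image border can land a
    # fraction of a pixel outside [0, img_w] x [0, img_h] from float rounding.
    return xmin, ymin, xmax, ymax
```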
Can you try commenting out line numbers 180 to 207 in datasets.py
and let me know?
It looks like some border issue with the bounding boxes in the dataset itself. Very hard to tell, though.
raise ValueError(f"y_max is less than or equal to y_min for bbox {bbox}.")
ValueError: y_max is less than or equal to y_min for bbox (tensor(0.7314), tensor(0.9971), tensor(0.7857), tensor(0.9971), tensor(1)).
I think commenting that part has caused this issue.
Ok. That seems expected. If there had been no error, then that part was causing the issue. In this case, it seems like an x_min is less than the image width. I am not really sure I will be able to help any further, as this is a dataset issue.
Alright, thanks for that. Will look into it!
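One quick way to look into it is to scan the converted VOC XMLs for out-of-range or degenerate boxes before training. A rough sketch (the labels directory is a placeholder; adjust it and the tag layout to the actual dataset):

```python
import glob
import xml.etree.ElementTree as ET

# "labels/*.xml" is a placeholder path; point it at the converted VOC files.
for xml_path in glob.glob("labels/*.xml"):
    root = ET.parse(xml_path).getroot()
    width = int(root.find("size/width").text)
    height = int(root.find("size/height").text)
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        xmin, ymin, xmax, ymax = (
            float(box.find(tag).text) for tag in ("xmin", "ymin", "xmax", "ymax")
        )
        # Flag boxes outside the image or with zero/negative width or height.
        if (xmin < 0 or ymin < 0 or xmax > width or ymax > height
                or xmax <= xmin or ymax <= ymin):
            print(f"{xml_path}: bad box ({xmin}, {ymin}, {xmax}, {ymax}) "
                  f"in a {width}x{height} image")
```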
Code Output:
device cuda
Checking Labels and images... 100%|██████████████████████████████████████| 916/916 [00:00<00:00, 78550.48it/s]
Checking Labels and images... 100%|█████████████████████████████████████| 137/137 [00:00<00:00, 233206.03it/s]
Creating data loaders
Number of training samples: 916
Number of validation samples: 137
Building model from scratch...
43,256,153 total parameters.
43,030,809 training parameters.
Epoch: [0] [ 0/115] eta: 0:12:56 lr: 0.000010 loss: 1.3367 (1.3367) loss_classifier: 0.8205 (0.8205) loss_box_reg: 0.0023 (0.0023) loss_objectness: 0.4907 (0.4907) loss_rpn_box_reg: 0.0232 (0.0232) time: 6.7497 data: 1.1793 max mem: 7776
ISSUE:
Traceback (most recent call last):
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/fastercnn-pytorch-training-pipeline/train.py", line 565, in <module>
main(args)
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/fastercnn-pytorch-training-pipeline/train.py", line 405, in main
batch_loss_rpn_list = train_one_epoch(
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/fastercnn-pytorch-training-pipeline/torch_utils/engine.py", line 45, in train_one_epoch
for images, targets in metric_logger.log_every(data_loader, print_freq, header):
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/fastercnn-pytorch-training-pipeline/torch_utils/utils.py", line 173, in log_every
for obj in iterable:
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in next
data = self._next_data()
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/torch/_utils.py", line 644, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/fastercnn-pytorch-training-pipeline/datasets.py", line 314, in getitem
sample = self.transforms(image=image_resized,
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/albumentations/core/composition.py", line 207, in call
p.preprocess(data)
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/albumentations/core/utils.py", line 83, in preprocess
data[data_name] = self.check_and_convert(data[data_name], rows, cols, direction="to")
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/albumentations/core/utils.py", line 91, in check_and_convert
return self.convert_to_albumentations(data, rows, cols)
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/albumentations/core/bbox_utils.py", line 142, in convert_to_albumentations
return convert_bboxes_to_albumentations(data, self.params.format, rows, cols, check_validity=True)
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/albumentations/core/bbox_utils.py", line 408, in convert_bboxes_to_albumentations
return [convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity) for bbox in bboxes]
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/albumentations/core/bbox_utils.py", line 408, in
return [convert_bbox_to_albumentations(bbox, source_format, rows, cols, check_validity) for bbox in bboxes]
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/albumentations/core/bbox_utils.py", line 352, in convert_bbox_to_albumentations
check_bbox(bbox)
File "/gpfs3/well/papiez/users/yev566/python/RCNN-skylake/lib/python3.10/site-packages/albumentations/core/bbox_utils.py", line 435, in check_bbox
raise ValueError(f"Expected {name} for bbox {bbox} to be in the range [0.0, 1.0], got {value}.")
ValueError: Expected x_min for bbox (tensor(-0.0029), tensor(0.), tensor(0.), tensor(0.0486), tensor(1)) to be in the range [0.0, 1.0], got -0.0028571428265422583.
Sample .xml file (ignore the folder and filename etc.)
Question: Any idea why a negative value is being generated?
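For what it's worth, -0.0028571428265422583 is exactly -1/350 stored as a float32, so it looks like an x_min of -1 pixel divided by a width of 350 somewhere in the pipeline (possibly the resized width used in datasets.py). A box that touches the left border plus floating-point rounding in the x_center - w/2 arithmetic during conversion is enough to produce that. If that is the cause, clamping the converted coordinates into the image bounds before they reach albumentations should avoid this particular error (a hedged sketch, not the pipeline's actual code):

```python
def clamp_box(xmin, ymin, xmax, ymax, img_w, img_h):
    # Clamp a converted box back into the valid pixel range so that the
    # normalized coordinates albumentations checks stay within [0.0, 1.0].
    xmin = min(max(xmin, 0.0), float(img_w))
    ymin = min(max(ymin, 0.0), float(img_h))
    xmax = min(max(xmax, 0.0), float(img_w))
    ymax = min(max(ymax, 0.0), float(img_h))
    return xmin, ymin, xmax, ymax

# Example: an x_min of -1 px on a 350 px wide image snaps back to 0.
print(clamp_box(-1.0, 0.0, 170.0, 17.0, 350, 350))  # (0.0, 0.0, 170.0, 17.0)
```

Note that clamping alone will not fix boxes that end up with zero width or height (like the y_max equal to y_min one above); those probably need to be dropped from the annotations instead.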