TDIT-haha / StoneSegmentator


SAM annotation results are not ideal #6

Closed cgliu2000 closed 1 year ago

cgliu2000 commented 1 year ago

(Screenshots of a few masks attached.) I randomly sampled a few masks, and the results are clearly not great. With thousands of images, manual annotation is not realistic, and a model trained on labels like this probably will not turn out well. What should I do? How did the previously reported model reach such good metrics? Was the full image set used for training? Also: when running run.sh, both the input and output are only val, yet the jsons and labels folders under train are still empty?

cgliu2000 commented 1 year ago

Also, regarding the use of labelme: should I download the json files locally, annotate them with labelme, and then upload them back to the remote server?

cgliu2000 commented 1 year ago

I tried to run training and it errors out. The cause should be that datasets/train has no jsons and labels, as I said above.

(segstone) root@autodl-container-d04f44a835-2d5c31f5:~/project/StoneSegmentator# sh run_seg_train.sh
YOLOv5 🚀 72d6aae Python-3.8.16 torch-2.0.1+cu117 CUDA:0 (NVIDIA GeForce RTX 3090, 24260MiB)

hyperparameters: lr0=0.01, lrf=0.1, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.3, cls_pw=1.0, obj=0.7, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0, hsv_s=0, hsv_v=0, degrees=0.5, translate=0.5, scale=0.9, shear=0.5, perspective=0.0001, flipud=0.5, fliplr=0.5, mosaic=1.0, mixup=0.5, copy_paste=0.5
TensorBoard: Start with 'tensorboard --logdir runs/train-minseg/exp', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=1

             from  n    params  module                                  arguments                     

0 -1 1 8800 models.common.Conv [3, 80, 6, 2, 2]
1 -1 1 115520 models.common.Conv [80, 160, 3, 2]
2 -1 4 309120 models.common.C3 [160, 160, 4]
3 -1 1 461440 models.common.Conv [160, 320, 3, 2]
4 -1 8 2259200 models.common.C3 [320, 320, 8]
5 -1 1 1844480 models.common.Conv [320, 640, 3, 2]
6 -1 12 13125120 models.common.C3 [640, 640, 12]
7 -1 1 7375360 models.common.Conv [640, 1280, 3, 2]
8 -1 4 19676160 models.common.C3 [1280, 1280, 4]
9 -1 1 4099840 models.common.SPPF [1280, 1280, 5]
10 -1 1 820480 models.common.Conv [1280, 640, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 4 5332480 models.common.C3 [1280, 640, 4, False]
14 -1 1 205440 models.common.Conv [640, 320, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 4 1335040 models.common.C3 [640, 320, 4, False]
18 -1 1 922240 models.common.Conv [320, 320, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 4 4922880 models.common.C3 [640, 640, 4, False]
21 -1 1 3687680 models.common.Conv [640, 640, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 4 19676160 models.common.C3 [1280, 1280, 4, False]
24 [17, 20, 23] 1 2110486 models.yolo.Segment [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], 32, 320, [320, 640, 1280]]
YOLOv5x-seg summary: 456 layers, 88287926 parameters, 88287926 gradients, 264.9 GFLOPs

Transferred 756/763 items from /root/project/StoneSegmentator/pretrains/yolov5x-seg.pt
AMP: checks passed ✅
optimizer: SGD(lr=0.01) with parameter groups 126 weight(decay=0.0), 129 weight(decay=0.0005), 129 bias
train: Scanning /root/project/StoneSegmentator/example/datasets/train/labels... 0 images, 7680 backgrounds, 0 corrupt: 100%|██████████| 7680/7680 [00
train: WARNING ⚠️ No labels found in /root/project/StoneSegmentator/example/datasets/train/labels.cache. See https://docs.ultralytics.com/yolov5/tutorials/train_custom_data
train: New cache created: /root/project/StoneSegmentator/example/datasets/train/labels.cache
Traceback (most recent call last):
  File "segment/train.py", line 666, in <module>
    main(opt)
  File "segment/train.py", line 557, in main
    train(opt.hyp, opt, device, callbacks)
  File "segment/train.py", line 182, in train
    train_loader, dataset = create_dataloader(
  File "/root/project/StoneSegmentator/utils/segment/dataloaders.py", line 46, in create_dataloader
    dataset = LoadImagesAndLabelsAndMasks(
  File "/root/project/StoneSegmentator/utils/segment/dataloaders.py", line 102, in __init__
    super().__init__(path, img_size, batch_size, augment, hyp, rect, image_weights, cache_images, single_cls,
  File "/root/project/StoneSegmentator/utils/dataloaders.py", line 502, in __init__
    assert nf > 0 or not augment, f'{prefix}No labels found in {cache_path}, can not start training. {HELP_URL}'
AssertionError: train: No labels found in /root/project/StoneSegmentator/example/datasets/train/labels.cache, can not start training. See https://docs.ultralytics.com/yolov5/tutorials/train_custom_data
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 35279) of binary: /root/miniconda3/envs/segstone/bin/python
Traceback (most recent call last):
  File "/root/miniconda3/envs/segstone/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/segstone/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/miniconda3/envs/segstone/lib/python3.8/site-packages/torch/distributed/run.py", line 798, in <module>
    main()
  File "/root/miniconda3/envs/segstone/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/root/miniconda3/envs/segstone/lib/python3.8/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/root/miniconda3/envs/segstone/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/root/miniconda3/envs/segstone/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/miniconda3/envs/segstone/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

segment/train.py FAILED

Failures:

------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time       : 2023-05-22_20:40:05
  host       : autodl-container-d04f44a835-2d5c31f5
  rank       : 0 (local_rank: 0)
  exitcode   : 1 (pid: 35279)
  error_file :
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
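For reference, a minimal sanity-check sketch (not part of the repository; the dataset path is taken from the log above) that confirms whether the train split actually contains YOLO-format .txt labels before launching run_seg_train.sh:

```python
# Hypothetical sanity check (not part of the repo): verify that each training
# image has a matching YOLO-format .txt label before starting training.
from pathlib import Path

dataset = Path("/root/project/StoneSegmentator/example/datasets/train")

images = sorted(p for p in (dataset / "images").glob("*")
                if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
labels = {p.stem for p in (dataset / "labels").glob("*.txt")}
missing = [p.name for p in images if p.stem not in labels]

print(f"{len(images)} images, {len(labels)} label files, {len(missing)} images without labels")
print("first unlabeled images:", missing[:5])
```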
TDIT-haha commented 1 year ago

SAM's output does need manual adjustment; that is unavoidable, because that is simply how the model performs. As for how to generate the files used in training: you need txt files, and the txt files come from the json files. You can refer to the instructions in the readme:

python sam_mask2json.py  # generate json files, which can be visualized or edited in labelme
cd ./tools
python labelme2mask.py  # render and save visualizations of the images described by the json files

Then convert the json files into a dataset that yolov5 can train on:

cd /root/project/Modules/yolov5/tools
python json2txt_seg.py  # generate the dataset used for training
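To make that last step concrete, here is a rough sketch of what a labelme-json to YOLOv5 segmentation-txt conversion looks like (one line per polygon: class index followed by normalized x y pairs). It only illustrates the label format and is not the repository's json2txt_seg.py; the single-class mapping to index 0 and the example file names are assumptions.

```python
# Illustrative labelme-json -> YOLOv5-seg txt conversion (NOT the repo's
# json2txt_seg.py; shown only to clarify the label format). Assumes a single
# class, so every polygon is written with class index 0.
import json
from pathlib import Path

def labelme_to_yolo_seg(json_path: str, txt_path: str) -> None:
    data = json.loads(Path(json_path).read_text())
    w, h = data["imageWidth"], data["imageHeight"]
    lines = []
    for shape in data.get("shapes", []):
        if shape.get("shape_type") != "polygon":
            continue
        # one label line: "<class> x1 y1 x2 y2 ..." with coordinates normalized to [0, 1]
        coords = " ".join(f"{x / w:.6f} {y / h:.6f}" for x, y in shape["points"])
        lines.append(f"0 {coords}")
    Path(txt_path).write_text("\n".join(lines) + "\n")

# e.g. labelme_to_yolo_seg("datasets/train/jsons/0001.json", "datasets/train/labels/0001.txt")
```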
TDIT-haha commented 1 year ago

Given your questions, I will record a walkthrough video of the whole workflow tonight; that should resolve most of the operational issues. Is there anything else you need from me?

TDIT-haha commented 1 year ago

labelme is an annotation tool, and I recommend running it on Windows. On Linux you cannot use its GUI for now, so there is no need to upload anything to the server.

cgliu2000 commented 1 year ago

Follow-up: How can you train a model on just the few images I provided and then claim the results are good based on that? Where does the sliding window show up? (There are no overlapping pixels when splitting.) And the necessary DA (data augmentation) does not seem to have been used?

TDIT-haha commented 1 year ago

@lugushaoye
1) The images were selected at random. Since the object distribution across the actual images is roughly the same, you can write a small demo to randomly sample the amount of data you need and then fix its annotations. Once you have trained a model, you can use it to pre-annotate the remaining data, fix those annotations, and train again; it is an iterative process.
2) Sliding window? Do you mean when the data is split? You can look at splitImg.py yourself to see how it crops the images (a rough sketch of this kind of overlapping split follows below).
3) By data augmentation, do you mean during model training? Look at the parameters in ./data/hyps/hyp.scratch-med.yaml and find how they are implemented in the training code. If you mean offline data augmentation, I do not think it is necessary; it can be done during training, which saves disk space.
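On point 2, a minimal sketch of an overlapping sliding-window split for reference. The actual splitImg.py may crop differently; the tile size, overlap, and example filename below are made-up values.

```python
# Minimal sliding-window tiling with overlap (illustrative only; the repo's
# splitImg.py may use different crop logic). Tile size and overlap are
# arbitrary example values.
import cv2

def split_image(path: str, tile: int = 1280, overlap: int = 256):
    img = cv2.imread(path)
    h, w = img.shape[:2]
    step = tile - overlap
    crops = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y2, x2 = min(y + tile, h), min(x + tile, w)
            # keep the top-left offset so predictions can be stitched back later
            crops.append(((x, y), img[y:y2, x:x2]))
    return crops

# e.g. crops = split_image("example/datasets/val/images/stone_0001.jpg")
```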