CASIA-IVA-Lab / FastSAM

Fast Segment Anything
GNU Affero General Public License v3.0

about network: FastSAM is only to train a YOLOV8-seg? #206

Open suliangxu opened 11 months ago

suliangxu commented 11 months ago

When I trained FastSAM on the coco128-seg dataset, I found that the only part that needed training was the YOLOv8-seg model. So is FastSAM just a trained YOLOv8-seg with prompting operations added on top?

diya-he commented 10 months ago

Yeah, I have the same doubt in this regard. The text_prompt only loads the 'ViT-B/32' weights and doesn't do any fine-tuning, so I guess FastSAM.pt is just yolov8-seg.pt. I did try to train yolov8-seg on my own dataset, but when I use the result in FastSAM I run into a problem. There is no complete tutorial for developers on how to train a FastSAM model.

diya-he commented 10 months ago

(screenshot)

suliangxu commented 10 months ago

You can get the training code at this link: https://github.com/CASIA-IVA-Lab/FastSAM/releases/tag/v0.0.2

diya-he commented 10 months ago

I used it to train my model, but when I use the result in the FastSAM code, it always throws an error.

diya-he commented 10 months ago

I solved my problem. The cfg folder needs to be copied into the ultralytics install path.
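
For anyone hitting the same thing, below is a minimal sketch of what that copy step can look like. The cfg location (ultralytics/yolo/cfg, as used by the ultralytics versions of that era) and the source path of the release's cfg folder are assumptions; adjust them to your setup.

# Sketch only: copy the training release's cfg folder next to the installed
# ultralytics package. The exact paths below are assumptions.
import shutil
from pathlib import Path

import ultralytics

site_pkg = Path(ultralytics.__file__).parent      # e.g. .../site-packages/ultralytics
repo_cfg = Path("./ultralytics/yolo/cfg")         # cfg folder from the v0.0.2 training release (assumed location)
shutil.copytree(repo_cfg, site_pkg / "yolo" / "cfg", dirs_exist_ok=True)
print("copied cfg to", site_pkg / "yolo" / "cfg")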

suliangxu commented 10 months ago

I can successfully train the model with the above-mentioned training code. The contents of train_sa.py are:

from ultralytics import YOLO
model = YOLO(model="./FastSAM.pt")   # the path of the pretrained model

model.train(data="coco.yaml",
            epochs=100,
            batch=2,
            imgsz=1024,
            overlap_mask=False,
            save=True,
            save_period=5,
            device='4',
            project='fastsam',
            name='test', 
            val=False,)

Then, by running this file, we can successfully train the model.

( If you have any more detailed questions, you can contact me at suliangxu@nuaa.edu.cn.)

joshua-atolagbe commented 10 months ago

Hello @suliangxu,

Thank you for opening this issue and for your useful comments. Using the released training and validation code, I have been trying to train FastSAM on my custom dataset for instance segmentation (with 6 classes). I structured my dataset following coco8-seg, but I am getting this error:

(screenshot of the error)

It seems the problem is with the augmentation, so I explicitly disabled augmentation by setting augment=False, but I still get the same error.

Here's my train_sa.py:

from ultralytics import YOLO

model = YOLO(model="FastSAM-s.pt")
model.train(
    data="sa.yaml",
    task='segment',
    epochs=3,
    augment=False,
    batch=8,
    imgsz=255,
    overlap_mask=False,
    save=True,
    save_period=5,
    project="fastsam",
    name="test",
    val=False,
)

I would appreciate your help in fixing this. Thank you!

suliangxu commented 10 months ago

(quoting @joshua-atolagbe's comment above)

It seems this error occurs because of img.shape[2]. Here is the debug info from training the coco128-seg dataset; I hope it is helpful for you. (screenshot)

joshua-atolagbe commented 10 months ago

(quoting @suliangxu's reply above)

Thank you for your response @suliangxu. I am working with grayscale images (single-channel, with shape [h, w] rather than [h, w, c]). I had originally saved the images, which are floating-point arrays, as .tif files. Is there any way around this? The error message seems to come from the shape handling in the Mosaic class; I tried setting p=0.0 to disable the mosaic augmentation, but it is not working. I would appreciate your help. Thanks!

suliangxu commented 10 months ago

(quoting @joshua-atolagbe's reply above)

I changed the hyper-parameter mosaic: 0.0 # (float) image mosaic (probability) in yolo/cfg/default.yaml, and successfully disabled the mosaic augmentation. Maybe you can try it!
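
Depending on the ultralytics version, the same effect can also be achieved without editing default.yaml, by passing the hyper-parameter as a training override. A minimal sketch (the dataset config and checkpoint names are placeholders):

from ultralytics import YOLO

model = YOLO(model="FastSAM-s.pt")          # placeholder checkpoint
model.train(
    data="coco128-seg.yaml",                # placeholder dataset config
    epochs=3,
    imgsz=1024,
    mosaic=0.0,                             # override the mosaic probability directly
    overlap_mask=False,
)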

joshua-atolagbe commented 10 months ago

(quoting @suliangxu's reply above)

Thank you @suliangxu. That solved the augmentation issue, but another error popped up. I think the issue still revolves around the number of color channels. Is it that FastSAM can't be trained on single-channel images? Any help?

(screenshot of the error)

joshua-atolagbe commented 10 months ago

@suliangxu, thank you for your help. I have already fixed the issue and have been able to train the model on my custom dataset. I had to convert my grayscale images to three-channel images; it seems the training and validation code was designed to work exclusively with BGR images.
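
For reference, a conversion along these lines is one way to do that: replicate the single channel three times and rescale the float values to 8-bit. A minimal sketch with OpenCV, assuming float .tif inputs (the file names and the min-max scaling are assumptions):

# Sketch: turn a single-channel float .tif into the 8-bit, 3-channel BGR image
# the training pipeline expects. Paths and the scaling choice are assumptions.
import cv2
import numpy as np

img = cv2.imread("sample.tif", cv2.IMREAD_UNCHANGED)                      # float array, shape (h, w)
img = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)  # rescale to 8-bit
img_bgr = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)                           # shape (h, w, 3)
cv2.imwrite("sample.jpg", img_bgr)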

joshua-atolagbe commented 10 months ago

When I trained FastSAM on coco128-seg dataset, I found that the part that needed training was the YOLOV8-seg model. So FastSAM is only to train a YOLOV8-seg and then adding prompting operations to it?

@glenn-jocher can you please clarify this?

BirATMAN commented 9 months ago

Hey, I'm training the yolov8-seg model as provided in the latest released training code. Has anyone trained FastSAM or added the prompt-selection part? Please share the code if possible.

joshua-atolagbe commented 9 months ago

@BirATMAN Yes, I have. You need to read the paper to understand how FastSAM works. Then download a model checkpoint (which is basically a YOLOv8). Follow the training and validation code provided in the repo to fine-tune the model on your custom dataset (all-instance prediction). Then use the FastSAM prompt for post-processing (prompt-guided prediction). Check Inference.py in the repo for how to do this. Very easy. Hope this helps.
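
Roughly, that prompt step looks like the sketch below (the paths are placeholders, and argument names may differ slightly between repo versions, so check Inference.py in your checkout):

from fastsam import FastSAM, FastSAMPrompt

model = FastSAM("./weights/FastSAM.pt")      # your fine-tuned checkpoint
IMAGE = "./images/example.jpg"               # placeholder image path
DEVICE = "cpu"

# Stage 1: all-instance segmentation with the YOLOv8-seg model
everything_results = model(IMAGE, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

# Stage 2: prompt-guided selection of the predicted masks
prompt_process = FastSAMPrompt(IMAGE, everything_results, device=DEVICE)
ann = prompt_process.everything_prompt()                         # keep everything
# ann = prompt_process.box_prompt(bbox=[200, 200, 300, 300])     # or a box prompt
# ann = prompt_process.text_prompt(text="a photo of a dog")      # or a text prompt
# ann = prompt_process.point_prompt(points=[[620, 360]], pointlabel=[1])
prompt_process.plot(annotations=ann, output_path="./output/example.jpg")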

joshua-atolagbe commented 9 months ago

@suliangxu were you able to compute the IoU score as part of the evaluation metrics? @glenn-jocher keeps saying that val mode in YOLOv8 automatically calculates and reports IoU metrics during model evaluation, but that's not true. Any help?
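
In case it helps frame the question: the metric being asked about is just intersection over union between a predicted and a ground-truth mask. A minimal sketch with NumPy (this is not part of the released validation code):

import numpy as np

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, gt).sum()) / float(union)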

MMa321 commented 9 months ago

(quoting @suliangxu's training instructions and train_sa.py above)

Hello, how do I set the dataset path for running train.py with the coco data config?
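
The data= argument points at a dataset YAML. A minimal sketch of what such a file usually contains (field names follow the standard Ultralytics dataset YAML; all paths and class names are placeholders):

# dataset config (illustrative), referenced as data="coco.yaml" or data="sa.yaml"
path: /path/to/dataset      # dataset root
train: images/train         # training images, relative to path
val: images/val             # validation images, relative to path
names:
  0: object                 # class index -> class name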

pcycccccc commented 3 months ago

Hi @joshua-atolagbe! I still have some confusion regarding this. I would like to know whether the segmentation performance of FastSAM relies on the YOLOv8-seg model. Is the function of FastSAM's second stage merely to select and output a specific part of the content based on the text prompts? If that's the case, can it be understood that the segmentation accuracy of FastSAM is consistent with that of YOLOv8-seg?

YA12SHYAM commented 2 months ago

Hi, can anyone clarify the annotation format? This page https://docs.ultralytics.com/datasets/segment/#ultralytics-yolo-format says a typical row in the .txt label files is

"(class-index) (segmentation points)"

while some other internet sources mention "(class-index) (bbox points) (segmentation points)".

Should the bbox points be included or not?

zgf1739 commented 2 months ago

(quoting @diya-he's comment above)

Hello, I trained a best.pt file myself, but when I run FastSAM segmentation with it, no results come out: there is no information such as the per-image processing speed, and no output after processing. Have you encountered this problem? Putting the cfg folder into the ultralytics install path did not solve it. Thank you for answering.

joshua-atolagbe commented 2 months ago

(quoting @pcycccccc's question above)

@pcycccccc, yes, you're correct.
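
To illustrate why the accuracy is tied to the YOLOv8-seg stage: the prompt stage only filters the masks that stage already produced. The gist for a point prompt, as I understand it (a conceptual sketch, not the repo's exact code):

import numpy as np

def select_masks_by_point(masks: np.ndarray, point_xy, label=1):
    """masks: (N, H, W) boolean all-instance masks from the YOLOv8-seg stage.
    A foreground point (label=1) keeps the masks containing the point;
    a background point (label=0) keeps the ones that do not."""
    x, y = point_xy
    hits = masks[:, y, x].astype(bool)
    return masks[hits] if label == 1 else masks[~hits]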

joshua-atolagbe commented 2 months ago

@YA12SHYAM. It's

(class-index) (segmentation points) for segmentation

(class-index) (bbox points) for detection
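
For example, label .txt rows look like this (class index first, all coordinates normalised to [0, 1]; the numbers are made up):

# segmentation: <class-index> x1 y1 x2 y2 ... xn yn
0 0.62 0.41 0.65 0.43 0.66 0.48 0.61 0.50
# detection: <class-index> x-center y-center width height
0 0.63 0.45 0.05 0.09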

joshua-atolagbe commented 2 months ago

@zgf1739 , what version of ultralytics are you using?

zgf1739 commented 2 months ago

(quoting: @zgf1739, what version of ultralytics are you using?)

For the code used for training in FastSAM: is the problem related to the version, i.e. does the specific ultralytics version need to be consistent with the one used by the train and validate code?

zgf1739 commented 2 months ago

(quoting: @zgf1739, what version of ultralytics are you using?)

I use the same ultralytics version as the train and validate code.

zgf1739 commented 2 months ago

(quoting: what version of Ultralytics are you using?)

Successfully uninstalled ultralytics-8.1.34

zgf1739 commented 2 months ago

To fix this, the ultralytics version should be 8.0.120, as specified in setup.py. Hope it helps you guys, thanks.
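
A quick way to confirm which version the environment actually picked up (8.0.120 being the version reported to work above):

import ultralytics
print(ultralytics.__version__)   # expected: 8.0.120 per the comment above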

lydiapan-lotes commented 3 weeks ago

(quoting @zgf1739's comment above)

Hello, I have confirmed that both the ultralytics version I am using and the version in setup.py are set to 8.0.120, but the output is still blank. Do you have any suggestions, or could I be running into some other problem?

(screenshot)