MarkusRosen / markusrosen.github.io


Instance_Image_Segmentation_for_Window_and_Building_Detection_with_detectron2/ #2

Open utterances-bot opened 3 years ago

utterances-bot commented 3 years ago

Detectron2 - How to use Instance Image Segmentation for Building Recognition - Python Tutorials for Machine Learning, Deep Learning and Data Visualization

This tutorial teaches you how to implement instance image segmentation with a real use case.

https://rosenfelder.ai/Instance_Image_Segmentation_for_Window_and_Building_Detection_with_detectron2/

morganaribeiro commented 3 years ago

Is it possible to perform a train_test_split before starting training on detectron2?

MarkusRosen commented 3 years ago

It is possible, but requires some additional work in the preprocessing and modelling. I will post an updated article at the beginning of 2021 to explain this in more detail. Until then, you could try this article: https://medium.com/@apofeniaco/training-on-detectron2-with-a-validation-set-and-plot-loss-on-it-to-avoid-overfitting-6449418fbf4e

morganaribeiro commented 3 years ago

Could you show me how to divide the data into training and test sets in a good proportion, like 70/30? I don't know how detectron2 handles this. Do you know?


MarkusRosen commented 3 years ago

Could you show me how to divide the data into training and test sets in a good proportion, like 70/30? I don't know how detectron2 handles this. Do you know?

Please read the link I posted. What you need to do, in short:

  • create two datasets, one for training, one for testing/validation
  • export each dataset in the COCO file format
  • register each dataset with detectron2 using

register_coco_instances("train", {}, "./train_coco.json", "./train")
register_coco_instances("val", {}, "./val_coco.json", "./val")

  • edit the CocoTrainer class:

class CocoTrainer(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        if output_folder is None:
            os.makedirs("coco_eval", exist_ok=True)
            output_folder = "coco_eval"
        return COCOEvaluator(dataset_name, cfg, False, output_folder)

    @classmethod
    def build_train_loader(cls, cfg):
        return build_detection_train_loader(
            cfg,
            mapper=DatasetMapper(
                cfg,
                is_train=True,
            ),
        )

  • specify the train and test sets in the config:

cfg.DATASETS.TRAIN = ("train",)
cfg.DATASETS.TEST = ("val",)
cfg.TEST.EVAL_PERIOD = 1000  # evaluate every 1000 steps

  • start the evaluation:

evaluator = COCOEvaluator("val", cfg, False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, "val")

As I have written, I will go into more detail in January when I have more free time. Good luck to you! 🍀
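For completeness, a minimal sketch of how such a trainer is then run (standard detectron2 usage, not tested here):

import os

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)  # detectron2 writes checkpoints and metrics here
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=False)  # start from the config's pretrained weights
trainer.train()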

morganaribeiro commented 3 years ago

@MarkusRosen Thanks. Could I contribute to your January tutorial? I have a personal dataset and I would like to evaluate it. I think I will only really understand how these metrics evolve by getting hands-on.

morganaribeiro commented 3 years ago

To avoid overfitting, should I just divide the COCO-format annotations into a training folder (70%) and a test folder (30%), as the cocosplit library (https://github.com/akarazniewicz/cocosplit) from GitHub does? Or would it be correct to divide them into training (60%), testing (20%), and validation (20%)? Note that I would use cocosplit after making the annotations in Labelme.


MarkusRosen commented 3 years ago

@MarkusRosen Thanks. Could I contribute to your January tutorial? I have a personal dataset and I would like to evaluate it. I think I will only really understand how these metrics evolve by getting hands-on.

Of course! If your data is already labeled and in the COCO format, you can send me a link at markus@rosenfelder.ai.

To avoid overfitting, should I just divide the COCO-format annotations into a training folder (70%) and a test folder (30%), as the cocosplit library (https://github.com/akarazniewicz/cocosplit) from GitHub does? Or would it be correct to divide them into training (60%), testing (20%), and validation (20%)? Note that I would use cocosplit after making the annotations in Labelme. Could you please give me some feedback from your experience? 😊

Both types of splits are fine, but depending on how little data you have labeled, I would tend to choose 80/20; this way you have a bit more data to train and learn from.
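If you prefer not to add a dependency, a minimal sketch of an 80/20 split of a single COCO annotation file (the file names here are just placeholders):

import json
import random

with open("annotations_coco.json") as f:
    coco = json.load(f)

random.seed(42)  # reproducible split
images = list(coco["images"])
random.shuffle(images)
cut = int(0.8 * len(images))

for out_name, imgs in [("train_coco.json", images[:cut]), ("val_coco.json", images[cut:])]:
    ids = {img["id"] for img in imgs}
    # keep only the annotations belonging to the images in this split
    subset = dict(coco, images=imgs,
                  annotations=[a for a in coco["annotations"] if a["image_id"] in ids])
    with open(out_name, "w") as f:
        json.dump(subset, f)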

morganaribeiro commented 3 years ago

I'm labeling a fish dataset that segments parts of the fish's body; it will be my undergraduate final project in Computer Science. In January I would like to stay in touch with you to create the tutorial on top of it, so it can serve as a learning experience for me and other people in the field. After finishing the labeling and converting it into COCO format, I will share it.


morganaribeiro commented 3 years ago

@MarkusRosen Good morning. Do you think that using images of different sizes increases segmentation accuracy, or do you think it is more practical to set a standard size (e.g. 600x600) for the entire dataset?


MarkusRosen commented 3 years ago

In a recent research project we used 500x500 pixel images and had quite good results, so I would try either 500x500 or, as you suggested, 600x600.
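A minimal sketch of batch-resizing with Pillow (the paths are placeholders); note that if images are resized after labeling, the polygon coordinates in the annotations must be scaled by the same factors:

import os
from PIL import Image

src, dst, size = "images", "images_500", (500, 500)
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    if name.lower().endswith((".jpg", ".jpeg", ".png")):
        img = Image.open(os.path.join(src, name))
        img.resize(size, Image.BILINEAR).save(os.path.join(dst, name))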

morganaribeiro commented 3 years ago

Which script did you use to make this size adjustment? Could you please share it with me?


morganaribeiro commented 3 years ago

Could you tell me how you analyze the evolution of detection performance using the graphs generated in TensorBoard? Do you have any commented examples of the graphs generated when using detectron2?


morganaribeiro commented 3 years ago

from detectron2.data import detection_utils as utils
import detectron2.data.transforms as T
import copy
import torch  # needed for torch.as_tensor below

def custom_mapper(dataset_dict):
    dataset_dict = copy.deepcopy(dataset_dict)  # it will be modified by code below
    image = utils.read_image(dataset_dict["file_name"], format="BGR")
    transform_list = [
        T.Resize((800, 600)),
        T.RandomBrightness(0.8, 1.8),
        T.RandomContrast(0.6, 1.3),
        T.RandomSaturation(0.8, 1.4),
        T.RandomRotation(angle=[90, 90]),
        T.RandomLighting(0.7),
        T.RandomFlip(prob=0.4, horizontal=False, vertical=True),
    ]
    image, transforms = T.apply_transform_gens(transform_list, image)
    dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))

    annos = [
        utils.transform_instance_annotations(obj, transforms, image.shape[:2])
        for obj in dataset_dict.pop("annotations")
        if obj.get("iscrowd", 0) == 0
    ]
    instances = utils.annotations_to_instances(annos, image.shape[:2])
    dataset_dict["instances"] = utils.filter_empty_instances(instances)
    return dataset_dict
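For context, a sketch of how such a mapper is usually wired into training (standard detectron2 pattern; the trainer name here is made up):

from detectron2.data import build_detection_train_loader
from detectron2.engine import DefaultTrainer

class AugmentedTrainer(DefaultTrainer):  # hypothetical name
    @classmethod
    def build_train_loader(cls, cfg):
        # use the custom_mapper above instead of the default DatasetMapper
        return build_detection_train_loader(cfg, mapper=custom_mapper)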


morganaribeiro commented 3 years ago

I tried to "increase the data" with the custom mapper class and I would like to know when I randomly select some samples from the "dataset_test" do not segment the images?

Morgana Oliveira morganfrime2017@gmail.com escreveu no dia sexta, 18/12/2020 à(s) 11:10:

  • Do you know if applying this here to "increase data" is after making an annotation on all the dataset images in the Labelme tool?

from detectron2.data import detection_utils as utilsimport detectron2.data.transforms as Timport copy def custom_mapper(dataset_dict): dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below image = utils.read_image(dataset_dict["file_name"], format="BGR") transform_list = [ T.Resize((800,600)), T.RandomBrightness(0.8, 1.8), T.RandomContrast(0.6, 1.3), T.RandomSaturation(0.8, 1.4), T.RandomRotation(angle=[90, 90]), T.RandomLighting(0.7), T.RandomFlip(prob=0.4, horizontal=False, vertical=True), ] image, transforms = T.apply_transform_gens(transform_list, image) dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))

annos = [
    utils.transform_instance_annotations(obj, transforms, image.shape[:2])
    for obj in dataset_dict.pop("annotations")
    if obj.get("iscrowd", 0) == 0
]
instances = utils.annotations_to_instances(annos, image.shape[:2])
dataset_dict["instances"] = utils.filter_empty_instances(instances)
return dataset_dict

Morgana Oliveira morganfrime2017@gmail.com escreveu no dia sexta, 18/12/2020 à(s) 10:38:

Could you tell me how you analyze the evolution of detection using the graphics generated on the tensorboard? Would you have any example commented on the graphics generated using detectron2?

Morgana Oliveira morganfrime2017@gmail.com escreveu no dia sexta, 18/12/2020 à(s) 10:35:

Which script did you use to make this size adjustment, could you please provide me?

Markus Rosenfelder notifications@github.com escreveu no dia sexta, 18/12/2020 à(s) 09:53:

In a recent research project we used 500x500 pixel images and had quite good results, therefore I would try either 500x500 or as you stated 600x600.

— You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/MarkusRosen/markusrosen.github.io/issues/2#issuecomment-748068729, or unsubscribe https://github.com/notifications/unsubscribe-auth/AK3B5AVXS6TQUL7KICR565LSVNGDBANCNFSM4UQ74HQQ .

26tanishabanik commented 3 years ago

I want to use detectron2 in production. Right now I am using it in Colab; how can I use it in real time, given that after restarting the runtime everything is deleted and I need to register the dataset again?

MarkusRosen commented 3 years ago

I want to use detectron2 in production. Right now I am using it in Colab; how can I use it in real time, given that after restarting the runtime everything is deleted and I need to register the dataset again?

This tutorial from Paperspace should help you with deploying your trained model to production: https://blog.paperspace.com/object-detection-segmentation-with-detectron2-on-paperspace-gradient/

MarkusRosen commented 3 years ago

Thank you for your response sir, but I just wanted to know how to save the weights in pickle, and also the configuration files, and later use them for inference; I am deploying it on AWS. Kindly guide me, sir.

That's pretty much exactly what is explained in the Paperspace tutorial; please read it. I cannot do all the programming and work for you; you need to do some research yourself. This was just a short introduction to detectron2.
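For reference, a minimal sketch of persisting a trained detectron2 model and reloading it for inference (cfg.dump() serializes the config to YAML; the file paths are placeholders):

import os
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# after training: save the config next to the trained weights
with open(os.path.join(cfg.OUTPUT_DIR, "config.yaml"), "w") as f:
    f.write(cfg.dump())

# later, in a fresh environment: rebuild the config and predictor
cfg = get_cfg()
cfg.merge_from_file("./output/config.yaml")
cfg.MODEL.WEIGHTS = "./output/model_final.pth"
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # confidence threshold for predictions
predictor = DefaultPredictor(cfg)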

MLDeep414 commented 2 years ago

@morganOliveira2018 https://github.com/MarkusRosen/markusrosen.github.io/issues/2#issuecomment-748208664

from detectron2.utils.visualizer import ColorMode

dataset_dicts = DatasetCatalog.get('dataset_test')
for d in random.sample(dataset_dicts, 3):
    im = cv2.imread(d["file_name"])
    outputs = predictor(im)  # format is documented at https://detectron2.readthedocs.io/tutorials/models.html#model-output-format
    v = Visualizer(
        im[:, :, ::-1],
        metadata=fish_metadata,
        scale=0.5,
        instance_mode=ColorMode.IMAGE_BW,  # remove the colors of unsegmented pixels; only available for segmentation models
    )
    out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
    cv2_imshow(out.get_image()[:, :, ::-1])

I think you need to add this: fish_metadata = MetadataCatalog.get('dataset_test')
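For context, a minimal sketch of the assumed setup (both lookups require that the dataset was registered beforehand, e.g. with register_coco_instances):

from detectron2.data import DatasetCatalog, MetadataCatalog

fish_metadata = MetadataCatalog.get('dataset_test')  # metadata for the registered dataset
dataset_dicts = DatasetCatalog.get('dataset_test')   # the list of per-image records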

ShuxiLin commented 2 years ago

Hi, thanks for sharing, it really helped me get through my final! May I ask why, in the line "poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]", x and y need to add 0.5?

MarkusRosen commented 2 years ago

Hi, thanks for sharing, it really helped me get through my final! May I ask why, in the line "poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]", x and y need to add 0.5?

Honestly, I don't really remember 🙈 As far as I can see looking at the code now, it should work fine without this line. Maybe try it once with and then without and compare the results.

schauko commented 2 years ago

Hey, thanks for your detailed explanations! One question though: in the postprocessing of the data you are using the validated JSON object (with open("/content/val/via_region_data.json") as f), but that never gets generated in your code, right? That means one needs to create it via the annotation app? Or where does it come from? How would one implement this using the actually generated data?

drswapnil commented 1 year ago

I am trying to train a custom model using detectron2. The code works fine for polygons, but I need to identify circles. I have annotated the images using labelme, and the JSON shows:

"points": [
    [819.3766233766233, 302.5064935064935],
    [662.2337662337662, 123.2857142857143]
],
"group_id": null,
"shape_type": "circle",
"flags": {}

I am a noob and I don't know how to indicate the center of the circle and the radius. I need to pass this to the detectron2 script, but I am getting an error here:

def get_data_dicts(directory, classes):
    dataset_dicts = []
    for filename in [file for file in os.listdir(directory) if file.endswith('.json')]:
        json_file = os.path.join(directory, filename)
        with open(json_file) as f:
            img_anns = json.load(f)

        record = {}
        filename = os.path.join(directory, img_anns["imagePath"])
        record["file_name"] = filename
        record["height"] = 900
        record["width"] = 1440

        annos = img_anns["shapes"]
        objs = []
        for anno in annos:
            px = [a[0] for a in anno['points']]  # x coord
            py = [a[1] for a in anno['points']]  # y coord
            poly = [(x, y) for x, y in zip(px, py)]  # poly for segmentation
            poly = [p for x in poly for p in x]
            obj = {
                "bbox": [np.min(px), np.min(py), np.max(px), np.max(py)],
                "bbox_mode": BoxMode.XYXY_ABS,
                "segmentation": [poly],
                "category_id": classes.index(anno['label']),
                "iscrowd": 0
            }
            objs.append(obj)
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts

I assume there will be changes needed here:

for anno in annos:
    px = [a[0] for a in anno['points']]  # x coord
    py = [a[1] for a in anno['points']]  # y coord
    poly = [(x, y) for x, y in zip(px, py)]  # poly for segmentation
    poly = [p for x in poly for p in x]

But for the life of me, I can't figure out how to go about it. I tried multiple avenues to find out about circle detection with detectron2, but was unable to get anywhere. Any help would be deeply appreciated. Thanks in advance.

MarkusRosen commented 1 year ago

I am trying to train a custom model using detectron2. The code works fine for polygons, but I need to identify circles. I have annotated the images using labelme, and the JSON shows:

"points": [[819.3766233766233, 302.5064935064935], [662.2337662337662, 123.2857142857143]], "group_id": null, "shape_type": "circle", "flags": {}

But for the life of me, I can't figure out how to go about it. I tried multiple avenues to find out about circle detection with detectron2, but was unable to get anywhere. Any help would be deeply appreciated. Thanks in advance.

I don't have any experience using labelme. You would first need to find out what these two points represent within the circle. If they are the center and a random point on the circle itself, I would first calculate the euclidean distance to get the radius and then use shapely to create a polygon that roughly approximates the circle:

import numpy as np
from shapely.geometry import Point

p1 = [819.3766233766233, 302.5064935064935]
p2 = [662.2337662337662, 123.2857142857143]
radius = np.linalg.norm(np.array(p1) - np.array(p2))  # euclidean distance between the two points
circle = Point(p1).buffer(radius)  # create a shapely circle (a polygon approximation)
polygon_circle = list(circle.exterior.coords)  # get the coordinates of the circle polygon
I have not tested the code above, but it should work approximately like this.

References for code above: https://stackoverflow.com/questions/1401712/how-can-the-euclidean-distance-be-calculated-with-numpy https://stackoverflow.com/questions/13105915/draw-an-ellipse-using-shapely

drswapnil commented 1 year ago

Thanks a million Markus.

I tried this, and it's giving me the radius as well as the circle on standalone images. What I am struggling with is how to put it into the detectron2 training function:

def get_data_dicts(directory, classes):
    dataset_dicts = []
    for filename in [file for file in os.listdir(directory) if file.endswith('.json')]:
        json_file = os.path.join(directory, filename)
        with open(json_file) as f:
            img_anns = json.load(f)

        record = {}
        filename = os.path.join(directory, img_anns["imagePath"])
        record["file_name"] = filename
        record["height"] = 900
        record["width"] = 1440

        annos = img_anns["shapes"]
        objs = []

        px1 = [a[0] for a in annos['points']]  # x1 coord
        py1 = [a[1] for a in annos['points']]  # y1 coord
        px2 = [a[2] for a in annos['points']]  # x2 coord
        py2 = [a[3] for a in annos['points']]  # y2 coord

        # calculate the distance between the two points as radius
        radius = np.sqrt((px2 - px1)**2 + (py2 - py1)**2)
        # center of the circle is px1,py1 and radius is radius

        # for anno in annos:
        #     px = [a[0] for a in anno['points']]  # x coord
        #     py = [a[1] for a in anno['points']]  # y coord
        #     poly = [(x, y) for x, y in zip(px, py)]  # poly for segmentation
        #     poly = [p for x in poly for p in x]
        circle = plt.Circle((px1, py1), radius)
        obj = {
            "bbox": [np.min(px1), np.min(py1), np.max(px2), np.max(py2)],
            "bbox_mode": BoxMode.XYXY_ABS,
            "segmentation": [circle],
            "category_id": classes.index(annos['label']),
            "iscrowd": 0
        }
        objs.append(obj)
        record["annotations"] = objs
        dataset_dicts.append(record)
    return dataset_dicts

I have taken the liberty to attach both the detectron2 file and a JSON file for your consideration. Kindly let me know where I am going wrong.

Thanks and regards

Dr. Swapnil Kothari


MarkusRosen commented 1 year ago

Thanks a million Markus. I tried this, and it's giving me the radius as well as the circle on standalone images. What I am struggling with is how to put it into the detectron2 training function. I have taken the liberty to attach both the detectron2 file and a JSON file for your consideration. Kindly let me know where I am going wrong.

Do you get any error message? You are using plt.Circle((px1, py1), radius), which I suspect is a function from matplotlib. I don't know if matplotlib returns an approximated list of points on the circle. Your circle object needs to be a list containing x and y values; this is the reason I suggested using shapely in https://github.com/MarkusRosen/markusrosen.github.io/issues/2#issuecomment-1208193532
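To make that concrete, a minimal sketch of turning a labelme circle into the flat [x1, y1, x2, y2, ...] list that detectron2 expects in "segmentation" (assuming the first point is the center and the second lies on the circle; untested):

import numpy as np
from shapely.geometry import Point

def circle_to_segmentation(p1, p2):
    # radius = euclidean distance between center p1 and rim point p2
    radius = np.linalg.norm(np.array(p1) - np.array(p2))
    circle = Point(p1).buffer(radius)  # shapely approximates the circle as a polygon
    # flatten [(x1, y1), (x2, y2), ...] into [x1, y1, x2, y2, ...]
    return [float(c) for xy in circle.exterior.coords for c in xy]

# usage inside get_data_dicts, replacing the plt.Circle object:
# obj["segmentation"] = [circle_to_segmentation(anno["points"][0], anno["points"][1])]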