Closed: jpainam closed this issue 3 years ago
Hi,
So, the problem is happening with the maskrcnn library that you are using, right?
Could you tell me which implementation of MaskRcnn you are using so I can try to reproduce the error? Could you also send me the augmentations that you applied to your dataset?
Best
Hi, these are the augmentations I applied to my dataset:
import matplotlib.pyplot as plt
from clodsa.augmentors.augmentorFactory import createAugmentor
from clodsa.transformers.transformerFactory import transformerGenerator
from clodsa.techniques.techniqueFactory import createTechnique
import cv2

PROBLEM = "instance_segmentation"
ANNOTATION_MODE = "coco"
INPUT_PATH = "/home/eldad/data/VOCdevkit/VOC2007/JPEGImages/"
GENERATION_MODE = "linear"
OUTPUT_MODE = "coco"
OUTPUT_PATH = "/home/eldad/data/maskrcnn_train_augmented/"

augmentor = createAugmentor(PROBLEM, ANNOTATION_MODE, OUTPUT_MODE,
                            GENERATION_MODE, INPUT_PATH,
                            {"outputPath": OUTPUT_PATH})

transformer = transformerGenerator(PROBLEM)

# Rotations by 90 and 180 degrees
for angle in [90, 180]:
    rotate = createTechnique("rotate", {"angle": angle})
    augmentor.addTransformer(transformer(rotate))

# Horizontal flip
flip = createTechnique("flip", {"flip": 1})
augmentor.addTransformer(transformer(flip))

# Keep the original images as well
none = createTechnique("none", {})
augmentor.addTransformer(transformer(none))

augmentor.applyAugmentation()
I used maskscoring, which is built on top of maskrcnn. The core of maskscoring is still maskrcnn, as you can see in the traceback. Here is the repository:
https://github.com/zjhuang22/maskscoring_rcnn
Thanks
I have never used that library. Could you provide me with a minimal example so I can try it? Best, Jónathan
Hi, it's a little bit difficult to provide you with a minimal example. We have generated more than 4000 images using your tool and we don't really know which image is causing this problem.
Since rotation and flipping change the width and height of the image, I would like to know if your tool also rotates or flips the annotation values?
Thank you. Meanwhile, we will try our best to provide you with a minimal example.
Hi, yes, the tool rotates and flips the annotation values to obtain the correct annotations. I will look at how that library works so I can try to reproduce the problem. Best, Jónathan
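If it helps to picture what rotating or flipping annotation values means, here is a minimal sketch of the coordinate remapping for a 90-degree rotation and a horizontal flip. This is my own illustration with an assumed pixel-index convention (matching numpy.rot90), not CLoDSA's actual code:

```python
def rotate90_ccw(x, y, w, h):
    """Map pixel (x, y) of a w x h image to its position after a
    90-degree counter-clockwise rotation; the rotated image is h x w,
    so the recorded width/height must be swapped as well."""
    return y, w - 1 - x

def hflip(x, y, w, h):
    """Map pixel (x, y) under a horizontal flip; the size is unchanged."""
    return w - 1 - x, y

# In a 4x3 image, the top-left corner (0, 0) moves to the bottom of
# column 0 after a 90-degree counter-clockwise rotation.
print(rotate90_ccw(0, 0, 4, 3))  # (0, 3)
print(hflip(0, 0, 4, 3))         # (3, 0)
```

Every x/y pair in a COCO polygon would be passed through such a mapping; if a tool forgot to swap width and height for 90-degree rotations, the annotations would land outside the image.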
@joheras I've been following this.
Can you please briefly explain how this augmentation process (for instance segmentation) handles images in a dataset without annotations (no ground truth)? What is your advice on augmenting images without ground truth (negative samples)?
Could it be that negative coordinates are generated during the augmentation because of some faulty files?
@joheras @Eldad27 I found out that, although I have images without annotations (i.e., whose segmentation field is [[]]), those images do not appear in the new json file or in the output folder. I guess CLoDSA removes them before creating the new json file.
Empty segmentation fields are also represented as [[]]; I didn't find a single one in my annotation file.
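For reference, a quick scan for such empty segmentation fields in a COCO json could look like this. It is a minimal sketch (the function name is mine, and it only handles polygon-style segmentations, not RLE):

```python
import json

def find_empty_segmentations(coco):
    """Return the ids of annotations whose polygon list is empty, e.g. [[]]."""
    return [
        ann["id"]
        for ann in coco.get("annotations", [])
        if not ann.get("segmentation")
        or all(len(poly) == 0 for poly in ann["segmentation"])
    ]

# In practice: coco = json.load(open("annotations.json"))
coco = {
    "annotations": [
        {"id": 1, "segmentation": [[]]},
        {"id": 2, "segmentation": [[10, 10, 20, 10, 20, 20]]},
    ]
}
print(find_empty_segmentations(coco))  # [1]
```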
Ok, I see. I didn't consider empty annotations. I will release a new version, hopefully tomorrow, solving that issue. I'll keep you posted. Best, Jónathan
Hi, I am trying to understand the problem. I have checked that when the json file contains an image but that image has no annotation (that is, when no reference to the image appears in the list associated with the "annotations" key of the json file), CLoDSA augments those images correctly. However, you indicated that the segmentation field contains [[]], and that does not seem to follow the COCO convention. Am I missing something? Could you send me an example of your annotation file so I can see what is happening? Best, Jónathan
Hi, I found the problem. It is related to maskrcnn, so I'm going to close this issue.
When bbox is empty, it raises this error. I fixed it by removing all images without a valid bbox, as shown in this issue:
https://github.com/facebookresearch/maskrcnn-benchmark/issues/31
This is what I did to make the training work:
# data/datasets/coco.py
ids_to_remove = []
for img_id in self.ids:
    ann_ids = self.coco.getAnnIds(imgIds=img_id)
    anno = self.coco.loadAnns(ann_ids)
    # Mark the image if every non-crowd annotation has a degenerate bbox,
    # i.e. a [x, y, w, h] box whose width or height is <= 1 pixel.
    # (all() over an empty list is True, so images with no annotations
    # at all are removed as well.)
    if all(
        any(o <= 1 for o in obj['bbox'][2:])
        for obj in anno
        if obj['iscrowd'] == 0
    ):
        ids_to_remove.append(img_id)
self.ids = [img_id for img_id in self.ids if img_id not in ids_to_remove]
Maybe it is also related to the way CLoDSA deals with empty bboxes. You could check that too.
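The same filter can also be run over a COCO json before training, to see whether an augmented dataset contains such images at all. This is a sketch with hypothetical names, mirroring the logic of the fix in coco.py above:

```python
def images_with_only_degenerate_boxes(coco, min_size=1):
    """Image ids where every non-crowd annotation has a bbox whose
    width or height is <= min_size ([x, y, w, h] convention).
    Images with no annotations at all are flagged as well."""
    anns_by_image = {}
    for ann in coco.get("annotations", []):
        anns_by_image.setdefault(ann["image_id"], []).append(ann)
    bad = []
    for image in coco.get("images", []):
        anns = [a for a in anns_by_image.get(image["id"], [])
                if a.get("iscrowd", 0) == 0]
        if all(any(s <= min_size for s in a["bbox"][2:]) for a in anns):
            bad.append(image["id"])
    return bad

coco = {
    "images": [{"id": 1}, {"id": 2}],
    "annotations": [
        {"image_id": 1, "bbox": [5, 5, 0, 12], "iscrowd": 0},   # zero width
        {"image_id": 2, "bbox": [5, 5, 30, 40], "iscrowd": 0},  # fine
    ],
}
print(images_with_only_degenerate_boxes(coco))  # [1]
```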
Problem with Instance Segmentation
Hi, I trained a maskrcnn model without your augmented data/annotations and it worked well. Using your augmented data folder and annotation file, I got this error.
So I guess the error comes from the fact that, after a rotation, the shape of the images no longer matches the annotations. I used this tutorial: https://colab.research.google.com/github/joheras/CLoDSA/blob/master/notebooks/CLODSA_Instance_Segmentation.ipynb
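One way to test that guess is to compare the width/height recorded in the json against the real sizes of the images on disk, since a 90-degree rotation swaps the two. This is a self-contained sketch; in practice actual_sizes would come from opening each image (e.g. with PIL), and the function name is mine:

```python
def find_size_mismatches(coco, actual_sizes):
    """Return file_names whose recorded (width, height) differ from the
    real image size; entries written for the unrotated image would show
    up here after a 90-degree rotation swaps width and height."""
    return [
        img["file_name"]
        for img in coco.get("images", [])
        if img["file_name"] in actual_sizes
        and (img["width"], img["height"]) != actual_sizes[img["file_name"]]
    ]

coco = {"images": [
    {"file_name": "a_90.jpg", "width": 640, "height": 480},   # stale: not swapped
    {"file_name": "b_180.jpg", "width": 640, "height": 480},  # 180 keeps the size
]}
actual = {"a_90.jpg": (480, 640), "b_180.jpg": (640, 480)}
print(find_size_mismatches(coco, actual))  # ['a_90.jpg']
```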