omerbrandis opened this issue 3 years ago
Hi,
I increased top_k to 20 (there are 14 annotated objects in the training data). I now see more info, but it's still very far off from the original.
10k steps: [image]
50k steps: [image]
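(For reference, per the flags shown in the YOLACT README, top_k is raised at eval time like this; the weights path is a placeholder:)

python eval.py --trained_model=weights/ffr_base.pth --top_k=20 --image=img2784.jpg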
Can you check if your annotations are valid?
Hello sdimantsd,
Can you please suggest a method for doing that? (How do I check whether the annotations are valid?)
Thanks, Omer.
You can draw them on the image and inspect them visually. I don't know of a tool for doing it, so you can build your own or google for one.
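(For example, a minimal sketch assuming a standard COCO-format JSON and pycocotools; the annotation path is a placeholder:)

# Overlay the COCO annotations on the image for a visual check.
import matplotlib.pyplot as plt
from pycocotools.coco import COCO
from skimage import io

coco = COCO('instances_train.json')               # placeholder path
img_info = coco.loadImgs(coco.getImgIds())[0]     # first image in the file
image = io.imread(img_info['file_name'])          # may need a directory prefix

plt.imshow(image)
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_info['id']))
coco.showAnns(anns)                               # draws the polygons/masks
plt.show()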
Hello sdimantsd,
I've created my annotations using coco-annotator. I can't say with 100% certainty that it doesn't have any bugs, but it's not very likely that my use case is a fringe one that has never been tested.
You mentioned "testing the validity of the annotations"; can you please be more specific as to what should be checked?
(Also, isn't it the responsibility of the software that receives the input (i.e. YOLACT) to check the input's validity?)
Thanks, Omer.
What I meant was for you to check that the polygons are correct. It's true that YOLACT should validate this itself, but I don't know whether it carries out such a check or not.
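(As a concrete starting point, a rough sketch of mechanical checks one could run on a COCO-format file; it assumes polygon-style segmentations rather than RLE, and the path is a placeholder:)

# Rough sanity checks for COCO polygon annotations (not an official validator).
import json

data = json.load(open('instances_train.json'))    # placeholder path
sizes = {im['id']: (im['width'], im['height']) for im in data['images']}

for ann in data['annotations']:
    w, h = sizes[ann['image_id']]
    for poly in ann['segmentation']:              # polygon: [x0, y0, x1, y1, ...]
        assert len(poly) >= 6 and len(poly) % 2 == 0, 'degenerate polygon'
        xs, ys = poly[0::2], poly[1::2]
        assert 0 <= min(xs) <= max(xs) <= w, 'x out of image bounds'
        assert 0 <= min(ys) <= max(ys) <= h, 'y out of image bounds'
    x, y, bw, bh = ann['bbox']
    assert bw > 0 and bh > 0, 'empty bbox'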
How many images did you create for training, and are the mAP values good?
Hi,
In this case I'm using 1 image for training and evaluating on the same image. The intent is to overfit and make sure I can make the CNN learn the image by heart. :-) Omer.
During training there is a validation phase which reports mAP. If you use the same validation data as the training data, you can check whether the mAP is increasing and stop at a good point.
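(In YOLACT's data/config.py the dataset config has separate train/valid fields, so pointing validation at the training set just means reusing the paths — a sketch with placeholder paths:)

ffr1_dataset = dataset_base.copy({
    'name': 'ffr1 Dataset',
    'train_images': 'data/ffr1/images/',           # placeholder paths
    'train_info':   'data/ffr1/annotations.json',
    'valid_images': 'data/ffr1/images/',           # same as training, so the
    'valid_info':   'data/ffr1/annotations.json',  # in-training mAP measures memorization
})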
Hello ynma-hanvo,
I'm afraid I don't understand your answer.
Why isn't loss the correct metric for this case (considering my eval data = training data)?
If using mAP, what is a good time/value to stop at?
Thanks, Omer.
Hi @omerbrandis, did you solve this problem?
It's been a long time since then; if I remember correctly, I was not able to solve this problem (I would have closed the ticket if that were the case).
omer.
@omerbrandis This problem happened in my case as well. I think there is a bug in YOLACT. I am checking it now.
Hi, I think I found the problem. In my case it happened because of these parameters: 'positive_iou_threshold': 0.5, 'negative_iou_threshold': 0.4.
This means that during training, if the IoU of the predicted bbox and the gt bbox is bigger than positive_iou_threshold, it's treated as the same object. If it's smaller than negative_iou_threshold, it's treated as background. And if it's bigger than negative_iou_threshold but smaller than positive_iou_threshold, the network doesn't know whether it's an object or background, so it does nothing (zero loss). That is the problem! Zero loss reads as 100% success, so "don't know" becomes "great success".
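(In code terms, the rule described above amounts to something like this simplified sketch; it is not YOLACT's actual matching code:)

# Simplified sketch of IoU-threshold matching (not YOLACT's actual code).
# Anchors whose best IoU falls between the two thresholds are ignored:
# they contribute zero loss, which looks like success in the loss report.
def match_anchor(best_iou, pos_thresh=0.5, neg_thresh=0.4):
    if best_iou >= pos_thresh:
        return 1    # positive: trained to predict the matched object
    if best_iou < neg_thresh:
        return 0    # negative: trained to predict background
    return -1       # ignored: no gradient either way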
I changed negative_iou_threshold to be the same as positive_iou_threshold (both 0.5) and the overfitting works. I don't know why @dbolya did it this way; maybe he can answer.
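(As a config override, the fix would look roughly like this; the two keys are the ones quoted above:)

ffr_base_config = yolact_base_config.copy({
    'name': 'ffr_base',
    'positive_iou_threshold': 0.5,
    'negative_iou_threshold': 0.5,   # was 0.4; closes the "don't know" band
})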
Hello,
I have tried training on one image from my custom dataset.
My intent was to overfit, so that when I evaluate on the same image I'll get an exact copy of the training annotation.
I was not able to do this, even though the reported loss was nearly zero for all metrics:
[10000] 10000 || B: 0.011 | C: 0.000 | M: 0.049 | S: 0.017 | T: 0.076 || ETA: 60 days, 11:14:29 || timer: 0.780
[20000] 20000 || B: 0.001 | C: 0.000 | M: 0.002 | S: 0.001 | T: 0.005 || ETA: 60 days, 10:29:48 || timer: 0.766
[30000] 30000 || B: 0.001 | C: 0.000 | M: 0.001 | S: 0.000 | T: 0.002 || ETA: 60 days, 7:38:31 || timer: 0.774
[40000] 40000 || B: 0.000 | C: 0.000 | M: 0.000 | S: 0.000 | T: 0.001 || ETA: 60 days, 3:27:07 || timer: 0.798
[50000] 50000 || B: 0.000 | C: 0.000 | M: 0.000 | S: 0.000 | T: 0.001 || ETA: 60 days, 3:53:50 || timer: 0.781
It looks like some logic was learned, hopefully ruling out problems with the annotation data.
Eval output for each of the reported stages (10k to 50k): [images]
There is very little difference between output 2 and output 5, matching the loss report.
My questions:
My config:
ffr1_dataset = dataset_base.copy({
    'name': 'ffr1 Dataset',
})
ffr_base_config = yolact_base_config.copy({
    'name': 'ffr_base',
})
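(The snippet above is truncated; for the missing pieces, the dataset side usually fills in the paths as in the sketch earlier in the thread, and the model side typically wires in the dataset and class count — field names from YOLACT's data/config.py, values hypothetical:)

ffr_base_config = yolact_base_config.copy({
    'name': 'ffr_base',
    'dataset': ffr1_dataset,                           # the dataset config above
    'num_classes': len(ffr1_dataset.class_names) + 1,  # +1 for background; assumes
                                                       # class_names is set in the dataset
    'max_iter': 50000,                                 # hypothetical
})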
Original image: [image]
For annotation I've used coco-annotator; here's the file (had to zip it in order to upload): img2784.json.zip
Thanks, Omer