[Open] LZDSJTU opened this issue 5 years ago
Indeed, our method does not output a score for each mask, so all masks get a score of 1. For evaluating and calculating the AP metric, we use the official Cityscapes evaluation scripts, which you can find here: https://github.com/mcordts/cityscapesScripts
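Since the method has no per-mask confidence, every prediction can simply be written with a constant score of 1.0 in the text format the Cityscapes instance-level evaluation script consumes (one .txt per image; each line lists a relative mask-PNG path, a label ID, and a confidence). A minimal sketch, assuming tuples of `(mask_relpath, label_id)` as input (the helper name is made up here):

```python
def write_prediction_file(txt_path, masks):
    """Write one Cityscapes-style prediction file.

    masks: list of (mask_png_relpath, label_id) tuples; since the
    network outputs no score, every confidence is written as 1.0.
    """
    with open(txt_path, "w") as f:
        for mask_relpath, label_id in masks:
            f.write(f"{mask_relpath} {label_id} 1.0\n")

# Example: one predicted car instance (labelID 26 is "car" in Cityscapes)
write_prediction_file("frankfurt_000000_000294_pred.txt",
                      [("masks/frankfurt_000000_000294_0.png", 26)])
```

With constant confidences, the precision-recall curve effectively collapses to a single operating point, which is what the official script averages into the reported AP.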
Hello~
I have successfully trained the semantic segmentation branch with the discriminative loss function. However, I find that the discriminative loss does not decrease easily.
Could you share any tricks you used during training?
I take the following steps:
I think many factors can influence the final training result, so I would like to ask about your experience with training.
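For reference, the discriminative loss being discussed pulls pixel embeddings toward their instance mean (within a margin `delta_v`), pushes instance means apart (up to a margin `2*delta_d`), and lightly regularizes the means. A minimal NumPy sketch (not the authors' implementation; background pixels are assumed to be filtered out beforehand, and the default margins/weights here are only illustrative):

```python
import numpy as np

def discriminative_loss(emb, labels, delta_v=0.5, delta_d=1.5,
                        alpha=1.0, beta=1.0, gamma=0.001):
    """emb: (N, D) pixel embeddings; labels: (N,) instance ids."""
    ids = np.unique(labels)
    # per-instance mean embeddings
    mus = np.stack([emb[labels == i].mean(axis=0) for i in ids])

    # variance (pull) term: hinge on distance of pixels to their mean
    l_var = 0.0
    for mu, i in zip(mus, ids):
        d = np.linalg.norm(emb[labels == i] - mu, axis=1)
        l_var += np.mean(np.maximum(d - delta_v, 0.0) ** 2)
    l_var /= len(ids)

    # distance (push) term: hinge on distance between instance means
    l_dist = 0.0
    C = len(ids)
    if C > 1:
        for a in range(C):
            for b in range(C):
                if a != b:
                    d = np.linalg.norm(mus[a] - mus[b])
                    l_dist += np.maximum(2 * delta_d - d, 0.0) ** 2
        l_dist /= C * (C - 1)

    # regularization term: keep means close to the origin
    l_reg = np.mean(np.linalg.norm(mus, axis=1))

    return alpha * l_var + beta * l_dist + gamma * l_reg
```

Once embeddings of each instance sit within `delta_v` of their mean and the means are more than `2*delta_d` apart, only the small regularization term remains, which is one reason the loss can plateau at a low but nonzero value.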
Thank you very much~
Your network has two outputs: semantic predictions and embeddings. You then obtain instances by clustering the embeddings.
However, it seems that you do not produce a score for each instance. How, then, do you build the PR curve and calculate the AP metric?
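The clustering step mentioned above can be illustrated with a simple greedy threshold scheme (a stand-in sketch, not the mean-shift-style clustering described in the paper): each pixel joins the first cluster whose seed embedding lies within `2*delta_v`, otherwise it starts a new cluster.

```python
import numpy as np

def cluster_embeddings(emb, delta_v=0.5):
    """Greedy threshold clustering of pixel embeddings.

    emb: (N, D) array. Each pixel is assigned to the first cluster
    whose seed embedding is within 2*delta_v; otherwise it seeds a
    new cluster. (Seeds are not refined; a real implementation would
    update cluster means or use mean shift.)
    """
    labels = -np.ones(len(emb), dtype=int)
    seeds = []
    for i, x in enumerate(emb):
        for c, seed in enumerate(seeds):
            if np.linalg.norm(x - seed) < 2 * delta_v:
                labels[i] = c
                break
        else:
            seeds.append(x.copy())
            labels[i] = len(seeds) - 1
    return labels
```

Because this assigns a hard label per pixel with no associated probability, each resulting instance mask indeed comes without a confidence score, which is exactly why a constant score of 1 is used at evaluation time.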