For RetinaNet, I see there is only one metric available, which is foreground accuracy (screenshot: https://user-images.githubusercontent.com/13101725/70126475-11fdee00-16b4-11ea-9942-0e55fe3239ec.png). My question is: why do you treat all anchors as foreground? What is a reasonable threshold for foreground? I'm training on a nuclei dataset with only one foreground class, "Positive", and in my setup it seems the model is never learning.
We only measure the accuracy of foreground anchors because they are scarce: background anchors vastly outnumber foreground ones, so an accuracy computed over all anchors would sit near 100% regardless of whether the model is learning. We do not treat all anchors as foreground.
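For reference, here is a minimal NumPy sketch of what a foreground-only accuracy looks like. This is not simpledet's actual metric code; the names `fg_accuracy`, `pred_cls`, and `anchor_labels` are illustrative:

```python
import numpy as np

def fg_accuracy(pred_cls, anchor_labels, bg_label=0, ignore_label=-1):
    """Classification accuracy over foreground anchors only.

    pred_cls:      (N,) predicted class index per anchor
    anchor_labels: (N,) assigned label per anchor; background anchors carry
                   bg_label, anchors excluded from the loss carry ignore_label
    """
    fg_mask = (anchor_labels != bg_label) & (anchor_labels != ignore_label)
    if fg_mask.sum() == 0:
        return float("nan")  # no foreground anchors in this batch
    return float((pred_cls[fg_mask] == anchor_labels[fg_mask]).mean())
```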
Thanks for your quick reply :) I'm not sure I understand correctly. In my situation, with only one foreground class, can I set the threshold parameter to 0.5 when computing the accuracy metric? Is that a reasonable choice? Otherwise I always get 100% accuracy :(
If you keep all detections left after NMS with a score >= 0.5, then 0.5 is a good threshold. For the COCO dataset we evaluate all boxes with scores >= 0.05, so there the threshold is 0.05.
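As an illustration only (not simpledet's evaluation code; `filter_detections` and `score_threshold` are placeholder names), filtering post-NMS detections by score before evaluation might look like this:

```python
import numpy as np

def filter_detections(boxes, scores, score_threshold=0.05):
    """Keep only post-NMS detections whose confidence passes the threshold.

    boxes:  (N, 4) box coordinates
    scores: (N,)   classification scores after NMS
    A threshold of 0.05 matches COCO-style evaluation; a higher value such
    as 0.5 makes sense if you only keep confident detections downstream.
    """
    keep = scores >= score_threshold
    return boxes[keep], scores[keep]
```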