kalikhademi opened 1 year ago
Right now, the code computes only average precision and recall.
While executing eval.py, if you pass --verbose, it will show you class-wise precision also. Regarding the TP, FP, FN, I will need to check a bit and get back to you.
I hope that's fine.
@sovit-123 could you please tell me how to print the recall?
Thank you
@samahwaleed After eval.py finishes executing, the recall will be automatically printed.
@sovit-123 it only prints the AP for each class.
@samahwaleed Yes, right now, it's AP for each class and the overall recall. I will try to add recall for each class as well.
I would be thankful if you could add it as soon as possible.
Thank you so much.
@samahwaleed Sure, I will try my best and update here.
@samahwaleed Hello, the update is done. Now the --verbose flag will print AR in the table also. You should get output similar to the following.
AP / AR per class
-------------------------------------------------------------------------
| | Class | AP | AR |
-------------------------------------------------------------------------
|1 | fish | 0.189 | 0.296 |
|2 | jellyfish | 0.267 | 0.346 |
|3 | penguin | 0.105 | 0.210 |
|4 | shark | 0.196 | 0.351 |
|5 | puffin | 0.062 | 0.122 |
|6 | stingray | 0.227 | 0.327 |
|7 | starfish | 0.190 | 0.263 |
-------------------------------------------------------------------------
|Avg | 0.176 | 0.274 |
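As a sanity check on the table above, the Avg row is just the mean of the per-class columns (computed from unrounded values internally, so recomputing from the rounded entries printed here can differ in the last digit):

```python
# Per-class (AP, AR) values copied from the table above.
per_class = {
    "fish":      (0.189, 0.296),
    "jellyfish": (0.267, 0.346),
    "penguin":   (0.105, 0.210),
    "shark":     (0.196, 0.351),
    "puffin":    (0.062, 0.122),
    "stingray":  (0.227, 0.327),
    "starfish":  (0.190, 0.263),
}

# Simple means over the seven classes.
avg_ap = sum(ap for ap, _ in per_class.values()) / len(per_class)
avg_ar = sum(ar for _, ar in per_class.values()) / len(per_class)
print(f"Avg AP ~ {avg_ap:.3f}, Avg AR ~ {avg_ar:.3f}")
```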
Thank you so much, Sir!
Hi,
I would like to use your repo to run Faster R-CNN on my custom dataset. I would like to check whether detections are TP, FP, or FN and use those counts to compute True Positive Rate, Precision, Recall, and F-1. Could you please guide me on how to do this in inference mode? Do you think I can use the following section in eval.py to compute these metrics:
```python
for i in range(len(images)):
    true_dict = dict()
    preds_dict = dict()
    true_dict['boxes'] = targets[i]['boxes'].detach().cpu()
    true_dict['labels'] = targets[i]['labels'].detach().cpu()
    preds_dict['boxes'] = outputs[i]['boxes'].detach().cpu()
    preds_dict['scores'] = outputs[i]['scores'].detach().cpu()
    preds_dict['labels'] = outputs[i]['labels'].detach().cpu()
    preds.append(preds_dict)
    target.append(true_dict)
```
I only see the average precision.
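The loop quoted above already collects everything needed for those counts. One hedged sketch of how TP/FP/FN (and precision/recall/F1) could be derived from such preds/target pairs, using a greedy per-class IoU match at a 0.5 threshold — the function names here are illustrative, not part of the repo (a proper evaluation would also sort predictions by score before matching):

```python
def box_iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def match_counts(pred_boxes, pred_labels, gt_boxes, gt_labels, iou_thr=0.5):
    """Greedy-match predictions to same-class ground truth; return TP, FP, FN."""
    matched = set()
    tp = 0
    for pb, pl in zip(pred_boxes, pred_labels):
        best_iou, best_j = 0.0, None
        for j, (gb, gl) in enumerate(zip(gt_boxes, gt_labels)):
            if j in matched or gl != pl:
                continue
            iou = box_iou(pb, gb)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= iou_thr:
            matched.add(best_j)  # each ground-truth box matches at most once
            tp += 1
    fp = len(pred_boxes) - tp   # unmatched predictions
    fn = len(gt_boxes) - tp     # unmatched ground truths
    return tp, fp, fn

# Toy example: first prediction overlaps a same-class GT (TP),
# second has no same-class GT (FP), second GT stays unmatched (FN).
tp, fp, fn = match_counts(
    pred_boxes=[[10, 10, 50, 50], [60, 60, 90, 90]],
    pred_labels=[1, 2],
    gt_boxes=[[12, 12, 48, 48], [0, 0, 5, 5]],
    gt_labels=[1, 1],
)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

To use this with the eval.py loop, pass each image's `preds_dict['boxes'].tolist()`, `preds_dict['labels'].tolist()`, etc., and accumulate the counts across images.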