raulmat19 opened this issue 1 year ago

Hi! I would love to know if there is a way, or if one could be implemented, of generating a confusion matrix during validation, along with other metrics such as F1-score.
Hello @raulmat19
Most probably it should originate from the evaluate() function in the engine.py file. But I will have to take a closer look to implement it in a clean way.
Hello! Okay, thanks man. By the way, it would also be interesting to use the evaluate() function to show some validation loss graphs.
@raulmat19 I completely understand your concern. It is a bit tricky to do with Torchvision object detection models. But I will take another look at it in a few days.
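For reference, the usual reason this is tricky is that Torchvision detection models only return their loss dict in training mode. Below is a minimal sketch of one common workaround; the function name and loader variables are assumptions for illustration, not the repository's actual evaluate() code:

```python
import torch

@torch.no_grad()
def validation_loss(model, data_loader, device):
    # Torchvision detection models only return the loss dict in train mode,
    # so temporarily switch to train() while keeping gradients disabled.
    model.train()
    total_loss, num_batches = 0.0, 0
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # dict of individual loss terms
        total_loss += sum(loss.item() for loss in loss_dict.values())
        num_batches += 1
    model.eval()
    return total_loss / max(num_batches, 1)
```

The per-epoch values returned by something like this could then be logged and plotted as a validation loss curve.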
I tried something; I don't know if it makes sense to you.
In eval.py, I used the length of the ground-truth labels to cut the predicted labels down to exactly that length (and if there are fewer predictions than ground truths, I pad the predictions with zeros, i.e. background). I then appended these values to two external lists, one for the predicted and one for the actual labels.
The idea is that the classes from all true and predicted images end up combined in one pair of lists, and then I compute the confusion matrix on that using torchmetrics.classification.
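A minimal sketch of that padding-and-accumulating idea, assuming the ground-truth and predicted labels for each image are already available as 1-D label tensors; the variable names and the num_classes value are illustrative, not the exact eval.py code:

```python
import torch
from torchmetrics.classification import MulticlassConfusionMatrix

num_classes = 4  # background + 3 object classes, assumed for illustration
all_true, all_pred = [], []

def accumulate(true_labels, pred_labels):
    """Match predicted labels to the ground-truth length, then store both."""
    n = len(true_labels)
    if len(pred_labels) < n:
        # Missing detections are treated as background (class 0).
        pad = torch.zeros(n - len(pred_labels), dtype=torch.long)
        pred_labels = torch.cat([pred_labels.long(), pad])
    else:
        pred_labels = pred_labels[:n].long()
    all_true.append(true_labels.long())
    all_pred.append(pred_labels)

# After looping over the whole validation set:
def total_confusion_matrix():
    metric = MulticlassConfusionMatrix(num_classes=num_classes)
    return metric(torch.cat(all_pred), torch.cat(all_true))
```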
Got it. Can you post an image/example of the result that you got?
Yes, sure! This is the result of the training I did. I also printed the confusion matrix for each image; below is the evaluation for the last image.
Predicted:
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
True
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])
Confusion Matrix
tensor([[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 11, 0],
[ 0, 0, 0, 0]])
100% 684/684 [01:18<00:00, 8.68it/s]
Total confusion matrix:
tensor([[ 0, 0, 0, 0],
[ 0, 2018, 86, 179],
[ 0, 18, 2291, 59],
[ 0, 58, 58, 2342]])
{'map': tensor(0.4217),
'map_50': tensor(0.6997),
'map_75': tensor(0.4417),
'map_large': tensor(0.5473),
'map_medium': tensor(0.2703),
'map_per_class': tensor(-1.),
'map_small': tensor(0.0383),
'mar_1': tensor(0.2611),
'mar_10': tensor(0.5222),
'mar_100': tensor(0.5572),
'mar_100_per_class': tensor(-1.),
'mar_large': tensor(0.6750),
'mar_medium': tensor(0.4548),
'mar_small': tensor(0.1978)}
Would you mind if I write the code and push it to you?
If you can create a PR, that would be great. I pushed some changes yesterday, but they were to train.py and datasets.py. Basically, mosaic augmentation was previously either applied all the time or not at all, based on one command line argument. Now I made it a float value so that it can be passed as a probability between 0 and 1. Please check whether you need to pull the new code.
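For anyone reading along, the change described above amounts to replacing an on/off switch with a probability check. A rough sketch is below; the --mosaic argument name and the two load_* helpers are assumptions, not the repository's exact code:

```python
import argparse
import random

parser = argparse.ArgumentParser()
# Float probability instead of a simple on/off flag.
parser.add_argument('--mosaic', type=float, default=0.0,
                    help='probability of applying mosaic augmentation (0 to 1)')
args = parser.parse_args()

def get_training_sample(index, mosaic_prob=args.mosaic):
    # Apply mosaic to roughly `mosaic_prob` of the samples,
    # instead of the old always-or-never behaviour.
    if random.random() < mosaic_prob:
        return load_mosaic_sample(index)     # hypothetical helper
    return load_image_and_labels(index)      # hypothetical helper
```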
Hey @Engineer-D! I wanted to thank you; that looks good. If you could create the PR or paste the code here, that would be great.
@Engineer-D, have you pushed the code for this?