saikrishnadas opened 3 years ago
The wrongly detected objects don't look similar at all.
Show 2-3 detection examples with bad bounding boxes
Attach your cfg-file
A few training images (Capsicum Yellow and Capsicum Green):
Image that was tested:
Tested on Postman, result: { "CameraID": "1", "id": [ "Capsicum_Yellow10000209", "Capsicum_Yellow10000209", "Capsicum_Yellow10000209" ], "score": [ "99.81", "99.80", "99.72" ], "count": 3 }
{ "CameraID": "1", "id": [ "Capsicum_Yellow10000209", "Capsicum_Yellow10000209", "Capsicum_Yellow10000209", "Capsicum_Yellow10000209" ], "score": [ "99.93", "99.93", "99.88", "99.63" ], "count": 4 }
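A quick way to see how skewed the predictions are is to tally the `id` list across responses; a minimal sketch in Python, assuming the JSON shape shown above:

```python
import json
from collections import Counter

# Example response in the same shape as the Postman results above.
response = json.loads("""
{ "CameraID": "1",
  "id": ["Capsicum_Yellow10000209", "Capsicum_Yellow10000209", "Capsicum_Yellow10000209"],
  "score": ["99.81", "99.80", "99.72"],
  "count": 3 }
""")

# Tally how often each class id is returned; one class dominating
# across many different test images is a quick signal of bias.
per_class = Counter(response["id"])
print(per_class)                           # Counter({'Capsicum_Yellow10000209': 3})
print(max(map(float, response["score"])))  # 99.81
```

Running this over responses for many test images (not just one) gives a rough per-class detection histogram to compare against the 70 classes trained.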
This is just one example; the wrong prediction occurs with other objects too.
Total number of classes trained: 70
Cfg file used:
Use this in the cfg-file for training (to disable color data augmentation):
saturation = 1.1
exposure = 1.5
hue=0.0
More about it: https://github.com/AlexeyAB/darknet/wiki/CFG-Parameters-in-the-%5Bnet%5D-section
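For reference, these three keys go in the `[net]` section of the cfg file; a minimal fragment (the other `[net]` keys are left as a placeholder comment, and the values only illustrate the advice above):

```ini
[net]
# ... batch, subdivisions, width, height, etc. unchanged ...

# Limit colour data augmentation (values per the advice above):
saturation = 1.1
exposure = 1.5
hue = 0.0
```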
Thanks for the response. Let me try it that way, and I'll get back to you with the result.
I'm planning a training run, YOLOv4 with two classes: male and female. The only way to differentiate a male from a female is through the colour of the feathers. Should I use the same parameters as above?
Oh, there was a mistake, use
saturation = 1.1
exposure = 1.5
hue=0.0
Should I use the same parameters as above?
Yes.
Damn! I started my 250,000-iteration run with exposure = 1.1 and hue = 0.0 as you said before.
On Thu, 3 Dec 2020, 7:45 pm Alexey, notifications@github.com wrote:
> Oh, there was a mistake, use saturation = 1.1, exposure = 1.5, hue = 0.0
If you're looking to get a better understanding of those values, @saikrishnadas, see this page: https://www.ccoderun.ca/darkmark/DataAugmentationColour.html
I get a very low mAP. I usually get 89-90% at this iteration range.
(next mAP calculation at 135322 iterations)
Last accuracy mAP@0.5 = 8.62 %, best = 8.90 %
135248: 1.293633, 1.071062 avg loss, 0.002608 rate, 1.879425 seconds, 51935232 images, 115.112189 hours left
Loaded: 0.000055 seconds
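To track whether mAP is actually climbing across a long run, the console output can be scraped; a minimal sketch, assuming log lines in the format shown above (the regexes are guesses at that format, not an official darknet API):

```python
import re

# Sample of the darknet training console output quoted above.
log = """(next mAP calculation at 135322 iterations)
Last accuracy mAP@0.5 = 8.62 %, best = 8.90 %
135248: 1.293633, 1.071062 avg loss, 0.002608 rate, 1.879425 seconds, 51935232 images, 115.112189 hours left
Loaded: 0.000055 seconds"""

# Pull the latest mAP@0.5 and the running average loss out of the text.
map_match = re.search(r"mAP@0\.5 = ([\d.]+) %", log)
loss_match = re.search(r"([\d.]+) avg loss", log)

print(float(map_match.group(1)))   # 8.62
print(float(loss_match.group(1)))  # 1.071062
```

Plotting these two values per saved log makes it obvious whether an 8-9% mAP is a plateau or still early in the curve.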
And a few quick questions,
@stephanecharette
- Will setting saturation = 0.0, exposure = 0.0, hue = 0.0 train on my images as they are?
Yes.
- Will that improve my detection in my case?
That I don't know. Trial and error, you'll have to try different things and see. It really depends on the images in your training set, and the actual images you are using for inference.
I need support with the above problem. @AlexeyAB
Goal: to train a model that can detect 70 different classes of fruits and vegetables that look similar.
I trained these models with YOLOv4 and the model was biased toward 2-3 classes. The wrongly detected objects don't even look similar (e.g. the model detected a banana as an apple), and sometimes there was no detection at all. I cross-checked the dataset and annotations; everything was alright.
Can anyone help me understand why the model is so biased? @AlexeyAB
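One routine check for this kind of bias is the class distribution of the training labels; a minimal sketch, assuming YOLO-format `.txt` annotation files (one `class_id x y w h` line per object) in a hypothetical `labels/` directory:

```python
from collections import Counter
from pathlib import Path

def class_histogram(label_dir):
    """Count annotated objects per class id across YOLO-format .txt files."""
    counts = Counter()
    for txt in Path(label_dir).glob("*.txt"):
        for line in txt.read_text().splitlines():
            if line.strip():
                counts[int(line.split()[0])] += 1
    return counts

# With 70 classes, any class with far fewer boxes than the rest is a
# likely source of the bias described above.
# print(class_histogram("labels/").most_common(5))
```

If a few classes hold most of the boxes, the model will tend to predict them regardless of appearance; rebalancing or oversampling the rare classes is the usual first remedy.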