AvaniPitre opened 6 years ago
@AvaniPitre Hi, I think it is either a wrong dataset or a wrong cfg-file. What params do you use in the Makefile?
Thanks for the reply. My Makefile parameters are GPU=0 CUDNN=0 CUDNN_HALF=0 OPENCV=0 AVX=0 OPENMP=0 LIBSO=0. I have also built a Linux version with the following parameters: GPU=0 CUDNN=0 OPENCV=1 DEBUG=0 OPENMP=1 LIBSO=1.
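(For reference, these switches sit at the top of the AlexeyAB/darknet Makefile; a typical GPU build flips them as sketched below before running `make`, assuming CUDA, cuDNN, and OpenCV are installed:)

```make
# Top of the AlexeyAB/darknet Makefile -- 1 enables a feature, 0 disables it
GPU=1         # compile with CUDA support
CUDNN=1       # use cuDNN for faster convolutions
CUDNN_HALF=0  # FP16 mode, only useful on Volta/Turing-class GPUs
OPENCV=1      # image/video I/O and the training-loss chart
AVX=0         # CPU AVX intrinsics (relevant for CPU-only builds)
OPENMP=1      # multi-threaded CPU code
LIBSO=1       # also produce libdarknet.so
```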
@AlexeyAB I have two trainings running in parallel: one without GPU and with random=0 in the cfg, and one started recently with GPU (Makefile params GPU=1 CUDNN=1) and random=1 in the cfg. For both trainings I get the same issue: high detection confidence even as the detected bbox grows beyond the marked object size.
How do I check for a wrong dataset? Half of my 14000 images are negatives, there is only one object class, and image sizes vary from 121x39 to 704x576.
The smallest size, 121x39, is also roughly the size of the object to be detected.
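(An aside, as a back-of-the-envelope check rather than something from the thread: darknet resizes every training image to the network's width/height, so a small object in a large frame shrinks further at 416x416. A rough, non-letterboxed estimate using the sizes quoted above:)

```python
# Rough estimate of how large an object appears after darknet
# resizes an image to the network input (416x416 here).
net_w, net_h = 416, 416

def object_size_at_net_input(img_w, img_h, obj_w, obj_h):
    sx, sy = net_w / img_w, net_h / img_h
    return obj_w * sx, obj_h * sy

# A ~121x39 object inside a 704x576 frame (sizes from this thread):
w, h = object_size_at_net_input(704, 576, 121, 39)
print(f"object at net input: {w:.0f}x{h:.0f} px")  # ~72x28 px
# On the final 13x13 grid (stride 32) that is only ~2x1 cells,
# so anchor choice matters a lot for objects this small.
print(f"grid cells: {w/32:.1f}x{h/32:.1f}")
```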
My cfg file for training with GPU is as follows:

[net]
batch=64
subdivisions=64
height=416
width=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.0001
max_batches = 45000
policy=steps
steps=100,25000,35000
scales=10,.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

#######

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[route]
layers=-9

[reorg]
stride=2

[route]
layers=-1,-3

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=30
activation=linear

[region]
anchors = 11.7425,9.5023, 5.8290,2.0653, 9.7451,5.8569, 3.7540,1.4357, 10.6144,7.2646
bias_match=1
classes=1
coords=4
num=5
softmax=1
jitter=.2
rescore=1

object_scale=5
noobject_scale=1
class_scale=1
coord_scale=1

absolute=1
thresh = .6
random=1
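(One hedged note on the [region] block above: if the anchors were not computed from this dataset, AlexeyAB/darknet can recompute them with built-in k-means over the label files; `-num_of_clusters` must match `num=5` in the cfg, and the paths below are placeholders:)

```
./darknet detector calc_anchors data/obj.data -num_of_clusters 5 -width 416 -height 416
```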
> How do I check for a wrong dataset?
Open it in the yolo_mark: https://github.com/AlexeyAB/Yolo_mark
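(A sketch of the same idea in script form, as an illustration rather than part of the reply: every darknet label line must be `class cx cy w h` with coordinates normalized to [0,1], and the class id must be in range. The path and class count below are assumptions:)

```python
# Hypothetical sanity check for darknet-format label files.
from pathlib import Path

LABEL_DIR = Path("data/obj")  # placeholder path, adjust to your dataset
NUM_CLASSES = 1               # one object class, per this thread

for txt in LABEL_DIR.glob("*.txt"):
    for n, line in enumerate(txt.read_text().splitlines(), 1):
        parts = line.split()
        if len(parts) != 5:
            print(f"{txt}:{n}: expected 5 fields, got {len(parts)}")
            continue
        cls, cx, cy, w, h = int(parts[0]), *map(float, parts[1:])
        if not (0 <= cls < NUM_CLASSES):
            print(f"{txt}:{n}: bad class id {cls}")
        if not all(0.0 <= v <= 1.0 for v in (cx, cy, w, h)):
            print(f"{txt}:{n}: coords out of [0,1]: {line}")
```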
What mAP can you get?
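(mAP can be measured with the `map` command of AlexeyAB/darknet; the .data/.cfg/.weights names here are placeholders:)

```
./darknet detector map data/obj.data cfg/yolo-obj.cfg backup/yolo-obj_final.weights
```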
Try to train by using yolov3-tiny.cfg
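(The usual invocation from the darknet README, with placeholder file names: first extract the pre-trained partial weights, then train on them:)

```
./darknet partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.15 15
./darknet detector train data/obj.data cfg/yolov3-tiny-obj.cfg yolov3-tiny.conv.15
```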
Thanks a lot. I will cross-check the dataset and try to train using yolov3-tiny.cfg.
@AlexeyAB Hi, I have trained for around 5000 iterations. Between iterations 1000 and 2000 I was getting good bounding boxes with confidence around 0.80-0.85. However, as I continued training to improve further, I started getting wrong detections with a larger bbox than the actual object, yet with an even higher confidence of 0.90-0.95. Why is this so, and how do I avoid it? Please help.
Thanks
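(The symptom described above, boxes getting worse while confidence stays high as training continues, usually reads as overfitting. A common remedy, offered here as a suggestion rather than something stated in the thread, is to evaluate every saved checkpoint on a validation set and keep the weights with the best mAP instead of the last ones. A minimal sketch, assuming the placeholder paths below:)

```python
# Hypothetical helper: run "darknet detector map" on every saved
# checkpoint in backup/ and report the scores, so the best one can be kept.
import subprocess
from pathlib import Path

DATA, CFG = "data/obj.data", "cfg/yolo-obj.cfg"  # placeholder paths

for weights in sorted(Path("backup").glob("yolo-obj_*.weights")):
    out = subprocess.run(
        ["./darknet", "detector", "map", DATA, CFG, str(weights)],
        capture_output=True, text=True).stdout
    # darknet prints a line containing "mean average precision"
    for line in out.splitlines():
        if "mean average precision" in line:
            print(weights.name, "->", line.strip())
```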