ZooZoo-tc opened this issue 5 years ago
I have the same question.
I can help you. There are many steps; you can read this link https://github.com/experiencor/keras-yolo2, which is the same with only small differences. If you still can't understand, tell me and I will create a video for you.
If I use my own dataset to train YOLO v3, what are the steps to label the images? The tutorial says the train annotations should be in VOC format. How do I make train annotations in VOC format, and what exactly is the VOC format?
There are many options for annotating. I made my own tool: https://github.com/kabrau/PyImageRoi (CreateBoundingBoxes), but many people use: https://github.com/tzutalin/labelImg
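To make the VOC question concrete: labelImg saves one XML file per image in the Pascal VOC format, with the image size and one object block per box in pixel coordinates. A minimal sketch (the file name, size, and box values below are made up for illustration):

```python
# Parse a minimal Pascal VOC annotation with the standard library.
# The XML here is a hypothetical example for one 1250x920 image with one box.
import xml.etree.ElementTree as ET

voc_xml = """<annotation>
    <filename>raccoon-1.jpg</filename>
    <size><width>1250</width><height>920</height><depth>3</depth></size>
    <object>
        <name>raccoon</name>
        <bndbox>
            <xmin>100</xmin><ymin>120</ymin><xmax>400</xmax><ymax>380</ymax>
        </bndbox>
    </object>
</annotation>"""

root = ET.fromstring(voc_xml)
for obj in root.iter("object"):
    name = obj.findtext("name")
    box = obj.find("bndbox")
    xmin, ymin, xmax, ymax = (int(box.findtext(t))
                              for t in ("xmin", "ymin", "xmax", "ymax"))
    print(name, xmin, ymin, xmax, ymax)  # raccoon 100 120 400 380
```

Coordinates are absolute pixels with the origin at the top-left, which is the main difference from the normalized YOLO txt format discussed later in this thread.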
Hi @kabrau, I have 32x32 images in my dataset. I want to train YOLO on the whole image (the 32x32 image itself); in my case, the whole image is the bounding box. Is that possible? Will it work?
Hi @satish4github, I don't know. What is the size of the objects inside the images? In your case, wouldn't a classifier be better?
The object size is the same as the image size (32x32), like the CIFAR dataset. A classifier would only say whether the object is there or not, but I also need the object localization details when predicting.
@satish4github I'm not sure, but I think this implementation does not work for your case. I'm curious, and I don't understand: if the object always has the same size as the image, why do you need localization?
With the raccoon dataset I got errors. Any idea?
python train.py -c config.json
Using TensorFlow backend.
Traceback (most recent call last):
  File "train.py", line 280, in
  File "/opt/apps/Anaconda3/2019.03/envs/powerai16_ibm/lib/python3.6/json/decoder.py", line 355

You need to debug this line to see the problem.
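The json/decoder.py frame in the traceback means train.py crashed while parsing config.json, so the config file itself is probably malformed JSON. A quick standard-library check (a sketch; the trailing comma in the example is just one common cause):

```python
# Validate a config string with the standard json module.
# json.JSONDecodeError carries the line and column of the first bad token.
import json

def check_config(text):
    """Return (True, parsed dict) or (False, error description)."""
    try:
        return True, json.loads(text)
    except json.JSONDecodeError as err:
        return False, f"line {err.lineno}, column {err.colno}: {err.msg}"

# A trailing comma is a common cause of this kind of traceback:
ok, result = check_config('{"model": {"labels": ["raccoon"],}}')
print(ok, result)
```

Running the same check with open("config.json").read() pinpoints the offending line without going through train.py at all.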
I will check...
I'm not sure, but I think this implementation will not work for your case.
Basically, I want to train the model with the CIFAR image dataset and use the same model for predicting bounding boxes / object localization.
The download link for the backend weights is not available:
Download pretrained weights for backend at: https://1drv.ms/u/s!ApLdDEW3ut5fgQXa7GzSlG-mdza6
It doesn't matter; it's fixed now.
Is an annotation file in YOLO v3 format like this OK?
15 0.927600 0.282065 0.021600 0.035870
15 0.894800 0.274457 0.021600 0.027174
15 0.857200 0.265217 0.024800 0.041304
15 0.822000 0.247826 0.021600 0.036957
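For context, each line in that YOLO-format file is class_id x_center y_center width height, all normalized to the image size. This repo trains from VOC XML annotations with pixel corners, so lines like these would need converting first. A sketch of that conversion (the 1250 x 920 image size is only an example taken from elsewhere in this thread):

```python
# Convert one YOLO txt line (normalized center/size) to VOC-style pixel
# corners (xmin, ymin, xmax, ymax) for a given image width and height.
def yolo_to_corners(line, img_w, img_h):
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return (int(cls),
            round(xc - w / 2), round(yc - h / 2),
            round(xc + w / 2), round(yc + h / 2))

print(yolo_to_corners("15 0.514000 0.254891 0.013600 0.029348", 1250, 920))
# (15, 634, 221, 651, 248)
```

The numeric class id (15 here) would also need mapping to a class name, since VOC XML stores names, not indices.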
When I run the training process for the raccoon dataset as python train.py -c config.json, I get a training error like this:
Epoch 00027: loss did not improve from 5.13942
Epoch 00027: early stopping
/opt/apps/Anaconda3/2019.03/envs/powerai16_ibm/lib/python3.6/site-packages/Keras-2.2.4-py3.6.egg/keras/engine/saving.py:310: UserWarning: No training configuration found in save file: the model was not compiled. Compile it manually.
  warnings.warn('No training configuration found in save file: '
raccoon: 0.9681
mAP: 0.9681
Does anyone have any idea? Is my configuration wrong?
That is not an error, only a warning about the saved weights file. You are getting 0.9681 mAP, which is good.
Training errors when using the raccoon dataset (the training process used to work). Any hint?

python train.py -c config.json

Using TensorFlow backend.
valid_annot_folder not exists. Spliting the trainining set.
Seen labels: {}
Given labels: ['raccoon']
Some labels have no annotations! Please revise the list of labels in the config.json.
Traceback (most recent call last):
File "train.py", line 280, in
The config.json file looks like:

{
    "model" : {
        "min_input_size": 352,
        "max_input_size": 448,
        "anchors": [10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326],
        "labels": ["raccoon"]
    },
"train": {
"train_image_folder": "/home/zli04/2019SummerWork/raccoon_datast-master/images/",
"train_annot_folder": "/home/zli04/2019SummerWork/raccoon_dataset-master/annotations/",
"cache_name": "raccoon_train.pkl",
"train_times": 3,
"batch_size": 16,
"learning_rate": 1e-4,
"nb_epochs": 100,
"warmup_epochs": 3,
"ignore_thresh": 0.5,
"gpus": "0,1",
"grid_scales": [1,1,1],
"obj_scale": 5,
"noobj_scale": 1,
"xywh_scale": 1,
"class_scale": 1,
"tensorboard_dir": "log_raccoon",
"saved_weights_name": "raccoon.h5",
"debug": true
},
"valid": {
"valid_image_folder": "",
"valid_annot_folder": "",
"cache_name": "",
"valid_times": 1
}
}
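One thing to check first: the train_image_folder path in that config says raccoon_datast-master while train_annot_folder says raccoon_dataset-master; a typo in either path can leave the loader with nothing to train on. Independently of that, the Seen labels: {} line means no annotation was parsed at all, and a small script can list which label names actually appear in a folder of VOC XML files (a sketch assuming VOC-style annotations; the commented path is the one from the config above):

```python
# Count the <name> tags across all VOC annotation XMLs in a folder, to
# compare against the "labels" list in config.json.
import os
import xml.etree.ElementTree as ET
from collections import Counter

def seen_labels(annot_folder):
    counts = Counter()
    for fname in os.listdir(annot_folder):
        if fname.endswith(".xml"):
            tree = ET.parse(os.path.join(annot_folder, fname))
            counts.update(obj.findtext("name") for obj in tree.iter("object"))
    return counts

# print(seen_labels("/home/zli04/2019SummerWork/raccoon_dataset-master/annotations/"))
```

If this prints an empty Counter, the path is wrong or the files are not VOC XML; if it prints names that differ from the config's labels, fix the labels list to match.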
I am trying to train with my own dataset, with YOLO-format annotations like:
15 0.514000 0.254891 0.013600 0.029348
In the config.json file, do I need to update or change anything? I noticed the labels option; should I change my label to 15 instead?
When I use the annotation tool labelImg to annotate boxes for my own dataset, should I first resize the images to a standard size such as 288 x 488 as for raccoon? My own dataset's image size is 1250 x 920. I noticed the Pascal VOC format includes the image size information.
Keep the image size and bounding boxes the same as in the original image. There is no need to resize the images to any standard size; the data augmentation tool will take care of converting the image to the network size and so on.
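To illustrate why pre-resizing isn't needed: VOC boxes are in absolute pixels, and whenever the loader resizes an image, the boxes simply scale by the same factors. A simple-resize sketch (the real augmentation in this repo also applies random jitter, so this is only the core idea; the sizes are examples):

```python
# Scale a pixel-coordinate box (xmin, ymin, xmax, ymax) from a source image
# size to a destination (network input) size.
def scale_box(box, src_wh, dst_wh):
    sx, sy = dst_wh[0] / src_wh[0], dst_wh[1] / src_wh[1]
    xmin, ymin, xmax, ymax = box
    return round(xmin * sx), round(ymin * sy), round(xmax * sx), round(ymax * sy)

# A hypothetical box on a 1250x920 image, mapped onto a 416x416 network input:
print(scale_box((100, 120, 400, 380), (1250, 920), (416, 416)))
# (33, 54, 133, 172)
```

Since this mapping is exact and automatic, resizing images before annotation only loses resolution without gaining anything.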
How do you set up min_input_size and max_input_size in config.json for your own dataset?
How do I specify the input size in the config.json file? Assume my image size is 1250 x 720.
There is no need to specify the image size in the config file, but you do need to specify the model input size in the form of min_input_size and max_input_size.
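Some context on those two settings: YOLOv3 downsamples by a factor of 32, so network input sizes should be multiples of 32, and as far as I can tell the training generator picks a random size between min_input_size and max_input_size for multi-scale training. A sketch of that idea (not the repo's exact code):

```python
# Multi-scale training idea: every so often, pick a new network input size
# that is a multiple of 32 between the configured min and max sizes.
import random

def pick_input_size(min_size=352, max_size=448, stride=32):
    choices = list(range(min_size, max_size + 1, stride))
    return random.choice(choices)

print(sorted(set(range(352, 449, 32))))  # [352, 384, 416, 448]
```

So with the defaults from the config above, training alternates between 352, 384, 416, and 448, regardless of whether your source images are 1250 x 720 or anything else.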
Hello, what modifications must be made to the json file to support custom training with multiple classes?
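For multiple classes, the main change is listing every class name in labels, matching the name tags in your annotation XMLs exactly. A hypothetical fragment (the class names are placeholders; the anchors shown are the defaults from the config earlier in this thread):

```json
{
    "model": {
        "min_input_size": 352,
        "max_input_size": 448,
        "anchors": [10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326],
        "labels": ["cat", "dog", "person"]
    }
}
```

If your copy of the repo includes the gen_anchors.py script, it can also recompute anchors from your own dataset's boxes, which may help when your objects differ a lot from COCO's.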
You are getting 0.9681 mAP, which is good.
If I am getting mAP 0.0981, is that considered good? I have trained my data with this model.
The accuracy is low. How many input images do you have?
Around 125 images, but my objects are still detected accurately.
Thank you for providing the repo... @experiencor, could you give some suggestions about training from scratch with my own dataset?