@solemnrole May I ask how you created the imdb file for your custom dataset to train PVANET?
@VanitarNordic PVANET doesn't seem to need your original custom dataset to be in lmdb; all you need to do is make your dataset follow the VOC style.
@solemnrole
By following the VOC style, each image would have its own corresponding .xml file. BUT according to this example, for training we should create an IMDB which contains the information of both the training and validation images:
https://github.com/sanghoon/pva-faster-rcnn/tree/master/models/pvanet/example_train
@VanitarNordic That is exactly what you should do! Make sure each of your images has its own corresponding .xml file; PVANET will transform your VOC-style dataset into lmdb!
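For reference, here is a rough sketch of what "VOC style" means in practice: one annotation .xml per image, placed under an Annotations/ folder alongside JPEGImages/, with the usual PASCAL VOC fields. The helper name `write_voc_xml` and the sample values are placeholders, not part of the repository:

```python
# Minimal sketch of a VOC-style annotation writer (assumes the standard
# VOCdevkit layout: JPEGImages/, Annotations/, ImageSets/Main/).
# Names like 'my_dataset' and write_voc_xml are placeholders.
import os
import xml.etree.ElementTree as ET

def write_voc_xml(out_dir, image_name, width, height, objects):
    """objects: list of (class_name, xmin, ymin, xmax, ymax) in pixels, 1-based."""
    ann = ET.Element('annotation')
    ET.SubElement(ann, 'folder').text = 'my_dataset'
    ET.SubElement(ann, 'filename').text = image_name + '.jpg'
    size = ET.SubElement(ann, 'size')
    ET.SubElement(size, 'width').text = str(width)
    ET.SubElement(size, 'height').text = str(height)
    ET.SubElement(size, 'depth').text = '3'
    for cls, xmin, ymin, xmax, ymax in objects:
        obj = ET.SubElement(ann, 'object')
        ET.SubElement(obj, 'name').text = cls
        ET.SubElement(obj, 'difficult').text = '0'
        bb = ET.SubElement(obj, 'bndbox')
        ET.SubElement(bb, 'xmin').text = str(xmin)
        ET.SubElement(bb, 'ymin').text = str(ymin)
        ET.SubElement(bb, 'xmax').text = str(xmax)
        ET.SubElement(bb, 'ymax').text = str(ymax)
    ET.ElementTree(ann).write(os.path.join(out_dir, image_name + '.xml'))

# One XML per image, named after the image stem:
if not os.path.isdir('Annotations'):
    os.makedirs('Annotations')
write_voc_xml('Annotations', '000001', 640, 480, [('car', 48, 240, 195, 371)])
```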
@solemnrole Wow, I already have a bunch of images and their annotations in .xml. May I ask which file in the repository will convert these to an IMDB file?
@solemnrole waiting for you ...
@VanitarNordic I am sorry, PVANET will not transform your dataset into lmdb; see get_minibatch.py for details.
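In other words, there is no lmdb conversion step at all: the Python data layer reads images from disk at training time. In stock py-faster-rcnn that loading code is `get_minibatch()` in lib/roi_data_layer/minibatch.py, and each roidb entry it consumes looks roughly like the sketch below (field names taken from stock py-faster-rcnn, among other fields; the PVANET fork may differ):

```python
# Rough sketch of one roidb entry consumed by get_minibatch()
# (illustrative values; class index 7 is 'car' in the VOC ordering).
import numpy as np

roidb_entry = {
    'image': '/path/to/VOCdevkit/VOC2007/JPEGImages/000001.jpg',  # read with cv2 at train time
    'boxes': np.array([[47, 239, 194, 370]], dtype=np.uint16),    # 0-based [xmin, ymin, xmax, ymax]
    'gt_classes': np.array([7], dtype=np.int32),                  # class index per box
    'flipped': False,                                             # set by horizontal-flip augmentation
}
```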
@solemnrole
Yes, actually before training we have to convert our dataset to an IMDB, as it is one of the input parameters here: https://github.com/sanghoon/pva-faster-rcnn/tree/master/models/pvanet/example_train
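For what it is worth, the "IMDB" here is not a file you generate offline; it is the Python image-database object that the training script builds from the dataset name you pass on the command line. A rough sketch of the stock py-faster-rcnn flow is below (this fork may differ; the custom dataset name mentioned in the comment is a placeholder):

```python
# Sketch of how the --imdb argument is resolved at training time in stock
# py-faster-rcnn (run from tools/, where _init_paths puts lib/ on the path).
import _init_paths
from datasets.factory import get_imdb
from fast_rcnn.train import get_training_roidb

# The name must be registered in lib/datasets/factory.py; to train on a custom
# VOC-style dataset, add an entry there, e.g. 'my_voc_2007_trainval' -> pascal_voc(...).
imdb = get_imdb('voc_2007_trainval')   # builds the imdb object; no lmdb involved
roidb = get_training_roidb(imdb)       # parses the per-image .xml annotations
print('{:d} roidb entries'.format(len(roidb)))
```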
I use the train.prototxt in this directory; the dataset is VOC07 trainval + 07 test, and the pretrained model is the test model. After 14000 iterations the loss is still very large (~40), and I don't know why.