johnwlambert opened this issue 8 years ago
Hi John, I have a similar need: to use my own dataset with new classes. Could you please share your experience with expanding the existing dataset? Thank you, David
There was a really useful README for how to train fast-rcnn on your own data from EdisonResearch on GitHub but I can't find it. Thankfully I had saved it at the time as a PDF. Here's a link to it:
https://www.dropbox.com/s/uvd2yhq7ytlc0mf/fast-rcnn-train-readme.pdf?dl=0
The process for faster-rcnn is only slightly different, I believe: you don't need to pre-compute any region proposals with selective search, and the parameters are a bit different in the imdb-derived object. What I did was look at, for example, lib/datasets/pascal_voc.py to see what's new in faster vs. fast and then modify using this PDF. That's the best advice I can give, but it worked for me.
Bingo! Here is the link: https://github.com/zeyuanxy/fast-rcnn/blob/master/help/train/README.md
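As a minimal sketch of where that modification ends up, the block below outlines an imdb subclass modeled on lib/datasets/pascal_voc.py in py-faster-rcnn. The module/class name mydataset, the directory layout, the class list, and the annotation parser are all placeholders, and the evaluation methods are omitted; treat it as an outline under those assumptions, not a drop-in file.

```python
# Hypothetical lib/datasets/mydataset.py -- a sketch modeled on pascal_voc.py.
# Class name, paths, class list and annotation parsing are placeholders.
import os
import numpy as np
import scipy.sparse
from datasets.imdb import imdb

class mydataset(imdb):
    def __init__(self, image_set, devkit_path):
        imdb.__init__(self, 'mydataset_' + image_set)
        self._image_set = image_set                     # e.g. 'train' or 'test'
        self._data_path = os.path.join(devkit_path, 'data')
        self._classes = ('__background__', 'myclass')   # background must be index 0
        self._class_to_ind = dict(zip(self.classes, range(self.num_classes)))
        self._image_ext = '.jpg'
        self._image_index = self._load_image_set_index()
        # For faster-rcnn the RPN learns proposals, so ground-truth boxes suffice.
        self._roidb_handler = self.gt_roidb

    def _load_image_set_index(self):
        # One image identifier per line, e.g. data/ImageSets/train.txt
        set_file = os.path.join(self._data_path, 'ImageSets',
                                self._image_set + '.txt')
        with open(set_file) as f:
            return [line.strip() for line in f]

    def image_path_at(self, i):
        return os.path.join(self._data_path, 'Images',
                            self._image_index[i] + self._image_ext)

    def gt_roidb(self):
        # One dict per image with the fields the training code expects.
        return [self._annotation_for(index) for index in self._image_index]

    def _annotation_for(self, index):
        # Replace this stub with your own parser producing
        # (x1, y1, x2, y2, class_name) tuples in 0-based pixel coordinates.
        objs = []
        boxes = np.zeros((len(objs), 4), dtype=np.uint16)
        gt_classes = np.zeros((len(objs),), dtype=np.int32)
        overlaps = np.zeros((len(objs), self.num_classes), dtype=np.float32)
        seg_areas = np.zeros((len(objs),), dtype=np.float32)
        for ix, (x1, y1, x2, y2, cls_name) in enumerate(objs):
            cls = self._class_to_ind[cls_name]
            boxes[ix, :] = [x1, y1, x2, y2]
            gt_classes[ix] = cls
            overlaps[ix, cls] = 1.0
            seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)
        return {'boxes': boxes,
                'gt_classes': gt_classes,
                'gt_overlaps': scipy.sparse.csr_matrix(overlaps),
                'flipped': False,
                'seg_areas': seg_areas}
```

The remaining step would then be registering the new class in lib/datasets/factory.py so the training scripts can find it by name, which is discussed further down this thread.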
Hi,
When I trained my own image database using faster-rcnn, an error popped up: 'Selective search data not found at: {}'.format(filename)
I followed the instructions from http://sgsai.blogspot.com/2016/02/training-faster-r-cnn-on-custom-dataset.html. It looks like one step, "selective_search", is missing.
How do I generate such a .mat file? Another question: since faster-rcnn is an end-to-end solution, why is selective search (the R-CNN solution) still needed? I suppose the RPN in faster-rcnn can do the same thing.
Thanks. I look forward to your reply.
Hi, that's exactly the point: the RPN of faster-rcnn now replaces the selective search performed by MATLAB in fast-rcnn. And I don't see any selective_search step in the tutorial you found, so I'm confused: what exactly raises this error? You may have called some old script meant for fast-rcnn.
In fact you can launch the training directly without any script (example for alt_opt training):
$cd <py-faster-rcnn folder>
$./tools/train_faster_rcnn_alt_opt.py --gpu 0 --net_name <model name> --weights <pretrained .caffemodel> --imdb <dataset name>_train
Thanks deboc. I used the Python wrapper instead of MATLAB, and I used a script similar to pascal_voc.py from the latest git repository (which also has a selective search section). Are there full instructions that I can follow to train my own database?
Hi all, I have followed the instructions at https://github.com/zeyuanxy/fast-rcnn/blob/master/help/train/README.md for training my own dataset (INRIA), and I ran into the following problem:

:~$ cd fast-rcnn
:~/fast-rcnn$ ./tools/train_net.py --gpu 0 --solver models/pascal_voc/VGG_CNN_M_1024/fast-rcnn/solver.prototxt --weights data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel --imdb inria_train
/home/sarker/anaconda2/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
Python 2.7.11 |Anaconda custom (64-bit)| (default, Jun 15 2016, 15:21:30)
Type "copyright", "credits" or "license" for more information.
IPython 4.2.0 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.

In [1]:

When I run the training command in the terminal, an IPython console opens instead of the training starting. I don't know what the problem is; I am a beginner in this area. Please help me. Thanks in advance.
Hi @mksarker, you are on the faster-rcnn repo; are you sure you are following the right instructions? I have updated zeyuanxy's tutorial for faster-rcnn, if that can help. It's here.
Thanks @deboc. What should I do now, given that I am on the faster-rcnn repo? Thanks again for the link to your instructions.
Hi @deboc, please, I have some questions:
Should the dataset be labeled or not? What is the best environment for training, in terms of CPU and memory? Is it necessary to use a GPU?
I saw in deboc's GitHub tutorial that, to train faster-rcnn on the INRIA dataset, I need to change the factory.py file. In factory.py there is a line (lambda split=split: inria(split, inria_devkit_path)). Does anyone know how the split=split part works? Thanks in advance.
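For what it's worth, split=split is just the standard Python idiom for binding the loop variable's current value into each lambda; without it, every entry in __sets would end up being built with the last value of split. A small self-contained sketch, where inria and inria_devkit_path are stand-ins rather than the real imports from the tutorial:

```python
# A self-contained sketch of why factory.py writes "lambda split=split: ...".
# 'inria' and 'inria_devkit_path' are stand-ins for the real imdb constructor
# and devkit path.

__sets = {}

def inria(split, devkit_path):
    return 'inria imdb for split=%s (devkit=%s)' % (split, devkit_path)

inria_devkit_path = '/path/to/INRIA_devkit'   # hypothetical path

for split in ['train', 'test']:
    name = 'inria_{}'.format(split)
    # Without the default argument, i.e. "lambda: inria(split, ...)", the name
    # 'split' would be looked up only when the lambda is eventually called,
    # after the loop has finished, so both entries would build the 'test' imdb.
    # "split=split" freezes the current loop value into each lambda instead.
    __sets[name] = (lambda split=split: inria(split, inria_devkit_path))

print(__sets['inria_train']())   # inria imdb for split=train (...)
print(__sets['inria_test']())    # inria imdb for split=test (...)
```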
Hi, I am following https://huangying-zhan.github.io/2016/09/22/detection-faster-rcnn.html#Training%20on%20new%20dataset to train Faster R-CNN on my dataset. But when I run the command ./tools/train_net.py --gpu 0 --weights data/faster_rcnn_models/ZF_faster_rcnn_final.caffemodel --imdb fishclassify_train --cfg experiments/cfgs/config.yml --solver models/fishclassify/solver.prototxt --iter 0
I am getting the following error:

/py-faster-rcnn/lib/datasets/factory.py in get_imdb(name)
     44     """Get an imdb (image database) by name."""
     45     if not __sets.has_key(name):
---> 46         raise KeyError('Unknown dataset: {}'.format(name))
     47     return __sets[name]()
     48

KeyError: 'Unknown dataset: fishclassify_train'
Any help on what might be causing this? I have created fishclassify.py and fishclassify_eval.py under lib/datasets and modified factory.py accordingly.
Any help?
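In case it helps: get_imdb only knows the names that factory.py has registered in its __sets dict, so this KeyError usually means the registration loop was never added, or the key does not match the --imdb argument exactly. A hedged sketch of what the added lines in lib/datasets/factory.py typically look like; the fishclassify class name and constructor arguments here are assumptions based on the files mentioned above:

```python
# Sketch of the registration lib/datasets/factory.py needs so that
# get_imdb('fishclassify_train') resolves. The module/class name and the
# constructor signature are assumptions; match them to your fishclassify.py.
from datasets.fishclassify import fishclassify

# __sets = {} is already defined near the top of factory.py
for split in ['train', 'test']:
    name = 'fishclassify_{}'.format(split)
    __sets[name] = (lambda split=split: fishclassify(split))

# The key must match the --imdb argument character for character, and the
# factory.py you edit must be the one on the PYTHONPATH used by train_net.py.
```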
@mksarker
I got the same error.
It seems there is an indentation issue at the end of one of the dataset files. In my case, the last line of py-faster-rcnn/lib/datasets/MyDataset.py was "from IPython import embed; embed()", which should be indented one level (inside the if __name__ == '__main__': block at the bottom of the file, as in pascal_voc.py); after fixing that, it works perfectly for me.
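To make that concrete: pascal_voc.py ends with a small self-test block, and if the embed() line loses its indentation when the file is copied, it ends up at module level and IPython starts as soon as the dataset module is imported. A sketch of the corrected ending of a copied dataset file, where MyDataset and its constructor arguments are placeholders:

```python
# Ending of lib/datasets/MyDataset.py, mirroring the self-test block at the
# bottom of pascal_voc.py. Everything inside the guard only runs when the file
# is executed directly, never when train_net.py imports the module.
if __name__ == '__main__':
    from datasets.MyDataset import MyDataset   # hypothetical class name
    d = MyDataset('train')                     # hypothetical constructor args
    res = d.roidb
    from IPython import embed; embed()

# If the last line is dedented to column 0, it sits at module level, runs at
# import time, and drops you straight into the IPython prompt shown earlier
# in this thread instead of starting training.
```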
Hi Dr. Girshick, what directory structure should I follow if I were to make my own imdb? I have my own dataset and am a bit confused about how to mimic the PASCAL VOC 2007 imdb format. Thank you! John
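Not an authoritative answer, but for anyone landing here: the simplest route is often to arrange your data exactly like the VOC2007 devkit that lib/datasets/pascal_voc.py reads, so the existing imdb code can be reused. A sketch of the paths it expects, assuming the default data/VOCdevkit2007 location:

```python
# Sketch of the directory layout lib/datasets/pascal_voc.py reads when
# mimicking the PASCAL VOC 2007 devkit (paths assume the default DATA_DIR).
import os

devkit_path = os.path.join('data', 'VOCdevkit2007')        # 'VOCdevkit' + year
data_path = os.path.join(devkit_path, 'VOC2007')           # 'VOC' + year

jpeg_images = os.path.join(data_path, 'JPEGImages')        # 000001.jpg, 000002.jpg, ...
annotations = os.path.join(data_path, 'Annotations')       # 000001.xml, ... (VOC-style XML)
image_sets = os.path.join(data_path, 'ImageSets', 'Main')  # train.txt, val.txt, trainval.txt, test.txt

for path in (jpeg_images, annotations, image_sets):
    print(path)
```

Placing your own images, VOC-style XML annotations, and split files into that layout, with your class names in the _classes tuple, is often enough to reuse the pascal_voc imdb with minimal changes.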