Open KiyoshiKAWASAKI opened 7 years ago
The error is about not finding necessary input files for training.
Are you training on the IIT-AFF dataset (or on Pascal VOC, or your own dataset)?
If you are using the IIT-AFF dataset, download the pre-formatted file (the link is in the readme, under the Training section) and everything should be fine. Please note we use the "pascal_voc" alias even though we are training on the IIT-AFF dataset. The original IIT-AFF format from our dataset website is not ready to train the code on directly.
If you use another dataset, you will need to format it the way we did. Please refer to the section "Train AffordanceNet on your data" in the readme for details.
I am using the pre-formatted IIT-AFF dataset that you provided on Google Drive, not my own dataset.
I checked the error message and found: AssertionError: Selective search data not found at: Jin_Huang/affordance-net/data/selective_search_data/voc_2012_train.mat
I think this means there is no voc_2012_train.mat file in the affordance-net/data/selective_search_data folder, right? What confuses me is that there is no selective_search_data folder inside the data folder at all.
The pre-formatted dataset has a data folder containing the three folders stated in the readme: data/cache, data/imagenet_models, and data/VOCdevkit2012.
So I don't understand why the error mentions a selective_search_data folder that does not exist at all.
Thanks,
It seems something is wrong with your configuration. We do not use any .mat file (and we don't need one anyway). Currently your proposal method is not correct: your log shows Set proposal method: selective_search, but it should be Set proposal method: gt.
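For context, here is a minimal sketch (class and handler names simplified, not the actual AffordanceNet code) of how a py-faster-rcnn-style imdb dispatches on the proposal method: the method name is turned into a handler attribute, so "selective_search" routes to code that expects a .mat file under data/selective_search_data, while "gt" builds proposals directly from the ground-truth annotations and needs no .mat file.

```python
# Hypothetical, simplified dispatch mimicking imdb.set_proposal_method().
class MiniImdb:
    def set_proposal_method(self, method):
        # e.g. "gt" -> self.gt_roidb, "selective_search" -> self.selective_search_roidb
        self.roidb_handler = getattr(self, method + "_roidb")

    def gt_roidb(self):
        # The real handler reads the ground-truth annotations; no .mat file involved.
        return "roidb built from ground-truth annotations"

    def selective_search_roidb(self):
        # The real handler asserts that data/selective_search_data/*.mat exists,
        # which is exactly the AssertionError reported in this issue.
        raise AssertionError("Selective search data not found at: .../voc_2012_train.mat")

imdb = MiniImdb()
imdb.set_proposal_method("gt")
print(imdb.roidb_handler())  # -> roidb built from ground-truth annotations
```

This is why a config that fails to set PROPOSAL_METHOD to gt ends up looking for a selective_search_data folder that was never part of the pre-formatted download.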
If you modified anything in the code, back it up, then clone a clean repo and test again. Please make sure you are calling the right pycaffe path. If the problem persists, please copy and paste the full output from the terminal. I just tested it again on my PC and it works fine. This is my output:
~/affordance-net(master)$ ./experiments/scripts/faster_rcnn_end2end.sh 0 VGG16 pascal_voc
+ set -e
+ export PYTHONUNBUFFERED=True
+ PYTHONUNBUFFERED=True
+ GPU_ID=0
+ NET=VGG16
+ NET_lc=vgg16
+ DATASET=pascal_voc
+ array=($@)
+ len=3
+ EXTRA_ARGS=
+ EXTRA_ARGS_SLUG=
+ case $DATASET in
+ TRAIN_IMDB=voc_2012_train
+ TEST_IMDB=voc_2012_val
+ PT_DIR=pascal_voc
+ ITERS=2000000
++ date +%Y-%m-%d_%H-%M-%S
+ LOG=experiments/logs/faster_rcnn_end2end_VGG16_.txt.2017-11-15_09-32-14
+ exec
++ tee -a experiments/logs/faster_rcnn_end2end_VGG16_.txt.2017-11-15_09-32-14
tee: experiments/logs/faster_rcnn_end2end_VGG16_.txt.2017-11-15_09-32-14: No such file or directory
+ echo Logging output to experiments/logs/faster_rcnn_end2end_VGG16_.txt.2017-11-15_09-32-14
Logging output to experiments/logs/faster_rcnn_end2end_VGG16_.txt.2017-11-15_09-32-14
+ ./tools/train_net.py --gpu 0 --solver models/pascal_voc/VGG16/faster_rcnn_end2end/solver.prototxt --weights data/imagenet_models/VGG16.v2.caffemodel --imdb voc_2012_train --iters 2000000 --cfg experiments/cfgs/faster_rcnn_end2end.yml
Called with args:
Namespace(cfg_file='experiments/cfgs/faster_rcnn_end2end.yml', gpu_id=0, imdb_name='voc_2012_train', max_iters=2000000, pretrained_model='data/imagenet_models/VGG16.v2.caffemodel', randomize=False, set_cfgs=None, solver='models/pascal_voc/VGG16/faster_rcnn_end2end/solver.prototxt')
Using config:
{'DATA_DIR': '/home/anguyen/workspace/y_testbox/affordance-net/data',
'DEDUP_BOXES': 0.0625,
'EPS': 1e-14,
'EXP_DIR': 'faster_rcnn_end2end',
'GPU_ID': 0,
'MATLAB': 'matlab',
'MODELS_DIR': '/home/anguyen/workspace/y_testbox/affordance-net/models/coco',
'PIXEL_MEANS': array([[[ 102.9801, 115.9465, 122.7717]]]),
'RNG_SEED': 3,
'ROOT_DIR': '/home/anguyen/workspace/y_testbox/affordance-net',
'TEST': {'BBOX_REG': True,
'HAS_RPN': True,
'MASK_REG': True,
'MAX_SIZE': 1000,
'NMS': 0.3,
'PROPOSAL_METHOD': 'selective_search',
'RPN_MIN_SIZE': 16,
'RPN_NMS_THRESH': 0.7,
'RPN_POST_NMS_TOP_N': 1000,
'RPN_PRE_NMS_TOP_N': 6000,
'SCALES': [600],
'SVM': False,
'TEST_INSTANCE': True},
'TRAIN': {'ASPECT_GROUPING': True,
'BATCH_SIZE': 32,
'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
'BBOX_NORMALIZE_TARGETS': True,
'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
'BBOX_REG': True,
'BBOX_THRESH': 0.5,
'BG_THRESH_HI': 0.5,
'BG_THRESH_LO': 0.0,
'FG_FRACTION': 0.25,
'FG_THRESH': 0.5,
'HAS_RPN': True,
'IMS_PER_BATCH': 1,
'MASK_REG': True,
'MASK_SIZE': 244,
'MAX_SIZE': 1000,
'PROPOSAL_METHOD': 'gt',
'RPN_BATCHSIZE': 256,
'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'RPN_CLOBBER_POSITIVES': False,
'RPN_FG_FRACTION': 0.5,
'RPN_MIN_SIZE': 0,
'RPN_NEGATIVE_OVERLAP': 0.3,
'RPN_NMS_THRESH': 0.7,
'RPN_POSITIVE_OVERLAP': 0.7,
'RPN_POSITIVE_WEIGHT': -1.0,
'RPN_POST_NMS_TOP_N': 2000,
'RPN_PRE_NMS_TOP_N': 12000,
'SCALES': [600],
'SNAPSHOT_INFIX': '',
'SNAPSHOT_ITERS': 10000,
'TRAINING_DATA': 'VOC_2012_train',
'USE_FLIPPED': True,
'USE_PREFETCH': False},
'USE_GPU_NMS': True}
Loaded dataset `voc_2012_train` for training
Set proposal method: gt
Appending horizontally-flipped training examples...
voc_2012_train gt roidb loaded from /home/anguyen/workspace/y_testbox/affordance-net/data/cache/voc_2012_train_gt_roidb.pkl
done
Preparing training data...
done
12368 roidb entries
Output will be saved to `/home/anguyen/workspace/y_testbox/affordance-net/output/faster_rcnn_end2end/voc_2012_train`
Filtered 0 roidb entries: 12368 -> 12368
cfg.TRAIN.BBOX_REG = True
Computing bounding-box regression targets...
bbox target means:
[[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]
[ 0. 0. 0. 0.]]
[ 0. 0. 0. 0.]
bbox target stdevs:
[[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]
[ 0.1 0.1 0.2 0.2]]
[ 0.1 0.1 0.2 0.2]
Normalizing targets
done
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1115 09:32:19.443315 29973 solver.cpp:48] Initializing solver from parameters:
train_net: "models/pascal_voc/VGG16/faster_rcnn_end2end/train.prototxt"
base_lr: 0.001
display: 20
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
stepsize: 150000
snapshot: 0
snapshot_prefix: "vgg16_faster_rcnn"
average_loss: 100
iter_size: 2
I1115 09:32:19.443349 29973 solver.cpp:81] Creating training net from train_net file: models/pascal_voc/VGG16/faster_rcnn_end2end/train.prototxt
I1115 09:32:19.444644 29973 net.cpp:49] Initializing net from parameters:
name: "VGG_ILSVRC_16_layers"
We set the proposal method in the cfgs file: experiments/cfgs/faster_rcnn_end2end.yml
Make sure you have something like this:
EXP_DIR: faster_rcnn_end2end
TRAIN:
  HAS_RPN: True
  IMS_PER_BATCH: 1
  BBOX_NORMALIZE_TARGETS_PRECOMPUTED: True
  RPN_POSITIVE_OVERLAP: 0.7
  RPN_BATCHSIZE: 256
  PROPOSAL_METHOD: gt
  BG_THRESH_LO: 0.0
TEST:
  HAS_RPN: True
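A minimal sketch of what loading this .yml fragment does (helper name and default values are simplified stand-ins, not the library's actual cfg): the parsed experiment file recursively overrides the library defaults, which is how PROPOSAL_METHOD flips from the default "selective_search" to "gt".

```python
# Simplified defaults, standing in for lib/fast_rcnn/config.py.
DEFAULTS = {
    "TRAIN": {"PROPOSAL_METHOD": "selective_search", "HAS_RPN": False},
    "TEST": {"HAS_RPN": False},
}

def merge_into(defaults, overrides):
    """Recursively overwrite default values with those from the .yml file."""
    for key, value in overrides.items():
        if isinstance(value, dict):
            merge_into(defaults.setdefault(key, {}), value)
        else:
            defaults[key] = value

# What the YAML fragment above parses to (written by hand here to stay
# dependency-free; the real code parses the file with a YAML loader).
overrides = {
    "TRAIN": {"HAS_RPN": True, "PROPOSAL_METHOD": "gt"},
    "TEST": {"HAS_RPN": True},
}
merge_into(DEFAULTS, overrides)
print(DEFAULTS["TRAIN"]["PROPOSAL_METHOD"])  # gt
```

If the merge step silently fails (see the easydict discussion below in this thread), the defaults survive and training falls back to selective search.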
This reply is for the original "AssertionError: Selective search data not found". I encountered this problem and found a way to solve it. The main cause of this error is the version of easydict. This is only my experience, but I hope it solves the problem for you.

Some versions of easydict fail to pass the configuration (from config.yml to the rest of the code). The solution is therefore to install a correct version of easydict. I first installed easydict 1.4 with conda install -c auto easydict, following Easydict::Anaconda. Although PROPOSAL_METHOD in config.yml is set to 'gt', meaning RPN is used, easydict 1.4 could not pass this configuration through to train_net.py, causing the AssertionError: Selective search data not found. Even if you manually set PROPOSAL_METHOD in train_net.py, other errors follow, since the whole configuration is wrong!

I then searched for other easydict versions with anaconda search -t conda easydict and found verydeep/easydict at version 1.6. So my solution was to install it with conda install -c verydeep easydict. Since then there have been no further problems and the training is going well.
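To illustrate what a working easydict provides, here is a plain-Python stand-in (easydict itself is not assumed to be installed; this mimics its documented behavior, not its actual source): nested dicts are converted recursively into objects whose values are reachable as attributes. If that recursive conversion is broken in a given build, cfg.TRAIN.PROPOSAL_METHOD silently keeps its default and the code falls back to selective search.

```python
# Hypothetical stand-in for easydict.EasyDict, for illustration only.
class MiniEasyDict(dict):
    def __init__(self, d=None):
        super().__init__()
        for key, value in (d or {}).items():
            self[key] = value

    def __setitem__(self, key, value):
        # Recursively wrap nested plain dicts so attribute access works
        # at every level (cfg.TRAIN.PROPOSAL_METHOD, not just cfg["TRAIN"]).
        if isinstance(value, dict) and not isinstance(value, MiniEasyDict):
            value = MiniEasyDict(value)
        super().__setitem__(key, value)
        super().__setattr__(key, value)  # expose the item as an attribute too

    __setattr__ = __setitem__

cfg = MiniEasyDict({"TRAIN": {"PROPOSAL_METHOD": "gt", "HAS_RPN": True}})
print(cfg.TRAIN.PROPOSAL_METHOD)  # gt
```

A version that skips the recursive wrapping would leave cfg.TRAIN as a plain dict, so attribute-style reads deeper in the config fail or return defaults, matching the symptom described above.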
Thanks a lot @superchenyan !
Hi,
I am trying to train the model with your code, but I ran into a problem. I am using this command line:
./experiments/scripts/faster_rcnn_end2end.sh 1 VGG16 pascal_voc
And here is the error:
Set proposal method: selective_search
Appending horizontally-flipped training examples...
voc_2012_train gt roidb loaded from /media/MMVCNYLOCAL_2/MMVC_NY/Jin_Huang/affordance-net/data/cache/voc_2012_train_gt_roidb.pkl
Traceback (most recent call last):
  File "./tools/train_net.py", line 108, in <module>
    imdb, roidb = combined_roidb(args.imdb_name)
  File "./tools/train_net.py", line 73, in combined_roidb
    roidbs = [get_roidb(s) for s in imdb_names.split('+')]
  File "./tools/train_net.py", line 66, in get_roidb
    roidb = get_training_roidb(imdb)
  File "/media/MMVCNYLOCAL_2/MMVC_NY/Jin_Huang/affordance-net/tools/../lib/fast_rcnn/train.py", line 127, in get_training_roidb
    imdb.append_flipped_images()
  File "/media/MMVCNYLOCAL_2/MMVC_NY/Jin_Huang/affordance-net/tools/../lib/datasets/imdb.py", line 111, in append_flipped_images
    boxes = self.roidb[i]['boxes'].copy()
  File "/media/MMVCNYLOCAL_2/MMVC_NY/Jin_Huang/affordance-net/tools/../lib/datasets/imdb.py", line 67, in roidb
    self._roidb = self.roidb_handler()
  File "/media/MMVCNYLOCAL_2/MMVC_NY/Jin_Huang/affordance-net/tools/../lib/datasets/pascal_voc.py", line 145, in selective_search_roidb
    ss_roidb = self._load_selective_search_roidb(gt_roidb)
  File "/media/MMVCNYLOCAL_2/MMVC_NY/Jin_Huang/affordance-net/tools/../lib/datasets/pascal_voc.py", line 179, in _load_selective_search_roidb
    'Selective search data not found at: {}'.format(filename)
AssertionError: Selective search data not found at: /media/MMVCNYLOCAL_2/MMVC_NY/Jin_Huang/affordance-net/data/selective_search_data/voc_2012_train.mat
I also checked the shell script; there is a coco option, but when I use
./experiments/scripts/faster_rcnn_end2end.sh 1 VGG16 coco
it shows: IOError: [Errno 2] No such file or directory: 'affordance-net/data/coco/annotations/instances_train2014.json'
I downloaded the data as instructed in the readme, but it seems there is a dataset issue. Do you know how I can solve this problem?
Thanks,