Thanks for reporting! We are able to repro the issue. It turns out that RetinaNet uses a different inference code path that is not currently supported by infer_simple.py. We'll work on fixing that. In the meantime, it should work for inference on the COCO dataset; just not via infer_simple.py.
@rbgirshick Thanks for your prompt reply.
So you are saying python2 tools/test_net.py should still work? I will go ahead and try it.
Please let me know when you have fixed this by closing this issue. Thanks.
Meanwhile, am I able to fine-tune on my own dataset and labels using RetinaNet?
Thanks again.
@ChengshuLi: yes, correct, tools/test_net.py should still work (though note that it will run inference on a dataset---coco_2014_minival by default---and not an arbitrary directory of files). Before we address the root issue, you can probably pretty easily figure out how to hack the code to make inference run on any image you want.
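For anyone looking for that hack, here is a rough sketch of single-image inference modeled on what tools/infer_simple.py does. The config and weights paths are placeholders, and the module and function names follow the Detectron layout at the time of this thread, so they may differ slightly in a newer checkout.

# Rough sketch of single-image inference, modeled on tools/infer_simple.py.
# Paths are placeholders; module layout and function signatures may differ
# between Detectron versions.
import cv2
from caffe2.python import workspace

from core.config import assert_and_infer_cfg, cfg, merge_cfg_from_file
import core.test_engine as infer_engine
import utils.c2 as c2_utils

workspace.GlobalInit(['caffe2', '--caffe2_log_level=0'])
c2_utils.import_detectron_ops()

merge_cfg_from_file('configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml')
cfg.NUM_GPUS = 1
cfg.TEST.WEIGHTS = '/path/to/model_final.pkl'  # placeholder for downloaded weights
assert_and_infer_cfg()

model = infer_engine.initialize_model_from_cfg()

im = cv2.imread('/path/to/any/image.jpg')  # placeholder for your own image
with c2_utils.NamedCudaScope(0):
    cls_boxes, cls_segms, cls_keyps = infer_engine.im_detect_all(model, im, None)
# cls_boxes[k] is an Nx5 array of (x1, y1, x2, y2, score) detections for class k.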
@rbgirshick Thanks!
I am still a little bit confused about how to set up the COCO dataset.
Here are things that I have done:
1. Arranged the COCO images and annotations in the following layout:
coco
|_ coco_train2014
|  |_ <im-1-name>.jpg
|  |_ ...
|  |_ <im-N-name>.jpg
|_ coco_val2014
|  |_ ...
|_ annotations
|  |_ instances_train2014.json
|  |_ ...
2. Downloaded the annotation json files into the coco/annotations directory referenced above.
3. Created the symlink: ln -s /path/to/coco $DETECTRON/lib/datasets/data/coco
4. Ran the following command:
python2 tools/test_net.py \
--cfg configs/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml \
TEST.WEIGHTS https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl \
NUM_GPUS 1
The output looks correct:
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 1/5000 0.649s + 0.025s (eta: 0:56:09)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 11/5000 0.148s + 0.011s (eta: 0:13:12)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 21/5000 0.121s + 0.009s (eta: 0:10:46)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 31/5000 0.122s + 0.009s (eta: 0:10:49)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 41/5000 0.116s + 0.009s (eta: 0:10:19)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 51/5000 0.112s + 0.010s (eta: 0:10:02)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 61/5000 0.110s + 0.009s (eta: 0:09:49)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 71/5000 0.108s + 0.009s (eta: 0:09:35)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 81/5000 0.107s + 0.009s (eta: 0:09:28)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 91/5000 0.106s + 0.009s (eta: 0:09:23)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 101/5000 0.105s + 0.009s (eta: 0:09:18)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 111/5000 0.103s + 0.009s (eta: 0:09:10)
INFO test_engine.py: 179: im_detect: range [1, 5000] of 5000: 121/5000 0.103s + 0.009s (eta: 0:09:08)
But I am not sure where the detection results are stored.
Also, in /path/to/coco, there are only coco_train2014 and coco_val2014. Where is coco_2014_minival then?
After reading the code, I found the detection results are stored in $DETECTRON//test/coco_2014_minival/generalized_rcnn.
Thanks!
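In case it helps anyone inspect those results programmatically, here is a minimal sketch, assuming the run wrote a detections.pkl file into that generalized_rcnn directory (the path below is a placeholder and the dictionary keys may vary between Detectron versions):

# Minimal sketch for loading the saved detections; the path is a placeholder
# and the key names may differ across Detectron versions.
import cPickle as pickle

det_file = '/path/to/output/test/coco_2014_minival/generalized_rcnn/detections.pkl'
with open(det_file, 'rb') as f:
    dets = pickle.load(f)

# all_boxes is indexed as all_boxes[class_idx][image_idx] and holds an Nx5
# array of (x1, y1, x2, y2, score) per image; all_segms and all_keyps follow
# the same [class][image] layout.
all_boxes = dets['all_boxes']
print('num classes: %d, num images: %d' % (len(all_boxes), len(all_boxes[1])))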
Hi @ChengshuLi, we addressed this issue in dd6c661 and you should now be able to use infer.py and infer_simple.py with RetinaNet.
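For reference, once the fix is pulled, an invocation along these lines should work; the config name is one of the RetinaNet baselines under configs/12_2017_baselines, and the weights path is a placeholder for whichever RetinaNet model you download from the model zoo:
python2 tools/infer_simple.py \
    --cfg configs/12_2017_baselines/retinanet_R-50-FPN_1x.yaml \
    --output-dir /tmp/detectron-visualizations \
    --image-ext jpg \
    --wts /path/to/retinanet/model_final.pkl \
    demo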
@ir413 It still doesn't run successfully for me:
I0130 11:35:05.469234 21503 net_dag_utils.cc:118] Operator graph pruning prior to chain compute took: 0.000156796 secs
I0130 11:35:05.469467 21503 net_dag.cc:61] Number of parallel execution chains 41 Number of operators = 281
Traceback (most recent call last):
File "tools/infer_simple.py", line 147, in
Hi @moyans, please make sure that you've pulled the latest master.
I also get the same error for RetinaNet for both infer.py and infer_simple.py, even after pulling the latest (10 mins ago).
EDIT: turns out I love Detectron so much I cloned it twice. Both infer.py and infer_simple.py work great for RetinaNet. Thank you so much for fixing this!
Yes, it works great for RetinaNet.
Hi, first of all, thanks for releasing all of these wonderful models.
I succeeded in running the inference code with Mask R-CNN following the tutorial.
But when I tried to run the inference code with RetinaNet, I received the following error:
Any help would be greatly appreciated. Thanks a lot!
For your reference, I will attach the entire output from my terminal: