Closed: Tetsujinfr closed this issue 4 years ago.
This generally means that your image is not in the val set. Note that preds_filt is the filtered preds where the predictions of images not in coco val 2014 will be removed.
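A hypothetical sketch of that filtering step (the actual code lives in `eval_utils.language_eval`; names and values here are illustrative only): predictions whose `image_id` is not in the COCO val 2014 annotation set are dropped before scoring.

```python
# Illustrative sketch, not the repo's actual code: predictions for images
# that are not in COCO val 2014 are removed before language evaluation.
preds = [
    {"image_id": 391895, "caption": "a man riding a motorcycle"},  # in val 2014
    {"image_id": 123456789, "caption": "a custom image"},          # not in val 2014
]
val_ids = {391895}  # image ids read from the annotation file

preds_filt = [p for p in preds if p["image_id"] in val_ids]
print(len(preds_filt))  # only the val-set prediction survives

# If none of your images belong to val 2014, preds_filt ends up empty, and
# any average taken over it later raises ZeroDivisionError.
```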
Ok, but why does inference work the first time I run it? The viz.json file was created correctly that first time.
I'm confused too, sorry. I think some other issues mention this problem as well.
It's probably something silly somewhere, but I'm not familiar with the code base, so I'd have to do quite a bit of digging to identify why I get this behavior from the second run onward. I'll probably adapt your Colab notebook offline instead; it doesn't seem to exhibit this behavior. I'll let you know if I find the root cause.
Thanks for this great work btw, really cool project
Ok, so I quickly replicated your Colab notebook locally in a Jupyter notebook on my machine, and it works just fine. I'll try to tweak it further. Obviously this is just the inference part, so if folks want the training part too, they need to stick with the full repo code base. I'm closing this for now; I think this "inference only works once" situation may be specific to my setup.
I had the same problem when running evaluation: it works the first time but then throws this division by zero error every time after that. Have you found a solution to this problem? Thanks for your help.
It's happening on my end as well. Every pretrained ResNet model can only run once; after that, eval.py throws the zero division error.
Tell me if the solution in https://github.com/ruotianluo/ImageCaptioning.pytorch/issues/100 fixes it.
If so, I will push a bug fix.
Wow, thanks for the quick reply! I tried that one, but it doesn't seem to fix the problem. The code looks like this, though I don't know if that's what you intended:
    if self.wrap:
        if len(self._index_list) == 0:
            return None
        wrapped = True
    else:
Sorry, the fix in #100 is already there. Interesting. Are you using master?
Do you want to try setting --language_eval to 0?
Yes, I am using master. I will try that, give me a sec. Oh, and by the way, the code I posted is after I added the fix from #100 manually. It wasn't there when I downloaded master.
Remove what I asked you to add; that was wrong.
Sorry, this may sound a bit silly, but by --language_eval do you mean passing it to eval.py as an argument? I don't think eval.py has that argument option. It has --only_lang_eval, and its default is 0. Thanks for being patient!
I am in the middle of something, but I believe language_eval is an argument; would you mind trying it?
Yeah, I already tried that, but eval.py still gives the same error. Here is the command I used:
python tools/eval.py --model new_self/model-best.pth --infos_path new_self/infos_fc_nsc-best.pkl --image_folder images/ --num_images 1 --language_eval 0
I am on Python 3.7.8, PyTorch 1.6, and Windows 10, in case that info helps.
I finished the whole setup process, including what's mentioned in data/README.md. (I downloaded cocotalk_att.tar and cocotalk_fc.tar and put them in the designated folder.)
Another thing: since I am on Windows, I ran into some problems with the scripts you provided for Stanford CoreNLP and Google's word2vec, so I downloaded them manually.
Can you show the full error traceback?
loading annotations into memory...
0:00:00.284238
creating index...
index created!
Traceback (most recent call last):
  File "tools/eval.py", line 74, in <module>
    lang_stats = eval_utils.language_eval(opt.input_json, predictions, n_predictions, vars(opt), opt.split)
  File "d:\workspace\video-mem\imagecaptioning.pytorch\captioning\utils\eval_utils.py", line 79, in language_eval
    mean_perplexity = sum([_['perplexity'] for _ in preds_filt]) / len(preds_filt)
ZeroDivisionError: division by zero
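For context, a defensive sketch of the failing averaging step (hypothetical; this is not the repo's actual fix): guard against an empty filtered list instead of dividing by its length unconditionally.

```python
# Illustrative only: guard the perplexity average so an empty preds_filt
# (all predictions filtered out) does not raise ZeroDivisionError.
def mean_perplexity(preds_filt):
    if not preds_filt:           # nothing survived the val-set filtering
        return float("nan")      # signal "no score" instead of crashing
    return sum(p["perplexity"] for p in preds_filt) / len(preds_filt)

print(mean_perplexity([{"perplexity": 2.0}, {"perplexity": 4.0}]))  # 3.0
print(mean_perplexity([]))                                          # nan
```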
Replace lines 74-75 with:

    if opt.dump_json == 1:
        json.dump(predictions, open('vis/vis.json', 'w'))
Thanks for the fix. Now it can produce the JSON file, but the content of the file is always the same no matter how I change what's in image_folder. Also, it seems it can only read one picture even if I set num_images to another number.
Add --force.
Ah, it works! Thanks for all the help. There are some inaccuracies in the captioning, but I guess that's due to the model I'm using. I'll play with it for a bit. Thanks for the project and for being so helpful; it's really cool.
Hi all, I am having the same problem. I tried the proposed fix without success. Which folder do you make the image_folder option point to? I suppose test2014.
Hey, I have the same problem: my JSON file shows the same captions no matter the images. I used --force as you said, but it returned the following error:
/ImageCaptioning.pytorch$ python3 tools/eval.py --force 1 --num_images -1
Number of predictions before filtering: 10
DataLoaderRaw loading images from folder: blah
57
reading from /home/csio/ImageCaptioning.pytorch/data/dataset_coco.json
DataLoaderRaw found 123287 images
Traceback (most recent call last):
  File "tools/eval.py", line 126, in <module>
    loss, split_predictions, lang_stats = eval_utils.eval_split(model, crit, loader,
  File "/home/csio/ImageCaptioning.pytorch/captioning/utils/eval_utils.py", line 158, in eval_split
    data = loader.get_batch(split)
  File "/home/csio/ImageCaptioning.pytorch/captioning/data/dataloaderraw.py", line 107, in get_batch
    img = skimage.io.imread(self.files[ri])
  File "/home/csio/.local/lib/python3.8/site-packages/skimage/io/_io.py", line 53, in imread
    img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
  File "/home/csio/.local/lib/python3.8/site-packages/skimage/io/manage_plugins.py", line 205, in call_plugin
    return func(*args, **kwargs)
  File "/home/csio/.local/lib/python3.8/site-packages/skimage/io/_plugins/imageio_plugin.py", line 11, in imread
    out = np.asarray(imageio_imread(*args, **kwargs))
  File "/home/csio/.local/lib/python3.8/site-packages/imageio/v3.py", line 53, in imread
    with imopen(uri, "r", **plugin_kwargs) as img_file:
  File "/home/csio/.local/lib/python3.8/site-packages/imageio/core/imopen.py", line 113, in imopen
    request = Request(uri, io_mode, format_hint=format_hint, extension=extension)
  File "/home/csio/.local/lib/python3.8/site-packages/imageio/core/request.py", line 247, in __init__
    self._parse_uri(uri)
  File "/home/csio/.local/lib/python3.8/site-packages/imageio/core/request.py", line 407, in _parse_uri
    raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: '/home/csio/ImageCaptioning.pytorch/blah/COCO_val2014_000000391895.jpg'
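The path in that error hints at what is going on: judging from the traceback, the loader reads COCO filenames out of dataset_coco.json and joins them onto the --image_folder value, so it ends up looking for val 2014 files inside the user's own folder. A minimal, hypothetical simplification of that join:

```python
import os

# Sketch of the path construction implied by the traceback above
# (illustrative only; not the repo's actual DataLoaderRaw code).
folder = "blah"                              # the --image_folder value
coco_name = "COCO_val2014_000000391895.jpg"  # filename from dataset_coco.json

path = os.path.join(folder, coco_name)
print(path)  # e.g. 'blah/COCO_val2014_000000391895.jpg' on Linux
```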
It needs the COCO val 2014 images, but I want to evaluate my own dataset. Can you please help? Thanks!
Hi, I got the eval code to run fine the first time on an image, but when I try to run it again, on the same image or any other image, I get the following error message. Is there some buffer I need to clean up between inference runs? The same pattern occurs if I change the pre-trained model used for evaluation: it works the first time, but then throws this division by zero error every time. Thanks for your help.