OFA-Sys / OFA

Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
Apache License 2.0

Captioning: No module named pycocotools.coco #284

Open eugfomitcheva opened 1 year ago

eugfomitcheva commented 1 year ago

I'm trying to run evaluate_caption.sh. Inference runs successfully, but upon evaluation the script fails with:

Traceback (most recent call last):
  File "coco_eval.py", line 5, in <module>
    from pycocotools.coco import COCO
ImportError: No module named pycocotools.coco

I have all requirements, including pycocotools, installed per the requirements file. Additionally, I see pycocotools.coco in the pycocotools repo, so I'm a bit stumped as to why I'm getting this error. Any advice on how to resolve this is appreciated!

Thank you.

My logs prior to this error look like this:

2022-11-01 17:00:52 | INFO | ofa.evaluate | loading model(s) from ../../checkpoints/caption_large_best_clean.pt
2022-11-01 17:00:52 | INFO | ofa.evaluate | loading model(s) from ../../checkpoints/caption_large_best_clean.pt
2022-11-01 17:00:54 | INFO | tasks.ofa_task | source dictionary: 59457 types
2022-11-01 17:00:54 | INFO | tasks.ofa_task | target dictionary: 59457 types
2022-11-01 17:00:55 | INFO | tasks.ofa_task | source dictionary: 59457 types
2022-11-01 17:00:55 | INFO | tasks.ofa_task | target dictionary: 59457 types
/home/edf257/.local/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2895.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
/home/edf257/.local/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2895.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
local datafile ../../datasets/caption_data/caption_test.tsv slice_id 1 begin to initialize row_count and line_idx-to-offset mapping
local datafile ../../datasets/caption_data/caption_test.tsv slice_id 0 begin to initialize row_count and line_idx-to-offset mapping
local datafile ../../datasets/caption_data/caption_test.tsv slice_id 1 finished initializing row_count and line_idx-to-offset mapping
file ../../datasets/caption_data/caption_test.tsv slice_id 1 row count 2500 total row count 5000
/home/edf257/.local/lib/python3.8/site-packages/torchvision/transforms/transforms.py:332: UserWarning: Argument 'interpolation' of type int is deprecated since 0.13 and will be removed in 0.15. Please use InterpolationMode enum.
  warnings.warn(
local datafile ../../datasets/caption_data/caption_test.tsv slice_id 0 finished initializing row_count and line_idx-to-offset mapping
file ../../datasets/caption_data/caption_test.tsv slice_id 0 row count 2500 total row count 5000
/home/edf257/.local/lib/python3.8/site-packages/torchvision/transforms/transforms.py:332: UserWarning: Argument 'interpolation' of type int is deprecated since 0.13 and will be removed in 0.15. Please use InterpolationMode enum.
  warnings.warn(
/scratch/edf257/OFA/fairseq/fairseq/search.py:140: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  beams_buf = indices_buf // vocab_size
/scratch/edf257/OFA/fairseq/fairseq/search.py:140: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  beams_buf = indices_buf // vocab_size
/scratch/edf257/OFA/models/sequence_generator.py:698: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  unfin_idx = bbsz_idx // beam_size
/scratch/edf257/OFA/models/sequence_generator.py:698: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  unfin_idx = bbsz_idx // beam_size
2022-11-01 17:01:20 | INFO | fairseq.logging.progress_bar | :     11 / 157 sentences=16
2022-11-01 17:01:20 | INFO | fairseq.logging.progress_bar | :     11 / 157 sentences=16
2022-11-01 17:01:32 | INFO | fairseq.logging.progress_bar | :     21 / 157 sentences=16
2022-11-01 17:01:32 | INFO | fairseq.logging.progress_bar | :     21 / 157 sentences=16
2022-11-01 17:01:44 | INFO | fairseq.logging.progress_bar | :     31 / 157 sentences=16
2022-11-01 17:01:45 | INFO | fairseq.logging.progress_bar | :     31 / 157 sentences=16
2022-11-01 17:01:57 | INFO | fairseq.logging.progress_bar | :     41 / 157 sentences=16
2022-11-01 17:01:57 | INFO | fairseq.logging.progress_bar | :     41 / 157 sentences=16
2022-11-01 17:02:09 | INFO | fairseq.logging.progress_bar | :     51 / 157 sentences=16
2022-11-01 17:02:10 | INFO | fairseq.logging.progress_bar | :     51 / 157 sentences=16
2022-11-01 17:02:22 | INFO | fairseq.logging.progress_bar | :     61 / 157 sentences=16
2022-11-01 17:02:22 | INFO | fairseq.logging.progress_bar | :     61 / 157 sentences=16
2022-11-01 17:02:34 | INFO | fairseq.logging.progress_bar | :     71 / 157 sentences=16
2022-11-01 17:02:34 | INFO | fairseq.logging.progress_bar | :     71 / 157 sentences=16
2022-11-01 17:02:47 | INFO | fairseq.logging.progress_bar | :     81 / 157 sentences=16
2022-11-01 17:02:47 | INFO | fairseq.logging.progress_bar | :     81 / 157 sentences=16
2022-11-01 17:02:59 | INFO | fairseq.logging.progress_bar | :     91 / 157 sentences=16
2022-11-01 17:02:59 | INFO | fairseq.logging.progress_bar | :     91 / 157 sentences=16
2022-11-01 17:03:11 | INFO | fairseq.logging.progress_bar | :    101 / 157 sentences=16
2022-11-01 17:03:12 | INFO | fairseq.logging.progress_bar | :    101 / 157 sentences=16
2022-11-01 17:03:24 | INFO | fairseq.logging.progress_bar | :    111 / 157 sentences=16
2022-11-01 17:03:24 | INFO | fairseq.logging.progress_bar | :    111 / 157 sentences=16
2022-11-01 17:03:36 | INFO | fairseq.logging.progress_bar | :    121 / 157 sentences=16
2022-11-01 17:03:37 | INFO | fairseq.logging.progress_bar | :    121 / 157 sentences=16
2022-11-01 17:03:49 | INFO | fairseq.logging.progress_bar | :    131 / 157 sentences=16
2022-11-01 17:03:49 | INFO | fairseq.logging.progress_bar | :    131 / 157 sentences=16
2022-11-01 17:04:01 | INFO | fairseq.logging.progress_bar | :    141 / 157 sentences=16
2022-11-01 17:04:02 | INFO | fairseq.logging.progress_bar | :    141 / 157 sentences=16
2022-11-01 17:04:13 | INFO | fairseq.logging.progress_bar | :    151 / 157 sentences=16
2022-11-01 17:04:14 | INFO | fairseq.logging.progress_bar | :    151 / 157 sentences=16
2022-11-01 17:04:21 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:2 to store for rank: 1
2022-11-01 17:04:21 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:2 to store for rank: 0
2022-11-01 17:04:21 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
2022-11-01 17:04:21 | INFO | torch.distributed.distributed_c10d | Rank 1: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
logicwong commented 1 year ago

@eugfomitcheva Not sure.. Can you successfully import pycocotools.coco in your python console?
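
Something like this should tell you whether the package is visible to the interpreter that actually runs the script (just a generic sketch, nothing OFA-specific):

```python
import importlib.util
import sys

# Report which interpreter this is and whether pycocotools is importable
# from it. If the spec is None, the package is missing from THIS environment,
# e.g. because pip installed it for a different Python.
spec = importlib.util.find_spec("pycocotools")
if spec is None:
    print(f"pycocotools not found for {sys.executable}; "
          f"try: {sys.executable} -m pip install pycocotools")
else:
    print(f"pycocotools found at {spec.origin}")
```

Run it with the same python binary that evaluate_caption.sh invokes, otherwise the check proves nothing.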

eugfomitcheva commented 1 year ago

pycocotools installs with the requirements, but I don't believe pycocotools.coco is listed separately in there. Is it necessary to clone the COCO repo?
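
(For anyone hitting the same thing: pycocotools.coco is a submodule shipped inside the pip package itself, so a separate clone of the COCO repo shouldn't be needed. A common cause of this error is a pip/interpreter mismatch. A minimal sketch to check that pip and the Python running coco_eval.py agree:)

```python
import subprocess
import sys

# Compare the interpreter running this script with the interpreter that
# `python -m pip` targets. If packages were installed with a bare `pip`
# bound to a different Python (e.g. ~/.local for python3.8 vs. a conda env),
# the import will fail even though `pip list` shows pycocotools.
print("interpreter running this script:", sys.executable)
result = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True, check=True,
)
print("pip for that interpreter:", result.stdout.strip())
```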