microsoft / scene_graph_benchmark

image scene graph generation benchmark
MIT License

Issue when attempting to generate image features #31

Open Frederick369 opened 3 years ago

Frederick369 commented 3 years ago

Hi,

Thank you for providing the code for this project. I was trying to generate image features, and I attempted to follow the examples from https://github.com/microsoft/scene_graph_benchmark/issues/25 and https://github.com/microsoft/scene_graph_benchmark/issues/7 as follows:

With a directory of 18 images (stored in datasets/test_imgs), I used tools/mini_tsv/demo_tsv.py to generate the TSV files (label, hw, linelist) for the corresponding dataset and stored them in datasets/test/. Since I didn't have any particular label map in mind, and I had already downloaded the checkpoint for the RelDN model and its corresponding config file, I used the label map VG-SGG-dicts-vgoi6-clipped.json (copied into the same directory), so my yaml file is as follows:

datasets/test/test_imgs.yaml:

```yaml
img: test_imgs.tsv
label: test_imgs_label.tsv
hw: test_imgs_hw.tsv
label_map: VG-SGG-dicts-vgoi6-clipped.json
linelist: test_imgs_linelist.tsv
```
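For reference, my rough understanding of the TSV row layout is sketched below. This is only an approximation of what the mini_tsv demo script produces, and the single whole-image placeholder box per label row is mine for illustration, not the script's actual output:

```python
# Rough sketch (not the actual demo_tsv.py code) of the TSV layout I believe I generated.
import base64
import json
import os

from PIL import Image

img_dir = "datasets/test_imgs"           # the 18 test images
out_prefix = "datasets/test/test_imgs"   # matches the filenames in the yaml above

rows_img, rows_label, rows_hw, rows_linelist = [], [], [], []
for idx, name in enumerate(sorted(os.listdir(img_dir))):
    key = os.path.splitext(name)[0]
    path = os.path.join(img_dir, name)
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")  # base64-encoded image bytes
    width, height = Image.open(path).size
    rows_img.append([key, b64])
    # Placeholder label: one whole-image box per image (illustration only).
    rows_label.append([key, json.dumps([{"rect": [0, 0, width, height], "class": "object"}])])
    rows_hw.append([key, json.dumps([{"height": height, "width": width}])])
    rows_linelist.append([str(idx)])

for suffix, rows in [(".tsv", rows_img), ("_label.tsv", rows_label),
                     ("_hw.tsv", rows_hw), ("_linelist.tsv", rows_linelist)]:
    with open(out_prefix + suffix, "w") as f:
        for row in rows:
            f.write("\t".join(row) + "\n")
```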

Then I made a new yaml file, datasets/test/testing.yaml, which is identical to rel_danfeiX_FPN50_reldn.yaml except with DATASETS.TRAIN = ("test/test_imgs.yaml",) and DATASETS.TEST = ("test/test_imgs.yaml",), and ran the command:

```bash
python -m torch.distributed.launch --nproc_per_node=2 tools/test_sg_net.py --config-file datasets/test/testing.yaml
```
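For completeness, the only part I changed in datasets/test/testing.yaml looks roughly like this (a sketch of just the edited section; everything else is copied verbatim from rel_danfeiX_FPN50_reldn.yaml):

```yaml
# Sketch of the edited section only; all other keys are copied from rel_danfeiX_FPN50_reldn.yaml
DATASETS:
  TRAIN: ("test/test_imgs.yaml",)
  TEST: ("test/test_imgs.yaml",)
```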

This ran into the error:

```
2021-07-16 04:08:02,996 maskrcnn_benchmark.inference INFO: Start evaluation on test/test_imgs.yaml dataset(18 images).
INFO:maskrcnn_benchmark.inference:Start evaluation on test/test_imgs.yaml dataset(18 images).
  0%|          | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "tools/test_sg_net.py", line 198, in <module>
    main()
  File "tools/test_sg_net.py", line 194, in main
    run_test(cfg, model, args.distributed, model_name)
  File "tools/test_sg_net.py", line 73, in run_test
    save_predictions=cfg.TEST.SAVE_PREDICTIONS,
  File "/home/f-run/PyCharmProjects/scene_graph_benchmark/maskrcnn_benchmark/engine/inference.py", line 265, in inference
    predictions = compute_on_dataset(model, data_loader, device, bbox_aug, inference_timer)
  File "/home/f-run/PyCharmProjects/scene_graph_benchmark/maskrcnn_benchmark/engine/inference.py", line 32, in compute_on_dataset
    for _, batch in enumerate(tqdm(data_loader)):
  File "/home/f-run/.conda/envs/sg_benchmark/lib/python3.7/site-packages/tqdm/std.py", line 1185, in __iter__
    for obj in iterable:
  File "/home/f-run/.conda/envs/sg_benchmark/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/f-run/.conda/envs/sg_benchmark/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1085, in _next_data
    return self._process_data(data)
  File "/home/f-run/.conda/envs/sg_benchmark/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1111, in _process_data
    data.reraise()
  File "/home/f-run/.conda/envs/sg_benchmark/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/f-run/.conda/envs/sg_benchmark/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/f-run/.conda/envs/sg_benchmark/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/f-run/.conda/envs/sg_benchmark/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/f-run/PyCharmProjects/scene_graph_benchmark/maskrcnn_benchmark/data/datasets/relation_tsv.py", line 146, in __getitem__
    target = self.get_target_from_annotations(annotations, img_size)
  File "/home/f-run/PyCharmProjects/scene_graph_benchmark/maskrcnn_benchmark/data/datasets/relation_tsv.py", line 78, in get_target_from_annotations
    target = self.label_loader(annotations['objects'], img_size, remove_empty=False)
TypeError: list indices must be integers or slices, not str
```
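In case it helps with debugging: the last frame indexes annotations with the string key 'objects', so it presumably expects each label row to parse into a JSON dict rather than a list. A tiny, made-up illustration of the exact error mechanism (the box values are placeholders, not my data):

```python
import json

# Label row parsed as a plain list of boxes (placeholder values):
list_style = json.loads('[{"rect": [0, 0, 10, 10], "class": "dog"}]')
# Label row parsed as a dict keyed by "objects" (and presumably "relations"):
dict_style = json.loads('{"objects": [{"rect": [0, 0, 10, 10], "class": "dog"}], "relations": []}')

dict_style["objects"]  # works
list_style["objects"]  # TypeError: list indices must be integers or slices, not str
```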

This is very strange to me since, when I run the command without using my own dataset, I don't run into this issue at all. Is there anything I did incorrectly that could cause this error, and if so, how can I fix it?

This is my first issue, so forgive me if this is too much or too little information, or if it is better suited to Stack Overflow.

BigHyf commented 2 years ago

Hi, I ran into the same problem. May I ask how you solved it?