Open wangjanice8 opened 3 months ago
Hi, this may be due to the config file used. Please try replacing the config file with the one provided in VG150-penet-yolov8m, for example:
--config-file VG150-penet-yolov8m/config.yml
Hi, thank you for your helpful response! I have encountered another issue and would appreciate your assistance. While running relation_test_net.py with the following parameters:

config_file = "/model/VG150-penet-yolov8m/config.yml"
cfg.MODEL.WEIGHT = "/model/VG150-penet-yolov8m/best_model_epoch_2.pth"

I encountered a mismatch problem, as shown in the log below:

2024-07-21 17:49:44.049 | DEBUG | sgg_benchmark.utils.model_serialization:align_and_update_state_dicts:56 - NO-MATCHING of current module: roi_heads.relation.predictor.context_layer.W_obj.layers.0.bias of shape (1024,)
2024-07-21 17:49:44.049 | DEBUG | sgg_benchmark.utils.model_serialization:align_and_update_state_dicts:56 - NO-MATCHING of current module: roi_heads.relation.predictor.context_layer.W_obj.layers.0.weight of shape (1024, 200)
2024-07-21 17:49:44.049 | DEBUG | sgg_benchmark.utils.model_serialization:align_and_update_state_dicts:56 - NO-MATCHING of current module: roi_heads.relation.predictor.context_layer.W_obj.layers.1.bias of shape (2048,)
2024-07-21 17:49:44.050 | DEBUG | sgg_benchmark.utils.model_serialization:align_and_update_state_dicts:56 - NO-MATCHING of current module: roi_heads.relation.predictor.context_layer.W_obj.layers.1.weight of shape (2048, 1024)
...

Could you please advise on the cause of this issue and how to resolve it? Thank you!
Hi @wangjanice8,
I have just pushed some changes that should solve your problem. The issue comes from a difference in the dimensions of the MLP layers in penet. I refactored them back to the previous dimensions, and it should work now.
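For context, the NO-MATCHING lines in the log come from a simple shape comparison between the checkpoint and the freshly built model: keys whose shapes differ are skipped. A minimal sketch of that check (not the actual sgg_benchmark loader; the "new" hidden sizes here are made up for illustration):

```python
import torch.nn as nn

# MLP with the old hidden sizes the checkpoint was trained with (1024 -> 2048, as in the log)
old_mlp = nn.Sequential(nn.Linear(200, 1024), nn.ReLU(), nn.Linear(1024, 2048))
ckpt_state = old_mlp.state_dict()

# Current model built from a config with different (hypothetical) hidden sizes
new_mlp = nn.Sequential(nn.Linear(200, 512), nn.ReLU(), nn.Linear(512, 1024))
model_state = new_mlp.state_dict()

# Mimic the loader's shape check: any key whose shape differs cannot be loaded
no_matching = [
    key for key, tensor in model_state.items()
    if ckpt_state[key].shape != tensor.shape
]
for key in no_matching:
    print(f"NO-MATCHING of current module: {key} of shape {tuple(model_state[key].shape)}")
```

Once the model is built with the same dimensions the checkpoint was saved with, every key matches and nothing is skipped.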
Hello! Thank you very much for your response and continuous updates. I encountered an issue while running relation_test_net.py. Here is the error traceback:
Traceback (most recent call last):
File "/home/wangzixuan/real_time/SGG-Benchmark-main/tools/relation_test_net.py", line 180, in
KeyError: '2343729'
I have correctly downloaded the VG dataset. Could you please advise on how to resolve this issue?
Thank you very much.
Yeah, I don't know why some people are still having this issue, because I don't have it. Basically you need to replace the str() with int() and it should work, here: https://github.com/Maelic/SGG-Benchmark/blob/4d396a394eb74079cf3ed041df927581c063f4da/sgg_benchmark/data/datasets/visual_genome.py#L241
Make it like this:
target.add_field("informative_rels", self.informative_graphs[int(img_info['image_id'])])
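To illustrate why the cast fixes the KeyError (with made-up graph data, and assuming, as the fix implies, that informative_graphs is keyed by int image ids while img_info['image_id'] arrives as a string):

```python
# Hypothetical stand-in for self.informative_graphs, keyed by int image ids
informative_graphs = {2343729: [("man", "riding", "horse")]}

# The image id comes out of img_info as a string
img_info = {"image_id": "2343729"}

# Old code: a str() lookup against int keys raises KeyError: '2343729'
try:
    informative_graphs[str(img_info["image_id"])]
except KeyError as err:
    print("KeyError:", err)

# Fixed lookup: cast the id to int before indexing
rels = informative_graphs[int(img_info["image_id"])]
print(rels)
```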
Hi, have you resolved the dimension mismatch issue? I tried everything but still get the errors below:
========== !source activate sgg_benchmark && CUDA_VISIBLE_DEVICES=0 torchrun --master_port 10027 --nproc_per_node=1 tools/relation_test_net.py --config-file "./checkpoints/VG150-penet-yolov8m/config.yml" MODEL.ROI_RELATION_HEAD.USE_GT_BOX False MODEL.ROI_RELATION_HEAD.USE_GT_OBJECT_LABEL False MODEL.ROI_RELATION_HEAD.PREDICTOR CausalAnalysisPredictor MODEL.ROI_RELATION_HEAD.CAUSAL.EFFECT_TYPE TDE MODEL.ROI_RELATION_HEAD.CAUSAL.FUSION_TYPE sum MODEL.ROI_RELATION_HEAD.CAUSAL.CONTEXT_LAYER motifs TEST.IMS_PER_BATCH 1 DTYPE "float16" GLOVE_DIR ./glove MODEL.PRETRAINED_DETECTOR_CKPT ./checkpoints/VG150-penet-yolov8m OUTPUT_DIR ./checkpoints/VG150-penet-yolov8m TEST.CUSTUM_EVAL True TEST.CUSTUM_PATH ./checkpoints/pw_custom_images DETECTED_SGG_DIR ./checkpoints/pw_sgdet_custom_images_yolo_8m
====================
Failures:
Thank you so much for your previous responses! I am currently working with the yolo-v8-penet network and have successfully completed training and testing using relation_train_net.py and relation_test_net.py. Now, I want to visualize the results using visualize_SGDet.ipynb, but I've run into a problem.
Specifically, I can't find the eval_results.pytorch and visual_info.json files. I'm unsure how these files are generated. I've checked the documentation and relevant scripts but haven't found clear steps to produce these files.
I would greatly appreciate any guidance or advice!
Hi, don't use the visualize_SGDet.ipynb notebook for inference; it is outdated (I forgot to remove it). Use this one instead:
https://github.com/Maelic/SGG-Benchmark/blob/main/demo/SGDET_on_cutom_images.ipynb
You can also try the webcam demo if you want in the same folder: https://github.com/Maelic/SGG-Benchmark/tree/main/demo
Hello! Judging from your replies, I think you might be Chinese. I am also an SGG beginner and have some questions about reproducing this project that I would like to ask you about, and I was wondering if we could get in touch. My email is 1339241893@qq.com and my WeChat is XC-992997. I look forward to your reply and hope we can exchange ideas and learn together!
Hello,
First of all, thank you for your hard work on this project.
I am new to Scene Graph Generation (SGG) and have been trying to run the webcam_demo.py script. My configuration is as follows:
config: VG150/e2e_relation_yolov8m.yaml
weights: VG150-penet-yolov8m/best_model_epoch_2.pth

Firstly, I would like to confirm if this configuration is correct. Secondly, despite this configuration, I am encountering the following runtime error. Could you please guide me on how to resolve this issue? Thank you in advance.

Traceback (most recent call last):
File "/home/wangzixuan/real_time/SGG-Benchmark-main/demo/webcam_demo.py", line 77, in
main(args)
File "/home/wangzixuan/real_time/SGG-Benchmark-main/demo/webcam_demo.py", line 25, in main
model = SGG_Model(config_path, dict_file, weights, tracking=tracking, rel_conf=rel_conf, box_conf=box_conf)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wangzixuan/real_time/SGG-Benchmark-main/demo/demo_model.py", line 61, in __init__
self.load_model()
File "/home/wangzixuan/real_time/SGG-Benchmark-main/demo/demo_model.py", line 78, in load_model
self.checkpointer.load(self.model_weights) # load the checkpoint file directly; no extra variable is needed to receive the return value
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/wangzixuan/real_time/SGG-Benchmark-main/sgg_benchmark/utils/checkpoint.py", line 67, in load
self._load_model(checkpoint, load_mapping, verbose)
File "/home/wangzixuan/real_time/SGG-Benchmark-main/sgg_benchmark/utils/checkpoint.py", line 107, in _load_model
load_state_dict(self.model, checkpoint.pop("model"), load_mapping, verbose)
File "/home/wangzixuan/real_time/SGG-Benchmark-main/sgg_benchmark/utils/model_serialization.py", line 94, in load_state_dict
model.load_state_dict(model_state_dict)
File "/home/wangzixuan/anaconda3/envs/scene_graph_benchmark/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for GeneralizedYOLO:
size mismatch for roi_heads.relation.predictor.post_emb.weight: copying a param with shape torch.Size([4096, 2048]) from checkpoint, the shape in current model is torch.Size([1024, 512]).
size mismatch for roi_heads.relation.predictor.post_emb.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([1024]).
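When a config and a checkpoint disagree like this, it can help to compare shapes up front instead of waiting for load_state_dict to raise. A hypothetical diagnostic helper (report_shape_mismatches is my own name, not part of sgg_benchmark; the only assumption taken from the traceback is that the checkpoint stores its weights under a "model" key):

```python
import torch


def report_shape_mismatches(model, ckpt_path):
    """Print every parameter whose shape differs between checkpoint and model."""
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # sgg_benchmark-style checkpoints keep the weights under the "model" key;
    # fall back to the raw dict for plain state-dict files
    ckpt_state = checkpoint.get("model", checkpoint)
    model_state = model.state_dict()
    mismatched = []
    for key, tensor in ckpt_state.items():
        if key in model_state and model_state[key].shape != tensor.shape:
            mismatched.append(key)
            print(f"size mismatch for {key}: checkpoint {tuple(tensor.shape)} "
                  f"vs current model {tuple(model_state[key].shape)}")
    return mismatched
```

If this prints anything, the config used to build the model does not match the one the checkpoint was trained with; pointing --config-file at the config.yml shipped alongside the weights is usually the fix.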