Open AakashGoyal25 opened 7 months ago
You've chosen to report an unexpected problem or bug. Unless you already know the root cause of it, please include details about it by filling the issue template. The following information is missing: "Instructions To Reproduce the Issue and Full Logs"; "Your Environment";
@AakashGoyal25 Try changing your export-method from caffe2_tracing to tracing.
@RajUpadhyay Even after changing the export method to "tracing", I am getting the same error mentioned above. Any other suggestions?
@AakashGoyal25 Your error indicates that the problem is related to output.yaml.
Just for peace of mind, can you replace output.yaml with the standard mask_rcnn_R_50_FPN_3x.yaml or its faster_rcnn version? Let's narrow down the cause of the issue.
Also, this is probably not the issue, but is your image path correct?
--sample-image /home/aakash/Aakash Workings/HB_Codes/detectron2_working/XYZ/val/images/0e3561a5-am_c1_20231004-094138.jpeg
There is a space in /Aakash Workings/; just copy your image to your current directory and try running it again.
python3 detectron2/tools/deploy/export_model.py --config-file /home/aakash/Aakash_Workings/HB_Codes/detectron2_working/detectron2/configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml --output model.onnx --format onnx --sample-image d2_seal_control_ob_l/train/images/902f32eb-am_c1_20231005-104409.jpeg --export-method caffe2_tracing MODEL.DEVICE cuda MODEL.WEIGHTS output/model_final.pth
As you mentioned, I replaced it with COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml and used a relative path to the sample image.
Now I am getting:
[02/20 09:54:22 detectron2]: Command line arguments: Namespace(format='onnx', export_method='caffe2_tracing', config_file='/home/aakash/Aakash_Workings/HB_Codes/detectron2_working/detectron2/configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml', sample_image='d2_seal_control_ob_l/train/images/902f32eb-am_c1_20231005-104409.jpeg', run_eval=False, output='model.onnx', opts=['MODEL.DEVICE', 'cuda', 'MODEL.WEIGHTS', 'output/model_final.pth'])
[W init.cpp:855] Warning: Use _jit_set_fusion_strategy, bailout depth is deprecated. Setting to (STATIC, 1) (function operator())
[02/20 09:54:24 d2.checkpoint.detection_checkpoint]: [DetectionCheckpointer] Loading from output/model_final.pth ...
Skip loading parameter 'roi_heads.box_predictor.cls_score.weight' to the model due to incompatible shapes: (8, 1024) in the checkpoint but (81, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.cls_score.bias' to the model due to incompatible shapes: (8,) in the checkpoint but (81,) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.weight' to the model due to incompatible shapes: (28, 1024) in the checkpoint but (320, 1024) in the model! You might want to double check if this is expected.
Skip loading parameter 'roi_heads.box_predictor.bbox_pred.bias' to the model due to incompatible shapes: (28,) in the checkpoint but (320,) in the model! You might want to double check if this is expected.
Some model parameters or buffers are not found in the checkpoint:
roi_heads.box_predictor.bbox_pred.{bias, weight}
roi_heads.box_predictor.cls_score.{bias, weight}
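As an aside, the "Skip loading parameter" warnings in that log are expected here: they are just the class-count mismatch between the custom checkpoint (trained on 7 classes) and the stock COCO config (80 classes). A toy sketch, not detectron2 code, of how the standard Faster R-CNN box-head shapes are derived (cls_score gets one extra row for the background class, bbox_pred gets 4 regression outputs per class):

```python
# Toy illustration of the box-head output sizes behind the shape warnings.
# cls_score rows = num_classes + 1 (background); bbox_pred rows = num_classes * 4.
def head_shapes(num_classes):
    return {"cls_score": num_classes + 1, "bbox_pred": num_classes * 4}

print(head_shapes(7))   # matches the checkpoint: (8, ...) and (28, ...)
print(head_shapes(80))  # matches the stock COCO config: (81, ...) and (320, ...)
```

This is why pointing the export script at the default COCO yaml silently reinitializes the box predictor instead of loading the trained weights.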
Traceback (most recent call last):
File "/home/aakash/Aakash_Workings/HB_Codes/detectron2_working/detectron2/tools/deploy/export_model.py", line 220, in
@AakashGoyal25 I am sorry for requesting this again, but can you use tracing instead of caffe2_tracing for the same example, i.e., the default config and the relative path?
Yes, it successfully exports the ONNX file. Thanks! I can see some light :+1:
But I am still not able to export it with my config.yaml; I am still facing the same issue mentioned in the first comment.
I saved the config.yaml using the below code:
cfg_yaml = cfg.dump()
with open(os.path.join(cfg.OUTPUT_DIR, "config.yaml"), 'w') as f:
    f.write(cfg_yaml)
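A likely culprit: cfg.dump() writes out every key on the config, including any custom ones added during training (such as DATASETS.VAL), but the stock export_model.py re-loads the file against the default config, which rejects unknown keys. A minimal, hypothetical sketch of pruning such keys before saving; strip_unknown_keys and the toy schema below are illustrative, not detectron2 API:

```python
# Hypothetical helper: drop config keys that the default schema does not
# define, so a strict loader can re-read the dumped file without a KeyError.
def strip_unknown_keys(cfg_dict, allowed):
    out = {}
    for key, value in cfg_dict.items():
        if key not in allowed:
            continue  # e.g. a custom DATASETS.VAL entry added during training
        if isinstance(value, dict) and isinstance(allowed[key], dict):
            out[key] = strip_unknown_keys(value, allowed[key])
        else:
            out[key] = value
    return out

# Toy example: DATASETS.VAL is not part of the default schema.
default_schema = {"DATASETS": {"TRAIN": None, "TEST": None}}
dumped = {"DATASETS": {"TRAIN": ("my_train",), "TEST": ("my_test",), "VAL": ("my_val",)}}
clean = strip_unknown_keys(dumped, default_schema)
print(clean)  # the VAL key is gone
```

In practice, simply deleting the DATASETS.VAL lines from the dumped config.yaml (or not adding custom keys to cfg in the first place) should have the same effect.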
Maybe you could search a bit more on the internet regarding this, perhaps a blog or a repo related to detectron2. I do not think this repo is being maintained anymore, so I wish you good luck.
Can you try it this way, though: https://medium.com/innovation-res/detectron2-config-optimizer-lr-scheduler-part-1-4555842e1ea#:~:text=weights%20and%20metrics.-,import%20yaml,-%23%20Dump%20the%20config
@AakashGoyal25 have you been able to successfully export your Faster RCNN Model after all? Did you by chance also try to export it to TensorRT?
@Huxwell I was able to export it successfully to ONNX, but I didn't try to export to TensorRT.
Thanks. I have realized that I am also able to export to ONNX; it only fails during ONNX GraphSurgeon for TensorRT. Sorry for bothering you!
Hi, I have the same question. How did you solve it? Looking forward to your reply.
I am trying to export my detectron2 model to an ONNX model, but I am getting a KeyError: 'Non-existent config key: DATASETS.VAL'
Command I run:
Error Traceback:
config.yaml
In the config.yaml file, I can see that "VAL" is there, yet I am still getting this issue. What am I doing wrong?
"Instructions To Reproduce the Issue and Full Logs": config.yaml file is provided and It can be run using the above command.
Environment: It is environemnt is setup using https://detectron2.readthedocs.io/en/latest/tutorials/install.html
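For context, a toy illustration (not detectron2 code) of why a strict, yacs-style config loader raises on keys that the default schema does not define, which is the failure mode reported above for DATASETS.VAL:

```python
# Toy strict merge: like yacs-based configs, it refuses keys that are
# absent from the default schema instead of silently accepting them.
def strict_merge(default, loaded):
    for key, value in loaded.items():
        if key not in default:
            raise KeyError(f"Non-existent config key: {key}")
        if isinstance(default[key], dict):
            strict_merge(default[key], value)
        else:
            default[key] = value
    return default

default = {"DATASETS": {"TRAIN": (), "TEST": ()}}
try:
    strict_merge(default, {"DATASETS": {"VAL": ("my_val",)}})
except KeyError as e:
    print(e)  # the same failure mode as the reported error
```

So the key being present in config.yaml is exactly the problem: the loader compares it against the default config, which has DATASETS.TRAIN and DATASETS.TEST but no DATASETS.VAL.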