SHI-Labs / OneFormer

OneFormer: One Transformer to Rule Universal Image Segmentation, arxiv 2022 / CVPR 2023
https://praeclarumjj3.github.io/oneformer
MIT License

RuntimeError: Not implemented on the CPU #18

Closed: willpat1213 closed this issue 1 year ago

willpat1213 commented 1 year ago

I have read that similar issue, but the CUDA version on my machine is 11.6 and my PyTorch is built with CUDA 11.3, so it doesn't look like it's caused by the same bug. Also, I can run another repo normally in the same environment.

[screenshot of the RuntimeError traceback]
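
A quick way to double-check which CUDA version actually matters here (a minimal sketch; the compiled kernel uses the CUDA version that PyTorch was built with, not only the toolkit installed system-wide):

# Print the PyTorch version, the CUDA version PyTorch was built with, and GPU visibility
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# Print the locally installed CUDA toolkit version used to compile the kernel
nvcc --version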

praeclarumjj3 commented 1 year ago

Hi @willpat1213, thanks for your interest in our work.

Were you able to successfully compile the MSDeformAttn CUDA kernel?

willpat1213 commented 1 year ago

How can I verify on my own that the MSDeformAttn CUDA kernel compiled successfully? In fact, I didn't have a similar problem when running the demo of the bcnet model (also based on the d2 framework). I added CUDA_VISIBLE_DEVICES=0 in front of the command, but that didn't seem to help when running the OneFormer model.

praeclarumjj3 commented 1 year ago

Please see the instructions for setting up the CUDA kernel:

# Setup MSDeformAttn
cd oneformer/modeling/pixel_decoder/ops
sh make.sh
cd ../../../..
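
After make.sh finishes, a quick import check can confirm the build (a sketch; it assumes the extension is installed under the name MultiScaleDeformableAttention, as in the upstream Deformable-DETR ops these kernels come from, so adjust the name if your build differs):

# A successful import means the CUDA extension compiled and is on the Python path;
# the kernel itself only runs on GPU tensors, which is where the "Not implemented on the CPU" error typically comes from.
python -c "import MultiScaleDeformableAttention as MSDA; print('MSDeformAttn kernel found at', MSDA.__file__)"
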
willpat1213 commented 1 year ago

Thank you for your patient reply! I have solved the previous problem, which was caused by a mistake on my end, but I ran into another problem when running demo.py: the model doesn't seem to produce correct inference results, as shown in the figure.

[screenshot: incorrect segmentation output from demo.py]

The input image comes from the COCO dataset, and the checkpoint is https://shi-labs.com/projects/oneformer/coco/150_16_swin_l_oneformer_coco_100ep.pth

praeclarumjj3 commented 1 year ago

Could you share the command that you are trying to execute?

willpat1213 commented 1 year ago

Here is my command (infer.sh):

export task=panoptic

CUDA_VISIBLE_DEVICES=0 python ./demo/demo.py --config-file ./configs/ade20k/swin/oneformer_swin_large_bs16_160k.yaml \
  --input ./demo/test_img/000000000139.jpg \
  --output ./demo/result_img \
  --task $task \
  --opts MODEL.IS_TRAIN False MODEL.IS_DEMO True MODEL.WEIGHTS ./model_weight/150_16_swin_l_oneformer_coco_100ep.pth

I run it with:

bash demo/infer.sh

praeclarumjj3 commented 1 year ago

Please try the following command in infer.sh:

export task=panoptic

CUDA_VISIBLE_DEVICES=0 python ./demo/demo.py --config-file ./configs/ade20k/swin/oneformer_swin_large_bs16_160k.yaml \
  --input ./demo/test_img/000000000139.jpg \
  --output ./demo/result_img.png \
  --task $task \
  --opts MODEL.IS_TRAIN False MODEL.IS_DEMO True MODEL.WEIGHTS ./model_weight/150_16_swin_l_oneformer_coco_100ep.pth
willpat1213 commented 1 year ago

I changed the config file from ./configs/ade20k/swin/oneformer_swin_large_bs16_160k.yaml to ./configs/coco/swin/oneformer_swin_large_bs16_100ep.yaml so that the config matches the checkpoint, and the problem is solved. Thank you for patiently answering over these past few days, and I wish you all the best with your work.

[screenshot: result_img showing the correct segmentation output]
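
For reference, the working invocation assembled from this thread would look roughly like the following (the command suggested above, with the config switched to the COCO one so it matches the checkpoint):

export task=panoptic

CUDA_VISIBLE_DEVICES=0 python ./demo/demo.py --config-file ./configs/coco/swin/oneformer_swin_large_bs16_100ep.yaml \
  --input ./demo/test_img/000000000139.jpg \
  --output ./demo/result_img.png \
  --task $task \
  --opts MODEL.IS_TRAIN False MODEL.IS_DEMO True MODEL.WEIGHTS ./model_weight/150_16_swin_l_oneformer_coco_100ep.pth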