IDEA-Research / Grounded-Segment-Anything

Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
https://arxiv.org/abs/2401.14159
Apache License 2.0

Mac 12.4, M1, No GPU, run grounding_dino_demo, error:Failed to load custom C++ ops #96

Closed · distort5871 closed this issue 1 year ago

distort5871 commented 1 year ago

Hi all, I'm new to deep learning, but I'm fascinated by this tech, so I tried to run the project on my laptop. I ran into some problems I can't solve. Please help me, and thank you for any information you can give.

Question:

I ran the following demo on my MacBook and got this error:

command

python grounding_dino_demo.py \
>   --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
>   --grounded_checkpoint groundingdino_swint_ogc.pth \
>   --input_image assets/demo1.jpg \
>   --output_dir "outputs" \
>   --box_threshold 0.3 \
>   --text_threshold 0.25 \
>   --text_prompt "bear" \
>   --device "cuda"

error

/Users//test/aigc/gsa/Grounded-Segment-Anything/GroundingDINO/groundingdino/models/GroundingDINO/ms_deform_attn.py:31: UserWarning: Failed to load custom C++ ops. Running on CPU mode Only!
  warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")
/Users//miniconda3/envs/gsa/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1678455016227/work/aten/src/ATen/native/TensorShape.cpp:3484.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
final text_encoder_type: bert-base-uncased
Downloading (…)okenizer_config.json: 100%|███| 28.0/28.0 [00:00<00:00, 4.73kB/s]
Downloading (…)lve/main/config.json: 100%|██████| 570/570 [00:00<00:00, 131kB/s]
Downloading (…)solve/main/vocab.txt: 100%|████| 232k/232k [00:00<00:00, 438kB/s]
Downloading (…)/main/tokenizer.json: 100%|████| 466k/466k [00:01<00:00, 425kB/s]
Downloading pytorch_model.bin: 100%|█████████| 440M/440M [03:14<00:00, 2.27MB/s]
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
_IncompatibleKeys(missing_keys=[], unexpected_keys=['label_enc.weight'])
Traceback (most recent call last):
  File "grounding_dino_demo.py", line 158, in <module>
    boxes_filt, pred_phrases = get_grounding_output(
  File "grounding_dino_demo.py", line 87, in get_grounding_output
    model = model.to(device)
  File "/Users//miniconda3/envs/gsa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/Users//miniconda3/envs/gsa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users//miniconda3/envs/gsa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users//miniconda3/envs/gsa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "/Users//miniconda3/envs/gsa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/Users//miniconda3/envs/gsa/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/Users//miniconda3/envs/gsa/lib/python3.8/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

my env

(screenshot of the environment)

python=3.8 conda=23.1.0

Has anyone run into the same problem, or can anyone give me some help or information about this?

rentainhe commented 1 year ago

Hello, I'm curious about your PyTorch version; maybe you should install PyTorch with GPU support for now, I think~ We will try to update a CPU version for the users.

minhhnguyen0312 commented 1 year ago

If your device doesn't have GPU, you should just put --device 'cpu'
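
(If it helps, here is a minimal sketch of how the device could be resolved in a script, assuming only that PyTorch is installed; the helper name below is illustrative and not part of the demo:)

import torch

# Pick the best available device: CUDA if present, otherwise fall back to CPU.
# (On Apple Silicon the "mps" backend also exists, but the custom C++ ops used
# by GroundingDINO may not support it, so CPU is the safe default here.)
def resolve_device(requested: str = "cuda") -> str:
    if requested == "cuda" and not torch.cuda.is_available():
        print("CUDA requested but not available; falling back to CPU.")
        return "cpu"
    return requested

device = resolve_device("cuda")  # -> "cpu" on a MacBook without an NVIDIA GPU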

distort5871 commented 1 year ago

Hello, I'm curious about your PyTorch version; maybe you should install PyTorch with GPU support for now, I think~ We will try to update a CPU version for the users.

Thank you for your reply. Here is my torch version and how I got the version info:

(screenshot of the installed torch version)

By the way, I installed PyTorch with the following command:

conda install pytorch torchvision -c pytorch

I think that installs the latest version of PyTorch.

Also, I noticed you mentioned "we will try to update a CPU version for the users". Does that mean the problem I'm hitting is caused by the absence of a GPU on my laptop?
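
For reference, a quick way to check whether the installed torch build can see a GPU at all (a minimal check, assuming only that torch is importable):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # None if this build was compiled without CUDA
print(torch.cuda.is_available())  # False on a MacBook without an NVIDIA GPU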

distort5871 commented 1 year ago

If your device doesn't have GPU, you should just put --device 'cpu'

Thank you for your help. I tried the device option with cpu, and I got the output! Amazing!

For anyone who runs into the same problem, you can try a command like this:

python grounding_dino_demo.py \
>   --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
>   --grounded_checkpoint groundingdino_swint_ogc.pth \
>   --input_image assets/demo1.jpg \
>   --output_dir "outputs" \
>   --box_threshold 0.3 \
>   --text_threshold 0.25 \
>   --text_prompt "bear" \
>   --device "cpu"

You will get some warnings, but don't worry about them; you will find grounding_dino_output.jpg in your ./outputs directory.

Nomiluks commented 8 months ago

The inference time on CPU is not good. Is there any way to improve inference time on an M1 MacBook?
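
(One thing that might be worth trying, with no guarantee that GroundingDINO's custom ops support it, is PyTorch's Metal backend on Apple Silicon; a minimal sketch, assuming a recent PyTorch build:)

import torch

# Recent PyTorch builds expose the Apple GPU through the "mps" backend.
# Whether this actually speeds up GroundingDINO depends on whether its ops are
# supported on mps; setting PYTORCH_ENABLE_MPS_FALLBACK=1 lets unsupported ops
# fall back to CPU instead of raising an error.
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Using device: {device}")

# model = model.to(device)  # then move the model (and inputs) to that device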

ScubaDiving commented 5 months ago

If your device doesn't have GPU, you should just put --device 'cpu'

Thank you for your help. I tried the device option with cpu, and I got the output! Amazing!

For anyone who runs into the same problem, you can try a command like this:

python grounding_dino_demo.py \
>   --config GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py \
>   --grounded_checkpoint groundingdino_swint_ogc.pth \
>   --input_image assets/demo1.jpg \
>   --output_dir "outputs" \
>   --box_threshold 0.3 \
>   --text_threshold 0.25 \
>   --text_prompt "bear" \
>   --device "cpu"

You will get some warnings, but don't worry about them; you will find grounding_dino_output.jpg in your ./outputs directory.

Hi, I'm trying to run the model with --device "cpu" and still getting this error:

UserWarning: Failed to load custom C++ ops. Running on CPU mode Only!
  warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")

Process finished with exit code 0

The model does not actually run (it just exits with code 0). I'm using a Mac M1 and running the model independently through PyCharm.
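
A minimal CPU-only sketch for narrowing this down, assuming the groundingdino.util.inference helpers are present in the install (the paths and thresholds are the ones from the demo command above):

from groundingdino.util.inference import load_model, load_image, predict

# Load the model on CPU; device="cpu" avoids the CUDA assertion entirely.
model = load_model(
    "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py",
    "groundingdino_swint_ogc.pth",
    device="cpu",
)

image_source, image = load_image("assets/demo1.jpg")

boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="bear",
    box_threshold=0.3,
    text_threshold=0.25,
    device="cpu",
)

# If this prints boxes and phrases, the model itself runs on CPU and the
# problem is more likely in how the script is launched from PyCharm or where
# the output image is written.
print(boxes, logits, phrases)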