Hi, I am trying to run the prediction model locally, but it always fails because it tries to use CUDA even when GPU usage is disabled. The steps I take to run the model are:
1. Fork and clone the repo.
2. Install Cog 0.8.6 (latest).
3. In `cog.yaml`, set `gpu: false` (see the snippet below).
4. In `predict.py`, set `os.environ['BUILD_WITH_CUDA'] = 'false'`.
5. Run `sudo cog run script/download_weights.py`.
6. Run `cog predict`.
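For reference, the relevant part of my `cog.yaml` looks like this (trimmed to the lines that matter; the Python version matches the `python3.10` paths in the tracebacks below, and the predictor entry point is just the usual Cog convention, so treat the exact names as guesses):

```yaml
build:
  gpu: false              # GPU explicitly disabled
  python_version: "3.10"  # matches the site-packages paths in the tracebacks
predict: "predict.py:Predictor"  # conventional Cog predictor entry point
```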
With these settings, it fails on Linux at this line: `torch._C._cuda_init()`.

On macOS (Apple M1, Sonoma 14.1.1) the result is:
```
Running prediction...
Running prediction: cdfc147f-3f63-4134-a3da-4eba6e565015...
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/cog/server/worker.py", line 222, in _predict
    for r in result:
  File "/usr/local/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 43, in generator_context
    response = gen.send(None)
  File "/src/predict.py", line 79, in predict
    annotated_picture_mask, neg_annotated_picture_mask, mask, inverted_mask = run_grounding_sam(image,
  File "/src/grounded_sam.py", line 81, in run_grounding_sam
    annotated_frame, detected_boxes = detect(image, image_source, positive_prompt, groundingdino_model)
  File "/src/grounded_sam.py", line 37, in detect
    boxes, logits, phrases = predict(
  File "/src/weights/GroundingDINO/groundingdino/util/inference.py", line 64, in predict
    model = model.to(device)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 662, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 985, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
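As a sanity check, torch inside the container (started with `cog run python`) reports no CUDA support, which is consistent with a `gpu: false` build and explains the assertion above:

```python
# Run inside the container via `cog run python`
import torch

print(torch.__version__)          # the CPU-only build installed by the image
print(torch.cuda.is_available())  # False, so any .to("cuda") call will raise
```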
I have also run this on a Dell XPS running Ubuntu 20.04 with an NVIDIA GPU, and the error is:
```
Running prediction...
Running prediction: fe4b652f-5654-41d4-a984-433764322d16...
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/cog/server/worker.py", line 222, in _predict
    for r in result:
  File "/usr/local/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 43, in generator_context
    response = gen.send(None)
  File "/src/predict.py", line 78, in predict
    annotated_picture_mask, neg_annotated_picture_mask, mask, inverted_mask = run_grounding_sam(image,
  File "/src/grounded_sam.py", line 81, in run_grounding_sam
    annotated_frame, detected_boxes = detect(image, image_source, positive_prompt, groundingdino_model)
  File "/src/grounded_sam.py", line 37, in detect
    boxes, logits, phrases = predict(
  File "/src/weights/GroundingDINO/groundingdino/util/inference.py", line 64, in predict
    model = model.to(device)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 662, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 985, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 229, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
```
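Both tracebacks die on the same call: `model = model.to(device)` in GroundingDINO's `inference.py`, whose `predict()` defaults to `device="cuda"`, so the `gpu: false` and `BUILD_WITH_CUDA` settings never reach it. The workaround I have in mind is to override that default from `grounded_sam.py`; here is a minimal sketch, assuming the call site looks roughly like the traceback suggests (the threshold values are placeholders I made up):

```python
# Sketch for src/grounded_sam.py: pass an explicit device instead of relying
# on GroundingDINO's hard-coded device="cuda" default.
import torch
from groundingdino.util.inference import predict

# Fall back to CPU when no usable CUDA runtime exists (M1 Mac, or a Linux
# container without an NVIDIA driver).
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

boxes, logits, phrases = predict(
    model=groundingdino_model,
    image=image,
    caption=positive_prompt,
    box_threshold=0.35,   # placeholder; use whatever the repo actually passes
    text_threshold=0.25,  # placeholder
    device=DEVICE,        # override the default "cuda"
)
```

`torch.cuda.is_available()` returns False both for CPU-only torch builds and when no driver is present, so the same guard should cover both of my machines.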
Do you have any advice for getting this up and running, ideally on macOS? Many thanks in advance!