nahidalam opened this issue 1 year ago
Hi, did you find any solution to this problem?
@csv610 There's a flag in the `Model` class that allows you to choose your device. Just set it to `'cpu'`, for example:

```python
grounding_dino_model = Model(
    model_config_path=GROUNDING_DINO_CONFIG_PATH,
    model_checkpoint_path=GROUNDING_DINO_CHECKPOINT_PATH,
    device='cpu',
)
```
Hello, there are two methods, `build_sam` and `build_model`. One of them, as given on some link, is as follows:

```python
import torch
from segment_anything import build_sam  # unused below; kept from the original snippet

# These helpers are assumed to come from the GroundingDINO repo layout.
from groundingdino.models import build_model
from groundingdino.util.slconfig import SLConfig
from groundingdino.util.utils import clean_state_dict


def load_model(model_config_path, model_checkpoint_path, cpu_only=False):
    args = SLConfig.fromfile(model_config_path)
    args.device = "cuda" if not cpu_only else "cpu"
    # Move the model to the selected device (the original snippet hardcoded "cpu" here).
    model = build_model(args).to(args.device)
    # Load the checkpoint on the CPU, then copy the weights into the model.
    checkpoint = torch.load(model_checkpoint_path, map_location="cpu")
    load_res = model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False)
    print(load_res)
    model.eval()
    return model
```
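A minimal usage sketch, assuming the imports above resolve and using placeholder paths for the config and checkpoint:

```python
# Hypothetical paths; substitute your local config and checkpoint locations.
CONFIG_PATH = "groundingdino/config/GroundingDINO_SwinT_OGC.py"
CHECKPOINT_PATH = "weights/groundingdino_swint_ogc.pth"

# cpu_only=True sets args.device to "cpu", so no CUDA device is required.
model = load_model(CONFIG_PATH, CHECKPOINT_PATH, cpu_only=True)
```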
I had specified `.to("cpu")` on the model, but it did not work.
Thanks
Appalled to see this problem is still not solved... Any plan on this? @SlongLiu
The answer is what @bdubbs-clarifai reported a year ago. In more detail, the `Model` class accepts a `device` argument, which defaults to `cuda`. Setting it to `cpu` works as expected.
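For reference, a minimal CPU-only inference sketch, assuming the `Model` wrapper from `groundingdino.util.inference` and its `predict_with_caption` method (as used in the Roboflow notebook); the paths and the test image are placeholders:

```python
import cv2
from groundingdino.util.inference import Model

# Placeholder paths; point these at your local config and checkpoint.
GROUNDING_DINO_CONFIG_PATH = "groundingdino/config/GroundingDINO_SwinT_OGC.py"
GROUNDING_DINO_CHECKPOINT_PATH = "weights/groundingdino_swint_ogc.pth"

# device="cpu" overrides the default "cuda", so no GPU is needed.
grounding_dino_model = Model(
    model_config_path=GROUNDING_DINO_CONFIG_PATH,
    model_checkpoint_path=GROUNDING_DINO_CHECKPOINT_PATH,
    device="cpu",
)

# predict_with_caption returns the detections plus the matched phrases.
image = cv2.imread("dog.jpeg")  # any BGR image read with OpenCV
detections, labels = grounding_dino_model.predict_with_caption(
    image=image,
    caption="dog",
    box_threshold=0.35,
    text_threshold=0.25,
)
print(labels)
print(detections.xyxy)
```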
I was following the Roboflow notebook and got the error below while predicting with the Grounding DINO model in `inference.py`. It looks like Grounding DINO is hardcoded to run inference with `cuda`, so it cannot do inference on the CPU. Please suggest a fix.