Open coong4 opened 1 year ago
There is an option to use local groundingdino in Settings/SegmentAnything.
Yeah, the first problem can be bypassed with Use Local, but I still cannot run the Preview because of the CUDA problem. Is there anything else I need to do?
P.S. Mac mini M2
Start SAM Processing
Using local groundingdino.
Running GroundingDINO Inference
Initializing GroundingDINO GroundingDINO_SwinT_OGC (694MB)
/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/modeling_utils.py:884: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Initializing SAM to cpu
Traceback (most recent call last):
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1431, in process_api
result = await self.call_function(
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 204, in sam_predict
sam = init_sam_model(sam_model_name)
File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 129, in init_sam_model
sam_model_cache[sam_model_name] = load_sam_model(sam_model_name)
File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/scripts/sam.py", line 80, in load_sam_model
sam = sam_model_registry[model_type](checkpoint=sam_checkpoint_path)
File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 29, in build_sam_hq_vit_l
return _build_sam_hq(
File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 122, in _build_sam_hq
return _load_sam_checkpoint(sam, checkpoint)
File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 67, in _load_sam_checkpoint
state_dict = torch.load(f)
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1024, in load
return _load(opened_zipfile,
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1432, in _load
result = unpickler.load()
File "/opt/homebrew/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pickle.py", line 1213, in load
dispatch[key[0]](self)
File "/opt/homebrew/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/pickle.py", line 1254, in load_binpersid
self.append(self.persistent_load(pid))
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1402, in persistent_load
typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 1376, in load_tensor
wrap_storage=restore_location(storage, location),
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 391, in default_restore_location
result = fn(storage, location)
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 266, in _cuda_deserialize
device = validate_cuda_device(location)
File "/Users/coong/SD/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/serialization.py", line 250, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
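The last line already points at the fix: the SAM-HQ checkpoint was saved with CUDA storages, so deserializing it on a machine without CUDA needs an explicit map_location. A minimal illustration of what the error message suggests (the path here is just a placeholder):

import torch

# Without map_location, torch tries to restore tensors to the device they were
# saved from (CUDA in this case) and raises the RuntimeError shown above.
checkpoint_path = "models/sam/sam_hq_vit_l.pth"  # placeholder path
state_dict = torch.load(checkpoint_path, map_location=torch.device("cpu"))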
There is a checkbox to the right of the model selection, "use CPU", if I remember correctly. Check it and you should be fine.
I always have this turned on, and I read the main.py code; at first I thought this should work, but it seems it doesn't.
I don't understand why torch.load is trying to load the weights to CUDA. You may try to force torch.load to load to CPU or some Mac device. Follow the line at
File "/Users/coong/SD/stable-diffusion-webui/extensions/sd-webui-segment-anything/sam_hq/build_sam_hq.py", line 67, in _load_sam_checkpoint
state_dict = torch.load(f)
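Concretely, the suggestion amounts to adding a map_location to that call. Forcing "cpu" is what the thread later confirms works; the "mps" variant for Apple Silicon is only a speculative alternative:

# sam_hq/build_sam_hq.py, inside _load_sam_checkpoint
state_dict = torch.load(f, map_location="cpu")
# speculative Apple-GPU variant (needs a PyTorch build with MPS support):
# state_dict = torch.load(f, map_location="mps")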
I find it weird too; the console log does show Initializing SAM to cpu for sure...
Use the method I proposed above anyway. It should solve your problem.
You mean I should force it to 'cpu'? How? Forgive me for asking...
torch.load(model_checkpoint, map_location="cpu")
Ooooooooh! Finally done, thank you! (cry...)
I will resolve this issue along with a major update later.
(Why is my earlier reply missing? Anyway... finally done, thanks!)
Did I modify it correctly? But why? I never wrote any CUDA/GPU code myself.
The USA is switching from daylight saving time to standard time, so the comment order is quite messed up. I've received all your comments via email.
Your change is correct. You can submit a PR, even if it is forced to CPU. I will most likely not merge it, but it can serve as a reminder for me to fix this in the major update later.
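For the record, a less hard-coded fix might pick the map_location from whatever backend is actually available. This is only a sketch of that idea, not the extension's actual code, and the variable names are illustrative:

import torch

def pick_map_location():
    # Prefer CUDA when it exists, then Apple's MPS backend, then plain CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

state_dict = torch.load(sam_checkpoint_path, map_location=pick_map_location())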
Hi, I submitted a PR for this issue.
I installed the extension in the WebUI by URL, set up the params in the txt2img panel, and downloaded the SAM model & GroundingDINO model. Everything was good so far, until I ran the Preview Segmentation; here is the result:
And here are the console logs. I think there are two issues:
1 - git clone error (but I can visit GitHub in a browser AND clone things from the command line; this can be solved by turning on "local groundingdino" in settings, but I wonder why the download fails)
2 - cannot use torch.cuda (but I already enabled "Use CPU for SAM")
And I'm for sure a newbie ~ hoping for a response ~
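A note on issue 2: the "Use CPU for SAM" option controls where the extension initializes the model, but torch.load was still trying to restore the checkpoint's CUDA storages, which do not exist on an M2. A quick sanity check of which backends PyTorch actually sees (assuming a recent PyTorch build with MPS support):

import torch

print("CUDA available:", torch.cuda.is_available())         # expected False on a Mac mini M2
print("MPS available:", torch.backends.mps.is_available())  # expected True with Apple Silicon PyTorch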