RockXeng opened this issue 4 days ago
Hi. The error message already hints at the fix: if you are running on a CPU-only machine, pass map_location=torch.device('cpu') to torch.load so the storages are mapped to the CPU.
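A minimal sketch of that fix, assuming PyTorch is installed (the helper name load_checkpoint_cpu is mine; in the demo, the actual change would go where _build_sam calls torch.load(f) in build_sam.py):

```python
import os
import tempfile

import torch  # assumes PyTorch is available


def load_checkpoint_cpu(path):
    """Load a checkpoint onto the CPU regardless of where it was saved.

    map_location remaps every tensor storage to the CPU, so a checkpoint
    written on a CUDA machine still loads on a CPU-only one.
    """
    return torch.load(path, map_location=torch.device("cpu"))


# Round-trip demonstration with a tiny fake checkpoint.
with tempfile.TemporaryDirectory() as tmp:
    ckpt = os.path.join(tmp, "tiny.pth")
    torch.save({"w": torch.zeros(2, 2)}, ckpt)
    state = load_checkpoint_cpu(ckpt)
    assert state["w"].device.type == "cpu"
```

Given the traceback below, the equivalent one-line change in the SAM code would be turning `state_dict = torch.load(f)` in _build_sam into `state_dict = torch.load(f, map_location="cpu")`.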
I have not tested this model on Windows. You may also want to check whether the SAM-base (~136M) model can run on Windows at all.
I have resolved that, thanks for your reply. I then ran the official SAM demo with your model, but got an empty result with the command below: processing completes successfully, yet the result folder is empty.
Could this be because I supplied the wrong kind of image? I am testing with a 2D TIF image.
Hi, have you given any prompts? Does this problem also appear when you test the original SAM model?
No, I just replaced the original SAM model with yours and ran the original SAM demo.
Original demo code as below:
def main(args: argparse.Namespace) -> None:
    print("Loading model...")
    sam = sam_model_registry[args.model_type](checkpoint=args.checkpoint)
    _ = sam.to(device=args.device)
    output_mode = "coco_rle" if args.convert_to_rle else "binary_mask"
    amg_kwargs = get_amg_kwargs(args)
    generator = SamAutomaticMaskGenerator(sam, output_mode=output_mode, **amg_kwargs)

    if not os.path.isdir(args.input):
        targets = [args.input]
    else:
        targets = [
            f for f in os.listdir(args.input) if not os.path.isdir(os.path.join(args.input, f))
        ]
        targets = [os.path.join(args.input, f) for f in targets]

    os.makedirs(args.output, exist_ok=True)

    for t in targets:
        print(f"Processing '{t}'...")
        image = cv2.imread(t)
        if image is None:
            print(f"Could not load '{t}' as an image, skipping...")
            continue
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

        masks = generator.generate(image)
        base = os.path.basename(t)
        base = os.path.splitext(base)[0]
        save_base = os.path.join(args.output, base)
        if output_mode == "binary_mask":
            os.makedirs(save_base, exist_ok=False)
            write_masks_to_folder(masks, save_base)
        else:
            save_file = save_base + ".json"
            with open(save_file, "w") as f:
                json.dump(masks, f)
    print("Done!")
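Note that SamAutomaticMaskGenerator can legitimately return an empty list when no mask passes its quality thresholds, and write_masks_to_folder then writes nothing, so "processing succeeds but the folder is empty" need not raise any error; printing len(masks) right after generate() distinguishes "no masks found" from "masks found but not written". Also, if you load the TIF yourself (e.g. with tifffile or PIL) instead of relying on cv2.imread, SAM still expects an HxWx3 uint8 RGB array. A numpy-only sketch of that normalization (the helper name is mine), which could be applied to image before generator.generate(image):

```python
import numpy as np


def to_uint8_rgb(img):
    """Convert an image array to the HxWx3 uint8 layout SAM expects.

    16-bit TIFFs often come back as uint16 (or float after some
    pipelines); feeding SAM anything but 8-bit RGB can silently
    yield zero masks.
    """
    img = np.asarray(img)
    if img.dtype != np.uint8:
        # Rescale the full value range into 0..255.
        lo, hi = float(img.min()), float(img.max())
        scale = 255.0 / (hi - lo) if hi > lo else 0.0
        img = ((img.astype(np.float64) - lo) * scale).astype(np.uint8)
    if img.ndim == 2:
        # Replicate a grayscale plane into three channels.
        img = np.stack([img] * 3, axis=-1)
    return img
```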
If I want to use your model, should I refer to the code in your defect inference.py file and prepare the data in .mat format? And if I want to test my own TIF images, what should I do?
You do not need to convert the data into .mat format. As I remember, my inference.py takes the original image data and prompts as input. When I was an undergraduate I tested a defect-detection program on Windows, and there were some bugs that needed fixing.
Thus, I have three suggestions:
(1) SAM and MedSAM are more mature projects; you can use their Windows versions to test your own data on Windows and check whether the bug appears there too.
(2) If you want to use the DefecSAM model, first reproduce the results on Linux (the original environment) to rule out the environment as the cause of the bug.
(3) If you want to develop your own model, you may need to change the input and data-loading code; that is simple to do.
If you still have problems and want more detailed suggestions on your code for testing your images, you can add me on WeChat: TorchEcho.
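On suggestion (3), if you move from the automatic generator to prompt-based inference, the prompts have a fixed numpy layout. A sketch with placeholder coordinates (the click position and box below are made up, and the commented calls assume the real segment_anything SamPredictor API):

```python
import numpy as np

# SamPredictor.predict expects point_coords as an (N, 2) array of (x, y)
# pixel positions, point_labels as an (N,) array (1 = foreground,
# 0 = background), and an optional box in XYXY order.
point_coords = np.array([[120.0, 85.0]])  # one click on the defect (placeholder)
point_labels = np.array([1])              # 1 marks a foreground point
box = np.array([100, 60, 180, 140])       # optional XYXY bounding box (placeholder)

# With a loaded model these would be passed as, e.g.:
#   predictor = SamPredictor(sam)
#   predictor.set_image(image)            # HxWx3 uint8 RGB
#   masks, scores, logits = predictor.predict(
#       point_coords=point_coords, point_labels=point_labels)
assert point_coords.shape == (1, 2) and point_labels.shape == (1,)
```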
Good Luck.
Describe:
1. Run the demo in a Windows terminal (without CUDA); command as follows:
python scripts/amg.py --checkpoint weights/defect_vit_b.pth --model-type vit_b --input test_images --output output --device cpu
2. Error as follows:

Loading model...
Traceback (most recent call last):
  File "scripts/amg.py", line 240, in <module>
    main(args)
  File "scripts/amg.py", line 199, in main
    sam = sam_model_registry[args.model_type](checkpoint=args.checkpoint)
  File "c:\users\sheng shu\desktop\ai\research\sam\segment-anything\segment_anything\build_sam.py", line 38, in build_sam_vit_b
    return _build_sam(
  File "c:\users\sheng shu\desktop\ai\research\sam\segment-anything\segment_anything\build_sam.py", line 105, in _build_sam
    state_dict = torch.load(f)
  File "C:\Users\sheng shu\AppData\Local\anaconda3\envs\sam\lib\site-packages\torch\serialization.py", line 1025, in load
    return _load(opened_zipfile,
  File "C:\Users\sheng shu\AppData\Local\anaconda3\envs\sam\lib\site-packages\torch\serialization.py", line 1446, in _load
    result = unpickler.load()
  File "C:\Users\sheng shu\AppData\Local\anaconda3\envs\sam\lib\site-packages\torch\serialization.py", line 1416, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\sheng shu\AppData\Local\anaconda3\envs\sam\lib\site-packages\torch\serialization.py", line 1390, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\sheng shu\AppData\Local\anaconda3\envs\sam\lib\site-packages\torch\serialization.py", line 390, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\sheng shu\AppData\Local\anaconda3\envs\sam\lib\site-packages\torch\serialization.py", line 265, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\sheng shu\AppData\Local\anaconda3\envs\sam\lib\site-packages\torch\serialization.py", line 249, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.