facebookresearch / segment-anything

The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Apache License 2.0
47.22k stars 5.59k forks

There is an error when I run the command #136

Open avdance opened 1 year ago

avdance commented 1 year ago

I used an RTX 3090 to test this command:

python scripts/amg.py --checkpoint "D:\segment\sam_vit_l_0b3195.pth" --model-type vit_l --input "D:\segment\photo-bridage.jpg" --output "D:\segment\out"

but I get this error:

Traceback (most recent call last):
  File "D:\segment\segment-anything\scripts\amg.py", line 238, in <module>
    main(args)
  File "D:\segment\segment-anything\scripts\amg.py", line 221, in main
    masks = generator.generate(image)
  File "D:\ProgramData\miniconda3\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\ProgramData\miniconda3\lib\site-packages\segment_anything\automatic_mask_generator.py", line 163, in generate
    mask_data = self._generate_masks(image)
  File "D:\ProgramData\miniconda3\lib\site-packages\segment_anything\automatic_mask_generator.py", line 206, in _generate_masks
    crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
  File "D:\ProgramData\miniconda3\lib\site-packages\segment_anything\automatic_mask_generator.py", line 245, in _process_crop
    batch_data = self._process_batch(points, cropped_im_size, crop_box, orig_size)
  File "D:\ProgramData\miniconda3\lib\site-packages\segment_anything\automatic_mask_generator.py", line 297, in _process_batch
    data.filter(keep_mask)
  File "D:\ProgramData\miniconda3\lib\site-packages\segment_anything\utils\amg.py", line 49, in filter
    self._stats[k] = v[torch.as_tensor(keep, device=v.device)]
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.61 GiB (GPU 0; 24.00 GiB total capacity; 18.41 GiB already allocated; 4.19 GiB free; 18.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Aryan-Mishra24 commented 1 year ago

The error you're encountering (torch.cuda.OutOfMemoryError) means your GPU (an RTX 3090 with 24 GiB) ran out of memory while processing the image. To resolve the issue, try one or more of the following:

Reduce the input image size: Resize the input image to a smaller resolution before running the script. The automatic mask generator's memory use grows with image size, so this directly reduces the memory required.
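As a rough sketch of the resizing step, a small helper like the one below (hypothetical, not part of the repository; the 1024 px cap is an arbitrary choice) computes a downscaled size that preserves the aspect ratio, which you could then apply with your image library of choice before passing the file to amg.py:

```python
def scale_to_fit(width: int, height: int, max_side: int = 1024) -> tuple[int, int]:
    """Return a new (width, height) whose longest side is at most max_side,
    preserving the aspect ratio. Returns the input unchanged if it already fits."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return round(width * scale), round(height * scale)

# e.g. a 4032x3024 photo capped at a 1024 px longest side
print(scale_to_fit(4032, 3024))  # → (1024, 768)
```

You would then resize the image to these dimensions (for example with Pillow's `Image.resize`) and save the smaller copy before running the script.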

Reduce the batch size: The automatic mask generator runs batches of prompt points through the model at once; lowering that batch size (the points_per_batch argument of SamAutomaticMaskGenerator, which amg.py should expose as --points-per-batch, though check your version) trades speed for lower peak memory.

Free up GPU memory: Make sure no other processes or applications are using GPU memory (nvidia-smi shows per-process usage), and close anything unnecessary.

Use a more efficient model: If you're using the larger SAM model (ViT-L or ViT-H), try using the smaller ViT-B model, which will consume less memory during processing. Note that using a smaller model may result in a decrease in segmentation quality.

Keep in mind that deep learning models, particularly large ones like the SAM model, can consume significant amounts of GPU memory. If you continue to encounter memory issues, you may need to consider using a GPU with more memory or distributing the processing across multiple GPUs.
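The traceback itself also suggests a mitigation for fragmentation: PYTORCH_CUDA_ALLOC_CONF is a documented PyTorch environment variable, and max_split_size_mb limits how large the allocator's cached blocks can get. A minimal sketch (the 128 MiB value is an arbitrary example, not a recommendation; tune it for your workload):

```python
import os

# Must be set before CUDA is initialized, i.e. before "import torch"
# (or set it in the shell environment instead).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # → max_split_size_mb:128
```

Equivalently, on Windows you could run `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the same shell before invoking amg.py.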

dheeeraaj commented 1 year ago

ChatGPT generated answer :)