facebookresearch / segment-anything

The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Apache License 2.0

Running error #125

Open · JingJieMa opened this issue 1 year ago

JingJieMa commented 1 year ago

python3 scripts/amg.py --checkpoint ./sam_vit_l_0b3195.pth --input ./input_image/dog.jpg --output ./output_image

When I ran this command, the following error occurred: [screenshot]

FullStackSimon commented 1 year ago

Does the problem still exist when you download and try a different model? sam_vit_h_4b8939.pth for example?

JingJieMa commented 1 year ago

It's still the same error: [screenshot]

JingJieMa commented 1 year ago

Is there a problem with my runtime environment?

FullStackSimon commented 1 year ago

I'm afraid I'm not sure.

The only thing I can see from your command is that you are specifying a filename instead of a folder for your output.

As per the amg.py script's help text: "Path to the directory where masks will be output. Output will be either a folder of PNGs per image or a single json with COCO-style masks."
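
For example, pointing --output at a directory (an untested sketch; the directory name is just an example):

python3 scripts/amg.py --checkpoint ./sam_vit_l_0b3195.pth --input ./input_image/dog.jpg --output ./output_masks/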

However, I doubt that is the cause of the issue you are experiencing.

If you think it's your runtime environment, perhaps try Docker?

This is what works for me...

Dockerfile

# Use the official Python base image
FROM python:3.9

# Set the working directory
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt .

RUN apt-get update && \
    apt-get install -y --no-install-recommends libgl1-mesa-glx && \
    rm -rf /var/lib/apt/lists/*

# Install the required Python packages
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the repository code into the container
COPY . .

# Install the 'segment-anything' package
RUN pip install -e .

# Set the entrypoint to a shell to allow user interaction
ENTRYPOINT ["/bin/bash"]

requirements.txt

torch
torchvision
timm
einops
matplotlib
opencv-python
pycocotools
svgwrite
numpy
svgpathtools
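
Once both files are in place, something like this should build and run the container (a rough sketch: the image name sam-dev is just an example, and --gpus all assumes the NVIDIA Container Toolkit is installed; drop that flag to run on CPU):

# Build the image from the repository root
docker build -t sam-dev .

# Start an interactive shell in the container, mounting a local data folder
docker run --rm -it --gpus all -v "$(pwd)/data:/app/data" sam-dev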

FullStackSimon commented 1 year ago

P.S. I just use the default model type with that model. I noticed you specified a model type in your last screenshot; remove that and see what happens?

This is what works for me

python scripts/amg.py --checkpoint data/sam_vit_h_4b8939.pth --input data/a-room-at-the-beach.jpeg --output data/output

HannaMao commented 1 year ago

Please ensure that you provide the right model-type. In the first screenshot, you were using vit_l without specifying the model-type (the default is vit_h). In the second screenshot, you were using the checkpoint of vit_h but specifying the model-type as vit_b.
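
In other words, the checkpoint file and the --model-type flag have to agree. Roughly (paths are placeholders):

# vit_h is the default, so the flag can be omitted with the vit_h checkpoint
python scripts/amg.py --checkpoint sam_vit_h_4b8939.pth --input <image> --output <output_dir>

# a vit_l checkpoint needs --model-type vit_l
python scripts/amg.py --checkpoint sam_vit_l_0b3195.pth --model-type vit_l --input <image> --output <output_dir>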

JingJieMa commented 1 year ago

Thank you. I tried as you suggested, and it then reported an issue with the NVIDIA driver. I will try to install the driver. [screenshot]
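
A quick way to check whether PyTorch can see a GPU, and a possible CPU fallback (assuming amg.py's --device flag; CPU inference will be much slower):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# if no GPU is available, try running on CPU instead
python scripts/amg.py --checkpoint sam_vit_h_4b8939.pth --input <image> --output <output_dir> --device cpu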

JingJieMa commented 1 year ago

> (quoting FullStackSimon's Docker suggestion above)

It's still the same error. Maybe I really should try Docker. [screenshot]