
YoloGemma

YoloGemma is a project showcasing the capabilities of Vision-Language models in performing computer vision tasks such as object detection and segmentation. At the heart of this experiment lies PaliGemma, a state-of-the-art model that bridges the gap between language and vision. Through YoloGemma, we aim to explore whether Vision-Language models can match conventional computer vision methods.

Outputs

YoloGemma generates outputs by processing images and videos to identify and segment objects within them. The results are visualized as annotated images or videos, highlighting detected objects with bounding boxes or segmentation masks.

Sample prompts: "Detect Big Cat", "Detect Small Cat", "Detect Gojo", "Detect Short Person".

Installation

To get started with YoloGemma, follow these simple installation steps:

  1. Clone the repository:

    git clone https://github.com/adithya-s-k/YoloGemma.git
    cd YoloGemma
  2. Install the required dependencies:

    conda create -n YoloGemma-venv python=3.10
    conda activate YoloGemma-venv
    pip install -e .
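
A quick way to sanity-check the environment after installation (this assumes PyTorch is among the installed dependencies, which PaliGemma inference requires):

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"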

How to Run

Model Download

You can download the model by running the following command:

python download.py

This command will download and quantize the model.
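
As a point of reference, here is a minimal sketch of what downloading and quantizing PaliGemma can look like with Hugging Face transformers and bitsandbytes. The checkpoint name and the quantization scheme below are assumptions; the repo's download.py may work differently.

    # Illustrative sketch only -- the repo's download.py may differ.
    import torch
    from transformers import (
        AutoProcessor,
        BitsAndBytesConfig,
        PaliGemmaForConditionalGeneration,
    )

    MODEL_ID = "google/paligemma-3b-mix-224"  # assumed checkpoint

    # Fetch the processor (tokenizer + image preprocessing) from the Hub.
    processor = AutoProcessor.from_pretrained(MODEL_ID)

    # Load the weights in 4-bit to cut GPU memory use.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = PaliGemmaForConditionalGeneration.from_pretrained(
        MODEL_ID,
        quantization_config=bnb_config,
        device_map="auto",
    )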

YoloGemma provides three main scripts to facilitate various tasks. Below are instructions on how to run each script:

Main Script for Object Detection

python main.py --prompt "<prompt>" --vid_path <video path> --vid_start <start seconds> --vid_end <end seconds> --max_new_tokens <n>

Command Line Arguments

  - --prompt: natural-language instruction for the model, e.g. "Detect 4 people"
  - --vid_path: path to the input video file
  - --vid_start: start of the segment to process, in seconds
  - --vid_end: end of the segment to process, in seconds
  - --max_new_tokens: maximum number of new tokens to generate

Example

python main.py --prompt "Detect 4 people" --vid_path ./people.mp4 --vid_start 1 --vid_end 12 --max_new_tokens 10

This command starts detection for the prompt "Detect 4 people" on the video located at ./people.mp4, processing the segment from 1 second to 12 seconds and capping generation at 10 new tokens.
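
For intuition on what the model returns: PaliGemma encodes each detection as four <locXXXX> tokens (y_min, x_min, y_max, x_max, each an integer bin in [0, 1024)) followed by the object label. The helper below is an illustrative sketch for converting such a decoded string into pixel-space boxes; parse_boxes is not part of this repo's API.

    # Illustrative helper, not part of YoloGemma's codebase.
    import re

    # Four location tokens (y_min, x_min, y_max, x_max) followed by a label.
    LOC_PATTERN = re.compile(
        r"<loc(\d{4})><loc(\d{4})><loc(\d{4})><loc(\d{4})>\s*([^<;]+)"
    )

    def parse_boxes(decoded: str, img_w: int, img_h: int):
        """Map PaliGemma <loc> tokens to (label, x0, y0, x1, y1) in pixels."""
        boxes = []
        for y0, x0, y1, x1, label in LOC_PATTERN.findall(decoded):
            boxes.append((
                label.strip(),
                int(x0) / 1024 * img_w,  # x_min
                int(y0) / 1024 * img_h,  # y_min
                int(x1) / 1024 * img_w,  # x_max
                int(y1) / 1024 * img_h,  # y_max
            ))
        return boxes

    # e.g. parse_boxes("<loc0123><loc0456><loc0789><loc0987> person", 1280, 720)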

Gradio Interface (Coming Soon)

python demo.py

This command will launch a Gradio interface, providing an interactive web application to perform object detection and segmentation.
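
Until demo.py lands, here is a minimal sketch of what such an interface could look like with Gradio; the detect function below is a hypothetical placeholder, not the repo's actual entry point.

    # Hypothetical sketch of a Gradio demo; names are illustrative.
    import gradio as gr

    def detect(image, prompt):
        # Placeholder: run PaliGemma on `image` with `prompt`,
        # then draw the returned bounding boxes on the image.
        return image

    demo = gr.Interface(
        fn=detect,
        inputs=[gr.Image(type="pil"), gr.Textbox(label="Prompt", value="Detect cat")],
        outputs=gr.Image(type="pil"),
        title="YoloGemma",
    )
    demo.launch()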

Troubleshooting

If you encounter any issues, please ensure that:

  - the YoloGemma-venv environment is active and dependencies were installed with pip install -e .
  - the model weights have been downloaded via python download.py
  - the path passed to --vid_path exists, and --vid_start/--vid_end fall within the video's duration

For further assistance, please refer to the project's issues page or contact the maintainers.

Acknowledgements

Special thanks to the PaliGemma team for their groundbreaking work on Vision-Language models, which serves as the foundation for this project. This project was inspired by the loopvlm repository.


YoloGemma is an experimental step towards vision-language-model-based computer vision, blending the strengths of language models with visual understanding.