Labelme is a graphical image annotation tool inspired by http://labelme.csail.mit.edu.
It is written in Python and uses Qt for its graphical interface.
VOC dataset example of instance segmentation.
Other examples (semantic segmentation, bbox detection, and classification).
Various primitives (polygon, rectangle, circle, line, and point).
If you're new to Labelme, you can get started with Labelme Starter.
There are several installation options, covering Anaconda, Ubuntu, macOS, and Windows.
For the Anaconda option, install Anaconda and then run:
# python3
conda create --name=labelme python=3
source activate labelme
# conda install -c conda-forge pyside2
# conda install pyqt
# pip install pyqt5 # pyqt5 can be installed via pip on python3
pip install labelme
# or you can install everything by conda command
# conda install labelme -c conda-forge
On Ubuntu:
sudo apt-get install labelme
# or
sudo pip3 install labelme
# or install standalone executable from:
# https://github.com/labelmeai/labelme/releases
# or install from source
pip3 install git+https://github.com/labelmeai/labelme
On macOS:
brew install pyqt # maybe pyqt5
pip install labelme
# or install standalone executable/app from:
# https://github.com/labelmeai/labelme/releases
# or install from source
pip3 install git+https://github.com/labelmeai/labelme
On Windows, install Anaconda, then in an Anaconda Prompt run:
conda create --name=labelme python=3
conda activate labelme
pip install labelme
# or install standalone executable/app from:
# https://github.com/labelmeai/labelme/releases
# or install from source
pip3 install git+https://github.com/labelmeai/labelme
Run labelme --help for details. The annotations are saved as a JSON file.
labelme # just open gui
# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg # specify image file
labelme apc2016_obj3.jpg -O apc2016_obj3.json # close the window after saving
labelme apc2016_obj3.jpg --nodata # do not store image data in the JSON file; keep a relative image path instead
labelme apc2016_obj3.jpg \
--labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball # specify label list
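Each command above writes its annotations to a JSON file (apc2016_obj3.json in this example). Below is a minimal sketch of reading one back in Python, assuming the usual labelme keys (shapes, imagePath, imageHeight, imageWidth); adjust if your labelme version uses a different layout.

import json

# Load an annotation file produced by labelme (the path is just an example).
with open("apc2016_obj3.json") as f:
    data = json.load(f)

# Each shape carries a label, a shape_type (polygon, rectangle, circle, line, point),
# and a list of [x, y] points.
for shape in data["shapes"]:
    print(shape["label"], shape["shape_type"], len(shape["points"]))

# imagePath points back to the annotated image (a relative path when --nodata is used).
print(data["imagePath"], data["imageHeight"], data["imageWidth"])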
# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/ # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt # specify label list with a file
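The file passed to --labels is plain text with one label per line; labelme's semantic segmentation example conventionally lists __ignore__ and _background_ first, and the class names below are only placeholders:

__ignore__
_background_
person
bottle
chair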
The --output flag specifies the location that annotations will be written to. If the location ends with .json, a single annotation is written to that file, so only one image can be annotated. If the location does not end with .json, the program assumes it is a directory, and each annotation is stored there under a name that corresponds to the image it was made from.

The first time you run labelme, it creates a config file in ~/.labelmerc. You can edit this file and the changes will be applied the next time you launch labelme. If you prefer to use a config file from another location, you can specify it with the --config flag.

Without the --nosortlabels flag, the program lists labels in alphabetical order. When run with this flag, it displays labels in the order they are provided.
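For example, combining the flags above (the file and label names are only illustrative):

labelme apc2016_obj3.jpg --labels dog,cat,person --output apc2016_obj3.json --nosortlabels  # one image, one JSON, labels kept in the given order
labelme data_annotated/ --output annotations/  # directory in, one JSON per image written to annotations/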
For development, clone the repository and install from source:
git clone https://github.com/labelmeai/labelme.git
cd labelme
# Install anaconda3 and labelme
curl -L https://github.com/wkentaro/dotfiles/raw/main/local/bin/install_anaconda3.sh | bash -s .
source .anaconda3/bin/activate
pip install -e .
The commands below show how to build the standalone executable on macOS, Linux, and Windows.
# Setup conda
conda create --name labelme python=3.9
conda activate labelme
# Build the standalone executable
pip install .
pip install 'matplotlib<3.3'
pip install pyinstaller
pyinstaller labelme.spec
dist/labelme --version
Make sure the tests below pass in your environment. See .github/workflows/ci.yml for more details.
pip install -r requirements-dev.txt
ruff format --check # `ruff format` to auto-fix
ruff check # `ruff check --fix` to auto-fix
MPLBACKEND='agg' pytest -vsx tests/
This repo is a fork of mpitid/pylabelme.