gil-uav / semantic-image-segmentation

Semantic segmentation of static objects in orthophotos.
MIT License

DEPRECATED! Development continues here: semantic-image-segmentation-unet

Semantic Image Segmentation

Members : Vegard Bergsvik Øvstegård

Supervisors : Jim Tørresen


Description

This repository implements and trains networks for semantic image segmentation of orthophotos. The current network architecture is U-Net.

Dependencies

Dependencies are listed in requirements.txt (for pip) and environment.yml (for Conda), and are installed as part of the steps below.

Installation

git clone https://github.com/gil-uav/semantic-image-segmentation.git

virtualenv

python3 -m venv env
source env/bin/activate
pip install -r requirements.txt

Conda

conda env create --file environment.yml
conda activate seg

Uninstall Pillow and install Pillow-SIMD:

pip uninstall pillow
pip install pillow-simd

If you have an AVX2-enabled CPU (check with grep avx2 /proc/cpuinfo), you can install Pillow-SIMD with:

pip uninstall pillow
CC="cc -mavx2" pip install -U --force-reinstall pillow-simd

This should perform slightly better than the default SSE4 build, and much better than the standard Pillow package.

NB! Remember to uninstall Pillow before installing Pillow-SIMD. In some cases Python might not find the PIL package afterwards; a reinstall of Pillow-SIMD fixes this 99% of the time.

Usage

Training

The application fetches some configuration and parameters from a .env file if one exists. Run python train.py --help to see all other arguments. The package uses pytorch-lightning and inherits all of its trainer arguments.
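The README does not show how the .env file is read. As a rough illustration only (a hypothetical helper, not the repository's actual loader), a file of KEY=VALUE lines with # comments could be parsed like this:

```python
import os


def load_env(path=".env"):
    """Parse simple KEY=VALUE lines; '#' starts a comment.

    Hypothetical stand-in for the repository's actual .env
    loading, which is not shown in the README.
    """
    values = {}
    if not os.path.exists(path):
        return values
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()  # drop comments
            if "=" in line:
                key, _, val = line.partition("=")
                values[key.strip()] = val.strip().strip('"')
    return values
```

Values come back as strings, so numeric settings such as BATCH_SIZE still need an int() or float() conversion by the caller.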

The data is expected to be structured like this:

data/
    images/
    masks/

The path to the data is set using the --dp argument.
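With that layout, each mask presumably corresponds to an image by file name. A minimal sketch of pairing the two folders (assuming matching file stems, which is an assumption about the dataset rather than a convention documented here):

```python
from pathlib import Path


def pair_samples(data_dir):
    """Pair images with masks by file stem.

    Assumes masks/ mirrors images/ with matching file names --
    an assumption, not a rule documented by this repository.
    """
    images = sorted(Path(data_dir, "images").glob("*"))
    masks = {p.stem: p for p in Path(data_dir, "masks").glob("*")}
    return [(img, masks[img.stem]) for img in images if img.stem in masks]
```

A Dataset class would typically build on such a pairing and drop (or flag) images that have no corresponding mask.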

Console example

This example stores checkpoints and logs under --default_root_dir, uses all available GPUs, and fetches training data from --dp.

python train.py --default_root_dir=/shared/use/this/ --gpus=-1 --dp=/data/is/here/

.env example

Only these arguments are fetched from the .env file; the rest must be passed through the CLI.

# Model config
N_CHANNELS=3
N_CLASSES=1
BILINEAR=True

# Hyperparameters
EPOCHS=300 # Epochs
BATCH_SIZE=4 # Batch size
LRN_RATE=0.001 # Learning rate
VAL_PERC=15 # Validation percentage
TEST_PERC=15 # Testing percentage
IMG_SIZE=512 # Image size
VAL_INT_PER=1 # Validation interval percentage
ACC_GRAD=4 # Accumulated gradients, number = K.
GRAD_CLIP=1.0 # Clip gradients with norm above the given value
EARLY_STOP=10 # Early stopping patience (epochs)

# Other
PROD=False # Turn on or off debugging APIs
DIR_DATA="data/" # Where dataset is stored
DIR_ROOT_DIR="/shared/use/this/" # Where logs and checkpoint will be stored
WORKERS=4 # Number of workers for data and validation loading
DISCORD_WH=httpsomethingwebhoowawnserisalways42
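Note that values such as PROD=False and BILINEAR=True arrive as strings, and bool("False") is truthy in Python, so boolean settings need explicit parsing. A hypothetical helper (not part of this repository):

```python
import os


def env_bool(name, default=False):
    """Read a boolean environment variable.

    bool("False") is True in Python, so compare against known
    truthy strings instead of casting directly.
    """
    val = os.getenv(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes")
```

Called as env_bool("PROD"), this returns False for PROD=False, whereas a naive bool(os.getenv("PROD")) would return True.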


Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

MIT License