gesund-ai / gesund

Open source SDK for Gesund.ai platform
MIT License

Validation Metrics Library

Overview

This library provides tools for calculating validation metrics for predictions and annotations in machine learning workflows. It includes a command-line tool for computing and displaying validation metrics.

Documentation can be found here: https://gesund-ai.github.io/

Installation

To use this library, install the package and its dependencies from the repository root via pip:

pip install .

Usage

Command-Line Tool

The primary script for running validation metrics is run_metrics.py. This script calculates validation metrics based on JSON files containing predictions and annotations.

Arguments

The arguments used in the example below are:

- --annotations: path to the annotations JSON file
- --predictions: path to the predictions JSON file
- --class_mappings: path to the class mappings JSON file
- --problem_type: the validation problem type (e.g. classification)
- --format: the input data format (e.g. gesund_custom_format)

Example

Basic Usage:

   run_metrics --annotations test_data/gesund_custom_format/gesund_custom_format_annotations_classification.json --predictions test_data/gesund_custom_format/gesund_custom_format_predictions_classification.json --class_mappings test_data/test_class_mappings.json --problem_type classification --format gesund_custom_format

Example JSON Inputs

The library supports annotations and predictions in the Gesund Custom Format and in a COCO-like format (see the COCO Format section below).
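As an illustration only, the snippet below builds a hypothetical classification annotation/prediction pair and a class-mappings file in a Gesund-style layout. The field names (image_id, annotation, label, prediction_class, logits) are assumptions for this sketch, not the library's actual schema; consult the files under test_data/gesund_custom_format/ for the real format.

```python
import json

# Hypothetical Gesund-style custom format for classification.
# All field names here are illustrative assumptions, not the real schema.
annotations = {
    "img_001": {"image_id": "img_001", "annotation": [{"label": 0}]},
    "img_002": {"image_id": "img_002", "annotation": [{"label": 1}]},
}

predictions = {
    "img_001": {"image_id": "img_001", "prediction_class": 0, "logits": [0.8, 0.2]},
    "img_002": {"image_id": "img_002", "prediction_class": 0, "logits": [0.6, 0.4]},
}

# Class mappings: class ID (as a string key) to human-readable name.
class_mappings = {"0": "normal", "1": "abnormal"}

# Serialize each piece to a JSON file, as the CLI expects file paths as inputs.
for name, payload in [
    ("annotations.json", annotations),
    ("predictions.json", predictions),
    ("class_mappings.json", class_mappings),
]:
    with open(name, "w") as fh:
        json.dump(payload, fh, indent=2)
```

These files could then be passed to run_metrics via --annotations, --predictions, and --class_mappings, as in the command-line example above.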

Example Outputs

Console Output

Only the Highlighted Overall Metrics are printed to the console. The console output should look like this:

Validation Metrics:
----------------------------------------
Accuracy:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Micro F1:
    Validation: 0.4375
    Confidence_Interval: 0.2656 to 0.6094
----------------------------------------
Macro F1:
    Validation: 0.4000
    Confidence_Interval: 0.2303 to 0.5697
----------------------------------------
AUC:
    Validation: 0.3996
    Confidence_Interval: 0.2299 to 0.5693
----------------------------------------
Precision:
    Validation: 0.4343
    Confidence_Interval: 0.2625 to 0.6060
----------------------------------------
Sensitivity:
    Validation: 0.4549
    Confidence_Interval: 0.2824 to 0.6274
----------------------------------------
Specificity:
    Validation: 0.4549
    Confidence_Interval: 0.2824 to 0.6274
----------------------------------------
Matthews C C:
    Validation: -0.1089
    Confidence_Interval: 0.0010 to 0.2168
----------------------------------------
----------------------------------------
All Graphs and Plots Metrics saved in JSONs.
----------------------------------------
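For intuition, the headline numbers above correspond to standard metric definitions. The following is a minimal hand-rolled sketch of accuracy and macro F1, independent of the library's own implementation:

```python
# Hand-rolled accuracy and macro F1 for illustration only;
# this is not the library's implementation.
def accuracy(y_true, y_pred):
    # Fraction of predictions that exactly match the ground truth.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    # Per-class F1 scores averaged with equal weight per class.
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(
            2 * precision * recall / (precision + recall)
            if precision + recall else 0.0
        )
    return sum(f1_scores) / len(f1_scores)

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 0]
print(accuracy(y_true, y_pred))  # 0.6
print(macro_f1(y_true, y_pred))
```

Macro F1 averages per-class F1 equally, so it penalizes poor performance on minority classes more than micro F1 or plain accuracy does, which is why the two can differ in the console output above.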

Output JSON Files

All output JSON files for the graphs and plots are written to the outputs directory, under a randomly assigned {batch_job_id} subdirectory.
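A sketch of collecting those results after a run, assuming only the outputs/{batch_job_id} layout described above; the file name and contents below are fabricated for the demo, not real CLI output:

```python
import json
from pathlib import Path

# The outputs/{batch_job_id} layout comes from the README; the file written
# here only simulates what a run would leave behind, so the demo is runnable.
outputs_dir = Path("outputs") / "demo_batch_job_id"
outputs_dir.mkdir(parents=True, exist_ok=True)

# Simulate one saved metrics file (in real runs the CLI writes these).
(outputs_dir / "highlighted_overall_metrics.json").write_text(
    json.dumps({"Accuracy": {"Validation": 0.4375}})
)

# Load every JSON in the batch directory into one dict keyed by file stem.
results = {
    path.stem: json.loads(path.read_text())
    for path in outputs_dir.glob("*.json")
}
print(results["highlighted_overall_metrics"]["Accuracy"]["Validation"])
```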

COCO Format

Note that the COCO format is traditionally used for object detection, instance segmentation, and keypoint detection; it was not designed for image classification. We have therefore adapted COCO-like structures for classification tasks.
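For reference, a standard COCO annotation file uses top-level images, annotations, and categories keys. The sketch below shows one way such a structure can be adapted for classification (a whole-image category_id with no bounding box); this is our illustration of the idea, not necessarily the library's exact schema:

```python
import json

# Minimal COCO-like structure. "images", "annotations", and "categories"
# are the standard COCO top-level keys; attaching one category_id per image
# with no bbox is the classification adaptation described above.
coco_like = {
    "images": [
        {"id": 1, "file_name": "img_001.png", "width": 224, "height": 224},
    ],
    "annotations": [
        # For classification: one whole-image label instead of boxes/masks.
        {"id": 1, "image_id": 1, "category_id": 0},
    ],
    "categories": [
        {"id": 0, "name": "normal"},
        {"id": 1, "name": "abnormal"},
    ],
}

with open("coco_like_annotations.json", "w") as fh:
    json.dump(coco_like, fh, indent=2)
```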

Sample format can be seen below: