vital-ultrasound / ai-echocardiography-for-low-resource-countries

AI-assisted echocardiography for low-resource countries

Implementation of a few normalisation and augmentation techniques for US data #2

Closed mxochicale closed 2 years ago

mxochicale commented 3 years ago

🚀 Feature

Lee et al. 2020 in Scientific Reports applied grey-level co-occurrence matrix (GLCM) features to classify burn depth from B-mode ultrasound imaging. Such a method should be straightforward to prototype and might help us understand the frames across different patients, days, and echocardiographic views.
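A minimal sketch of how such GLCM texture features could be prototyped with scikit-image (the distances, angles, and the random stand-in frame are assumptions, not the paper's exact settings):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # greycomatrix/greycoprops in scikit-image < 0.19

def glcm_features(frame: np.ndarray) -> dict:
    """Compute GLCM texture features for a single 8-bit B-mode frame."""
    glcm = graycomatrix(
        frame,
        distances=[1, 2],                  # pixel-pair offsets (assumed values)
        angles=[0, np.pi / 4, np.pi / 2],  # pair directions (assumed values)
        levels=256,
        symmetric=True,
        normed=True,
    )
    # Haralick-style scalar features, averaged over distances and angles
    return {
        prop: graycoprops(glcm, prop).mean()
        for prop in ("contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM")
    }

# Example with a random frame standing in for a B-mode image
features = glcm_features(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
print(features)
```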

Additional context

[Screenshot: 2021-10-04 15-16-49]

[Screenshot: 2021-10-04 15-30-05]

Plot from video_channel_measurement.py that might help to compute the grey-level co-occurrence matrix features: [Figure_1_01NVb-003-001_T101NVb-003-001-echo_mp4]
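Building on the sketch above, per-frame features could be extracted from an echo clip roughly like this (the video path is a hypothetical stand-in, and `glcm_features` is the helper sketched earlier):

```python
import cv2  # OpenCV for frame grabbing

def per_frame_glcm(video_path: str):
    """Yield GLCM features for each greyscale frame of an echo video."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        yield glcm_features(grey)  # helper from the sketch above
    cap.release()

# Hypothetical path mirroring the filename in the plot above
for i, feats in enumerate(per_frame_glcm("01NVb-003-001-echo.mp4")):
    print(i, feats["contrast"], feats["homogeneity"])
```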

```bibtex
@Article{Lee2020,
  author   = {Lee, Sangrock and {Rahul} and Ye, Hanglin and Chittajallu, Deepak and Kruger, Uwe and Boyko, Tatiana and Lukan, James K. and Enquobahrie, Andinet and Norfleet, Jack and De, Suvranu},
  title    = {Real-time Burn Classification using Ultrasound Imaging},
  journal  = {Scientific Reports},
  year     = {2020},
  month    = {Apr},
  day      = {02},
  volume   = {10},
  number   = {1},
  pages    = {5829},
  abstract = {This article presents a real-time approach for classification of burn depth based on B-mode ultrasound imaging. A grey-level co-occurrence matrix (GLCM) computed from the ultrasound images of the tissue is employed to construct the textural feature set and the classification is performed using nonlinear support vector machine and kernel Fisher discriminant analysis. A leave-one-out cross-validation is used for the independent assessment of the classifiers. The model is tested for pair-wise binary classification of four burn conditions in ex vivo porcine skin tissue: (i) 200{\thinspace}{\textdegree}F for 10{\thinspace}s, (ii) 200{\thinspace}{\textdegree}F for 30{\thinspace}s, (iii) 450{\thinspace}{\textdegree}F for 10{\thinspace}s, and (iv) 450{\thinspace}{\textdegree}F for 30{\thinspace}s. The average classification accuracy for pairwise separation is 99{\%} with just over 30 samples in each burn group and the average multiclass classification accuracy is 93{\%}. The results highlight that the ultrasound imaging-based burn classification approach in conjunction with the GLCM texture features provide an accurate assessment of altered tissue characteristics with relatively moderate sample sizes, which is often the case with experimental and clinical datasets. The proposed method is shown to have the potential to assist with the real-time clinical assessment of burn degrees, particularly for discriminating between superficial and deep second degree burns, which is challenging in clinical practice.},
  issn     = {2045-2322},
  doi      = {10.1038/s41598-020-62674-9},
  url      = {https://doi.org/10.1038/s41598-020-62674-9}
}
```


mxochicale commented 3 years ago

In the weekly meeting with Alberto on 4 Oct 2021, we discussed that B-mode image texture analysis is a good initial approach for understanding the echocardiography datasets, where we can learn more about the characteristics of our 4CV and 2CV data. The plan is therefore to prototype a few functions and experiment with grey-level co-occurrence matrices, and then move on to auto-encoders and self-supervised techniques.

mxochicale commented 3 years ago

Love this tutorial, "Image Information | Grayscale co-occurrence matrix": https://www.youtube.com/watch?v=cq0Br3zB2AU
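For reference while following the tutorial, here is a toy example of what the co-occurrence matrix itself looks like on a textbook-style 4×4 image with 4 grey levels (values chosen only for illustration):

```python
import numpy as np
from skimage.feature import graycomatrix

# 4x4 image with 4 grey levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=np.uint8)

# Count horizontally adjacent pixel pairs (distance 1, angle 0)
glcm = graycomatrix(img, distances=[1], angles=[0], levels=4)
print(glcm[:, :, 0, 0])  # rows: reference grey level, cols: neighbour grey level
```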

See more:

mxochicale commented 2 years ago

A few approaches for MRI data normalisation and augmentation:
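For example, a minimal sketch with TorchIO, an MRI-oriented transforms library (the specific transforms and the image path are assumptions):

```python
import torchio as tio

# Assumed pipeline: intensity normalisation followed by spatial/intensity augmentation
transform = tio.Compose([
    tio.ZNormalization(),                             # zero-mean, unit-variance intensities
    tio.RandomAffine(scales=(0.9, 1.1), degrees=10),  # mild zoom and rotation
    tio.RandomFlip(axes=("LR",)),                     # left-right flip
    tio.RandomNoise(std=(0, 0.05)),                   # additive Gaussian noise
])

subject = tio.Subject(image=tio.ScalarImage("scan.nii.gz"))  # hypothetical file
augmented = transform(subject)
```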

mxochicale commented 2 years ago

PyTorch has an "Illustration of transforms" gallery in the torchvision docs that might help with this ticket.
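A sketch of the kind of pipeline that gallery illustrates, applied to single-channel US frames (the transform choices and parameters are assumptions):

```python
from torchvision import transforms

# Assumed pipeline for single-channel PIL frames
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomRotation(degrees=10),
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),                        # PIL image -> CHW float tensor in [0, 1]
    transforms.Normalize(mean=[0.5], std=[0.5]),  # single-channel normalisation
])
```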

mxochicale commented 2 years ago

MONAI has nice options for transforms

https://docs.monai.io/en/stable/transforms.html#
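A minimal MONAI sketch along those lines (the selection of transforms is an assumption):

```python
import numpy as np
from monai.transforms import (
    Compose, EnsureChannelFirst, ScaleIntensity,
    RandRotate, RandFlip, RandGaussianNoise,
)

# Assumed pipeline for plain HxW numpy frames
transform = Compose([
    EnsureChannelFirst(channel_dim="no_channel"),  # add a channel axis
    ScaleIntensity(minv=0.0, maxv=1.0),            # rescale intensities to [0, 1]
    RandRotate(range_x=0.26, prob=0.5),            # up to ~15 degrees (radians)
    RandFlip(spatial_axis=1, prob=0.5),            # horizontal flip
    RandGaussianNoise(prob=0.3, std=0.02),
])

augmented = transform(np.random.rand(128, 128).astype(np.float32))
```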

mxochicale commented 2 years ago

Consider the following points from https://ntoussaint.github.io/fetalnav/

[Screenshot: 2022-04-06 17-24-30]

mxochicale commented 2 years ago

Nice source for augmentations: https://albumentations.ai/docs/examples/pytorch_classification/
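A quick sketch of what that could look like for greyscale frames (the parameters are assumptions):

```python
import numpy as np
import albumentations as A
from albumentations.pytorch import ToTensorV2

# Assumed pipeline; expects HxWxC numpy arrays
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1, rotate_limit=10, p=0.5),
    A.RandomBrightnessContrast(p=0.5),
    A.Normalize(mean=(0.5,), std=(0.5,)),  # divides by max_pixel_value=255, then normalises
    ToTensorV2(),                          # HWC numpy -> CHW torch tensor
])

frame = np.random.randint(0, 256, (128, 128, 1), dtype=np.uint8)  # stand-in US frame
out = transform(image=frame)["image"]
```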