jlockhar / GLASS-AI

Machine learning tool for analysis of lung adenocarcinoma tumors

Poor result #6

Closed: ashipde closed this 5 months ago

ashipde commented 5 months ago

Hello. Thank you for developing this software. I tested it recently with some of our mouse lung H&E sections, and it doesn't seem to have worked very well. Perhaps you can provide some suggestions to get it to work satisfactorily for my purpose, which is to count and measure tumors with areas larger than 0.2 mm². I am not interested in tumor grade.

The lungs from mice of an oncogenic Kras-driven lung cancer model were formalin-fixed to obtain 5 µm-thick, H&E-stained sections, which were scanned with a 20x objective on a Leica Aperio AT2 scanner at 0.5 µm per pixel. Whole-lung regions of the scanned slides were exported as TIF files at original or 10% resolution and analyzed with the latest GLASS-AI macOS app using two different sets of settings (below).
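For reference, the 0.2 mm² cutoff corresponds to very different pixel counts at these two export resolutions. A minimal sketch of the conversion, assuming square pixels and that the 10% export corresponds to 5 µm per pixel:

```python
# Convert the 0.2 mm^2 tumor-area cutoff into pixel counts at each resolution.
# Assumes square pixels; the 10% export is taken to be 5 um per pixel.
MIN_TUMOR_AREA_UM2 = 0.2 * 1000**2  # 0.2 mm^2 = 200,000 um^2

for label, um_per_px in [("original (0.5 um/px)", 0.5), ("10% export (5 um/px)", 5.0)]:
    min_px = MIN_TUMOR_AREA_UM2 / um_per_px**2  # one pixel covers um_per_px^2 um^2
    print(f"{label}: tumors >= {min_px:,.0f} px")
# original (0.5 um/px): tumors >= 800,000 px
# 10% export (5 um/px): tumors >= 8,000 px
```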

The attached image shows the result for one mouse lung. Similar results were seen with 3-4 other lungs from 2 different tissue fixing/H&E-staining experiments (data not shown). Tumor segmentation of the original-resolution image appears to be worse than that of the 10%-resolution image.

Any suggestions will be helpful. Thanks again.

test

%%%%%%% Setting 1 %%%%%%%
Append slide summary: true
Analysis block size: 40
Normalize staining: true
Make preview images: true
Use low memory mode: false
Force CPU: true
Patch skip threshold: 10
Smoothing method: smoothing_none
Smoothing diameter: 200
Tumor size threshold: 250
Tumor merge distance: 25
Assign overall grades to each tumor: true
Overall tumor grade assignment method: majority
Stain normalization alpha: 1.000
Stain normalization beta: 0.100
Stain normalization background: 240.000
Stain normalization tissue threshold: 1.000
Stain normalization hematoxylin: 0.551 0.863 0.339
Stain normalization eosin: 0.171 0.783 0.334
Remove pure colors during normalization: 1.000
Alveoli Color: 0 255 255
Airway Color: 255 0 255
Grade 1 LUAD Color: 0 255 0
Grade 2 LUAD Color: 0 0 255
Grade 3 LUAD Color: 255 255 0
Grade 4 LUAD Color: 255 0 0
Background class Color: 255 255 255
Skipped patch Color: 0 0 0
Grade map output scale: 0.125
Make segmentation image: 1.000
Segmentation image output scale: 0.125
Stain normalize image output scale: 0.125
Make confidence map: 0.000
Confidence map output scale: 0.125
Confidence map output color map: 104.000

%%%%%%% Setting 2 %%%%%%%
Append slide summary: true
Analysis block size: 40
Normalize staining: true
Make preview images: true
Use low memory mode: false
Force CPU: true
Patch skip threshold: 2
Smoothing method: smoothing_hamming
Smoothing diameter: 200
Tumor size threshold: 100
Tumor merge distance: 20
Assign overall grades to each tumor: true
Overall tumor grade assignment method: highest
Overall tumor grade assignment threshold: 0.10
Stain normalization alpha: 1.000
Stain normalization beta: 0.100
Stain normalization background: 240.000
Stain normalization tissue threshold: 1.000
Stain normalization hematoxylin: 0.551 0.863 0.339
Stain normalization eosin: 0.171 0.783 0.334
Remove pure colors during normalization: 1.000
Alveoli Color: 0 255 255
Airway Color: 255 0 255
Grade 1 LUAD Color: 0 255 0
Grade 2 LUAD Color: 0 0 255
Grade 3 LUAD Color: 255 255 0
Grade 4 LUAD Color: 255 0 0
Background class Color: 255 255 255
Skipped patch Color: 0 0 0
Grade map output scale: 0.125
Make segmentation image: 1.000
Segmentation image output scale: 0.125
Stain normalize image output scale: 0.125
Make confidence map: 0.000
Confidence map output scale: 0.125
Confidence map output color map: 104.000

jlockhar commented 5 months ago

I am surprised at how poor those results are given the apparent quality of your tissue. Do you get better results on the demo images? (Available from here)

My first suggestion would be to use the .svs files produced directly by the scanner rather than exported images. Aperio's ImageScope software defaults to embedding its custom color profile in exported images.
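One quick way to check whether an export carries an embedded color profile (a hypothetical check using the tifffile package, not something GLASS-AI does itself):

```python
# Check whether an exported TIFF carries an embedded ICC color profile.
# Hypothetical helper, assuming the tifffile package (pip install tifffile).
import tifffile

def has_icc_profile(path):
    with tifffile.TiffFile(path) as tif:
        # TIFF tag 34675 ("InterColorProfile") holds an embedded ICC profile.
        return tif.pages[0].tags.get(34675) is not None

print(has_icc_profile("exported_slide.tif"))  # hypothetical exported file
```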

Can you send me one of the images that is giving you this issue so that I can try some troubleshooting of my own? You can email a link to your preferred file-sharing service to john.lockhart@moffitt.org

ashipde commented 5 months ago

Thanks for the prompt response. The result with one of the project demo images (medium.tif) looks good.

demo_mediumTif

I do not have the original .svs files to share right now, but if it helps, you can test with 3 pairs of original- and 10%-resolution TIF exports (Aperio ImageScope); the files are at https://drive.google.com/drive/folders/1lJ1f6spOKNW__GkRwyPN5q5q4keUy8aa?usp=sharing. I will add the original .svs files (each a slide scan with two separate whole-lung sections) once I get hold of them.

jlockhar commented 5 months ago

I think that the main issue is the amount of blood present in your tissues: GLASS-AI is incorrectly calling these areas tumors. I tried several permutations of the stain normalization settings to try to improve the results, with minimal success.
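For anyone curious, the alpha, beta, and background values in the settings above follow the conventions of Macenko-style stain normalization. A minimal sketch of estimating the stain vectors that way (an illustration of the general technique, not GLASS-AI's actual implementation), assuming a NumPy RGB array:

```python
import numpy as np

def macenko_stain_vectors(rgb, alpha=1.0, beta=0.10, background=240.0):
    """Estimate H&E stain vectors a la Macenko et al. (2009).

    Illustrative sketch of the general technique, not GLASS-AI's code.
    alpha: percentile (%) picking robust extremes of the angle distribution.
    beta: optical-density threshold below which a pixel counts as background.
    background: intensity of an empty (glass-only) pixel.
    """
    # Convert RGB intensities to optical density (Beer-Lambert).
    od = -np.log((rgb.reshape(-1, 3).astype(float) + 1.0) / background)
    od = od[np.all(od > beta, axis=1)]  # discard near-transparent pixels

    # Project onto the plane spanned by the two largest principal components.
    _, eigvecs = np.linalg.eigh(np.cov(od.T))  # eigenvalues ascend, take last two
    plane = od @ eigvecs[:, 1:]
    angles = np.arctan2(plane[:, 1], plane[:, 0])
    lo, hi = np.percentile(angles, (alpha, 100.0 - alpha))

    v1 = eigvecs[:, 1:] @ np.array([np.cos(lo), np.sin(lo)])
    v2 = eigvecs[:, 1:] @ np.array([np.cos(hi), np.sin(hi)])
    # Convention: hematoxylin is the vector with the larger red component.
    h, e = (v1, v2) if v1[0] > v2[0] else (v2, v1)
    return h / np.linalg.norm(h), e / np.linalg.norm(e)
```

Normalization then deconvolves the image with the estimated vectors and recomposes it using reference vectors like the hematoxylin and eosin triplets listed in the settings.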

image

This is probably a shortcoming of the training data that we used to build GLASS-AI. Our training slides were taken from the lungs of mice before they reached a clinical endpoint (i.e., we prefer to use a defined time elapsed since tumor initiation in our studies), and all of the animals underwent whole-body perfusion with PBS prior to collection. We don't tend to see extensive hemorrhaging in our samples, so these kinds of areas were likely not well represented in the training data set. I've seen this issue in some specimens collected post-mortem or at clinical endpoints, and I think that adding classes for these kinds of 'contaminations' (e.g., hemorrhage, necrosis, lymphoid tissue) is an important part of improving these machine learning tools. As of now, I don't have a timeline for when a version of GLASS-AI equipped to recognize these features will be available, but it is on my to-do list for future development.

Unfortunately, GLASS-AI may not be up to the task for your samples. However, I would strongly recommend checking out QuPath (https://qupath.github.io/) if you need to manually annotate these images. I was able to annotate your slide below in about 5 minutes (potentially faster than GLASS-AI could process it, depending on your computing power). The annotation tools in QuPath make annotating much easier than tracing things out in ImageScope; the 'Wand' tool in particular can really speed up your annotations.
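If you script the follow-up measurement, the annotation areas QuPath exports can be filtered against your 0.2 mm² cutoff in a few lines. A sketch, assuming annotation measurements exported from QuPath as tab-separated text with a per-annotation area column (the exact header, here "Area µm^2", varies by QuPath version):

```python
# Count and sum QuPath-annotated tumors larger than 0.2 mm^2.
# Assumes a measurement table exported from QuPath as tab-separated text;
# the file name and area column header below are hypothetical.
import csv

MIN_AREA_UM2 = 0.2 * 1_000_000  # 0.2 mm^2 = 200,000 um^2

with open("annotation_measurements.tsv", newline="") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

areas = [float(r["Area µm^2"]) for r in rows]  # adjust key to your export's header
large = [a for a in areas if a >= MIN_AREA_UM2]
print(f"{len(large)} tumors >= 0.2 mm^2; total {sum(large) / 1e6:.2f} mm^2")
```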

image

ashipde commented 5 months ago

Thanks for checking. Blood being the reason makes sense. Yes, we have been using QuPath for manual tumor measurement, but an automated tool would certainly have saved us time.