MIAnalyzer / MIA

Microscopic Image Analyzer (MIA)
https://mianalyzer.github.io/

3D support #5

Open SebDBI opened 1 year ago

SebDBI commented 1 year ago

This looks like a very useful software! Are you planning support for 3D images?

nkoerb commented 1 year ago

Hey, in principle z-stacks (3D) are supported. There are two options for working with them: having a mask/prediction for every layer in the stack, or having a single prediction for the whole stack. What is not supported are 3D models (i.e. using 3D convolutions), which could be implemented rather easily.
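To make the two modes concrete, here is a minimal sketch assuming a Keras-style 2D segmentation model with a `.predict` method; the function names and array shapes are illustrative, not MIA's actual API:

```python
import numpy as np

def predict_per_slice(model, stack):
    """Mode 1: run a 2D model on every slice independently.

    stack: (depth, height, width) z-stack; returns one mask per slice.
    """
    # Add batch and channel axes so each slice matches a (H, W, 1) input.
    return np.stack([model.predict(s[None, ..., None])[0] for s in stack])

def predict_whole_stack(model, stack):
    """Mode 2: feed all slices at once as channels of a single 2D input.

    stack: (depth, height, width); the model sees one (H, W, depth)
    image and returns a single 2D prediction for the whole stack.
    """
    return model.predict(np.transpose(stack, (1, 2, 0))[None, ...])[0]
```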

SebDBI commented 1 year ago

So, it is possible to annotate 3D images and train a model from these annotations but the model is only processing slice by slice, is that so? Are 3D models on your roadmap? I was also looking for 3D annotations tools, do you plan anything for this?

nkoerb commented 1 year ago

Hey, so currently there are two options:

1. Each slice of a 3D stack is processed independently, resulting in a prediction for each slice. The model only processes the information of a single slice at a time.
2. A 3D stack can be used as input to generate a joint prediction for the whole stack. The model processes information from all slices and gives a prediction based on that. It still uses 2D kernels, which means that no 3D features can be detected (only 2D features from different slices), and the resulting output is 2D.

Option 3, which is not implemented, would be to use 3D kernels that can detect 3D features and produce a 3D output.

Basically, as options 1 and 2 already exist, option 3 would not be very much work to implement. It is currently not on the immediate roadmap; I will probably add it if more people request it or I have a little time left. Sorry.
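For illustration, the difference between options 1/2 and option 3 boils down to swapping 2D convolutions for 3D ones. A hedged Keras sketch with placeholder layer sizes (not taken from MIA's code):

```python
from tensorflow.keras import layers, models

def tiny_2d_net(height, width, channels):
    # Options 1/2: 2D kernels and a 2D output; for option 2 the
    # channels axis holds the slices of the stack.
    inp = layers.Input((height, width, channels))
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inp, out)

def tiny_3d_net(depth, height, width):
    # Option 3: 3D kernels see neighboring slices, and the
    # output stays a full 3D volume.
    inp = layers.Input((depth, height, width, 1))
    x = layers.Conv3D(16, 3, padding="same", activation="relu")(inp)
    out = layers.Conv3D(1, 1, activation="sigmoid")(x)
    return models.Model(inp, out)
```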

SebDBI commented 12 months ago

Thank you for the details. It might actually be sufficient for many 3D problems to operate slice by slice.

An important feature, though, would then be support for sparse annotations when training from 3D stacks, as fully annotating 3D images can be very time consuming. For this, it would also be very useful if the annotation tools brought some 3D support (e.g. a 3D wand and interpolation between 2D contours).
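One common way to implement the contour interpolation mentioned above is shape-based interpolation: linearly blend the signed distance transforms of two annotated slices to synthesize the masks in between. A generic sketch of the idea, not an MIA feature:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # Positive inside the object, negative outside (mask is boolean).
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_masks(mask_a, mask_b, n_between):
    """Yield n_between binary masks morphing mask_a into mask_b."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)
        # Threshold the blended distance field to recover a binary mask.
        yield ((1 - t) * da + t * db) > 0
```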