FAU-DLM / wsi_processing_pipeline

This library helps with the key pre- and post-processing steps necessary to use whole-slide images in deep-learning/AI projects.

Using the fastai library to run inference on ImageScope data #4

Open Monk5088 opened 2 years ago

Monk5088 commented 2 years ago

Hey everyone,

I have trained an object detection model in fastai on the MIDOG 2021 dataset, and it is working well. I have a minor issue while running inference on two completely new images: Aperio slides in .scn format, with annotations created in Aperio ImageScope and exported as XML.

I trained the model using the fastai input pipeline from the tutorial notebook provided by the MIDOG challenge team on their website: https://colab.research.google.com/drive/1uQNnpldgypSWX304QrVzvRjN5cKygn0B?usp=sharing#scrollTo=p1sTkpB79Zs8. That notebook builds on Christian Marzahl's object detection library for fastai: https://github.com/ChristianMarzahl/ObjectDetection. The input pipeline takes a set of .tiff images plus annotations in MS COCO .json format and creates a fastai ImageDataBunch from them.

My new data consists of a single .scn image and its corresponding .xml file, whose annotations are ellipses described by two corners of their bounding box (top-left and bottom-right coordinates). How should I feed this into my trained model for inference and testing? Any resource, code snippet, notebook, etc. that could help with this would be greatly appreciated.

Thanking you all in advance,
Harshit
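One way to get patches out of the .scn for inference is OpenSlide, which can read Leica SCN slides. A minimal sketch, assuming openslide-python is installed; the file path, coordinates, and tile size below are placeholders, not values from this thread:

```python
import openslide

# Placeholder path; OpenSlide's Leica backend handles .scn files.
slide = openslide.OpenSlide("path/to/slide.scn")
print(slide.dimensions)        # (width, height) at level 0
print(slide.level_downsamples) # available resolution levels

# read_region takes level-0 coordinates and returns an RGBA PIL image;
# the location and size here stand in for a real tiling loop.
patch = slide.read_region(location=(6400, 23900), level=0, size=(512, 512))
patch = patch.convert("RGB")   # drop the alpha channel before inference
```

From there, each patch could be passed through the trained fastai model tile by tile, with any detections mapped back to level-0 slide coordinates.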

Monk5088 commented 2 years ago
```xml
<Annotations MicronsPerPixel="0.500000">
    <Annotation Id="1" Name="" ReadOnly="0" NameReadOnly="0" LineColorReadOnly="0" Incremental="0" Type="4" LineColor="65280" Visible="1" Selected="1" MarkupImagePath="" MacroName="">
        <Attributes>
            <Attribute Name="Description" Id="0" Value=""/>
        </Attributes>
        <Regions>
            <RegionAttributeHeaders>
                <AttributeHeader Id="9999" Name="Region" ColumnWidth="-1"/>
                <AttributeHeader Id="9997" Name="Length" ColumnWidth="-1"/>
                <AttributeHeader Id="9996" Name="Area" ColumnWidth="-1"/>
                <AttributeHeader Id="9998" Name="Text" ColumnWidth="-1"/>
                <AttributeHeader Id="1" Name="Description" ColumnWidth="-1"/>
            </RegionAttributeHeaders>
            <Region Id="1" Type="2" Zoom="1" Selected="0" ImageLocation="" ImageFocus="0" Length="138.3" Area="1517.4" LengthMicrons="69.2" AreaMicrons="379.3" Text="Mitosis" NegativeROA="0" InputRegionId="0" Analyze="0" DisplayId="1">
                <Attributes/>
                <Vertices>
                    <Vertex X="6595" Y="24009"/>
                    <Vertex X="6642" Y="24051"/>
                </Vertices>
            </Region>
            <Region Id="2" Type="2" Zoom="1" Selected="0" ImageLocation="" ImageFocus="0" Length="119.7" Area="1121.5" LengthMicrons="59.9" AreaMicrons="280.4" Text="Mitosis" NegativeROA="0" InputRegionId="0" Analyze="0" DisplayId="2">
                <Attributes/>
                <Vertices>
                    <Vertex X="7032" Y="26205"/>
                    <Vertex X="7074" Y="26239"/>
                </Vertices>
            </Region>
        </Regions>
        <Plots/>
    </Annotation>
</Annotations>
```

This is a cropped part of the XML containing the annotations (I can't disclose the full XML file due to my institute's confidentiality rules). The .scn file is a standard WSI scanned at 20x zoom, and I already have the image dimensions.
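A minimal sketch of how the annotation format above could be parsed with the standard library, assuming each ellipse Region stores exactly two Vertex elements spanning the bounding box (top-left and bottom-right corners); the function name and file name are hypothetical:

```python
import xml.etree.ElementTree as ET

def read_imagescope_boxes(xml_path):
    """Return a list of (x_min, y_min, x_max, y_max, label) tuples."""
    tree = ET.parse(xml_path)
    boxes = []
    for region in tree.getroot().iter("Region"):
        xs, ys = [], []
        for vertex in region.iter("Vertex"):
            xs.append(float(vertex.get("X")))
            ys.append(float(vertex.get("Y")))
        if not xs:
            continue
        label = region.get("Text") or "unlabelled"
        boxes.append((min(xs), min(ys), max(xs), max(ys), label))
    return boxes

boxes = read_imagescope_boxes("annotations.xml")  # hypothetical file name
# e.g. [(6595.0, 24009.0, 6642.0, 24051.0, 'Mitosis'), ...]
```

From there, the boxes could be re-serialised into the MS COCO .json layout the training pipeline already expects, so the same ImageDataBunch code path could be reused for the new slide.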