choosehappy / QuickAnnotator

An open-source rapid image annotation tool for digital pathology
BSD 3-Clause Clear License

Problem using different value of num_classes #19

Open darshats opened 2 years ago

darshats commented 2 years ago

Hi, I ran into an error when trying to replace the predefined unet with my own that does binary segmentation. From what I can gather, during train_ae the input image is compared with itself as the prediction target. Since the input image (X) is RGB, a 3-channel output (prediction) is expected as well. If I change the unet to a two-class output, I get an error here:

```
_loss1 = criterion(prediction, X)
  File "/media/App/anaconda3/envs/NN/lib/python3.9/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/media/App/anaconda3/envs/NN/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 528, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "/media/App/anaconda3/envs/NN/lib/python3.9/site-packages/torch/nn/functional.py", line 2928, in mse_loss
    expanded_input, expanded_target = torch.broadcast_tensors(input, target)
  File "/media/App/anaconda3/envs/NN/lib/python3.9/site-packages/torch/functional.py", line 74, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)  # type: ignore
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1
```

Is there a better way to plug in a custom unet where n_classes != 3?

Thanks Darshat

choosehappy commented 2 years ago

thanks for the question! let me see if i can suggest something that helps with what you're trying to do

train_ae by definition does 3-channel training, so that the input is RGB and the output is RGB. this helps initialize the model with reasonable features

during actual training (i.e., supervised training to produce a binary mask), we load the same 3-channel model but then replace the last layer with a 2-class binary output layer here:

https://github.com/choosehappy/QuickAnnotator/blob/8a4a9b1bfcf51bc67e3949a990fe524b05606959/train_model.py#L179
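
For illustration, the general pattern is something like the minimal sketch below; the stand-in model and the `last` attribute are made-up names for the example, not QA's exact code:

```python
import torch
import torch.nn as nn

# Stand-in "unet" to show the head-swap pattern (illustrative only).
class TinyUNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.body = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.last = nn.Conv2d(16, n_classes, kernel_size=1)  # output head

    def forward(self, x):
        return self.last(torch.relu(self.body(x)))

# Phase 1 (train_ae): 3-channel in, 3-channel out, trained so that
# prediction == X; the weights are then saved as the base model.
model = TinyUNet(n_classes=3)

# Phase 2 (supervised training): reload those weights, then swap only
# the final 1x1 conv for a 2-class output head.
model.last = nn.Conv2d(model.last.in_channels, 2, kernel_size=1)
out = model(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 2, 256, 256])
```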

Does that help?


darshats commented 2 years ago

Hi, thanks for your reply. To give better context: I'm replacing the entire model with a custom unet that has a resnet50-based encoder, so I can't use the last-layer change you mentioned above.

Ideally I don't want to have to run the train_ae script at all. The problem I've run into is that the model created by the train_ae script in folder 0 is needed for the retrain_dl script to go forward. I think some other internal db structures are also updated by that script.

So for now I kept train_ae as-is, with the model that comes with the app. In retrain_dl I changed it to completely ignore the output of train_ae (the folder 0 model).

That got me past this issue. It would be nice to be able to skip train_ae cleanly.
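
For reference, a resnet50-encoder unet like the one described here can be built with the third-party segmentation_models_pytorch package; this is one possible construction, not something QA ships:

```python
# One way to build a custom unet with a resnet50 encoder and a
# two-class (binary) output, using segmentation_models_pytorch.
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights="imagenet",  # pretrained encoder in place of train_ae init
    in_channels=3,               # RGB input
    classes=2,                   # binary segmentation output
)
```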

Thanks for the reply! Darshat

choosehappy commented 2 years ago

Gotcha. Would another way of doing this, which might have been easier, be to simply create a new train_ae.py script by taking the old script and deleting all the innards?

this would still result in the DB and the file structure being updated accordingly, so that things downstream would work as expected, but would not actually create a model (or you could drop your own model-creation process in there)

Would that have been an option?


darshats commented 2 years ago

Gutting the script didn't quite work: if I trigger the training "from base" option, the API call http://localhost:5555/api/resnet2/retrain_dl?frommodelid=0 fails with: {"error":"Deep learning model 0 doesn't exist"}. It also causes the superpixel algo to error out, though that is more clutter in the logs since I don't use it. To get past this, I had to copy a dummy model into that 0 location.

The other critical functional issue is that I'd prefer making annotations at zoom to get the boundaries right. At the moment zoom works on the original image, not on the annotation window. How difficult would it be to change that? (Crop is not an option since I don't want to create ROIs of differing sizes.)

Thanks, Darshat

choosehappy commented 2 years ago

Yes, sorry, when i said gut it, i meant replace it entirely with your own approach that ends with saving a base model.

if my memory serves me correctly, it could be as simple as a single line of code that copies an existing model of yours into the "0" directory for subsequent use
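
In sketch form it might look like the following; the paths and checkpoint filename are assumptions to adapt to your project layout, not QA's exact conventions:

```python
# Gutted train_ae sketch: skip autoencoder training entirely and just
# drop a pre-built base model into the "0" slot that retrain_dl expects.
import shutil
from pathlib import Path

model_dir = Path("projects/myproject/models/0")  # assumed layout
model_dir.mkdir(parents=True, exist_ok=True)
shutil.copy("my_resnet50_unet_base.pth", model_dir / "best_model.pth")
```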

it's hard to completely remove the need for a base model, since it is also used in the embedding process, so there are a number of additional dependencies floating around in the background which glue everything together

bummer about the superpixels, this is one of my favorite parts :) especially as they get better in line with the DL algorithm! that said, we could add a flag in the config file which disables superpixel generation in cases where it isn't useful. would that be something of interest?

Regarding the zooming: the annotation window should be at a much higher magnification than the original window, is that the case on your side? can you share a screenshot? I'm not quite understanding what you're asking. it's worth noting, though, that you can use the google chrome zoom features to increase the webpage size, which for quick annotator also results in the images becoming much larger


darshats commented 2 years ago

(Let me know if I should start a different thread!) Wrt superpixels, yes, it's restored by copying the custom model over to the 0 folder. And I think I will start using it soon; that plus the embedding-based selection is a very thoughtful feature 👍

Wrt zooming the annotation window, to explain better, below is a screenshot at 125% browser zoom:

[image]

Zoom is needed to mark the nuclei boundaries in this pic since they can be very close. Using browser zoom is not a bad suggestion, but then everything gets bigger. Just having the annotation window magnified would help mark the finer details, especially since you've coded up edge weights in the model :)

choosehappy commented 2 years ago

Thanks for sending this over

I'll admit, i'm a bit confused here, because it looks like your "regular" window is actually somehow bigger than your annotation window, which seems weird to me

This is what I expected it to look like:

[image]

where you can see that the annotation window is a much higher magnification version of the input image, and also would allow for accurate cell-level segmentation/annotation

I'm wondering why this wouldn't be true in your case... what size are the images you're uploading?

darshats commented 2 years ago

All the images I use are 256x256, and the patch size is also 256. I keep them the same because I use the import annotation script, which assigns annotations to train/test. If I update an annotation, it will also assign an ROI to train/test. To keep image sizes for training uniform I set size=256 everywhere.

choosehappy commented 2 years ago

let me think about it and get back to you

notably, in the import script, you should be able to assign ROIs to either training or testing, so perhaps that has some additional flexibility?


darshats commented 2 years ago

That's a good option in the import script. The issue I run into is that if I update an imported annotation in the tool, the UI doesn't show whether it already belongs to train or test. On saving I have to assign it again. This probably affects the earlier assignment and skews the ratio.

In any case, the solution I'd like is to have a magnified view in the tool. I'm using a Wacom pad for annotation to get the boundaries right.

choosehappy commented 2 years ago

i think we should be able to pretty easily give an indication (e.g., change the border color) of the annotated ROIs so that it is clear whether they are in the training/testing set... I'll make a new issue for this

my javascript is terrible so I've asked the development team to look into it : )

> In any case, the solution I'd like to have is to be able to have a magnified view in the tool.

I've also asked them to look into this.

if you're looking for a super quick hack, you can simply resize your 256x256 images to, say, 2x the size and work on those, keeping the same "annotation box area". this way the 256x256 image (now 512x512) would appear twice as large in the annotation window
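
A quick sketch of that hack with Pillow; the directory names are placeholders:

```python
# Upscale 256x256 tiles to 512x512 before uploading so they render
# twice as large in the annotation window.
from pathlib import Path
from PIL import Image

src, dst = Path("tiles"), Path("tiles_2x")  # placeholder directories
dst.mkdir(exist_ok=True)
for p in src.glob("*.png"):
    img = Image.open(p)
    # for binary masks, use Image.NEAREST instead to avoid blended labels
    img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
    img.save(dst / p.name)
```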

appreciate it isn't ideal, but it is something you can do immediately

choosehappy commented 2 years ago

Coming back to this, just merged in #22 which will give you the ability to zoom : )

Can you please take a look and provide feedback?

darshats commented 2 years ago

Sure, I will try and get back. Thanks!

darshats commented 2 years ago

Hi, the zoom partially works. It would be preferable to have scrollbars on the annotation window, because it quickly goes beyond the boundary of the browser, see screenshot:

[image: QA]

choosehappy commented 2 years ago

hmmmm, sorry, i'm not entirely sure i'm seeing what you're saying. can you please try explaining a bit more?
