askerlee / segtran

Medical Image Segmentation using Squeeze-and-Expansion Transformers

data problem on Polymorphic Transformers #39

Open kathyliu579 opened 2 years ago

kathyliu579 commented 2 years ago

Hi~ Please help me figure out a couple of questions.

  1. I found that "python3 train2d.py --task refuge --ds train,valid,test --split all --maxiter 10000 --net unet-scratch" should use "--task fundus", otherwise it reports errors (see the corrected command after this list).
  2. The polyp data downloaded from https://github.com/DengPingFan/PraNet (search for "testing data") is not complete. Some images are missing; I guess they are included in their training data. But for the training data, they have several datasets mixed together into two folders (image and mask), so should I manually select images into our folders? Could you have a look?
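
For reference, the corrected command from item 1 (only --task changes):

python3 train2d.py --task fundus --ds train,valid,test --split all --maxiter 10000 --net unet-scratch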

Many thanks in advance.

kathyliu579 commented 2 years ago

Also, when I run the second step of fundus, there are some problems:

(torch17) qianying@merig:~/PycharmProjects/segtran-master/code$ python3 train2d.py --split all --maxiter 3000 --task fundus --net unet-scratch --ds train,valid,test --polyformer source --cp ../model/unet-scratch-refuge-train,valid,test-06072104/iter_7000.pth --sourceopt allpoly

Traceback (most recent call last):
  File "train2d.py", line 939, in <module>
    net = VanillaUNet(n_channels=3, num_classes=args.num_classes, 
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/unet2d/unet_model.py", line 32, in __init__
    self.polyformer = Polyformer(feat_dim=64, args=polyformer_args)
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/polyformer.py", line 117, in __init__
    polyformer_layers.append( PolyformerLayer(str(i), config) )
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/polyformer.py", line 24, in __init__
    self.in_ator_trans  = CrossAttFeatTrans(config, name + '-in-squeeze')
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/segtran_shared.py", line 502, in __init__
    self.out_trans  = ExpandedFeatTrans(config,  name)
  File "/home/qianying/PycharmProjects/segtran-master/code/networks/segtran_shared.py", line 344, in __init__
    if not config.use_mince_transformer or config.mince_scales is None:
AttributeError: 'EasyDict' object has no attribute 'use_mince_transformer'

And I found these options are defined earlier in the argument parser:

############## Mince transformer settings ##############
parser.add_argument("--mince", dest='use_mince_transformer', action='store_true',
                    help='Use Mince (Multi-scale) Transformer to save GPU RAM.')
parser.add_argument("--mincescales", dest='mince_scales', type=str, default=None, 
                    help='A list of numbers indicating the mince scales.')
parser.add_argument("--minceprops", dest='mince_channel_props', type=str, default=None, 
                    help='A list of numbers indicating the relative proportions of channels of each scale.')

Emmm, so what is going wrong here?

kathyliu579 commented 2 years ago

Oh, I saw the previous issue.

        # if not config.use_mince_transformer or config.mince_scales is None:
        self.num_scales     = 0
        self.mince_scales   = None
        self.mince_channels = None
        # else:
        #     # mince_scales: [1, 2, 3, 4...]
        #     self.mince_scales   = config.mince_scales
        #     self.num_scales     = len(self.mince_scales)
        #     self.mince_channel_props = config.mince_channel_props
        #     self.mince_channel_indices, mince_channel_nums = \
        #         fracs_to_indices(self.feat_dim, self.mince_channel_props)

Now I've revised it like this. Is that right?

And then another error occurs:

File "/home/qianying/PycharmProjects/segtran-master/code/networks/segtran_shared.py", line 506, in __init__
    self.keep_attn_scores = config.use_attn_consist_loss
AttributeError: 'EasyDict' object has no attribute 'use_attn_consist_loss'

How should I fix it? I guess the same kind of workaround applies, something like the sketch below?
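
A minimal sketch of what I mean, assuming False is a safe default for this flag (this mirrors my edit above; it may not be the official fix):

    # In CrossAttFeatTrans.__init__ in segtran_shared.py:
    # fall back to False when the polyformer config doesn't define the flag.
    self.keep_attn_scores = getattr(config, 'use_attn_consist_loss', False)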

kathyliu579 commented 2 years ago

I also have a question on fine-tuning "k". A polyformer layer consists of two sub-transformers, 1 and 2. Does this paper only fine-tune the k of sub-transformer 1? Because in the code, I only see:

            for poly_opt_mode in poly_opt_modes:
                if poly_opt_mode == 'allpoly':
                    optimized_params += [ translayers.named_parameters() ]
                elif poly_opt_mode == 'inator':
                    optimized_params += [ translayer.in_ator_trans.named_parameters() for translayer in translayers ]
                elif poly_opt_mode == 'k':
                    optimized_params += [ translayer.in_ator_trans.key.named_parameters()   for translayer in translayers ]
                elif poly_opt_mode == 'v':
                    optimized_params += [ translayer.in_ator_trans.out_trans.first_linear.named_parameters() for translayer in translayers ]
                elif poly_opt_mode == 'q':
                    optimized_params += [ translayer.in_ator_trans.query.named_parameters() for translayer in translayers ]

And in_ator_trans is sub-transformer 1, right?

askerlee commented 2 years ago

Thanks for reporting the bug. I've just corrected "refuge" to "fundus", and I've also simplified the polyformer config. Yes, you are right: fine-tuning k only fine-tunes the k of sub-transformer 1.

kathyliu579 commented 2 years ago

“Also I've simplified the polyformer config.” So which file should I replace?

In addition, may I ask why q and k are shared during training? And why only fine-tune the k of transformer 1? Does it reduce computation cost or improve performance?


kathyliu579 commented 2 years ago

In addition, the polyp dataset I downloaded has a different folder structure... can you please upload your processed data?


askerlee commented 2 years ago

“Also I've simplified the polyformer config.” So which file should I replace?

You can just do a "git pull origin master" to update the code.

May I ask why q and k are shared during training?

Yes, correct. It's explained in the IJCAI paper, page 3.

Why only fine-tune the k of transformer 1?

Because empirically, just fine-tuning the k of transformer 1 already performs well. I didn't try fine-tuning both, and I'm not sure how to intuitively understand the benefit of fine-tuning both layers for domain adaptation.
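
Conceptually, fine-tuning only k amounts to something like the sketch below (a minimal illustration, not the exact repo code; the attribute path net.polyformer.polyformer_layers is assumed):

    import torch

    # Freeze the whole network first.
    for p in net.parameters():
        p.requires_grad = False

    # Unfreeze only the key projections of sub-transformer 1 (in_ator_trans)
    # in each polyformer layer, and optimize just those parameters.
    k_params = []
    for translayer in net.polyformer.polyformer_layers:  # container name assumed
        for p in translayer.in_ator_trans.key.parameters():
            p.requires_grad = True
            k_params.append(p)

    optimizer = torch.optim.Adam(k_params, lr=1e-4)  # learning rate illustrative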

askerlee commented 2 years ago

In addition, the polyp dataset I downloaded has a different folder structure... can you please upload your processed data?

You mean polyp?
For people in China: https://pan.baidu.com/s/1TuiPyQirN4J2hQfQxMMHkQ?pwd=whjl
For people in other countries: https://www.dropbox.com/s/s5v2kotxtvruigp/polyp.tar?dl=0

kathyliu579 commented 2 years ago

Thanks for your help. It works. May I also ask how “Avg” was computed in the tables of your paper? I don't quite understand what it means.

kathyliu579 commented 2 years ago

In addition, can you update the commands for the polyp dataset? I am not sure about the commands for steps 3 and 4 (training and testing on the target domain). Can you have a look? If they are right, you could add them to the README.

python3 train2d.py --task polyp --ds CVC-300 --split train --samplenum 5 --maxiter 1600 --saveiter 40 --net unet-scratch --cp ../model/unet-scratch-polyp-CVC-ClinicDB-train,Kvasir-train-06101057/iter_500.pth --polyformer target --targetopt k --bnopt affine --adv feat --sourceds CVC-ClinicDB-train,Kvasir-train --domweight 0.002 --bs 3 --sourcebs 2 --targetbs 2

especially for the "sourceds ", i am not sure.

python3 test2d.py --gpu 1 --ds CVC-300 --split test --samplenum 5 --bs 6 --task polyp --cpdir .. --net unet-scratch --polyformer target --nosave --iters 40-1600,40

especially for the "split".

kathyliu579 commented 2 years ago

Also, I have two other questions.

  1. I noticed that only the training sets of ClinicDB and Kvasir seem to be used for training. Don't we use the test sets?
  2. Why, for fundus, do we train on the source data with "--ds train,valid,test", but when training on the target domain (step 3), "--sourceds" only includes "train", not "train,valid,test"?
nguyenlecong commented 2 years ago


Excuse me, do you have a problem with the loss function when training the polyformer (source), as shown? [screenshot of the training loss curve]

If not, can you show your loss function and the command line you use? This is my command line:

python3 train2d.py --task polyp --split all --maxiter 3000 --net unet-scratch --polyformer source --modes 2 --ds CVC-ClinicDB-train,Kvasir-train --cp ../model/unet-scratch-polyp-CVC-ClinicDB-train,Kvasir-train-06111827/iter_14000.pth --sourceopt allpoly

Thank you!

kathyliu579 commented 2 years ago

Do you have a problem with the loss function when training the polyformer (source)? ... Can you show your loss function and the command line you use?

Hi, may I ask how you plot the loss function? I have not seen it. If you tell me, I can check mine.

nguyenlecong commented 2 years ago

May I ask how you plot the loss function? If you tell me, I can check mine.

You can load the log from the directory ../model/*/log/event... like this:

    import pandas as pd
    from tensorboard.backend.event_processing import event_accumulator

    ea = event_accumulator.EventAccumulator(logdir)
    ea.Reload()
    loss = pd.DataFrame(ea.Scalars('loss/loss'))

There are also loss/total_ce_loss and loss/total_dice_loss.
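
From there, plotting takes a couple more lines (a sketch, assuming the loss DataFrame from above):

    import matplotlib.pyplot as plt

    # ea.Scalars() returns (wall_time, step, value) records, so the DataFrame
    # has 'step' and 'value' columns that can be plotted directly.
    plt.plot(loss['step'], loss['value'], label='loss/loss')
    plt.xlabel('iteration')
    plt.ylabel('loss')
    plt.legend()
    plt.show()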