Closed. ChaoXiang661 closed this issue 2 years ago.
@ChongjianGE Hi, I'm trying to use a CNN and a Transformer within a KL-distillation model for semantic segmentation, and your paper enlightened me a lot, but I'm still confused. Semantic segmentation usually uses an encoder-decoder architecture, so where is the pretext task used: between the encoder and the decoder, or after the decoder? And what should the predictor look like, still the same as for classification?
Hi @q671383789, this is perhaps a misunderstanding. The pretext task is only used for the pretraining purpose.
After the pretraining stage, we only adopt the pre-trained encoder for the downstream tasks. That is to say, we initialize the CNN encoder (i.e., the ResNet-50 backbone without any predictors or projectors) with the pre-trained weights, and attach a new head (e.g., the Mask R-CNN head) to the backbone for the semantic segmentation training.
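Roughly, the transfer looks like the following sketch (not our actual code). The checkpoint layout (a `state_dict` with an `encoder.` prefix), the checkpoint file name, and the torchvision APIs (a recent torchvision is assumed) are all assumptions for illustration, not the exact CARE code:

```python
import torch
import torchvision

def load_pretrained_backbone(ckpt_path: str) -> torchvision.models.ResNet:
    """Initialize a ResNet-50 with self-supervised pre-trained encoder weights."""
    backbone = torchvision.models.resnet50(weights=None)
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    # Keep only the encoder weights; drop the projector/predictor used during pretraining.
    # The "encoder." prefix is an assumed key layout, not the real CARE checkpoint format.
    encoder_state = {
        k.replace("encoder.", "", 1): v
        for k, v in state_dict.items()
        if k.startswith("encoder.") and "projector" not in k and "predictor" not in k
    }
    missing, unexpected = backbone.load_state_dict(encoder_state, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return backbone

# Attach a new segmentation head to the pre-trained backbone, e.g. torchvision's
# FCN built on ResNet-50 (the checkpoint file name is hypothetical; 21 classes as in PASCAL VOC).
backbone = load_pretrained_backbone("care_pretrained_resnet50.pth")
seg_model = torchvision.models.segmentation.fcn_resnet50(
    weights=None, weights_backbone=None, num_classes=21
)
seg_model.backbone.load_state_dict(backbone.state_dict(), strict=False)
```

Only the backbone weights are transferred; the new head is trained from scratch on the downstream data.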
I see. So whether it is classification or segmentation, the pretraining process is the same?
@q671383789 Exactly. All the downstream tasks (e.g., classification, detection, and segmentation) share the same pretraining process.
Since there are no further questions, I will close this issue. Please feel free to reopen it when necessary.
Hi @ChaoXiang661,
We use the OpenSelfSup repo for detection and segmentation evaluation; a sketch of the backbone-extraction step it expects is below. We will release the trained models for the downstream tasks later. Thanks for your interest in our work.
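Benchmarks of this kind typically consume a backbone-only checkpoint. A rough sketch of extracting one is shown here; the key prefix (`module.encoder_q.`) and the file names are assumptions for illustration, not the actual CARE checkpoint layout:

```python
import torch

def extract_backbone_weights(ckpt_in: str, ckpt_out: str,
                             prefix: str = "module.encoder_q.") -> None:
    """Save a backbone-only checkpoint for downstream benchmark configs."""
    ckpt = torch.load(ckpt_in, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    # Strip the assumed pretraining prefix so the keys match a plain ResNet backbone.
    backbone_only = {
        k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)
    }
    torch.save({"state_dict": backbone_only}, ckpt_out)

extract_backbone_weights("care_pretrain.pth", "care_backbone_only.pth")
```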