liutinglt / CE2P


details #14

Open YOKE opened 5 years ago

YOKE commented 5 years ago

Could you please explain label_rev and how to obtain it? And how do I know whether the native CUDA implementation of InPlace-ABN was built successfully? Thanks for your reply!

YOKE commented 5 years ago

I can't find the label_rev. Could you help me? @liutinglt @eng100200

eng100200 commented 5 years ago

In the train dataset, label_rev is already provided.

eng100200 commented 5 years ago

Please download the ImageNet-pretrained ResNet-101, the edge label files, and the trained models from Baidu Drive or Google Drive, and put them into the dataset folder.

YOKE commented 5 years ago

> In the train dataset, label_rev is already provided.

@eng100200 The Baidu Drive / Google Drive contains the ImageNet-pretrained ResNet-101, the edge label files, and the trained models, but I didn't find the reversed images there. And in LIP I only have the training and annotation images, so I still don't know where they are. Could you tell me more specifically? Thank you very much!

liutinglt commented 5 years ago

@BEYBB7 Just flip the labels horizontally. Note that for the flipped labels, you must swap the left and right classes.
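A minimal sketch of how the reversed labels could be generated, assuming LIP-style class IDs for the left/right part pairs (verify the IDs against the label definition you actually use):

```python
import numpy as np
from PIL import Image

# Assumed left/right class-ID pairs (e.g. arms, legs, shoes in LIP);
# these are an assumption, not taken from the repository.
LEFT_RIGHT_PAIRS = [(14, 15), (16, 17), (18, 19)]

def make_label_rev(label_path, out_path):
    label = np.array(Image.open(label_path))    # H x W map of class IDs
    flipped = label[:, ::-1].copy()             # horizontal flip
    swapped = flipped.copy()
    for left_id, right_id in LEFT_RIGHT_PAIRS:  # swap left/right classes
        swapped[flipped == left_id] = right_id
        swapped[flipped == right_id] = left_id
    Image.fromarray(swapped).save(out_path)
```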

YOKE commented 5 years ago

> Just flip the labels horizontally. Note that for the flipped labels, you must swap the left and right classes.

Have you provided the reversed images? I didn't find them. @liutinglt

liutinglt commented 5 years ago

@BEYBB7 I haven't provided them, as they are easy to generate on your own.

YOKE commented 5 years ago

> I haven't provided them, as they are easy to generate on your own.

Got it, thank you!

YOKE commented 5 years ago

@liutinglt What is the difference between the pretrained models LIP_CE2P_train.pth, LIP_CE2P_trainVal_321_681.pth, and LIP_CE2P_train_473.pth? Could you explain how each of them was trained?

eng100200 commented 5 years ago

LIP_CE2P_train.pth: model obtained using the training images only.

LIP_CE2P_trainVal_321_681.pth: model obtained using the validation images as training images.

LIP_CE2P_train_473.pth: model obtained after resizing the images to 473 x 473.

YOKE commented 5 years ago

Could you share the details of your ResNet-101 baseline implementation, especially the decoder part? @liutinglt

zzw1123 commented 5 years ago

@liutinglt Dear Ting, I have read your paper and it is excellent work! I have a question about the ImageNet pre-trained model: is it downloaded directly from the deeplab_v2 project, or have you modified it?

liutinglt commented 5 years ago

@BEYBB7 The baseline performance is obtained by predicting directly from the layer4 module with a 1x1 convolution. Remove layer5, edgelayer, layer6 and layer7 in models.py, and replace them with nn.Conv2d(2048, num_classes, kernel_size=1, padding=0, dilation=1, bias=True).
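A rough sketch of that baseline, assuming a `backbone` module that returns the 2048-channel layer4 feature map (this is not the author's exact code):

```python
import torch.nn as nn

class Baseline(nn.Module):
    """ResNet-101 baseline: predict directly from layer4 with a 1x1 conv,
    in place of the CE2P decoder/edge modules (layer5, edgelayer, layer6, layer7)."""

    def __init__(self, backbone, num_classes):
        super().__init__()
        self.backbone = backbone  # dilated ResNet-101 producing 2048-channel layer4 features
        self.classifier = nn.Conv2d(2048, num_classes, kernel_size=1,
                                    padding=0, dilation=1, bias=True)

    def forward(self, x):
        feat = self.backbone(x)       # layer4 feature map
        return self.classifier(feat)  # per-pixel class logits (upsample before the loss)
```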

liutinglt commented 5 years ago

@zzw1123 The pretrained model is converted from DeepLab_v2, which has been pretrained on the MS-COCO dataset. It's provided in https://github.com/speedinghzl/Pytorch-Deeplab and https://github.com/isht7/pytorch-deeplab-resnet.

YOKE commented 5 years ago

> The baseline performance is obtained by predicting directly from the layer4 module with a 1x1 convolution. Remove layer5, edgelayer, layer6 and layer7 in models.py, and replace them with nn.Conv2d(2048, num_classes, kernel_size=1, padding=0, dilation=1, bias=True).

Is there any other setting that differs? I can't reproduce your result of almost 48. @liutinglt

zzw1123 commented 5 years ago

@BEYBB7 Me neither. My mIoU is only 35.47%; what is your result?

YOKE commented 5 years ago

> Me neither. My mIoU is only 35.47%; what is your result?

About 41%.

zzw1123 commented 5 years ago

@BEYBB7 Do you use the same parameters as those in the paper?

YOKE commented 5 years ago

> Do you use the same parameters as those in the paper?

Yes, I didn't change any parameter.

zzw1123 commented 5 years ago

@BEYBB7 That is confusing... @liutinglt Could you please help us?

GengDavid commented 5 years ago

@BEYBB7 @zzw1123 I cannot reproduce the results either. By the way, may I know how many GPUs you used?

zzw1123 commented 5 years ago

@GengDavid When I tried to reproduce the baseline result, I used 5 GPUs and the same learning rate as mentioned in the paper. How about you?

liutinglt commented 5 years ago

@YOKE @eng100200 @zzw1123 @GengDavid As there are some strange problems with PyTorch 0.3.1, please use the updated code with PyTorch 0.4.1.

YOKE commented 5 years ago

There is an error when I use your new project; could you help me? [screenshot of the error]

@liutinglt

liutinglt commented 5 years ago

@YOKE The 'modules' directory is deleted in this version. Please build the libs following the README, and use 'CE2P.py' in 'networks'.

YOKE commented 5 years ago

> The 'modules' directory is deleted in this version. Please build the libs following the README, and use 'CE2P.py' in 'networks'.

It's the same problem. @liutinglt

YOKE commented 5 years ago

Is there something different I need to check? @liutinglt

zzw1123 commented 5 years ago

@liutinglt My code gets stuck during the forward pass, and I found that it stops before layer4 in resnet101. Do you know why?

YOKE commented 5 years ago

> My code gets stuck during the forward pass, and I found that it stops before layer4 in resnet101. Do you know why?

Have you met my error: undefined symbol PyInt_FromLong?

liutinglt commented 5 years ago

@YOKE Did you delete libs/_ext and rebuild it yourself?

liutinglt commented 5 years ago

@zzw1123 First, you can try a smaller batch size. Or you can use PyTorch 1.0: just download 'modules' from https://github.com/mapillary/inplace_abn and rename it to 'libs'.

zzw1123 commented 5 years ago

@liutinglt Thanks! After changing the batch size from 24 to 15, it works well.

zzw1123 commented 5 years ago

@liutinglt Hi, me again. I found that during training the GPU utilization sometimes drops to 0%. Is this because of the image preprocessing, such as scaling and affine transforms?

994374821 commented 5 years ago

> @liutinglt What is the difference between the pretrained models LIP_CE2P_train.pth, LIP_CE2P_trainVal_321_681.pth, and LIP_CE2P_train_473.pth? Could you explain how each of them was trained?

Hi, where did you find the pretrained models LIP_CE2P_train.pth, LIP_CE2P_trainVal_321_681.pth, and LIP_CE2P_train_473.pth? I cannot find a link to download them.

YOKE commented 5 years ago

The project has been updated and the link has been deleted.