xh-liu / HydraPlus-Net

Can you share your code with me? #2

Open LBB-Liuwl opened 7 years ago

LBB-Liuwl commented 7 years ago

Hi, I'm very interested in your results. Could you share your code with me?

xh-liu commented 7 years ago

Hi! Thank you for your interest! This project is built on the Caffe framework (https://github.com/BVLC/caffe) with only minor code changes. We only change some of the network structure (e.g., adding attention modules), which can be done simply by modifying the prototxt files. The details of the framework can be found in the paper, and if you have any questions about the structure, please feel free to ask me.
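For readers looking for a concrete starting point, below is a minimal pycaffe NetSpec sketch of how an attention branch can be wired at the prototxt level: a 1x1 convolution produces an attention map that reweights a feature map elementwise. The blob names, channel counts, and the single-attention-map simplification are illustrative assumptions, not the paper's exact MDA module.

```python
# Sketch only: generates a prototxt fragment for a simple attention block.
# Names, sizes, and the one-map simplification are assumptions.
import caffe
from caffe import layers as L, params as P

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 256, 28, 28]))          # feature map from some inception block
n.att = L.Convolution(n.data, num_output=1, kernel_size=1)  # 1x1 conv -> one attention map
n.att_norm = L.Sigmoid(n.att)                               # squash attention values to (0, 1)
n.att_tiled = L.Tile(n.att_norm, axis=1, tiles=256)         # broadcast the map over channels
n.attended = L.Eltwise(n.data, n.att_tiled,
                       operation=P.Eltwise.PROD)            # elementwise feature reweighting
print(n.to_proto())                                         # emit the prototxt fragment
```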

ysc703 commented 7 years ago

Hi! The paper says that the HP-net was trained in a stage-wise fashion. Which loss is used when training the M-net and fine-tuning the AF-net? Could you share the Caffe prototxt files? Thank you!

xh-liu commented 7 years ago

Hi! In each stage of training, we always use the weighted sigmoid cross-entropy loss, as described in the paper. The weights for positive and negative examples balance their contributions to the loss. We will release detailed code and prototxt files later. Thank you!
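For illustration, here is a minimal numpy sketch of a weighted sigmoid cross-entropy of this kind. The exponential weighting based on per-attribute positive ratios is one common choice and is an assumption here, not the released implementation.

```python
# Sketch only: the weighting form (exponential in the positive ratio) is assumed.
import numpy as np

def weighted_sigmoid_ce(logits, labels, pos_ratio, eps=1e-12):
    """logits, labels: (N, A) arrays; pos_ratio: (A,) fraction of positives per attribute."""
    p = 1.0 / (1.0 + np.exp(-logits))
    # up-weight the rarer side of each attribute so positive and negative
    # samples contribute comparably to the loss
    w_pos = np.exp(1.0 - pos_ratio)
    w_neg = np.exp(pos_ratio)
    loss = -(w_pos * labels * np.log(p + eps)
             + w_neg * (1.0 - labels) * np.log(1.0 - p + eps))
    return loss.mean()
```

For a balanced attribute (pos_ratio = 0.5) both weights equal e^0.5, so the loss reduces to an ordinary sigmoid cross-entropy up to a constant scale.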

ysc703 commented 7 years ago

@xh-liu Thanks!

chl916185 commented 7 years ago

Hi, I'm very interested in your paper. Could you share your code with me? @xh-liu

Li1991 commented 6 years ago

Could you release the detailed code and prototxt files? What I have implemented cannot reproduce your results. Thank you very much! @xh-liu

bilipa commented 6 years ago

@Li1991 How do you combine the results?

xh-liu commented 6 years ago

We will release the detailed code later. Thank you for your interest!

ysc703 commented 6 years ago

Hi @xh-liu, will the model or the prototxt files be released soon?

xh-liu commented 6 years ago

I have added some example prototxts in the prototxt_example folder. a0 and a3 are two of the nine branches in total; you can re-implement the other branches based on them. fusion denotes the whole net that fuses the features from the nine branches and the main branch. For computational simplicity, we extract the features of the nine branches offline and use the extracted features to train the final fusion layer and classifiers.
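As a rough sketch of that offline extraction step with pycaffe: the file names, the 'feat' output blob name, and the .npy storage format below are assumptions, not the authors' pipeline.

```python
# Sketch: extract one branch's features offline and save them for fusion training.
import caffe
import numpy as np

caffe.set_mode_gpu()
net = caffe.Net('branch_a0.prototxt', 'branch_a0.caffemodel', caffe.TEST)  # assumed file names

# standard pycaffe preprocessing: HWC float image -> CHW, RGB -> BGR
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_channel_swap('data', (2, 1, 0))

image_paths = ['img_0001.jpg']                       # placeholder dataset list
feats = []
for path in image_paths:
    im = caffe.io.load_image(path)
    net.blobs['data'].data[0] = transformer.preprocess('data', im)
    net.forward()
    feats.append(net.blobs['feat'].data[0].copy())   # 'feat': branch output blob (assumed name)

np.save('branch_a0_feats.npy', np.stack(feats))      # reload these to train the fusion net
```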

Li1991 commented 6 years ago

Hi, after looking at your example prototxts, I found that you use a layer called NNInterp, but I cannot find it. Could you please provide the original code for this layer? Thank you! @xh-liu

xh-liu commented 6 years ago

@Li1991 I have added the code for the NNInterp layer in the 'layers' folder.
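For anyone who cannot build the C++ layer, a Caffe Python-layer approximation of nearest-neighbor upsampling might look like the sketch below. The released code in 'layers' is authoritative; the fixed 2x zoom factor here is an assumption.

```python
# Sketch of nearest-neighbor upsampling as a Caffe Python layer; zoom factor assumed.
import caffe
import numpy as np

class NNInterpLayer(caffe.Layer):
    def setup(self, bottom, top):
        self.zoom = 2  # assumed upsampling factor

    def reshape(self, bottom, top):
        n, c, h, w = bottom[0].data.shape
        top[0].reshape(n, c, h * self.zoom, w * self.zoom)

    def forward(self, bottom, top):
        # repeat each spatial element zoom times along H and W
        top[0].data[...] = bottom[0].data.repeat(self.zoom, axis=2) \
                                         .repeat(self.zoom, axis=3)

    def backward(self, top, propagate_down, bottom):
        if not propagate_down[0]:
            return
        n, c, h, w = bottom[0].data.shape
        # gradient of nearest-neighbor upsampling: sum over each zoom x zoom block
        g = top[0].diff.reshape(n, c, h, self.zoom, w, self.zoom)
        bottom[0].diff[...] = g.sum(axis=(3, 5))
```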

Li1991 commented 6 years ago

Thank you for your kindness. @xh-liu And where is your Python layer, FeatureConcatDataLayer?
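That layer does not appear to be in the repository. As a guess at its role, a Python data layer that loads pre-extracted branch features and concatenates them could look roughly like this; the file layout, param_str format, and blob shapes are all assumptions, not the authors' code.

```python
# Sketch of a feature-concatenating data layer; everything below is assumed.
import caffe
import numpy as np

class FeatureConcatDataLayer(caffe.Layer):
    def setup(self, bottom, top):
        # assumed param_str format: "a0_feats.npy,a1_feats.npy;labels.npy;32"
        feat_files, label_file, batch = self.param_str.split(';')
        self.feats = np.concatenate(
            [np.load(f) for f in feat_files.split(',')], axis=1)  # (N, sum of feature dims)
        self.labels = np.load(label_file)                         # (N, num_attributes)
        self.batch = int(batch)
        self.cursor = 0

    def reshape(self, bottom, top):
        top[0].reshape(self.batch, self.feats.shape[1])
        top[1].reshape(self.batch, self.labels.shape[1])

    def forward(self, bottom, top):
        idx = np.arange(self.cursor, self.cursor + self.batch) % len(self.feats)
        top[0].data[...] = self.feats[idx]
        top[1].data[...] = self.labels[idx]
        self.cursor = (self.cursor + self.batch) % len(self.feats)

    def backward(self, top, propagate_down, bottom):
        pass  # data layers have no gradient
```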