Jianlong-Fu / Recurrent-Attention-CNN


Implementation in pytorch #15

Open jeong-tae opened 6 years ago

jeong-tae commented 6 years ago

Hi

I am working on an implementation to reproduce this paper in PyTorch, but I am stuck on pre-training the APN network.

The original code doesn't give details about learning the APN network (step 2), or about the convergence condition. If the loss fluctuates forever, when should I stop training?

Has anyone made progress reproducing this? The test code is useless for reproducing the results. How can we try RA-CNN on other public datasets?

If anyone is interested in reproducing this, please contact me so we can discuss the training details further.

Ostnie commented 6 years ago

@jeong-tae Hi, I'm also trying to reproduce this paper in TensorFlow, and I also have some trouble with the APN. For your question, I think we should use early stopping during training.

Besides this, I have some doubts about the APN. As I understand it, the input is a batch of images and we get a set of points (tx, ty, tl) for the attended region, so should we use these three-dimensional points to crop the current batch of images for training? If so, when can we move on to the next batch of data?

jeong-tae commented 6 years ago

@Ostnie I think we use the points to crop the current batch; the points refer to the current images, so it must be that way. I'm not sure what is confusing you.

Actually, I did use early stopping for the APN pretraining. But when should it trigger? The loss does not converge well.

Ostnie commented 6 years ago

@jeong-tae As you said, we should crop the current image, send it to VGG19, and then use its loss to update the APN parameters. Then we get three new points; should we repeat the previous steps again?

I'm really confused about the APN's loss; I'm not sure how to calculate it. I guess it depends on VGG19's classification. As in formula 8, loss = rank loss + cross-entropy loss, is that right?

jeong-tae commented 6 years ago

Following the paper, we should repeat this two times. The losses are not backpropagated together: the rank loss is for the APN, and the cross-entropy loss is for the conv/classifier layers.

As the authors said, they should be calculated in an alternating way.
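
A hedged sketch of what such alternating training could look like in PyTorch; the parameter groups and the loss callables are illustrative stand-ins, not the authors' code:

```python
import torch

def train_alternating(cls_params, apn_params, cls_loss_fn, rank_loss_fn,
                      batches):
    """Alternate the two phases: first update only the conv/classifier
    parameters with the cross-entropy loss, then freeze them (by simply
    not stepping their optimizer) and update only the APN parameters
    with the rank loss."""
    opt_cls = torch.optim.SGD(cls_params, lr=1e-3)
    opt_apn = torch.optim.SGD(apn_params, lr=1e-4)
    for opt, loss_fn in ((opt_cls, cls_loss_fn), (opt_apn, rank_loss_fn)):
        for batch in batches:
            loss = loss_fn(batch)
            opt.zero_grad()
            loss.backward()
            opt.step()  # steps only this phase's parameter group
```

Since each optimizer holds only one parameter group, each phase's backward pass can only move its own weights, which is the "not backpropagated together" behavior described above.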

Ostnie commented 6 years ago

@jeong-tae Yes, you are right. Then I have some doubts about the rank loss: is it calculated from the output of the softmax layers in VGG19? I find that strange, because the loss then contains information about that network's parameters. Can we use VGG's loss to update the APN? I don't know how to do this; could you please show me some code for it?

jeong-tae commented 6 years ago

Yes, it is. You can use the output of the softmax layer. I calculated the loss like this: rank_loss = (pred[i] - pred[i+1] + 0.05).clamp(min=0). Why can't we use a loss that involves the network parameters?

I think the purpose of the rank loss is to close the performance gap between scales. By doing this, the APN will propose a more precise region to increase the performance at each scale.
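
Written out, a rank loss of that shape (margin 0.05, as in the snippet above) might look like the following; `pairwise_rank_loss` is my own helper name:

```python
import torch

def pairwise_rank_loss(pt_coarse, pt_fine, margin=0.05):
    """Hinge-style rank loss between adjacent scales: the finer
    scale's probability on the true class should beat the coarser
    scale's by at least `margin`, otherwise a penalty is paid.
    pt_coarse / pt_fine: softmax probabilities of the true class
    at scale s and scale s+1, shape [batch]."""
    return (pt_coarse - pt_fine + margin).clamp(min=0).mean()

# toy example: the finer scale already wins by more than the
# margin, so the loss term is zero
coarse = torch.tensor([0.60, 0.50])
fine = torch.tensor([0.80, 0.70])
zero_loss = pairwise_rank_loss(coarse, fine)
```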

Ostnie commented 6 years ago

When I learned the backpropagation algorithm, I learned that the loss is not just a number showing the difference between the prediction and the ground truth; it also carries information about how each parameter in the network affects the final loss. If we use VGG's loss value, that loss contains no APN information. Although the two share most layers, the last few fully connected layers are independent of each other. In other words, if you give me VGG's loss value and ask me to backpropagate it to optimize the APN's parameters, I don't think it can be done.

I may be wrong, but based on the backpropagation algorithm as I have derived it, I really cannot understand this method.

jeong-tae commented 6 years ago

The rank loss is the gap between VGG1 and VGG2. You can think of it like meta-learning that teaches the difference between two networks (in this case VGG1 and VGG2). The gap arises between the different scales produced by the attention, so the APN learns where we should focus. If the gap is large enough, the APN will try to reduce it by proposing a better attention region.

Ostnie commented 6 years ago

@jeong-tae This makes me confused. It seems to be right, but how can I backpropagate VGG's loss to the APN? I can't understand it and it really upsets me.

In TensorFlow, I don't know how to set the APN's loss from VGG's loss. Could you please show me how PyTorch accomplishes this step?

jeong-tae commented 6 years ago

Oh, you mean the backward pass for the APN? I actually implemented the backward code following the Caffe code, in the attention crop layer.

I will finish the code soon and make it public. Then you can see the whole process as well!
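
For reference, the attention crop in the paper is made differentiable by building a soft boxcar mask out of sigmoids, so that the cropped image depends smoothly on (tx, ty, tl) and VGG's loss can flow back into the APN through it. A rough PyTorch sketch, assuming a square input; the slope constant `k` and the helper name are my choices:

```python
import torch

def attention_mask(tx, ty, tl, size, k=10.0):
    """Soft crop mask in the spirit of the paper's boxcar function:
    M(x, y) = [h(x - (tx - tl)) - h(x - (tx + tl))]
            * [h(y - (ty - tl)) - h(y - (ty + tl))],
    where h is a sigmoid with slope k. Multiplying the image by M
    approximates cropping the square centered at (tx, ty) with
    half-length tl, and is differentiable w.r.t. tx, ty, tl."""
    xs = torch.arange(size, dtype=torch.float32)
    ys = torch.arange(size, dtype=torch.float32)
    h = lambda v: torch.sigmoid(k * v)
    mx = h(xs - (tx - tl)) - h(xs - (tx + tl))  # [size], along x
    my = h(ys - (ty - tl)) - h(ys - (ty + tl))  # [size], along y
    return my[:, None] * mx[None, :]            # [size, size], rows = y

# gradients reach the APN outputs through the mask
tx = torch.tensor(24.0, requires_grad=True)
ty = torch.tensor(24.0, requires_grad=True)
tl = torch.tensor(12.0, requires_grad=True)
mask = attention_mask(tx, ty, tl, 48)
mask.sum().backward()  # tx.grad, ty.grad, tl.grad are now populated
```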

Ostnie commented 6 years ago

@jeong-tae https://github.com/Charleo85/DeepCar This library may help you; it is written in PyTorch.

jeong-tae commented 6 years ago

@Ostnie oh, very nice! thx!

jeong-tae commented 6 years ago

@Ostnie I published the code and need some help. If you're still interested in an implementation, even in another framework, come to https://github.com/jeong-tae/RACNN-pytorch and let's work together.

Ostnie commented 6 years ago

@jeong-tae Oh, great! I will study it soon. I'm not familiar with PyTorch, but let's have a try first!

jackshaw commented 5 years ago

Hi @jeong-tae, I'm trying to reproduce RA-CNN too. I have some doubts about the data preprocessing. In PyTorch, image pixels are rescaled to [0, 1], which is different from Caffe's [0, 255]. Do you think this difference will influence the performance?

jeong-tae commented 5 years ago

@jackshaw Hello jackshaw, I am not sure what you mean. Do you mean normalization, or subtracting the mean? Either way, it probably won't change things too much... maybe. But it does influence the performance.

https://stackoverflow.com/questions/4674623/why-do-we-have-to-normalize-the-input-for-an-artificial-neural-network This answer will help you understand data preprocessing.
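
To make the difference concrete: PyTorch-pretrained VGG weights typically expect inputs scaled to [0, 1] and normalized with the ImageNet channel statistics, while Caffe models usually subtract a per-channel pixel mean from 0-255 BGR inputs. A minimal sketch of the PyTorch-style normalization:

```python
import torch

# ImageNet mean/std commonly used with PyTorch-pretrained VGG weights
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def normalize(img):
    """img: float tensor in [0, 1], shape [3, H, W].
    Returns the per-channel standardized image."""
    return (img - mean) / std
```

So weights pretrained under one convention will see badly shifted inputs under the other, which is one plausible reason for an accuracy drop when porting between frameworks.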

jackshaw commented 5 years ago

@jeong-tae Thanks very much for your reply. Did you ever try the available Caffe pretrained model? I can only get 74% accuracy, far from 85%. I think I must be missing some important details when preparing my test data, but I can't figure out what. I just resized the shortest side of each image and then converted the resized images to LMDB format.

jeong-tae commented 5 years ago

Nope, I didn't. In PyTorch there is resize preprocessing matching what the paper used; you can easily find it in the PyTorch docs.
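
The shortest-side resize mentioned above can also be sketched with plain PyTorch; the function name is mine and the target size is whatever the model expects:

```python
import torch
import torch.nn.functional as F

def resize_shortest_side(img, target):
    """Resize so the shortest side equals `target`, keeping the
    aspect ratio (bilinear). img: float tensor, shape [C, H, W]."""
    _, h, w = img.shape
    scale = target / min(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    # interpolate expects a batch dimension, so add and strip one
    return F.interpolate(img[None], size=(new_h, new_w),
                         mode="bilinear", align_corners=False)[0]
```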

bluemandora commented 5 years ago

@jeong-tae I think step 2 is something like:

  1. Initialize the network with VGG weights pre-trained on ImageNet.
  2. Forward-propagate the images and take the feature maps after conv5_4.
  3. Find a square (x, y, l) with half the side length of the original image that maximizes the sum of values in the corresponding area of the feature map.
  4. Train the APN (only the APN part) against this ground truth (x, y, l) with a loss like MSE.

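A minimal PyTorch sketch of step 4, assuming an APN head that regresses (tx, ty, tl) from conv5_4 features; the layer sizes and all names here are my own placeholders, not the authors' code:

```python
import torch
import torch.nn as nn

# hypothetical APN head: conv5_4 features -> (tx, ty, tl)
apn = nn.Sequential(
    nn.Flatten(),
    nn.Linear(512 * 14 * 14, 1024),
    nn.Tanh(),
    nn.Linear(1024, 3),
)
optimizer = torch.optim.SGD(apn.parameters(), lr=1e-3)
criterion = nn.MSELoss()

features = torch.randn(4, 512, 14, 14)  # stand-in for conv5_4 output
target = torch.rand(4, 3)               # pseudo ground truth (tx, ty, tl)

# one pretraining step: regress the pseudo boxes, update only the APN
pred = apn(features)
loss = criterion(pred, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
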
jeong-tae commented 5 years ago

I think so too, exactly the same! I tried it that way but I couldn't reproduce the result. I will try again soon.

lmy418lmy commented 4 years ago

Could you send me the source code in Caffe?

flash1803 commented 4 years ago

@jeong-tae I think step 2 is something like:

  1. Initialize the network with VGG weights pre-trained on ImageNet.
  2. Forward-propagate the images and take the feature maps after conv5_4.
  3. Find a square (x, y, l) with half the side length of the original image that maximizes the sum of values in the corresponding area of the feature map.
  4. Train the APN (only the APN part) against this ground truth (x, y, l) with a loss like MSE.

How can I get the ground truth (x, y, l)?
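
If I read step 3 above correctly, the pseudo ground truth is not an annotation: you slide a half-size window over the channel-summed feature response and keep the window with the largest total activation. A rough PyTorch sketch (the helper name is mine):

```python
import torch

def pseudo_box(feat, half):
    """feat: feature map [C, H, W] (e.g. conv5_4 output for one image).
    Slides a (2*half) x (2*half) window over the channel-summed
    response and returns the center (tx, ty) and half-length tl of
    the window with the largest total activation."""
    resp = feat.sum(dim=0)  # [H, W] response map
    size = 2 * half
    # all size x size window sums via sliding-window views
    windows = resp.unfold(0, size, 1).unfold(1, size, 1).sum(dim=(-1, -2))
    idx = torch.argmax(windows)
    y, x = divmod(idx.item(), windows.shape[1])  # top-left offsets
    return x + half, y + half, half              # (tx, ty, tl)
```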