littleredxh / DREML

PyTorch implementation of Deep Randomized Ensembles for Metric Learning (ECCV 2018)

The performance problem. #2

Closed zhengxiawu closed 5 years ago

zhengxiawu commented 6 years ago

I cloned the project and ran the code with the same hyperparameters as in the paper. The Recall@1 on CUB200-2011 is 63.79%, which is far from the 80.5 reported in the paper. P.S. the model is DREML(R,12,48). I suspect this may be caused by different hyperparameter settings, so could you give me the detailed parameter settings for CUB200-2011?

littleredxh commented 6 years ago

Your result is correct. I have updated the results in the paper, and the updated version will appear on the website later.

xialeiliu commented 6 years ago

I did the same experiment with ResNet18 and evaluated the results with the function in the recall notebook, but I only got R@1 = 58.2%. Did I miss something?
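For reference, a minimal sketch of how Recall@K is usually computed in this kind of evaluation (this is not the repo's notebook code; `emb` and `labels` stand for the test-set embeddings and class labels):

    import torch

    def recall_at_k(emb, labels, ks=(1, 2, 4, 8)):
        # Cosine similarity between every pair of test embeddings.
        emb = torch.nn.functional.normalize(emb, dim=1)
        sim = emb @ emb.t()
        sim.fill_diagonal_(float('-inf'))  # never retrieve the query itself

        # A query counts as correct at K if any of its K nearest
        # neighbours shares its class label.
        knn_labels = labels[sim.topk(max(ks), dim=1).indices]
        match = knn_labels == labels.unsqueeze(1)
        return {k: match[:, :k].any(dim=1).float().mean().item() for k in ks}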

asanakoy commented 5 years ago

@xialeiliu Did you run with exactly the same hyperparameters as provided in this source code?

littleredxh commented 5 years ago

One thing you can try is to not L2-normalize the feature vector in line 22 of Loss.py.
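For anyone trying this, a rough sketch of what the suggestion amounts to (not the exact contents of Loss.py; it assumes a fixed-proxy loss where `fvec` is the batch of embeddings and the normalization line is the one being disabled):

    import torch
    import torch.nn.functional as F

    class ProxyLossSketch(torch.nn.Module):
        def __init__(self, N):
            super().__init__()
            self.proxy = torch.eye(N)  # one orthogonal proxy per meta-class (.cuda() in the repo)

        def forward(self, fvec, labels):
            # The suggestion above: skip this L2 normalization step.
            # fvec = F.normalize(fvec, dim=1)
            scores = fvec.mm(self.proxy.t())        # similarity to each proxy
            return F.cross_entropy(scores, labels)  # pull each sample toward its proxy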

BoseungJeong commented 5 years ago

I did the same experiment with ResNet18 on CUB200 and CARS196, and also evaluated the results with your notebook. The results on CUB200 match your paper, but the results on CARS196 do not: my R@2 is 87.3, while the paper reports 91.7. I only changed the data directory, which worked for CUB200 but not for CARS196. (I also changed line 6 in run.py to Data='CAR' to run on CARS196.)
If I have made any mistake, please let me know.

Thanks.

asanakoy commented 5 years ago

@BoseungJeong but did you get R@1=86.0% on CARS196?

Andrewymd commented 5 years ago

I cloned the project and ran the code with the same hyperparameters as in the paper. The results are:

CUB:  R@1 63.0  R@2 73.8  R@4 82.2  R@8 88.8
CARS: R@1 80.3  R@2 87.3  R@4 92.1  R@8 95.4

The performance on CUB is basically consistent with the paper, but on CARS I can't reach R@1 = 86.0. Could you give me some details about the experiments?

littleredxh commented 5 years ago

One thing you can try is to not L2-normalize the feature vector in line 22 of Loss.py.

BoseungJeong commented 5 years ago

> @BoseungJeong but did you get R@1=86.0% on CARS196?

I got results like @945984093 . But after I revised line 22 of Loss.py, I got the right results on CARS196; the L2 normalization is not needed for training on CARS196.

Andrewymd commented 5 years ago

@BoseungJeong @asanakoy I have run it again. The new results (without L2 normalization) are:

CARS: R@1 86.9  R@2 91.9  R@4 95.1  R@8 96.9

@littleredxh the result is nice, but I have a question: how should this still be explained as metric learning?

The proxy initialization in the DREML paper:

    self.proxy = torch.eye(N).cuda()

This initial setup makes the class centers orthogonal. Without L2 normalization of the features, it effectively becomes a classification problem.
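One way to see that last point (a small sketch, not code from the repo): with identity proxies, the dot product between an unnormalized feature and the proxy matrix is just the feature itself, so the proxy loss reduces to plain softmax classification over the N meta-classes.

    import torch
    import torch.nn.functional as F

    N = 12                              # example number of meta-classes for one learner
    fvec = torch.randn(8, N)            # unnormalized embeddings for a batch of 8
    labels = torch.randint(0, N, (8,))

    proxy = torch.eye(N)                # orthogonal class centers, as in the paper

    # Scores against identity proxies are just the raw embedding dimensions...
    scores = fvec.mm(proxy.t())
    assert torch.allclose(scores, fvec)

    # ...so the proxy loss is exactly a softmax classifier over N classes.
    loss = F.cross_entropy(scores, labels)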