wtyhub / LPN

PyTorch implementation of "Each Part Matters: Local Patterns Facilitate Cross-view Geo-localization" (https://arxiv.org/abs/2008.11646)

Questions for training on University-1652 #5

Open · nono-zz opened this issue 3 years ago

nono-zz commented 3 years ago

Hi,

I trained the model directly with 'train.py' to match images across two views (satellite -> drone), but the accuracy stayed extremely low (between 0.0000 and 0.0060) and did not improve as the epochs increased.

I changed some of the parameters to train on my computer; the changes are as follows:

- batchsize = 2
- num_workers = 0
- inputs2, labels2 are loaded from the 'drone' directory
- LPN = True
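For reference, a minimal sketch of how these settings might look, assuming train.py parses them as argparse-style command-line options; the exact flag names are an assumption and may not match the script:

```python
# Minimal sketch of the changed options, assuming an argparse-style
# train.py. The flag names below are assumptions, not necessarily the
# exact names used by the script.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--batchsize', default=2, type=int)    # small batch for a modest GPU
parser.add_argument('--num_workers', default=0, type=int)  # single-process data loading
parser.add_argument('--LPN', action='store_true',
                    help='enable the local pattern (part-based) branches')
opt = parser.parse_args(['--LPN'])

print(opt.batchsize, opt.num_workers, opt.LPN)  # -> 2 0 True
```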

I wonder how I can reach the reported accuracy. Is any preparation of the dataset needed before training? Thank you very much for your help!

Zhaoxiang

wtyhub commented 3 years ago

Hi, I do not know whether you have set opt.views=2. If you want to train and test with two views (satellite->drone), you can refer to this link: https://github.com/layumi/University1652-Baseline/issues/12#issuecomment-728740042. In short, you need to set opt.views=3 and change the loss weights of the street views (street and google) to 0 when training.
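For illustration, here is a minimal sketch of that weighting, assuming the training loop computes one classification loss per view and sums them with per-view weights; the variable names are hypothetical and may not match train.py:

```python
# Minimal sketch of per-view loss weighting, assuming one classification
# loss per view. The names (loss_satellite, loss_street, loss_drone,
# loss_google) are hypothetical and may differ from train.py.
import torch

def combined_loss(loss_satellite: torch.Tensor,
                  loss_street: torch.Tensor,
                  loss_drone: torch.Tensor,
                  loss_google: torch.Tensor) -> torch.Tensor:
    # opt.views=3 keeps all branches in the graph, but zero weights on
    # the street and google terms mean only satellite and drone losses
    # drive the gradients.
    w_satellite, w_street, w_drone, w_google = 1.0, 0.0, 1.0, 0.0
    return (w_satellite * loss_satellite
            + w_street * loss_street
            + w_drone * loss_drone
            + w_google * loss_google)

dummy = torch.tensor(1.0)
print(combined_loss(dummy, dummy, dummy, dummy))  # tensor(2.) - only satellite + drone count
```

With the street and google weights at zero, those branches contribute nothing to the gradient, so training is effectively satellite<->drone even though opt.views=3.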

Also, if you change the batchsize, you need to re-adjust the learning rate.
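One common heuristic for this is the linear scaling rule (Goyal et al., 2017): scale the learning rate in proportion to the batch size. A minimal sketch, where the baseline values are illustrative assumptions rather than the repository's defaults:

```python
# Minimal sketch of the linear scaling heuristic: scale the learning
# rate proportionally to the batch size. The baseline values used in
# the example call (base_lr=0.01, base_batchsize=8) are illustrative
# assumptions, not the repository's defaults.
def scaled_lr(base_lr: float, base_batchsize: int, batchsize: int) -> float:
    return base_lr * batchsize / base_batchsize

print(scaled_lr(0.01, 8, 2))  # -> 0.0025 for the batchsize=2 run above
```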

nono-zz commented 3 years ago

Hi, thank you very much for your prompt reply! I changed opt.views from 2 to 3 and adjusted the learning rate according to the batch size. The results turned out great!