megvii-research / DPGN

[CVPR 2020] DPGN: Distribution Propagation Graph Network for Few-shot Learning.
MIT License

Node label sequence #12

Closed: anhuigzj closed this issue 4 years ago

anhuigzj commented 4 years ago

Hi,

In the 5-way 1-shot setting, the query-set labels of each batch are [0, 1, 2, 3, 4] during both training and testing, with no shuffling. Could this let the network memorize the fixed label order and inflate the accuracy? In other few-shot learning papers I have read (GNN, Relation Network), the query-set labels are shuffled, so I followed that idea: keeping the source code otherwise unchanged, I randomly shuffled the query-set labels of each test batch, e.g. [1, 4, 2, 0, 3], [1, 2, 4, 0, 3], [2, 0, 4, 3, 1], and built init_edge from the shuffled labels, so the generated matrix is still a 10x10 symmetric matrix. The accuracy drops to only about 43%, far from the 66.27% I get with the unmodified source code. Shuffling during both the training and testing phases also gives about 43%.

My initial understanding of graph networks was that the order of the node labels should have no effect on accuracy, because the data is organized as a graph, i.e. structurally and relationally, yet this huge accuracy gap confuses me. Did I set something up incorrectly? Thank you very much!
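For reference, what I mean by a consistent shuffle is roughly the following minimal sketch. The names (shuffle_episode_labels, build_init_edge) and the 0.5 edge value for pairs involving a query node are illustrative EGNN-style conventions, not the actual DPGN code:

```python
import torch

def shuffle_episode_labels(support_labels, query_labels, num_ways=5):
    # Hypothetical helper: apply one random permutation of the class ids
    # to BOTH the support and the query labels of an episode, so the
    # class-to-label mapping changes but stays consistent inside the episode.
    perm = torch.randperm(num_ways)                  # e.g. [1, 4, 2, 0, 3]
    return perm[support_labels], perm[query_labels]

def build_init_edge(support_labels, num_queries):
    # Hypothetical EGNN-style edge init: 1/0 for support pairs that share /
    # do not share a class, 0.5 (uniform) for any pair touching a query node
    # whose label is unknown at inference time.
    num_supports = support_labels.size(0)
    n = num_supports + num_queries
    edge = torch.full((n, n), 0.5)
    same = (support_labels.unsqueeze(0) == support_labels.unsqueeze(1)).float()
    edge[:num_supports, :num_supports] = same
    return edge  # symmetric, e.g. 10x10 for 5-way 1-shot with 5 queries

# 5-way 1-shot episode with 5 query samples
support_labels = torch.arange(5)   # one support sample per class
query_labels = torch.arange(5)     # ground-truth query labels
s, q = shuffle_episode_labels(support_labels, query_labels)
init_edge = build_init_edge(s, num_queries=5)
```

If the same permutation is applied to both the support and the query labels of one episode, init_edge (and hence the graph) is identical up to relabeling, which is why I expected no accuracy change.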

YangLing0818 commented 4 years ago


Hi, anhuigzj,

We did not shuffle the label sequence of each meta-task so that the comparison with EGNN stays fair, and shuffling the label order should not have as large an impact on test accuracy as you describe. We simply shuffled the label order in the dataloader and ran the experiment again; we got an accuracy of 0.668, and the log around the best accuracy is as follows:

[2020-07-22 06:03:53,324] [main] step : 16100 train_edge_loss : 4.447120189666748 node_acc : 0.8160000443458557
[2020-07-22 06:04:37,059] [main] step : 16200 train_edge_loss : 4.489501476287842 node_acc : 0.8160000443458557
[2020-07-22 06:05:21,510] [main] step : 16300 train_edge_loss : 4.51016902923584 node_acc : 0.8080000281333923
[2020-07-22 06:06:05,420] [main] step : 16400 train_edge_loss : 4.383455753326416 node_acc : 0.8720000386238098
[2020-07-22 06:06:49,117] [main] step : 16500 train_edge_loss : 4.389024257659912 node_acc : 0.8640000224113464
[2020-07-22 06:07:32,899] [main] step : 16600 train_edge_loss : 4.517629623413086 node_acc : 0.8240000605583191
[2020-07-22 06:08:16,541] [main] step : 16700 train_edge_loss : 4.4838104248046875 node_acc : 0.8400000333786011
[2020-07-22 06:09:00,136] [main] step : 16800 train_edge_loss : 4.3946003913879395 node_acc : 0.8800000548362732
[2020-07-22 06:09:44,459] [main] step : 16900 train_edge_loss : 4.487715721130371 node_acc : 0.8640000224113464
[2020-07-22 06:10:27,947] [main] step : 17000 train_edge_loss : 4.3250508308410645 node_acc : 0.9120000600814819
[2020-07-22 06:11:44,688] [main] ------------------------------------
[2020-07-22 06:11:44,689] [main] step : 17000 test_edge_loss : 2.057415969014168 test_node_acc : 0.6452399858534336
[2020-07-22 06:11:44,690] [main] evaluation: total_count=999, accuracy: mean=64.52%, std=8.07%, ci95=0.50%
[2020-07-22 06:11:44,690] [main] ------------------------------------
[2020-07-22 06:11:44,696] [main] test_acc : 0.6452399858534336 step : 17000
[2020-07-22 06:11:44,697] [main] test_best_acc : 0.6452399858534336 step : 17000
[2020-07-22 06:12:29,300] [main] step : 17100 train_edge_loss : 4.289996147155762 node_acc : 0.9600000381469727
[2020-07-22 06:13:13,374] [main] step : 17200 train_edge_loss : 4.415626525878906 node_acc : 0.8480000495910645
[2020-07-22 06:13:57,204] [main] step : 17300 train_edge_loss : 4.314781665802002 node_acc : 0.9040000438690186
[2020-07-22 06:14:40,816] [main] step : 17400 train_edge_loss : 4.400538921356201 node_acc : 0.8800000548362732
[2020-07-22 06:15:24,620] [main] step : 17500 train_edge_loss : 4.3740763664245605 node_acc : 0.9360000491142273
[2020-07-22 06:16:08,487] [main] step : 17600 train_edge_loss : 4.3948187828063965 node_acc : 0.8640000224113464
[2020-07-22 06:16:52,578] [main] step : 17700 train_edge_loss : 4.36224889755249 node_acc : 0.8720000386238098
[2020-07-22 06:17:36,184] [main] step : 17800 train_edge_loss : 4.33498477935791 node_acc : 0.9040000438690186
[2020-07-22 06:18:19,908] [main] step : 17900 train_edge_loss : 4.370996952056885 node_acc : 0.9200000166893005
[2020-07-22 06:19:04,020] [main] step : 18000 train_edge_loss : 4.3255295753479 node_acc : 0.9040000438690186
[2020-07-22 06:20:18,294] [main] ------------------------------------
[2020-07-22 06:20:18,295] [main] step : 18000 test_edge_loss : 2.011564675092697 test_node_acc : 0.6681799855232239
[2020-07-22 06:20:18,296] [main] evaluation: total_count=999, accuracy: mean=66.82%, std=8.22%, ci95=0.51%
[2020-07-22 06:20:18,296] [main] ------------------------------------
[2020-07-22 06:20:18,299] [main] test_acc : 0.6681799855232239 step : 18000
[2020-07-22 06:20:18,300] [main] test_best_acc : 0.6681799855232239 step : 18000
[2020-07-22 06:21:03,320] [main] step : 18100 train_edge_loss : 4.249937534332275 node_acc : 0.968000054359436
[2020-07-22 06:21:46,991] [main] step : 18200 train_edge_loss : 4.302224636077881 node_acc : 0.9040000438690186
[2020-07-22 06:22:30,529] [main] step : 18300 train_edge_loss : 4.361363410949707 node_acc : 0.8960000276565552
[2020-07-22 06:23:14,082] [main] step : 18400 train_edge_loss : 4.291866302490234 node_acc : 0.9440000653266907
[2020-07-22 06:23:57,701] [main] step : 18500 train_edge_loss : 4.358724117279053 node_acc : 0.8880000710487366
[2020-07-22 06:24:41,330] [main] step : 18600 train_edge_loss : 4.299665927886963 node_acc : 0.9520000219345093
[2020-07-22 06:25:25,334] [main] step : 18700 train_edge_loss : 4.373410701751709 node_acc : 0.8640000224113464
[2020-07-22 06:26:08,712] [main] step : 18800 train_edge_loss : 4.396667003631592 node_acc : 0.8800000548362732
[2020-07-22 06:26:52,477] [main] step : 18900 train_edge_loss : 4.270208835601807 node_acc : 0.9600000381469727
[2020-07-22 06:27:36,350] [main] step : 19000 train_edge_loss : 4.322263240814209 node_acc : 0.9120000600814819

Please check your label-shuffling code and try again.
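For reference, shuffling the label order in the dataloader can be as simple as shuffling which sampled class receives which episode label. The following is a rough sketch only; the names (sample_episode, class_to_images) and the sampling details are illustrative, not the repository's actual dataloader code:

```python
import random

def sample_episode(class_to_images, num_ways=5, num_shots=1, num_queries_per_class=1):
    # Hypothetical episode sampler: pick num_ways classes and shuffle them,
    # so the mapping from real class to episode label 0..num_ways-1 changes
    # every episode instead of being fixed.
    classes = random.sample(list(class_to_images.keys()), num_ways)
    random.shuffle(classes)  # this line is the "label order shuffle"
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(class_to_images[cls], num_shots + num_queries_per_class)
        support += [(img, label) for img in images[:num_shots]]
        query += [(img, label) for img in images[num_shots:]]
    random.shuffle(query)  # optionally also shuffle the query sample order
    return support, query
```

As long as the same label assignment is used for the support samples and the matching query samples within one episode, and init_edge is built from those labels, the graph structure is unchanged, so accuracy should stay at the same level.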

Yours, DPGN Team

anhuigzj commented 4 years ago

Hi: Thank you very much for your reply. I have been having problems with my modification and have been confused about this part for a long time. Could you send me the modified dataloader? Thank you very much for your help!