Closed Yitian-Li closed 3 years ago
Hi Yitian,
Thanks for your interest in our work.
After reading this issue, I downloaded the checkpoint from Google Drive again and re-ran the meta-test phase. My result for miniImageNet 5-way 1-shot is 60.2 ± 1.8.
As usual, you should be able to reproduce the reported results with the default settings. From the information you provided, I am not sure what the problem is. Please check your Python and TensorFlow versions.
If you have any further questions, feel free to add comments in this issue.
Best, Yaoyao
YaoShen,
I'm calling you Yaoshen because I found an issue named "YaoShenNB", hahaha...
Thank you for your prompt reply.
I have checked many times. I downloaded the code/weights/data again, and I'm sure I'm using Python 2.7 and TensorFlow 1.3.0, but I got the same problem...
(mtl-tf) root@tensorflow-jdzb95ry7:/data1/meta-transfer-learning# python --version
Python 2.7.15
(mtl-tf) root@tensorflow-jdzb95ry7:/data1/meta-transfer-learning# python
Python 2.7.15 | packaged by conda-forge | (default, Mar 5 2020, 14:56:06)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.3.0'
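For what it's worth, a version check like the one above can be automated before each run; here is a small sketch (the function name and return format are my own, not part of the repo):

```python
import sys


def check_env(expected_py=(2, 7), expected_tf='1.3.0'):
    """Return a list of mismatch messages; empty means the environment matches."""
    problems = []
    if sys.version_info[:2] != expected_py:
        problems.append('Python %d.%d found, expected %d.%d' %
                        (sys.version_info[0], sys.version_info[1],
                         expected_py[0], expected_py[1]))
    try:
        import tensorflow as tf
        if tf.__version__ != expected_tf:
            problems.append('TensorFlow %s found, expected %s' %
                            (tf.__version__, expected_tf))
    except ImportError:
        problems.append('TensorFlow is not installed')
    return problems


for msg in check_env():
    print(msg)
```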
I got metaval_accuracies in meta-transfer-learning/tensorflow/trainer/meta.py, line 301, as a numpy array with shape (600, 100), where 600 is the number of episodes, which is fine. Then
means = np.mean(metaval_accuracies, 0)
stds = np.std(metaval_accuracies, 0)
are calculated, so I get final results with 100 numbers, as shown in the logs above. It's really weird...
And I believe 100 is the test_base_epoch_num defined in meta-transfer-learning/tensorflow/main.py, line 56:
flags.DEFINE_integer('test_base_epoch_num', 100, 'number of inner gradient updates during test.')
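In other words, the script prints per-epoch statistics rather than one final number. Here is a sketch of how the 100-column array can be reduced to a single reported figure; the dummy data and the 95% CI formula are my own assumptions, not necessarily what the repo does:

```python
import numpy as np

# Dummy stand-in for metaval_accuracies: 600 test episodes x 100 test epochs.
rng = np.random.default_rng(0)
metaval_accuracies = rng.uniform(0.4, 0.7, size=(600, 100))

means = np.mean(metaval_accuracies, 0)  # mean accuracy per epoch, shape (100,)
stds = np.std(metaval_accuracies, 0)    # std over episodes per epoch, shape (100,)
ci95 = 1.96 * stds / np.sqrt(metaval_accuracies.shape[0])  # 95% confidence interval

best = int(np.argmax(means))            # pick the best-performing test epoch
print('Reported: %.1f +/- %.1f' % (100 * means[best], 100 * ci95[best]))
```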
Any help is appreciated!
Additionally, the second run gives exactly the same results as shown in the first comment.
Hi Yitian,
The issue you mentioned is created by one of my friends :)
It seems you're using the correct Python and TensorFlow version.
I just ran the meta-test phase on another device, and the performance is as follows:
(array([0.34633273, 0.43499938, 0.48166656, 0.5100002 , 0.5256669 ,
0.5383337 , 0.54600036, 0.55033356, 0.5580002 , 0.5593335 ,
0.56300026, 0.5673335 , 0.5690001 , 0.57333344, 0.57433337,
0.5763335 , 0.57766676, 0.5790001 , 0.5806668 , 0.5816668 ,
0.58266675, 0.5830002 , 0.5820002 , 0.58366686, 0.5850001 ,
0.58500016, 0.5860001 , 0.5863334 , 0.5860001 , 0.58700013,
0.58633345, 0.5870002 , 0.5873335 , 0.58733344, 0.58766675,
0.58766675, 0.5883334 , 0.5883335 , 0.58900017, 0.5896668 ,
0.5896669 , 0.5896669 , 0.5896669 , 0.5900003 , 0.5903336 ,
0.5903336 , 0.59066695, 0.59100026, 0.591667 , 0.591667 ,
0.5923337 , 0.5923337 , 0.5923337 , 0.59266704, 0.59266704,
0.5923337 , 0.5923337 , 0.59200037, 0.59200037, 0.59266704,
0.5923337 , 0.5930004 , 0.5930004 , 0.5923337 , 0.59266704,
0.59300035, 0.59300035, 0.59333366, 0.59400034, 0.59466696,
0.5946669 , 0.5943336 , 0.5943336 , 0.5943336 , 0.59466696,
0.5950003 , 0.5950003 , 0.5940003 , 0.5943336 , 0.59466696,
0.59500027, 0.59500027, 0.59500027, 0.59500027, 0.59466696,
0.59466696, 0.5943336 , 0.5943336 , 0.593667 , 0.593667 ,
0.593667 , 0.593667 , 0.5940003 , 0.5940003 , 0.5940003 ,
0.59433365, 0.594667 , 0.594667 , 0.594667 , 0.5950003 ],
dtype=float32), array([0.01700923, 0.01709469, 0.01719844, 0.01748813, 0.01745903,
0.01750593, 0.0175826 , 0.01755509, 0.01757381, 0.01769064,
0.01770602, 0.01770028, 0.01773155, 0.0175956 , 0.01752002,
0.01744043, 0.01745171, 0.01755977, 0.01760844, 0.01765157,
0.01762179, 0.01763597, 0.01769003, 0.01768832, 0.0177921 ,
0.01776809, 0.01788107, 0.01782295, 0.01771318, 0.01775415,
0.01767867, 0.01768188, 0.01764716, 0.01757444, 0.01758815,
0.01758815, 0.01759102, 0.01766366, 0.0176664 , 0.01771725,
0.01771725, 0.01771725, 0.01769314, 0.01765812, 0.01769555,
0.01767141, 0.01770875, 0.01769777, 0.01777204, 0.01777204,
0.01772592, 0.01772592, 0.01770183, 0.01773887, 0.01773887,
0.01774999, 0.01774999, 0.01773703, 0.01773703, 0.01769068,
0.01770183, 0.01770359, 0.01770359, 0.01770183, 0.01764235,
0.01763112, 0.01763112, 0.01764403, 0.0176939 , 0.01764695,
0.01767112, 0.01765838, 0.01765838, 0.01765838, 0.01762275,
0.01765962, 0.01765962, 0.01766974, 0.01763419, 0.01764695,
0.01768377, 0.01768377, 0.01768377, 0.01768377, 0.01764695,
0.01764695, 0.01765838, 0.01765838, 0.01765691, 0.01765691,
0.01765691, 0.01765691, 0.01766974, 0.01766974, 0.01764557,
0.01768254, 0.01771937, 0.01771937, 0.01771937, 0.01773198],
dtype=float32))
I guess it might be influenced by the random seed. You could try running the code with another random seed, or reinstalling the environment; I am not sure. I don't think the performance drop on your device should be that large (e.g., 2%).
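If you want to rule the seed in or out, pinning all the RNGs before the run makes repeated runs comparable. A minimal sketch (the TensorFlow call is left as a comment so this runs without TensorFlow installed):

```python
import random
import numpy as np


def set_global_seeds(seed):
    # Pin every RNG the test pipeline may touch so repeated runs match.
    random.seed(seed)
    np.random.seed(seed)
    # For TensorFlow 1.x one would additionally call:
    #   tf.set_random_seed(seed)


set_global_seeds(0)
a = np.random.rand(3)
set_global_seeds(0)
b = np.random.rand(3)
# With identical seeds, the two draws are identical.
```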
Here are all the packages I used to run this project; I hope this is helpful to you:
_libgcc_mutex 0.1 main
backports 1.0 py_2
backports.functools-lru-cache 1.6.1 <pip>
backports.weakref 1.0rc1 py27_0
blas 1.0 mkl
bleach 1.5.0 py27_0
ca-certificates 2019.5.15 1
certifi 2019.6.16 py27_1
cudatoolkit 8.0 3
cudnn 6.0.21 cuda8.0_0
cycler 0.10.0 <pip>
funcsigs 1.0.2 py27_0
futures 3.3.0 py27_0
html5lib 0.9999999 py27_0
intel-openmp 2019.4 243
kiwisolver 1.1.0 <pip>
libedit 3.1.20181209 hc058e9b_0
libffi 3.2.1 hd88cf55_4
libgcc 7.2.0 h69d50b8_2
libgcc-ng 9.1.0 hdf63c60_0
libgfortran-ng 7.3.0 hdf63c60_0
libprotobuf 3.8.0 hd408876_0
libstdcxx-ng 9.1.0 hdf63c60_0
markdown 3.1.1 py27_0
matplotlib 2.2.5 <pip>
mkl 2019.4 243
mkl-service 2.0.2 py27h7b6447c_0
mkl_fft 1.0.14 py27ha843d7b_0
mkl_random 1.0.2 py27hd81dba3_0
mock 3.0.5 py27_0
ncurses 6.1 he6710b0_1
numpy 1.16.4 py27h7e9f1db_0
numpy-base 1.16.4 py27hde5b4d6_0
openssl 1.1.1c h7b6447c_1
pip 19.2.2 py27_0
protobuf 3.8.0 py27he6710b0_0
python 2.7.16 h8b3fad2_4
python-dateutil 2.8.1 <pip>
pytz 2020.1 <pip>
readline 7.0 h7b6447c_5
setuptools 41.0.1 py27_0
six 1.12.0 py27_0
sqlite 3.29.0 h7b6447c_0
subprocess32 3.5.4 <pip>
tensorflow-gpu 1.3.0 0
tensorflow-gpu-base 1.3.0 py27cuda8.0cudnn6.0_1
tensorflow-tensorboard 1.5.1 py27hf484d3e_1
tk 8.6.8 hbc83047_0
werkzeug 0.15.5 py_0
wheel 0.33.4 py27_0
zlib 1.2.11 h7b6447c_3
I'll check the code when I have time, probably within a few weeks if possible. If you have any further findings, please leave comments in this issue.
Best, Yaoyao
Thanks for sharing the code! I'm trying to run the test phase with the TensorFlow version. After downloading the weights you provided and running the following script:
mkdir -p ./logs/download_weights
mv ~/downloads/mini-1shot/*.npy ./logs/download_weights
python run_experiment.py TEST_LOAD
I got the log like this: Test accuracies and confidence intervals (array([0.32766578, 0.42566618, 0.48099986, 0.50733316, 0.52399987, 0.5353332 , 0.54933333, 0.5523333 , 0.554 , 0.5576667 , 0.56033343, 0.5623334 , 0.5663333 , 0.5673332 , 0.56933326, 0.5709999 , 0.57166654, 0.57233316, 0.57133317, 0.5713332 , 0.5716666 , 0.5723333 , 0.5723333 , 0.57199985, 0.57299984, 0.5736665 , 0.5746665 , 0.5746665 , 0.5763331 , 0.5766664 , 0.5769998 , 0.5779998 , 0.5773331 , 0.5769998 , 0.57766646, 0.57766646, 0.57833314, 0.57766646, 0.5776665 , 0.57699984, 0.5769999 , 0.5773332 , 0.5773332 , 0.57699984, 0.57699984, 0.5766665 , 0.57599974, 0.57633317, 0.57666653, 0.57666653, 0.57666653, 0.57699984, 0.57699984, 0.5773332 , 0.5776665 , 0.5776665 , 0.57833326, 0.57833326, 0.5773332 , 0.5773332 , 0.5776666 , 0.5779999 , 0.5779999 , 0.57833326, 0.57833326, 0.5786665 , 0.5789999 , 0.5789999 , 0.5799999 , 0.5799999 , 0.5799999 , 0.5799999 , 0.5799999 , 0.5799999 , 0.5799999 , 0.5796666 , 0.5799999 , 0.5799999 , 0.5803333 , 0.5803333 , 0.5803333 , 0.58 , 0.58 , 0.5803333 , 0.58066666, 0.5803333 , 0.5799999 , 0.5796666 , 0.5799999 , 0.5799999 , 0.5799999 , 0.5799999 , 0.5799999 , 0.5799999 , 0.58033323, 0.58033323, 0.58033323, 0.58033323, 0.58033323, 0.5799999 ], dtype=float32), array([0.01674007, 0.01800067, 0.01823004, 0.01856194, 0.0187241 , 0.01853984, 0.01835733, 0.01825729, 0.01848205, 0.0183646 , 0.01835622, 0.01856799, 0.01850209, 0.01847895, 0.0184084 , 0.01839091, 0.01844387, 0.01847349, 0.01842903, 0.01849838, 0.0184901 , 0.01854268, 0.01858866, 0.01864282, 0.01864076, 0.01869269, 0.01875842, 0.01875842, 0.01878373, 0.0188431 , 0.01883437, 0.01880805, 0.01880294, 0.01876625, 0.01872595, 0.01874873, 0.01873097, 0.01886222, 0.01888484, 0.01887964, 0.01890223, 0.01887092, 0.01887092, 0.0189248 , 0.0189248 , 0.0189335 , 0.01892819, 0.01891959, 0.01897854, 0.01897854, 0.0189335 , 0.01894734, 0.01894734, 0.01896118, 0.01895253, 0.01888484, 0.01891236, 0.01891236, 0.01893866, 
0.01893866, 0.01897504, 0.01898878, 0.01898878, 0.01900243, 0.01900243, 0.01897114, 0.01896226, 0.01896226, 0.01895796, 0.01895796, 0.01895796, 0.01895796, 0.01895796, 0.01895796, 0.01895796, 0.01894442, 0.01893543, 0.01893543, 0.01890382, 0.01890382, 0.01890382, 0.01893543, 0.01893543, 0.01894893, 0.01896236, 0.01897144, 0.01895796, 0.01894442, 0.01891287, 0.01891287, 0.01891287, 0.01891287, 0.01891287, 0.01891287, 0.01888123, 0.01888123, 0.01888123, 0.01888123, 0.01881329, 0.01884504], dtype=float32))
Question: