Open shortcipher3 opened 6 years ago
Hmm. I tried to reproduce your problem, but the model outputs the same results consistently with the images I have. Could you share your image?
Some cuDNN operations do not have deterministic behaviour: https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/#reproducibility Perhaps this is related to your problem.
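If cuDNN were involved, Chainer exposes a `cudnn_deterministic` config key that forces deterministic kernels. A minimal sketch, assuming Chainer is installed (the helper function and its fallback behaviour are illustrative, not from this thread):

```python
def configure_deterministic():
    """Try to enable Chainer's deterministic cuDNN mode.

    Returns True if Chainer is available and the flag was set,
    False if Chainer is not installed (nothing to configure).
    """
    try:
        import chainer
    except ImportError:
        return False
    # 'cudnn_deterministic' is a real Chainer config key (default False).
    # It can also be set per-scope with:
    #   with chainer.using_config('cudnn_deterministic', True): ...
    chainer.config.cudnn_deterministic = True
    return True

print(configure_deterministic())
```

This only matters for GPU inference with cuDNN enabled; it has no effect on a CPU-only setup like the one reported below.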
Hmm.
I'm using the project's reference image: https://cloud.githubusercontent.com/assets/2062128/26187667/9cb236da-3bd5-11e7-8bcf-7dbd4302e2dc.jpg
@Hakuyume I'm not using cuDNN, I'm using a CPU based model.
I dockerized my setup and also uploaded the modified demo.py. I had to change the extension to upload it; the file should be demo-mod.py.
Are you running on a GPU @yuyu2172 ?
Thanks for your help!
I tested on both GPU and CPU on my machine. I ran your script and get consistent results. I am building the docker container now.
```
$ python demo-mod.py 9cb236da-3bd5-11e7-8bcf-7dbd4302e2dc.jpg
[ 6 11 12 14 14]
[0.99953234 0.9766117 0.997649 0.9980629 0.99296147]
[ 6 11 12 14 14]
[0.99953234 0.9766117 0.997649 0.9980629 0.99296147]
[ 6 11 12 14 14]
[0.99953234 0.9766117 0.997649 0.9980629 0.99296147]
```
My results are looking more consistent today:
```
[ 6 11 12 14 14]
[0.9991875 0.96595764 0.99568295 0.99677104 0.9895921 ]
[ 6 11 12 14 14]
[0.9994771 0.9520375 0.9942531 0.9980769 0.9920569]
[ 6 11 12 14 14]
[0.9995468 0.9600438 0.99564916 0.99760795 0.9974911 ]
```
The scores are still slightly different. Thanks for looking into it!
Can I close this issue? @yuyu2172
Repeated predictions give different results. I modified the example demo.py by replacing:
with
Running on an image prints two different results:

```
[ 6  9 11 12 14 14 14]
[0.9994035 0.7133383 0.9320137 0.9976526 0.9986652 0.98459226 0.7558696 ]
[ 6 11 12 14 14]
[0.99914455 0.81991583 0.96614665 0.9929738 0.9864108 ]
```
Any idea as to why that would be?
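The diagnostic used throughout this thread can be sketched as a small harness: run the same input through the predictor several times and compare labels exactly and scores within a tolerance. Here `fake_predict`, `is_repeatable`, and the dummy image are hypothetical stand-ins for a real `model.predict()` call (e.g. ChainerCV's SSD returning labels and scores):

```python
import numpy as np

def fake_predict(img):
    # Deterministic placeholder for model.predict(); a real SSD model
    # would return (bboxes, labels, scores) computed from the image.
    labels = np.array([6, 11, 12, 14, 14])
    scores = np.array([0.9995, 0.9766, 0.9976, 0.9981, 0.9930])
    return labels, scores

def is_repeatable(predict, img, n_runs=3):
    """Run `predict` n_runs times on the same image and check agreement."""
    runs = [predict(img) for _ in range(n_runs)]
    first_labels, first_scores = runs[0]
    return all(
        np.array_equal(labels, first_labels)
        and np.allclose(scores, first_scores)
        for labels, scores in runs[1:]
    )

img = np.zeros((3, 300, 300), dtype=np.float32)  # dummy image
print(is_repeatable(fake_predict, img))  # → True for a deterministic predictor
```

With the nondeterministic behaviour reported above, the label arrays can differ in length between runs, so comparing them with `np.array_equal` (which is shape-aware) rather than elementwise `==` avoids broadcasting errors.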