Closed rahulranjan07 closed 6 years ago
I'm having the same issue!
Running the run_placesCNN_unified.py code gives me the following prediction:
0.232 -> beauty_salon
0.204 -> reception
while running the demo on the Places365 website gives the following:
0.613 -> reception
Why is this difference so big? I'm using the same wideresnet18_places365.pth.tar model as shown in the README file.
The weights were converted once during the upgrade to PyTorch 0.4, so there might be some small numeric changes in the weight values. I don't think this affects the overall performance of the model.
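For what it's worth, a tiny perturbation of the pre-softmax scores (the kind a one-time weight re-serialization might introduce) only shifts the softmax probabilities by a comparably tiny amount. A minimal sketch with made-up logit values (not from the actual model):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical pre-softmax scores for three classes
logits = [2.0, 1.5, 0.3]

# Simulate small, independent numeric changes from re-serializing weights
perturbed = [x + d for x, d in zip(logits, [1e-4, -1e-4, 5e-5])]

orig = softmax(logits)
new = softmax(perturbed)
drift = max(abs(a - b) for a, b in zip(orig, new))
print(drift)  # small, and the top-1 class is unchanged
```

A gap as large as the one reported above (0.232 vs. 0.613 for the top class) seems too big to come from rounding in a weight conversion alone, which is consistent with the follow-up question about which architecture the demo actually uses.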
And which network architecture is used in the online demo model?
Hi @metalbubble, I ran into the same issue today. After a closer inspection I found the problem to be this line.
Could you explain why you enforce the transition weights to be non-negative? Intuitively, this prevents any pooled feature from directly diminishing the output probabilities (discounting softmax normalization), but is there a specific reason behind this?
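To illustrate the intuition with a hypothetical toy example (not the repository's actual code): if the pooled feature activations are non-negative, clamping negative class weights to zero means each feature can only raise a class's pre-softmax score, never lower it:

```python
# Hypothetical pooled feature activations (non-negative, e.g. after ReLU + avg-pool)
features = [0.8, 0.1, 0.5]

# Hypothetical per-class weight rows, with some negative entries
weights = [
    [0.9, -0.4, 0.2],   # class A
    [0.3, 0.7, -0.6],   # class B
]

def class_scores(weights, features, clamp_nonneg):
    """Linear class scores, optionally clamping negative weights to zero."""
    out = []
    for row in weights:
        w = [max(0.0, v) for v in row] if clamp_nonneg else row
        out.append(sum(wi * fi for wi, fi in zip(w, features)))
    return out

raw = class_scores(weights, features, clamp_nonneg=False)
clamped = class_scores(weights, features, clamp_nonneg=True)
print(raw, clamped)  # every clamped score >= its raw counterpart
```

Since every clamped score is at least as large as its raw counterpart, no feature can directly suppress a class; scores only fall relative to other classes through the softmax normalization. Changing this behavior between the demo and the script could plausibly shift the reported probabilities.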
Does anyone know why the demo here: http://places2.csail.mit.edu/demo.html and run_placesCNN_unified.py give different results? I tried testing with the same images mentioned in the README.md file, and the results differ a lot:
According to the README file and places demo website, the result is as follows:
But when I run run_placesCNN_unified.py on my server, I get the following result:
Does anyone know which model the current demo website is using?