localminimum / QANet

A Tensorflow implementation of QANet for machine reading comprehension
MIT License

inconsistency in predictions #33

Closed nehaboob closed 6 years ago

nehaboob commented 6 years ago

We trained QANet on our own question-and-answer data, but when we run it in demo mode for prediction it gives different results for the same question.

Sometimes it picks the correct answer for a given question and sometimes it does not, but ideally it should pick the same answer every time, right? Any ideas what could be causing this behaviour in the trained model?

I have commented out the following section in the test/demo code:

    if config.decay < 1.0:
        sess.run(model.assign_vars)

localminimum commented 6 years ago

Hi @nehaboob, as far as I'm aware, there shouldn't be any nondeterminism at inference time. Could you explain the issue in more detail? Are the questions exactly identical? (Note that lower- and upper-case letters are treated differently in QANet.)

nehaboob commented 6 years ago

Yes, the questions are exactly identical, but the same model gives different predictions on different runs.

theSage21 commented 6 years ago

Hmm. Dropout could be the cause of this. I'm not able to find where dropout is forced to 0 for demo.
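
If dropout were the culprit, the usual TF 1.x fix is to drive the dropout rate from a placeholder that defaults to 0.0, so inference is deterministic unless training explicitly feeds a nonzero rate. A hedged sketch; the placeholder name and layer sizes are illustrative, not necessarily how this repo wires it:

    import tensorflow as tf

    # Sketch: dropout controlled by a placeholder so it can be disabled at inference.
    x = tf.placeholder(tf.float32, [None, 128])
    dropout = tf.placeholder_with_default(0.0, shape=[], name="dropout")

    hidden = tf.layers.dense(x, 64, activation=tf.nn.relu)
    hidden = tf.nn.dropout(hidden, keep_prob=1.0 - dropout)  # identity when dropout == 0.0
    logits = tf.layers.dense(hidden, 2)

    # Training: feed the configured rate, e.g. {dropout: 0.1}.
    # Demo/inference: feed 0.0 or rely on the default, so repeated runs match.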

nehaboob commented 6 years ago

Sorry, I was making a mistake in the graph initialisation, which was causing the different predictions. This can be closed now.
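
For anyone who lands here with the same symptom: one common way to get nondeterministic demo predictions is to run tf.global_variables_initializer() after (or instead of) restoring the checkpoint, which leaves some or all weights randomly initialised. A hedged sketch of a restore order that avoids this; the tiny graph and the checkpoint path are placeholders, not the repo's actual layout:

    import tensorflow as tf

    # Sketch (TF 1.x): build the graph once, then restore without re-initialising.
    graph = tf.Graph()
    with graph.as_default():
        w = tf.get_variable("w", shape=[4])  # stands in for the full QANet graph
        saver = tf.train.Saver()

    with tf.Session(graph=graph) as sess:
        # Restore the trained weights; do NOT call tf.global_variables_initializer()
        # afterwards, or the restored values are overwritten with random ones.
        saver.restore(sess, tf.train.latest_checkpoint("log/model/"))
        # outputs = sess.run(..., feed_dict=...)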