nouhadziri / DialogEntailment

The implementation of the paper "Evaluating Coherence in Dialogue Systems using Entailment"
https://arxiv.org/abs/1904.03371
MIT License
74 stars 5 forks

How to use BERT to evaluate directly to my conversation? #6

Closed dxlong2000 closed 2 years ago

dxlong2000 commented 2 years ago

Hi @ehsk , @korymath ,

Thanks for your great work. May I ask if there are any inference scripts I can run to evaluate my generated dialogues? I look forward to hearing from you soon.

Thanks!

ehsk commented 2 years ago

Hi @dxlong2000,

Unfortunately, we don't have our fine-tuned models anymore. You need to fine-tune BERT yourself first.

Hope this helps!

dxlong2000 commented 2 years ago

Thanks for your reply. I see. Would you mind uploading the inference code, i.e., how to load the model and evaluate a new dialogue? Thanks!

ehsk commented 2 years ago

Our code supports evaluation. You can find it here. We didn't implement inference, where the predicted labels for input data are saved, but it would be quite similar to the evaluation code.

dxlong2000 commented 2 years ago

Hi @ehsk ,

Your evaluation code only reports eval_accuracy, eval_loss, global_step, and loss. May I ask how I can get the SS scores? I look forward to hearing from you soon.

Thanks!

ehsk commented 2 years ago

Hi @dxlong2000,

For Semantic Similarity, take a look here. You need to write code like the following:

from dialogentail.semantic_similarity import SemanticSimilarity

ss = SemanticSimilarity()
ss.compute(conversation_history, actual_response, generated_response)

Here, conversation_history is a list of strings, and actual_response and generated_response are both strings.
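To make the inputs and outputs concrete, here is a minimal sketch of what a semantic-similarity metric like this computes: cosine similarity between the generated response and each utterance it is compared against. The repository embeds utterances with ELMo; as an illustration only, a simple bag-of-words vector stands in for those embeddings here, and the function names are hypothetical, not the repository's API.

```python
# Illustrative sketch only: bag-of-words vectors stand in for the
# ELMo sentence embeddings used in the actual repository.
from collections import Counter
import math

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_similarity_sketch(history, actual_response, generated_response):
    """Mirror the compute() inputs: similarity of the generated response
    to each history utterance, plus its similarity to the actual response."""
    sims_history = [bow_cosine(u, generated_response) for u in history]
    sim_actual = bow_cosine(actual_response, generated_response)
    return sims_history, sim_actual
```

With real embeddings the shape of the computation is the same; only the vectorization of each utterance changes.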

dxlong2000 commented 2 years ago

Hi @ehsk ,

Thanks for your quick response. My understanding is that the entailment model is trained on the ground-truth responses, and then we can use that fine-tuned model to evaluate a new conversation without knowing the actual_response. Am I correct?

I still see that the computation of ss includes actual_response. In the paper, I saw: "It measures the distance between the generated response and the utterances in the conversation history," but there is no mention of the actual_response. Would you mind clarifying this for me?

Thanks a lot!

dxlong2000 commented 2 years ago

I saw you already provide sim_generated_resp, which answers my question above. Is there any way I can load my fine-tuned BERT model from above instead of ELMo?

ehsk commented 2 years ago

Semantic Similarity measures the cosine similarity between embedding vectors; an updated version of it would be BERTScore. actual_response is not really necessary: you can pass the same string as generated_response.

If you want to use an entailment model, the coherence metric, here, is what you need:

from dialogentail.coherence import BertCoherence

c = BertCoherence("/path/to/model")
c.compute(conversation_history, actual_response, generated_response)

The constructor argument is the path to a fine-tuned BERT model.
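To sketch what an entailment-based coherence score looks like once the fine-tuned model produces classifier logits for a (premise = conversation history, hypothesis = generated response) pair: softmax the logits and read off the entailment probability. This is an illustration only, not the repository's implementation; the function names and the position of the entailment label are assumptions that depend on how BERT was fine-tuned.

```python
# Illustrative sketch only: deriving a coherence score from an
# entailment classifier's raw logits. The entailment label index is
# an assumption; it depends on the fine-tuning label order.
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def coherence_from_logits(logits, entailment_index=0):
    """Treat P(entailment | history, response) as the coherence score."""
    return softmax(logits)[entailment_index]
```

A generated response whose logits strongly favor the entailment class then receives a coherence score close to 1, while contradiction or neutral predictions push it toward 0.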