golsun / DialogRPT

EMNLP 2020: "Dialogue Response Ranking Training with Large-Scale Human Feedback Data"
MIT License

Performance issues with DialogRPT + DialoGPT #6

Open pablogranolabar opened 3 years ago

pablogranolabar commented 3 years ago

Hi again @golsun,

I've been working with DialogRPT and DialoGPT-large for dialog generation, and I've hit performance issues that don't occur when using DialoGPT-large alone. With CPU inference, round-trip responses take just a few seconds with gpt2-large, but whenever DialogRPT is used together with the DialoGPT-large checkpoint, performance grinds to a halt. With GPU inference I can run gpt2-large on a 6 GB GPU, but with DialogRPT I get an OOM error. I understand that the DialogRPT + DialoGPT combination runs multiple models, which is the obvious culprit. Is there any way to serialize execution of the two models to avoid these resource consumption issues?
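One way to read "serialize execution" is to keep only one model in memory at a time: load the generator, produce candidate responses, free it, then load the ranker and score the candidates. A minimal sketch of that pattern, where `load_generator`, `load_ranker`, and the loader behavior are placeholders and not the actual DialogRPT API:

```python
# Hypothetical sketch: run two large models one after the other so only
# one occupies memory at a time. The loader callables are placeholders,
# not functions from the DialogRPT codebase.
import gc

def run_serialized(context, load_generator, load_ranker, n_hyps=5):
    # Stage 1: load the generator, produce hypotheses, then free it.
    generator = load_generator()
    hyps = generator(context, n_hyps)
    del generator
    gc.collect()  # release the generator before loading the ranker
    # (with PyTorch on GPU you would also call torch.cuda.empty_cache())

    # Stage 2: load the ranker, score the hypotheses, then free it.
    ranker = load_ranker()
    scores = [ranker(context, h) for h in hyps]
    del ranker
    gc.collect()

    # Return (score, hypothesis) pairs, best first.
    return sorted(zip(scores, hyps), reverse=True)
```

The trade-off is latency: each turn pays the model-loading cost twice, so this only helps when memory, not speed, is the binding constraint.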

golsun commented 3 years ago

hi @pablogranolabar ,

I can think of several potential reasons for the OOM:

pablogranolabar commented 3 years ago

Hi @golsun, thanks for the quick response!

The two machine idea makes sense, I think I can do that with relative ease if it comes to that.

For the DialogRPT models I am just using updown. So I should ensemble at least updown + human_vs_rand? This application is for a conversational agent that can rerank dialog based on human scoring of the chatbot responses.

golsun commented 3 years ago

Yes, human_vs_rand (together with updown) should help in that case. If memory is a concern, a low-memory alternative to human_vs_rand is to decode responses with a small top_k or top_p; this also helps keep the responses relevant to the context. But I guess the performance depends on the scenario.
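For reference, the top_k / top_p idea mentioned above truncates the next-token distribution before sampling. A small illustrative sketch of that filtering step (this is not the decoding code used by DialogRPT or DialoGPT):

```python
# Illustrative top-k / top-p (nucleus) filtering of a next-token
# probability distribution; smaller top_k or top_p keeps fewer tokens,
# which tends to produce safer, more on-topic responses.
import numpy as np

def filter_top_k_top_p(probs, top_k=0, top_p=1.0):
    """Zero out low-probability tokens, then renormalize."""
    probs = np.asarray(probs, dtype=float).copy()
    if top_k > 0:
        # Keep only the top_k most probable tokens.
        cutoff = np.sort(probs)[-top_k]
        probs[probs < cutoff] = 0.0
    if top_p < 1.0:
        # Keep the smallest set of tokens whose cumulative mass >= top_p.
        order = np.argsort(probs)[::-1]
        cum = np.cumsum(probs[order])
        keep = order[: np.searchsorted(cum, top_p) + 1]
        mask = np.zeros_like(probs, dtype=bool)
        mask[keep] = True
        probs[~mask] = 0.0
    return probs / probs.sum()
```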

pablogranolabar commented 3 years ago

Hi again @golsun. I'm working on ensembling human_vs_rand with updown per your advice, but I'm unsure how to proceed with ensemble.yml. Should human_vs_rand and updown both be part of prior with equal weights? Or should human_vs_rand be the prior and updown the conditional? For the performance reasons above, I'm trying to do this with just a two-model ensemble as you suggested.

golsun commented 3 years ago

Hi, in this case a simple way without dealing with ensemble.yml is:

# `get_model` and `predict` are functions from score.py
import numpy as np

hvr = get_model('restore/human_vs_rand.pth')
updown = get_model('restore/updown.pth')
score_hvr = predict(hvr, cxt, hyps)
score_updown = predict(updown, cxt, hyps)
score_overall = np.sqrt(score_updown * score_hvr)  # use this as the final score

I used geometric mean for score_overall, but you can play with some weighted arithmetic mean.
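The "weighted" variant can be done directly on the geometric mean as well. A small sketch, where the weight values are arbitrary examples rather than tuned or recommended settings:

```python
# Weighted geometric mean of two ranker scores. The default weights
# are illustrative; equal weights reduce to np.sqrt(a * b) above.
import numpy as np

def combine_scores(score_a, score_b, w_a=0.5, w_b=0.5):
    """Equivalent to exp of the weighted mean of log-scores,
    assuming both scores are positive."""
    s1 = np.asarray(score_a, dtype=float)
    s2 = np.asarray(score_b, dtype=float)
    total = w_a + w_b
    return (s1 ** w_a * s2 ** w_b) ** (1.0 / total)
```

Raising one weight biases the final ranking toward that model; with `w_b=0` the combined score reduces to `score_a`.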