Weigh recent text more heavily in keyword tokenization
We should find a way to weight recent context (e.g., the most recently asked questions) more heavily when we perform keyword tokenization, in order to prioritize accurate responses to dialogue-style questions.
Originally posted by @signebedi in https://github.com/signebedi/gptty/issues/25#issuecomment-1486029734
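
One possible approach is a minimal sketch like the following, which applies an exponential decay weight to tokens from older messages so that recent turns dominate the keyword scores. The function name `weighted_keywords`, the `decay` parameter, and the regex tokenizer are all hypothetical illustrations, not gptty's actual implementation:

```python
import re
from collections import Counter


def weighted_keywords(messages, decay=0.5, top_k=5):
    """Rank keyword tokens across a conversation, weighting recent messages more.

    messages: list of message strings, oldest first.
    decay: per-step multiplier for older messages (0 < decay <= 1);
           the most recent message always gets weight 1.0.
    top_k: number of top-scoring tokens to return.
    """
    scores = Counter()
    n = len(messages)
    for i, text in enumerate(messages):
        weight = decay ** (n - 1 - i)  # older messages contribute less
        for token in re.findall(r"[a-z']+", text.lower()):
            scores[token] += weight
    return [token for token, _ in scores.most_common(top_k)]
```

For example, with `decay=0.1` a single keyword in the latest question can outscore a keyword repeated three times in an earlier one, whereas `decay=1.0` reduces to plain frequency counting; the decay value would likely need tuning against real dialogue sessions.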