castorini / rank_llm

Repository for prompt-decoding using LLMs (GPT3.5, GPT4, Vicuna, and Zephyr)
http://rankllm.ai
Apache License 2.0

P4-Better estimate the max word length for passage truncation in prompts #52

Open sahel-sh opened 6 months ago

sahel-sh commented 6 months ago

For example:

Maybe in a one-line function that could be used for both LRL and RankGPT prompts?

Something like:

num_words = (context_size - num_output_tokens(current_window_size)) * 0.75
max_word_length = num_words / (current_window_size, or rank_end - rank_start)
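A minimal sketch of the proposed estimate, assuming the common rule of thumb of roughly 0.75 words per token; the function and parameter names here are illustrative, not part of the rank_llm API:

```python
def estimate_max_word_length(
    context_size: int,
    num_output_tokens: int,
    num_passages: int,
) -> int:
    """Estimate max words per passage so the prompt fits the context window.

    context_size: the model's context window in tokens.
    num_output_tokens: tokens reserved for the model's output, which
        depends on the current window size.
    num_passages: passages in the prompt (the current window size, or
        rank_end - rank_start).
    """
    # Budget the remaining tokens for passages, converted to words
    # at ~0.75 words per token, then split evenly across passages.
    num_words = (context_size - num_output_tokens) * 0.75
    return int(num_words // num_passages)

# Example: 4096-token context, 100 tokens reserved for output, 20 passages.
print(estimate_max_word_length(4096, 100, 20))  # → 149
```

The same helper could then back both the LRL and RankGPT prompt builders, as the comment suggests.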

sahel-sh commented 5 months ago

Should happen after SIGIR.