thunlp / InfLLM

The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory"
MIT License

More generation parameters #5

Closed Minami-su closed 8 months ago

Minami-su commented 8 months ago

Could you support more generation parameters, such as `top_p`, `top_k`, `temperature`, `repetition_penalty`, `do_sample`, beam search, etc.?
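For readers unfamiliar with these parameters, here is a minimal, self-contained sketch (not InfLLM's actual code) of how `temperature`, `top_k`, and `top_p` transform next-token logits into a sampling distribution:

```python
import math

def filter_logits(logits, temperature=1.0, top_k=0, top_p=1.0):
    """Illustrative sketch: apply temperature scaling, top-k truncation,
    and top-p (nucleus) filtering to a list of next-token logits.
    Returns a renormalized probability distribution."""
    # Temperature: scale logits before softmax (lower -> sharper distribution).
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Token indices sorted by probability, descending.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order)
    # Top-k: keep only the k most probable tokens (0 disables the filter).
    if top_k > 0:
        keep &= set(order[:top_k])
    # Top-p: keep the smallest prefix whose cumulative probability >= top_p.
    if top_p < 1.0:
        cum, nucleus = 0.0, []
        for i in order:
            nucleus.append(i)
            cum += probs[i]
            if cum >= top_p:
                break
        keep &= set(nucleus)
    # Zero out filtered tokens and renormalize over the survivors.
    masked = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    z = sum(masked)
    return [p / z for p in masked]
```

With `do_sample=True` one would then draw a token from this distribution; `repetition_penalty` is typically applied earlier, by down-weighting logits of tokens already present in the context.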

guyan364 commented 8 months ago

Hello! The FastChat chat CLI has now been integrated; you can check the supported generation parameters in `inf_llm/chat.py`. Beam search requires duplicating the KV cache, which is not supported at the moment.
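To illustrate the constraint mentioned above, here is a toy sketch (an assumption for explanation, not InfLLM's internals) of why beam search needs per-beam copies of the KV cache: after the first step, each beam extends a different prefix, so its cached keys/values diverge and can no longer be shared in place:

```python
import copy

class KVCache:
    """Toy stand-in for a per-layer key/value cache."""
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        # In a real model, k and v would be attention tensors for one step.
        self.keys.append(k)
        self.values.append(v)

def expand_for_beams(cache, num_beams):
    # Greedy decoding keeps a single cache; beam search must duplicate it
    # so each hypothesis can append its own entries independently.
    return [copy.deepcopy(cache) for _ in range(num_beams)]
```

For a memory-offloading scheme like the one in this repo, that duplication multiplies the cached state by the number of beams, which is why it is non-trivial to support.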