the-crypt-keeper / LLooM

Experimental LLM Inference UX to aid in creative writing

Proper vLLM prompt caching (SGLang support?) #8

Open · the-crypt-keeper opened this issue 4 months ago

the-crypt-keeper commented 4 months ago

I've now implemented a vLLM backend, but I can't find any way to control prompt caching across requests, so generation slows down as you get deeper into the document.
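(If vLLM's automatic prefix caching does cover this case, a minimal sketch of enabling it in-process is below, assuming a vLLM version that exposes the enable_prefix_caching option; the model name, prompt, and sampling values are just placeholders. I believe the OpenAI-compatible API server has an equivalent --enable-prefix-caching flag.)

```python
# Sketch only: assumes a vLLM release that supports enable_prefix_caching;
# model, prompt, and sampling settings are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-7b-chat-hf",
    enable_prefix_caching=True,  # reuse KV cache for shared prompt prefixes
)

params = SamplingParams(temperature=0.9, max_tokens=32)

story = "Once upon a time, deep in the forest,"
# Repeated calls sharing a prefix should hit the prefix cache instead of
# recomputing the whole prompt on every request.
for _ in range(3):
    out = llm.generate([story], params)
    story += out[0].outputs[0].text
```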

SGLang, with its RadixAttention cache strategy, should actually be PERFECT for our use case.

Refer specifically to the backend section of the SGLang docs: does this 'just work faster'?

python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --chat-template llama-2
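If RadixAttention works as described, repeated requests that share a prefix should be deduplicated server-side with no client changes. A rough sketch of what our branching calls could look like against that server (assuming the /generate endpoint and payload shape below; sampling values are placeholders):

```python
# Sketch only: assumes the server launched above is reachable on
# localhost:30000 and exposes /generate with this payload shape;
# adjust to the actual SGLang version in use.
import requests

BASE = "http://localhost:30000"
prefix = "Once upon a time, deep in the forest,"

# Several branches off the same prefix: RadixAttention should serve the
# shared prefix from its radix-tree KV cache rather than recomputing it
# for every request.
for i in range(3):
    resp = requests.post(
        f"{BASE}/generate",
        json={
            "text": prefix,
            "sampling_params": {"temperature": 0.9, "max_new_tokens": 32},
        },
    )
    print(f"branch {i}:", resp.json()["text"])
```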