vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

Recovery from OOM #3066

Open Ja1Zhou opened 9 months ago

Ja1Zhou commented 9 months ago

I am instantiating an LLM class for local inference. I noticed that when an OOM error happens in vllm.LLM.llm_engine.step() and I catch it, the previously submitted requests are not aborted and interfere with my next call to LLM.generate. What is the proper way to recover from OOM errors during inference?
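
For reference, a minimal sketch of the pattern being described, using the lower-level LLMEngine loop (add_request / step / abort_request). Exact signatures vary across vLLM versions, and even after aborting, the KV cache may be left in an inconsistent state, so treat this as illustrative rather than a supported recovery path:

```python
import uuid

import torch
from vllm import EngineArgs, LLMEngine, SamplingParams

engine = LLMEngine.from_engine_args(EngineArgs(model="facebook/opt-125m"))

def run_batch(prompts):
    params = SamplingParams(max_tokens=128)
    request_ids = []
    for prompt in prompts:
        rid = str(uuid.uuid4())
        engine.add_request(rid, prompt, params)
        request_ids.append(rid)

    outputs = []
    try:
        while engine.has_unfinished_requests():
            for out in engine.step():
                if out.finished:
                    outputs.append(out)
    except torch.cuda.OutOfMemoryError:
        # Without this, the unfinished requests stay queued inside the
        # engine and would interfere with the next generate() call.
        for rid in request_ids:
            engine.abort_request(rid)
        torch.cuda.empty_cache()
        raise
    return outputs
```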

hmellor commented 3 months ago

@Ja1Zhou did you find a solution for this?

Ja1Zhou commented 3 months ago

> @Ja1Zhou did you find a solution for this?

I didn't. I had to make sure that no OOMs would occur in the first place.
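
The workaround above amounts to sizing the engine conservatively so runtime OOMs never happen. A minimal sketch of the relevant LLM constructor arguments; the values below are placeholders and need tuning per model and GPU:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",
    gpu_memory_utilization=0.80,  # leave headroom below the default 0.90
    max_model_len=4096,           # bound the KV cache size per sequence
    max_num_seqs=64,              # bound the number of concurrent sequences
    swap_space=8,                 # GiB of CPU swap for preempted sequences
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(max_tokens=64),
)
```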

github-actions[bot] commented 4 days ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!