Open Ja1Zhou opened 9 months ago

I am instantiating an LLM class for local inference. I noticed that when an OOM error happens in vllm.LLM.llm_engine.step() and I catch it, the previous requests are not aborted and interfere with my next call to LLM.generate. What is the proper way to recover from OOM errors during inference?
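For reference, here is a rough sketch of the kind of recovery I have in mind. The scheduler queue attributes (waiting/running/swapped) and the engine's abort_request call are assumptions about vLLM internals and may differ across versions; the model name and the generate_with_recovery helper are just placeholders.

```python
import torch
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-2-7b-hf")  # placeholder model
sampling_params = SamplingParams(max_tokens=256)

def abort_unfinished_requests():
    # ASSUMPTION: the scheduler's waiting/running/swapped queues hold the
    # sequence groups of requests that were in flight when the OOM hit.
    # These attributes (and abort_request on the engine) are vLLM internals
    # and may change between versions.
    scheduler = llm.llm_engine.scheduler
    stale_ids = [
        group.request_id
        for queue in (scheduler.waiting, scheduler.running, scheduler.swapped)
        for group in queue
    ]
    if stale_ids:
        llm.llm_engine.abort_request(stale_ids)

def generate_with_recovery(prompts):
    """Generate; on CUDA OOM, drop leftover requests and retry in smaller halves."""
    try:
        return llm.generate(prompts, sampling_params)
    except torch.cuda.OutOfMemoryError:
        abort_unfinished_requests()
        torch.cuda.empty_cache()
        if len(prompts) == 1:
            raise  # a single prompt that still OOMs cannot be split further
        mid = len(prompts) // 2
        return generate_with_recovery(prompts[:mid]) + generate_with_recovery(prompts[mid:])
```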
@Ja1Zhou did you find a solution for this?
I didn't. Had to make sure that no OOMs would occur.
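In practice that meant sizing the engine conservatively up front and feeding prompts in small chunks, roughly like the sketch below. The concrete numbers are illustrative only and depend on the model and hardware; gpu_memory_utilization, max_model_len, max_num_seqs and swap_space are standard LLM constructor options, and chunked_generate is just a helper I made up.

```python
from vllm import LLM, SamplingParams

# Conservative engine settings so a single batch cannot exhaust GPU memory.
llm = LLM(
    model="meta-llama/Llama-2-7b-hf",   # example model
    gpu_memory_utilization=0.85,        # leave headroom below the default 0.9
    max_model_len=4096,                 # cap context length to bound KV-cache size
    max_num_seqs=32,                    # cap concurrent sequences per batch
    swap_space=8,                       # GiB of CPU swap for preempted sequences
)

sampling_params = SamplingParams(max_tokens=256)

def chunked_generate(prompts, chunk_size=32):
    # Feed prompts in small chunks instead of one huge generate() call.
    outputs = []
    for i in range(0, len(prompts), chunk_size):
        outputs.extend(llm.generate(prompts[i:i + chunk_size], sampling_params))
    return outputs
```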
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!