HabanaAI / vllm-fork

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

Remove workaround added to resolve multi-card stall issue #387

Closed SanjuCSudhakaran closed 1 week ago

SanjuCSudhakaran commented 2 weeks ago

This PR removes the additional multiprocessing.Process object that was created as a workaround for the multi-card stall issue.
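For context, a minimal sketch of the pattern being removed: wrapping a worker loop in an extra multiprocessing.Process so the parent is isolated from it. The names (run_engine_loop, run_with_workaround) are hypothetical illustrations, not the actual vLLM code; after this PR the loop runs inline instead of in a spawned child.

```python
import multiprocessing


def run_engine_loop():
    # Hypothetical stand-in for the engine's worker loop.
    return sum(range(10))


def run_with_workaround(queue):
    # Workaround pattern (sketch): run the loop in a separate
    # Process and ship the result back through a Queue.
    queue.put(run_engine_loop())


if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=run_with_workaround, args=(q,))
    p.start()
    p.join()
    print(q.get())  # result computed in the child process

    # After the workaround is removed, the loop runs inline:
    print(run_engine_loop())
```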