vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Usage]: Running fastchat.serve.vllm_worker in WSL Ubuntu 22.04 exits unexpectedly #3589

Closed · maogeigei closed this 1 day ago

maogeigei commented 8 months ago

Your current environment

[Environment details were provided as screenshots.]

How would you like to use vllm

I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
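For context, basic offline inference with vLLM's Python API looks roughly like the sketch below. The model name is a placeholder for illustration, not the model referenced in this issue, and serving through fastchat.serve.vllm_worker would additionally require FastChat's controller and worker setup as described in the FastChat documentation.

```python
# Minimal sketch of offline inference with vLLM's Python API.
# The model name below is a placeholder; substitute the model you want to run.
from vllm import LLM, SamplingParams

# Load the model into the vLLM engine.
llm = LLM(model="facebook/opt-125m")

# Sampling configuration for generation.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Generate completions for one or more prompts.
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)
```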

github-actions[bot] commented 1 month ago

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions[bot] commented 1 day ago

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!