vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Misc]: Nobody reviews my PR #9150

Open CharlesRiggins opened 1 month ago

CharlesRiggins commented 1 month ago

Anything you want to discuss about vllm.

https://github.com/vllm-project/vllm/pull/8797


joerunde commented 1 month ago

Hey @CharlesRiggins, per the latest news in the README, there is now a public Slack you can join to collaborate with others:

[2024/10] We have just created a developer slack (slack.vllm.ai) focusing on coordinating contributions and discussing features. Please feel free to join us there!

Might be helpful to ping in there as well when you need a review!