TorchMoE / MoE-Infinity

PyTorch library for cost-effective, fast and easy serving of MoE models.
Apache License 2.0

Question: Support for Continuous Batching and Asynchronous Requests #25

Open Msiavashi opened 5 months ago

Msiavashi commented 5 months ago

Hi. I'm new to this LLM world. I have a few questions regarding the engine. Does it support continuous batching? I'm asking because I'm trying to set a request per second rate and wanted to know if I should implement my own batching strategy or if the framework provides any batching functionalities.

I see from the paper: "Multiple sequences are batched until they either reach a maximum batch size of 16 or a maximum waiting time of one second, both parameters referenced from AlpaServe."

According to this, is there any async version of the engine that allows adding requests at varying rates?

Thank you.

drunkcoding commented 5 months ago

The batch engine is not provided yet. Auto-batching, which flushes a batch once it reaches a maximum batch size or a maximum waiting delay, is the simplest way to implement this. Continuous batching is WIP.
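For readers implementing their own batching layer in the meantime, the auto-batching policy described above (and the batch-size-16 / one-second values quoted from the paper) can be sketched as follows. This is a minimal illustration, not part of MoE-Infinity's API; the `AutoBatcher` class and its method names are hypothetical.

```python
import queue
import time

class AutoBatcher:
    """Hypothetical auto-batcher: flushes a batch when it reaches
    max_batch_size or when the oldest queued request has waited
    max_delay seconds, whichever comes first. Requests may be
    submitted asynchronously from any thread."""

    def __init__(self, max_batch_size=16, max_delay=1.0):
        self.max_batch_size = max_batch_size
        self.max_delay = max_delay
        self._queue = queue.Queue()

    def submit(self, request):
        # Non-blocking; callers can enqueue at any arrival rate.
        self._queue.put(request)

    def next_batch(self):
        """Block until a batch is ready, then return it as a list."""
        batch = [self._queue.get()]  # wait for the first request
        deadline = time.monotonic() + self.max_delay
        while len(batch) < self.max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break  # max delay reached
            try:
                batch.append(self._queue.get(timeout=remaining))
            except queue.Empty:
                break  # no more requests arrived within the window
        return batch
```

With 20 queued requests and the defaults above, the first call to `next_batch()` returns 16 requests immediately, and the second returns the remaining 4 after the delay window expires.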