Closed · Jerry-Kon closed this 6 months ago
ChatGLM is now supported; the llama backend (HF, vLLM) has an existing bug
Added a batching function to the vLLM backend and fixed the bugs.
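For context, a minimal sketch of what batched generation with a vLLM backend typically looks like, using vLLM's standard `LLM.generate` API. The model name and sampling settings below are placeholders, not the configuration from this PR:

```python
from vllm import LLM, SamplingParams

# Load the model once; vLLM schedules and batches requests internally.
# Model choice is hypothetical, not taken from this PR.
llm = LLM(model="THUDM/chatglm3-6b", trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

# Passing a list of prompts lets vLLM process them as one batch in a single call.
prompts = [
    "What is the capital of France?",
    "Explain continuous batching in one sentence.",
]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```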
Also, please resolve the merge conflict.
Can you fix this? I'm waiting for this feature.
Closing in favor of https://github.com/InftyAI/llmlite/pull/50.