🚀 The feature, motivation and pitch

I would like to serve smaller models (e.g. facebook/opt-125m) using vLLM on TPU. I currently can't, because the Pallas backend raises `NotImplementedError: Head size must be a multiple of 128`. I can't find a reason why this limitation is in place, and it would be great to be able to remove it, either behind a flag or entirely. If my understanding is incorrect and there is a good reason for the limitation, please let me know! Thanks for your work on vLLM.
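For context, a minimal reproduction sketch using vLLM's standard offline inference API; running this on a TPU host (where the Pallas backend is selected) is what triggers the error:

```python
from vllm import LLM

# facebook/opt-125m has hidden size 768 split across 12 attention heads,
# giving a head size of 64. Since 64 is not a multiple of 128, the Pallas
# (TPU) backend raises NotImplementedError at startup.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(["Hello, my name is"])
for out in outputs:
    print(out.outputs[0].text)
```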
Alternatives
No response
Additional context
No response
Before submitting a new issue...
[X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.