Nero10578 opened 1 month ago
It's mostly due to the QuIP# kernels. I'll look into extending support to P100s (we used to support them before) tomorrow.
Ah I see. So for now it only fails when using the QuIP# kernels? I was thinking that if it were as easy as changing setup.py, and the other quantization methods would then work, it's a non-issue. I just wanted to make sure whether it will work at all, or whether there is a bigger change in Aphrodite as a whole that makes it incompatible with P100s.
I'm going to put together either a 4xP100 or 4xP40 system to test the larger models and higher-context models that just came out, so I'm just trying to make sure the stuff I want to run on them works first lol. The Tesla P100s are a great deal because they're 16GB cards with over 2x the memory bandwidth of the P40s. Although if speed is no concern, I guess the P40s are a better deal at 24GB.
Currently Aphrodite is working great on my 2x3090 so thanks for your work on this project!
I did try it myself on the dev branch, but I'm waaaay out of my depth. I got it to build using the runtime and exporting TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX", but actually trying to load a model results in "RuntimeError: CUDA error: no kernel image is available for execution on the device". As near as I understand, PyTorch does still ship kernels for the P100, though, so I'm unsure what's going wrong here.
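For anyone else hitting this: the "no kernel image is available" error means some compiled extension in the process has no binary (or compatible PTX) for the device's compute capability (sm_60 for the P100), even if PyTorch itself ships sm_60 kernels. A minimal sketch of the coverage rule TORCH_CUDA_ARCH_LIST implies (pure Python, the helper name is mine, not part of any library):

```python
def covers_device(arch_list: str, capability: tuple[int, int]) -> bool:
    """Return True if a TORCH_CUDA_ARCH_LIST string covers `capability`.

    A plain entry like "6.0" produces a binary only for that exact arch;
    a "+PTX" suffix also embeds PTX, which newer devices can JIT-compile.
    """
    major, minor = capability
    for entry in arch_list.split():
        has_ptx = entry.endswith("+PTX")
        ver = entry.removesuffix("+PTX")
        a_major, a_minor = (int(x) for x in ver.split("."))
        if (a_major, a_minor) == (major, minor):
            return True  # exact binary for this device
        if has_ptx and (a_major, a_minor) <= (major, minor):
            return True  # older PTX can be JIT-compiled for a newer device
    return False

# P100 reports capability (6, 0):
print(covers_device("6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX", (6, 0)))  # True
print(covers_device("6.1 7.0 7.5", (6, 0)))                          # False
```

So the arch list above should cover the P100 at build time; if the error still appears, the likely culprit is a prebuilt wheel or a sub-extension that was compiled with its own, narrower arch list.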
Please check #444. It builds for sm_60, but I haven't tested if it actually runs.
I'm waiting on cards from eBay but will try it when I get them. Thanks!
🚀 The feature, motivation and pitch
In setup.py it checks for compute capability 6.1 as a minimum, and that requirement is also stated in the README. Is there a technical reason compute capability 6.0 is not supported? Is it for INT8 support?
I ask this because nothing inherently stops vLLM, which Aphrodite is forked from, from working with compute capability 6.0 on the Tesla P100 cards, as can be seen in this discussion: https://github.com/vllm-project/vllm/issues/963#issuecomment-1863147987
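To make the request concrete, here is a sketch of the kind of minimum-capability gate being asked about, with the floor lowered from (6, 1) to (6, 0). This is hypothetical and not the actual Aphrodite setup.py code; lowering the check only helps if every custom kernel is also compiled for sm_60 and actually runs there.

```python
# Hypothetical minimum-capability gate (illustrative, not Aphrodite's code).
MIN_CAPABILITY = (6, 0)  # was (6, 1); a Tesla P100 reports (6, 0)

def check_capability(major: int, minor: int) -> None:
    """Reject devices below the minimum compute capability.

    In a real build script, (major, minor) would come from
    torch.cuda.get_device_capability().
    """
    if (major, minor) < MIN_CAPABILITY:
        raise RuntimeError(
            f"Compute capability {major}.{minor} detected, but "
            f"{MIN_CAPABILITY[0]}.{MIN_CAPABILITY[1]} or newer is required."
        )

check_capability(6, 0)  # P100: passes with the relaxed floor
```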
Alternatives
No response
Additional context
No response