PygmalionAI / aphrodite-engine

PygmalionAI's large-scale inference engine
https://pygmalion.chat
GNU Affero General Public License v3.0
606 stars 78 forks

[Feature]: Is there a reason compute capability 6.1 is the minimum? Would compute capability 6.0 on the P100 not work? #413

Open Nero10578 opened 1 month ago

Nero10578 commented 1 month ago

🚀 The feature, motivation and pitch

In setup.py, the build checks for compute capability 6.1 as a minimum, and that requirement is also stated in the README. Is there a technical reason compute capability 6.0 is not supported? Is it for INT8 support?

I ask because there is nothing inherently stopping vLLM, which Aphrodite is forked from, from working with compute capability 6.0 on the Tesla P100 cards, as can be seen in this discussion: https://github.com/vllm-project/vllm/issues/963#issuecomment-1863147987

if _is_cuda() and not compute_capabilities:
    # If TORCH_CUDA_ARCH_LIST is not defined or empty, target all available
    # GPUs on the current machine.
    device_count = torch.cuda.device_count()
    for i in range(device_count):
        major, minor = torch.cuda.get_device_capability(i)
        if major < 6 or (major == 6 and minor < 1):
            raise RuntimeError(
                "GPUs with compute capability below 6.1 are not supported.")
        compute_capabilities.add(f"{major}.{minor}")
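
For anyone wanting to experiment before an official fix, here is a minimal sketch of the relaxed check, assuming nothing outside the QuIP# kernels actually needs 6.1 (which is exactly what this issue is asking):

if _is_cuda() and not compute_capabilities:
    # Same auto-detection as above, but only reject pre-Pascal GPUs,
    # so compute capability 6.0 (P100) passes.
    device_count = torch.cuda.device_count()
    for i in range(device_count):
        major, minor = torch.cuda.get_device_capability(i)
        if major < 6:
            raise RuntimeError(
                "GPUs with compute capability below 6.0 are not supported.")
        compute_capabilities.add(f"{major}.{minor}")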

Alternatives

No response

Additional context

No response

AlpinDale commented 1 month ago

It's mostly due to the QuIP# kernels. I'll look into extending support to P100s (we used to support them before) tomorrow.
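
One way that could look, as a rough sketch with hypothetical names (quip_extension and ext_modules stand in for whatever setup.py actually wires up), is to skip the QuIP# kernels on pre-6.1 targets instead of rejecting the whole build:

# Hypothetical sketch: only build the QuIP# extension when every
# requested architecture supports it; sm_60 builds just skip it.
min_cc = min(float(cc.split("+")[0]) for cc in compute_capabilities)
if min_cc >= 6.1:
    ext_modules.append(quip_extension)
else:
    print(f"Skipping QuIP# kernels: compute capability {min_cc} < 6.1")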

Nero10578 commented 1 month ago

> It's mostly due to the QuIP# kernels. I'll look into extending support to P100s (we used to support them before) tomorrow.

Ah, I see. So for now it only breaks when using the QuIP# kernels? I was thinking that if it were as simple as changing setup.py, with the other quantization methods still working, then it's a non-issue. I just wanted to make sure whether it will work at all, or whether there is a larger change in Aphrodite as a whole that makes it incompatible with P100s.

I'm going to put together either a 4xP100 or 4xP40 system to test out the larger models and higher-context models that just came out, so I'm trying to make sure the stuff I want to run on them works first lol. The Tesla P100s are a great deal because they're 16GB cards with over 2x the memory bandwidth of the P40. Although if speed is no concern, I guess the P40 is the better deal with 24GB.

Currently Aphrodite is working great on my 2x3090, so thanks for your work on this project!

dirkson commented 3 weeks ago

I did try myself on the dev branch, but I'm waaaay out of my depth. I got it to build by using the runtime and exporting TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0 8.6 8.9 9.0+PTX", but actually trying to load a model results in "RuntimeError: CUDA error: no kernel image is available for execution on the device". As far as I understand, PyTorch does still ship kernels for the P100, so I'm unsure what's going wrong here.
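
In case it helps narrow this down, that error generally means the binary being run has no compiled image for the card's architecture. Two standard PyTorch calls (nothing Aphrodite-specific) show whether PyTorch itself is the problem:

import torch

# The capability the card reports; a P100 should print (6, 0).
print(torch.cuda.get_device_capability(0))

# The architectures this PyTorch build ships kernels for, e.g.
# ['sm_60', 'sm_70', ...]. If 'sm_60' is absent here, PyTorch itself
# can raise this error before Aphrodite's kernels are even reached.
print(torch.cuda.get_arch_list())

If PyTorch does list sm_60, the missing image is more likely in Aphrodite's compiled extension; running cuobjdump --list-elf on the built .so (the exact filename varies by build) would show whether sm_60 cubins actually made it in.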

AlpinDale commented 2 weeks ago

Please check #444. It builds for sm_60, but I haven't tested if it actually runs.

Nero10578 commented 1 week ago

> Please check #444. It builds for sm_60, but I haven't tested if it actually runs.

I'm waiting on cards from eBay but will try it when I get them. Thanks!
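
For anyone else testing #444 in the meantime, here is a minimal smoke test, assuming Aphrodite keeps the vLLM-style Python entrypoints (the model name is just an example small model):

from aphrodite import LLM, SamplingParams  # assumed vLLM-style entrypoint

# Any small model exercises the kernels enough to reproduce
# "no kernel image is available" if sm_60 support is still missing.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(["Hello, P100!"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)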