databricks / megablocks

Apache License 2.0
1.17k stars 169 forks

Update setup.py to support multiple device capabilities #56

Closed simon-mo closed 9 months ago

simon-mo commented 9 months ago

Currently the setup.py code only targets the compute capability of the current device, making it difficult to build wheels that target many architectures. We are facing this problem in distributing vLLM Docker images.

This PR adds a block that recognizes the TORCH_CUDA_ARCH_LIST environment variable, which torch's CUDA extension machinery interprets to build for multiple architectures.
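For context, here is a simplified sketch of how a TORCH_CUDA_ARCH_LIST-style string is typically translated into per-architecture nvcc flags. This is illustrative only: the real logic lives in `torch.utils.cpp_extension` (which also handles named architectures like "Volta"), and `parse_arch_list` is a hypothetical helper, not part of this PR.

```python
def parse_arch_list(arch_list: str) -> list[str]:
    """Turn a TORCH_CUDA_ARCH_LIST-style string (e.g. "7.0 7.5 9.0+PTX")
    into nvcc -gencode flags, roughly mimicking torch's behavior."""
    flags = []
    for arch in arch_list.replace(";", " ").split():
        ptx = arch.endswith("+PTX")
        num = arch.removesuffix("+PTX").replace(".", "")
        # Embed SASS (machine code) for this specific SM architecture.
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
        if ptx:
            # Also embed PTX so newer GPUs can JIT-compile at load time.
            flags.append(f"-gencode=arch=compute_{num},code=compute_{num}")
    return flags
```

With `"7.0 9.0+PTX"`, this yields SASS for sm_70 and sm_90 plus PTX for compute_90, which is why the 9.0+PTX entry produces both an elf and a ptx section in the fatbin.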

tgale96 commented 9 months ago

LGTM! Thanks for the contribution!

@mvpatel2000 would you mind verifying this as well?

tgale96 commented 9 months ago

I will merge to unblock you and we can revise later if necessary :)

simon-mo commented 9 months ago

You can verify it by supplying TORCH_CUDA_ARCH_LIST='7.0 7.5 8.0 8.6 8.9 9.0+PTX' (and setting NVCC_THREADS=2 and MAX_JOBS=4 to speed up the build).

The shared object should then contain kernels for each of the requested architectures.

cuobjdump megablocks_ops.cpython-310-x86_64-linux-gnu.so

Fatbin elf code:
================
arch = sm_70
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_75
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_80
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_86
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_89
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin elf code:
================
arch = sm_90
code version = [1,7]
host = linux
compile_size = 64bit

Fatbin ptx code:
================
arch = sm_90
code version = [8,3]
host = linux
compile_size = 64bit
compressed
ptxasOptions = -v
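As a quick sanity check, the listing above can be reduced to just the distinct SM targets. A sketch (the heredoc below is a trimmed copy of the cuobjdump output so the pipeline runs without a CUDA toolkit; against the real build you would pipe `cuobjdump megablocks_ops.cpython-310-x86_64-linux-gnu.so` instead):

```shell
# Extract the distinct 'arch =' lines, as you would from the
# cuobjdump output of the built extension.
grep 'arch =' <<'EOF' | sort -u
arch = sm_70
arch = sm_75
arch = sm_80
arch = sm_86
arch = sm_89
arch = sm_90
arch = sm_90
EOF
```

One line per embedded architecture confirms the fatbin covers the full TORCH_CUDA_ARCH_LIST.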

tgale96 commented 9 months ago

Excellent, thank you! Will you install from the git repo, or would you like me to cut an updated PyPI package?

simon-mo commented 9 months ago

Thanks! I'm just going to install from the git repo for now. No need to cut a release.

tgale96 commented 9 months ago

Perfect! I can cut a version as part of fixing https://github.com/vllm-project/vllm/issues/2032 that includes this change as well.