microsoft / DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/
Apache License 2.0
34.68k stars · 4.04k forks

[REQUEST] build prebuilt wheels #5308

Open AlongWY opened 5 months ago

AlongWY commented 5 months ago

I would like you to release prebuilt wheels.

I have also built a GitHub Actions workflow that automatically builds wheels, which you can use, but I can't build the evoformer op because it requires a real CUDA device.

loadams commented 5 months ago

Hi @AlongWY - part of the issue is the wide range of wheels we would need to publish supporting different torch and cuda/rocm/etc versions. We have provided dockerfiles in the past, but the same issue of what matrix to test and support is quite difficult. That's not to say we won't add this in the future, but that choosing what to provide wheels for is difficult as everyone will want their combinations to be supported.

AlongWY commented 5 months ago

Yes, so I compiled almost all ops; only evoformer and sparse attention wouldn't compile. I built with the matrix below, which may be a good starting point for automatically building wheels.

matrix:

```yaml
python-version: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
torch-version: ["1.12.1", "1.13.1", "2.0.1", "2.1.2", "2.2.1"]
cuda-version: ["11.8.0", "12.1.1"]
```

Build options:

```shell
TORCH_CUDA_ARCH_LIST="6.1;7.0;7.5;8.0;8.6;8.9;9.0"
DS_BUILD_OPS=1
DS_BUILD_SPARSE_ATTN=0      # does not support torch 2.0
DS_BUILD_EVOFORMER_ATTN=0   # needs a real CUDA device? may be a bug?
```
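For completeness, a sketch of how these flags could feed a wheel build. The `setup.py build_ext ... bdist_wheel` invocation follows DeepSpeed's advanced-install documentation; verify it against the current repo before relying on it.

```shell
# Hedged sketch: pre-build a DeepSpeed wheel with the flags above.
# Run from a DeepSpeed source checkout.
export TORCH_CUDA_ARCH_LIST="6.1;7.0;7.5;8.0;8.6;8.9;9.0"
export DS_BUILD_OPS=1
export DS_BUILD_SPARSE_ATTN=0
export DS_BUILD_EVOFORMER_ATTN=0
python setup.py build_ext -j8 bdist_wheel   # wheel lands in dist/
```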

loadams commented 5 months ago

@AlongWY - for the evoformer attention op, you would need to install cutlass and deepspeed-kernels.

But I'm curious what the advantage of a pre-built wheel with the ops installed is: if a user grabs the wrong one, or has other environment issues, they will hit those problems regardless of whether they use a wheel with pre-built ops or the wheel from PyPI with JIT-compiled ops.

AlongWY commented 5 months ago

I installed cutlass and deepspeed-kernels but still can't compile: the build uses torch.cuda.get_device_properties(0) to detect the real device, so it fails on a machine without a GPU.
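One way to sidestep the device probe on GPU-less build machines is to honor an explicit TORCH_CUDA_ARCH_LIST before falling back to torch.cuda.get_device_properties(0). A minimal sketch; the helper name and fallback behavior are hypothetical, not DeepSpeed's actual build logic:

```python
import os


def get_arch_list():
    """Hypothetical helper: prefer an explicit arch list over probing a GPU."""
    env = os.environ.get("TORCH_CUDA_ARCH_LIST")
    if env:
        return [a.strip() for a in env.split(";") if a.strip()]
    # Fall back to probing the device, which raises on GPU-less build hosts.
    try:
        import torch
        props = torch.cuda.get_device_properties(0)
        return [f"{props.major}.{props.minor}"]
    except Exception:
        return []


os.environ["TORCH_CUDA_ARCH_LIST"] = "6.1;7.0;7.5;8.0;8.6;8.9;9.0"
print(get_arch_list())  # → ['6.1', '7.0', '7.5', '8.0', '8.6', '8.9', '9.0']
```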

Advantages of Pre-Built Wheel:

  1. Convenience: No additional compilation steps required for the user.
  2. Performance: Saves the time and resources needed for JIT compilation.
  3. Plug-and-Play: Especially beneficial for users lacking the environment or resources for compilation.

Advantages of JIT-Compiled Wheel:

  1. Flexibility: Adapts to different users' specific environments.
  2. Compatibility: Reduces issues due to system mismatches.
  3. Customization: Tailored for specific system environments, enhancing operational efficiency.

loadams commented 5 months ago

@AlongWY I mostly agree, though pre-built wheels don't help users who lack the underlying hardware to run on, and they are more likely to assume certain features will work and then hit errors later if they don't. We've also observed little difference in speed between pre-built wheels and JIT compilation.

However, this would require additional testing and build support, since even the matrix listed above yields around 50 different wheels. At least for now, it is unlikely that we will have the bandwidth to support this.
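For reference, the raw product of the matrix listed above can be counted directly. It comes to 60 combinations; the "50 different wheels" figure presumably excludes some unsupported pairs (e.g. older torch releases built against only one CUDA version, an assumption on my part):

```python
from itertools import product

# Matrix from earlier in the thread.
python_versions = ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
torch_versions = ["1.12.1", "1.13.1", "2.0.1", "2.1.2", "2.2.1"]
cuda_versions = ["11.8.0", "12.1.1"]

combos = list(product(python_versions, torch_versions, cuda_versions))
print(len(combos))  # → 60 raw combinations before pruning unsupported pairs
```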