microsoft / BitBLAS

BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment.

Add `torch` as a requirement #22

Closed · mgoin closed this 2 months ago

mgoin commented 2 months ago

Currently the bitblas package will not run when installed from a wheel, because torch is not declared as a dependency of the package even though it is clearly needed at import time.

Example reproduction of the bug:

> wget https://github.com/microsoft/BitBLAS/releases/download/v0.0.1/bitblas-0.0.1dev0+ubuntu.20.4.cu120-py3-none-any.whl
> pip install bitblas-0.0.1dev0+ubuntu.20.4.cu120-py3-none-any.whl
> python benchmark/operators/benchmark_bitblas_matmul.py
Traceback (most recent call last):
  File "/home/michael/code/BitBLAS/benchmark/operators/benchmark_bitblas_matmul.py", line 4, in <module>
    from bitblas.utils.target_detector import auto_detect_nvidia_target
  File "/home/michael/venvs/bitblas/lib/python3.10/site-packages/bitblas/__init__.py", line 19, in <module>
    from . import gpu  # noqa: F401
  File "/home/michael/venvs/bitblas/lib/python3.10/site-packages/bitblas/gpu/__init__.py", line 7, in <module>
    from .fallback import Fallback  # noqa: F401
  File "/home/michael/venvs/bitblas/lib/python3.10/site-packages/bitblas/gpu/fallback.py", line 28, in <module>
    from ..base import normalize_prim_func, try_inline
  File "/home/michael/venvs/bitblas/lib/python3.10/site-packages/bitblas/base/__init__.py", line 16, in <module>
    from .transform import ApplyDefaultSchedule, ApplyFastTuning
  File "/home/michael/venvs/bitblas/lib/python3.10/site-packages/bitblas/base/transform.py", line 20, in <module>
    from .utils import fast_tune, fast_tune_with_dynamic_range
  File "/home/michael/venvs/bitblas/lib/python3.10/site-packages/bitblas/base/utils.py", line 22, in <module>
    from bitblas.utils import tensor_replace_dp4a, tensor_remove_make_int4
  File "/home/michael/venvs/bitblas/lib/python3.10/site-packages/bitblas/utils/__init__.py", line 4, in <module>
    from .tensor_adapter import tvm_tensor_to_torch  # noqa: F401
  File "/home/michael/venvs/bitblas/lib/python3.10/site-packages/bitblas/utils/tensor_adapter.py", line 6, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'

"Fix" by just manually installing torch:

> pip install torch
> python benchmark/operators/benchmark_bitblas_matmul.py
Time cost is: 0.192 ms
Time cost is: 0.437 ms
Time cost is: 0.148 ms
Time cost is: 0.581 ms
Time cost is: 0.583 ms
...
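The underlying fix is to declare torch in the package metadata so pip installs it alongside bitblas. Below is a minimal sketch of what that could look like, assuming a setuptools-based setup.py; the names, version strings, and any pinning are illustrative only, and the actual change may instead edit requirements.txt or pyproject.toml:

```python
# Hypothetical setup.py sketch: declare torch so pip resolves it automatically.
# Package name, version, and constraints here are illustrative, not the exact
# metadata used by BitBLAS.
from setuptools import setup, find_packages

setup(
    name="bitblas",
    version="0.0.1.dev0",
    packages=find_packages(),
    install_requires=[
        "torch",  # imported at module load time (bitblas/utils/tensor_adapter.py)
    ],
)
```

With torch listed under `install_requires`, installing the wheel pulls it in automatically and the `ModuleNotFoundError` above no longer occurs.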
LeiWang1999 commented 2 months ago

LGTM, Thanks for your contribution! @mgoin