mistralai / mistral-inference

Official inference library for Mistral models
https://mistral.ai/
Apache License 2.0

[BUG: AttributeError: module 'torch.library' has no attribute 'custom_op' #222

Open · mruhlmannGit opened this issue 2 months ago

mruhlmannGit commented 2 months ago

Python -VV

Python -vv

Pip Freeze

Pip freeze

Reproduction Steps

docker build -t llm-mistral7b .

using the Dockerfile provided in the repository.

Expected Behavior

In the Dockerfile, the pinned PyTorch version is 2.1.1, but torch.library.custom_op requires PyTorch >= 2.4.0. The Dockerfile appears to be outdated.
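A quick way to confirm the mismatch inside the container is to check the installed torch version and whether the attribute exists. This is a minimal sketch, assuming torch is importable in the image; torch.library.custom_op was added in PyTorch 2.4.0:

```python
# Minimal diagnostic sketch (assumes torch is importable in the image).
# torch.library.custom_op only exists from PyTorch 2.4.0 onward, so on the
# 2.1.1 image pinned by the Dockerfile this prints False.
import torch

print("torch version:", torch.__version__)
print("has torch.library.custom_op:", hasattr(torch.library, "custom_op"))
```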

Additional Context

docker run -it llm-mistral7b /bin/bash

The HF_TOKEN environment variable is not set or empty, not logging to Hugging Face.

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/usr/local/lib/python3.10/dist-packages/vllm/__init__.py", line 3, in <module>
    from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/arg_utils.py", line 11, in <module>
    from vllm.config import (CacheConfig, ConfigFormat, DecodingConfig,
  File "/usr/local/lib/python3.10/dist-packages/vllm/config.py", line 12, in <module>
    from vllm.model_executor.layers.quantization import QUANTIZATION_METHODS
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/__init__.py", line 1, in <module>
    from vllm.model_executor.parameter import (BasevLLMParameter,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/parameter.py", line 7, in <module>
    from vllm.distributed import get_tensor_model_parallel_rank
  File "/usr/local/lib/python3.10/dist-packages/vllm/distributed/__init__.py", line 1, in <module>
    from .communication_op import *
  File "/usr/local/lib/python3.10/dist-packages/vllm/distributed/communication_op.py", line 6, in <module>
    from .parallel_state import get_tp_group
  File "/usr/local/lib/python3.10/dist-packages/vllm/distributed/parallel_state.py", line 98, in <module>
    @torch.library.custom_op("vllm::inplace_all_reduce", mutates_args=["tensor"])
AttributeError: module 'torch.library' has no attribute 'custom_op'
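For context, the same AttributeError can be reproduced outside of vLLM on any PyTorch build older than 2.4.0. The op name below ("demo::inplace_add") is hypothetical and only mirrors the vllm::inplace_all_reduce registration pattern seen in the traceback:

```python
# Hypothetical, minimal reproduction of the failing pattern (not vLLM code).
# On PyTorch < 2.4.0 the attribute lookup torch.library.custom_op raises
# AttributeError before the decorator is even applied; on >= 2.4.0 the op
# registers successfully.
import torch


@torch.library.custom_op("demo::inplace_add", mutates_args=["x"])
def inplace_add(x: torch.Tensor, y: torch.Tensor) -> None:
    # In-place mutation of x, declared via mutates_args above.
    x.add_(y)
```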

Suggested Solutions

No response