intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

[utils] invalidInputError during RuntimeError #11279

Open raj-ritu17 opened 3 weeks ago

raj-ritu17 commented 3 weeks ago

I am using the ipex-llm Docker image for inference, but at inference time it raises errors from the util files.

Below is the log:

------------------------------------------------------------------------------------------------------------------------
          Inferencing ./samples/customer_sku_transformation.txt ...
------------------------------------------------------------------------------------------------------------------------
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
2024-06-10 07:35:29,999 - ipex_llm.utils.common.log4Error - ERROR -

****************************Usage Error************************
intel_extension_for_pytorch has already been automatically imported. Please avoid importing it again!
2024-06-10 07:35:29,999 - ipex_llm.utils.common.log4Error - ERROR -

****************************Call Stack*************************
Traceback (most recent call last):
  File "/workspace/./inference.py", line 4, in <module>
    from utils import LLM
  File "/workspace/utils/__init__.py", line 2, in <module>
    from .llm import LLM
  File "/workspace/utils/llm.py", line 26, in <module>
    from ipex_llm.transformers.qlora import get_peft_model, prepare_model_for_kbit_training
  File "/usr/local/lib/python3.11/dist-packages/ipex_llm/__init__.py", line 34, in <module>
    ipex_importer.import_ipex()
  File "/usr/local/lib/python3.11/dist-packages/ipex_llm/utils/ipex_importer.py", line 101, in import_ipex
    log4Error.invalidInputError(False,
  File "/usr/local/lib/python3.11/dist-packages/ipex_llm/utils/common/log4Error.py", line 32, in invalidInputError
    raise RuntimeError(errMsg)
RuntimeError: intel_extension_for_pytorch has already been automatically imported. Please avoid importing it again!
make: *** [Makefile:162: infer] Error 1
qiyuangong commented 3 weeks ago

Hi @raj-ritu17, this error is raised by a duplicate import of intel_extension_for_pytorch.

Please remove the `import intel_extension_for_pytorch` line from inference.py.
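
In other words, let ipex_llm perform the IPEX import itself. A minimal sketch of the intended import pattern (assuming an XPU build of ipex-llm):

```python
# Wrong: an explicit IPEX import alongside ipex_llm trips the
# duplicate-import guard shown in the traceback above.
# import intel_extension_for_pytorch as ipex   # <- remove lines like this

# Right: importing ipex_llm auto-imports intel_extension_for_pytorch,
# so downstream imports such as the qlora helpers work on their own.
import ipex_llm
from ipex_llm.transformers.qlora import get_peft_model, prepare_model_for_kbit_training
```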

raj-ritu17 commented 3 weeks ago

@qiyuangong, we are not importing 'intel_extension_for_pytorch' in inference.py; it only needs the LLM class from utils. Here is our sample code:

import os
import fire

from utils import LLM

def main(
        # model/data params
        base_model: str="",
        peft_model: str=None,
        prompt_template_name: str="",
        quantization: bool = True,
        context_length: int = 2048,
        new_tokens_ratio: float = 1,
        input_path: str="",
        output_path: str=None,
        input_ext: str=None,
        output_ext=None,
        warm_up: bool=False,
        deterministic: bool=True,
        verbose: int=0
    ):

    llm = LLM( base_model=base_model,
               peft_model=peft_model,
               prompt_template_name=prompt_template_name,
               quantization=quantization,
               context_length=context_length,
               verbose=verbose
            )

if __name__ == "__main__":
    fire.Fire(main)

Just my thought: I guess it comes from the internal utils import (utils/llm.py imports ipex_llm).
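
One way to test that guess (a hypothetical diagnostic, not part of our script): check sys.modules right before the utils import to see whether something earlier already pulled IPEX in:

```python
import sys

# If this prints True before `from utils import LLM` runs, some earlier
# import already loaded IPEX, and ipex_llm's duplicate-import guard
# will fire when utils/llm.py imports ipex_llm.transformers.qlora.
print("intel_extension_for_pytorch" in sys.modules)
print("ipex_llm" in sys.modules)
```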

qiyuangong commented 3 weeks ago

> we are not importing 'intel_extension_for_pytorch' in inference.py, it only needs the LLM class from utils ... just my thought, I guess it comes from the internal utils import

Yes. You are right. This error has been fixed.

You can install an earlier version to avoid this error:


```
pip install --pre --upgrade ipex-llm[xpu]==2.1.0b20240605 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```
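
After installing, you can confirm which build is active (a minimal check using the standard library):

```python
from importlib.metadata import version

# Should print 2.1.0b20240605 if the pin took effect.
print(version("ipex-llm"))
```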
raj-ritu17 commented 3 weeks ago

I got this:

INFO: pip is looking at multiple versions of ipex-llm[xpu] to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement torch==2.1.0a0; extra == "xpu" (from ipex-llm[xpu]) (from versions: 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1)
qiyuangong commented 3 weeks ago

> ERROR: Could not find a version that satisfies the requirement torch==2.1.0a0; extra == "xpu" (from ipex-llm[xpu])

Please upgrade to the latest version, which includes the previously mentioned fix:

```
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```
raj-ritu17 commented 3 weeks ago

@qiyuangong, I think something is broken somewhere.

case 1: if I use the latest xpu package like this: pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

I get the problem below:

>>> import torch
>>> from ipex_llm import optimize_model
/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[2024-06-13 16:11:13,907] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to xpu (auto detect)
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
2024-06-13 16:11:14,110 - ipex_llm.utils.common.log4Error - ERROR -

****************************Usage Error************************
intel_extension_for_pytorch has already been automatically imported. Please avoid importing it again!
2024-06-13 16:11:14,110 - ipex_llm.utils.common.log4Error - ERROR -

****************************Call Stack*************************
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/__init__.py", line 34, in <module>
    ipex_importer.import_ipex()
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/ipex_importer.py", line 70, in import_ipex
    log4Error.invalidInputError(False,
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/common/log4Error.py", line 32, in invalidInputError
    raise RuntimeError(errMsg)

case 2: with the pinned version like this: pip install --pre --upgrade ipex-llm[xpu]==2.1.0b20240605 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

I get the opposite error on a simple import:

>>> import torch
>>> from ipex_llm import optimize_model
/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[2024-06-13 16:18:12,304] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to xpu (auto detect)
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
2024-06-13 16:18:12,543 - root - ERROR - ipex_llm will automatically import intel_extension_for_pytorch.
2024-06-13 16:18:12,543 - ipex_llm.utils.common.log4Error - ERROR -

****************************Usage Error************************
Please import ipex_llm before importing                                                 intel_extension_for_pytorch!
2024-06-13 16:18:12,543 - ipex_llm.utils.common.log4Error - ERROR -

****************************Call Stack*************************
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/__init__.py", line 34, in <module>
    ipex_importer.import_ipex()
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/ipex_importer.py", line 65, in import_ipex
    log4Error.invalidInputError(False,
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/common/log4Error.py", line 32, in invalidInputError
    raise RuntimeError(errMsg)
RuntimeError: Please import ipex_llm before importing                                                 intel_extension_for_pytorch!
>>> import ipex_llm
2024-06-13 16:18:29,707 - root - ERROR - ipex_llm will automatically import intel_extension_for_pytorch.
2024-06-13 16:18:29,707 - ipex_llm.utils.common.log4Error - ERROR -

****************************Usage Error************************
Please import ipex_llm before importing                                                 intel_extension_for_pytorch!
2024-06-13 16:18:29,707 - ipex_llm.utils.common.log4Error - ERROR -

****************************Call Stack*************************
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/__init__.py", line 34, in <module>
    ipex_importer.import_ipex()
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/ipex_importer.py", line 65, in import_ipex
    log4Error.invalidInputError(False,
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/common/log4Error.py", line 32, in invalidInputError
    raise RuntimeError(errMsg)
RuntimeError: Please import ipex_llm before importing                                                 intel_extension_for_pytorch!
>>>
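
Based on the case-2 message, a hedged workaround sketch (not verified): make ipex_llm the very first import in the process, before torch or anything else that might load IPEX:

```python
# Hedged sketch for case 2: the guard wants ipex_llm imported before
# intel_extension_for_pytorch, so import it ahead of torch and anything
# else (e.g. DeepSpeed's accelerator probe) that may pull IPEX in.
import ipex_llm
from ipex_llm import optimize_model
import torch
```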
raj-ritu17 commented 3 weeks ago

@qiyuangong, I was just using these examples on an Arc GPU: https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/PyTorch-Models/Model/llama2

https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/Pipeline-Parallel-Inference

qiyuangong commented 3 weeks ago

If both 2.1.0b20240605 and the latest version still raise errors, please set this environment variable:

export BIGDL_IMPORT_IPEX=0

Or you can try 2.1.0b20240603.
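
For completeness, the same workaround from inside Python (a minimal sketch; the assumption, based on the variable's name, is that BIGDL_IMPORT_IPEX=0 disables ipex_llm's automatic IPEX import, so it must be set before ipex_llm is first imported):

```python
import os

# Assumption: this disables ipex_llm's automatic
# intel_extension_for_pytorch import; it must be set before the
# first `import ipex_llm` anywhere in the process.
os.environ["BIGDL_IMPORT_IPEX"] = "0"

import ipex_llm
from ipex_llm import optimize_model
```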