raj-ritu17 opened this issue 3 weeks ago
Hi @raj-ritu17, this error is raised by a duplicate import of `intel_extension_for_pytorch`. Please remove `import intel_extension_for_pytorch` from `inference.py`.
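For background, the guard that raises this error works by detecting that the extension module is already loaded when `ipex_llm` initializes. Below is a hedged sketch of the general mechanism (inspecting `sys.modules`); the real check lives in `ipex_llm/utils/ipex_importer.py` and may differ in detail. `json` is used here as a stand-in module so the snippet runs anywhere:

```python
import sys

def already_imported(module_name: str) -> bool:
    """Return True if `module_name` has already been imported somewhere."""
    # Every successfully imported module is cached in sys.modules,
    # so membership there is a reliable "was this imported?" check.
    return module_name in sys.modules

import json  # stand-in for intel_extension_for_pytorch

print(already_imported("json"))          # True: json is now loaded
print(already_imported("not_a_module"))  # False: never imported
```

This is why removing the explicit `import intel_extension_for_pytorch` fixes the error: `ipex_llm` wants to be the one that performs that import.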
@qiyuangong, we are not importing `intel_extension_for_pytorch` in `inference.py`; it only uses the `LLM` helper from `utils`. Here is our sample code:
```python
import os
import fire

from utils import LLM


def main(
    # model/data params
    base_model: str = "",
    peft_model: str = None,
    prompt_template_name: str = "",
    quantization: bool = True,
    context_length: int = 2048,
    new_tokens_ratio: float = 1,
    input_path: str = "",
    output_path: str = None,
    input_ext: str = None,
    output_ext=None,
    warm_up: bool = False,
    deterministic: bool = True,
    verbose: int = 0,
):
    llm = LLM(
        base_model=base_model,
        peft_model=peft_model,
        prompt_template_name=prompt_template_name,
        quantization=quantization,
        context_length=context_length,
        verbose=verbose,
    )
```
Just my thought: I guess the import comes from the internal `utils` module.
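One way to confirm where a hidden import comes from is to temporarily hook `builtins.__import__` and record the caller the first time the target module is pulled in. This is a generic debugging sketch, not part of ipex-llm; it uses `json` as a stand-in target so it runs without `intel_extension_for_pytorch` installed:

```python
import builtins
import traceback

_real_import = builtins.__import__
TARGET = "json"  # in the real case, set this to "intel_extension_for_pytorch"
hits = []

def tracing_import(name, *args, **kwargs):
    # The `import` statement always routes through builtins.__import__,
    # even when the module is already cached, so we can log who asked first.
    if name == TARGET and not hits:
        caller = traceback.extract_stack()[-2]
        hits.append(f"{caller.filename}:{caller.lineno}")
    return _real_import(name, *args, **kwargs)

builtins.__import__ = tracing_import
try:
    import json  # noqa: F401 -- simulates the import hidden inside utils
finally:
    builtins.__import__ = _real_import  # always restore the original hook

print(hits)  # one "file:line" entry pointing at the importer
```

Running `inference.py` with `TARGET = "intel_extension_for_pytorch"` and the hook installed before `from utils import LLM` should show exactly which file triggers the import.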
Yes, you are right. This error has been fixed. You can install an earlier version to avoid it:

```shell
pip install --pre --upgrade ipex-llm[xpu]==2.1.0b20240605 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```
I got this:

```
INFO: pip is looking at multiple versions of ipex-llm[xpu] to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement torch==2.1.0a0; extra == "xpu" (from ipex-llm[xpu]) (from versions: 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1)
```
Please upgrade to the latest version, which includes the previously mentioned fix:

```shell
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```
Case 1: if I install the latest (unpinned) xpu package like this:

```shell
pip install --pre --upgrade ipex-llm[xpu] --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

I have the problem below:
```
>>> import torch
>>> from ipex_llm import optimize_model
/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[2024-06-13 16:11:13,907] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to xpu (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
2024-06-13 16:11:14,110 - ipex_llm.utils.common.log4Error - ERROR -
****************************Usage Error************************
intel_extension_for_pytorch has already been automatically imported. Please avoid importing it again!
2024-06-13 16:11:14,110 - ipex_llm.utils.common.log4Error - ERROR -
****************************Call Stack*************************
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/__init__.py", line 34, in <module>
    ipex_importer.import_ipex()
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/ipex_importer.py", line 70, in import_ipex
    log4Error.invalidInputError(False,
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/common/log4Error.py", line 32, in invalidInputError
    raise RuntimeError(errMsg)
```
Case 2: with the pinned older version like this:

```shell
pip install --pre --upgrade ipex-llm[xpu]==2.1.0b20240605 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

I get the opposite error on a simple import:
```
>>> import torch
>>> from ipex_llm import optimize_model
/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[2024-06-13 16:18:12,304] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to xpu (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
2024-06-13 16:18:12,543 - root - ERROR - ipex_llm will automatically import intel_extension_for_pytorch.
2024-06-13 16:18:12,543 - ipex_llm.utils.common.log4Error - ERROR -
****************************Usage Error************************
Please import ipex_llm before importing intel_extension_for_pytorch!
2024-06-13 16:18:12,543 - ipex_llm.utils.common.log4Error - ERROR -
****************************Call Stack*************************
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/__init__.py", line 34, in <module>
    ipex_importer.import_ipex()
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/ipex_importer.py", line 65, in import_ipex
    log4Error.invalidInputError(False,
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/common/log4Error.py", line 32, in invalidInputError
    raise RuntimeError(errMsg)
RuntimeError: Please import ipex_llm before importing intel_extension_for_pytorch!
>>> import ipex_llm
2024-06-13 16:18:29,707 - root - ERROR - ipex_llm will automatically import intel_extension_for_pytorch.
2024-06-13 16:18:29,707 - ipex_llm.utils.common.log4Error - ERROR -
****************************Usage Error************************
Please import ipex_llm before importing intel_extension_for_pytorch!
2024-06-13 16:18:29,707 - ipex_llm.utils.common.log4Error - ERROR -
****************************Call Stack*************************
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/__init__.py", line 34, in <module>
    ipex_importer.import_ipex()
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/ipex_importer.py", line 65, in import_ipex
    log4Error.invalidInputError(False,
  File "/home/rajritu/miniforge3/envs/arcFT/lib/python3.11/site-packages/ipex_llm/utils/common/log4Error.py", line 32, in invalidInputError
    raise RuntimeError(errMsg)
RuntimeError: Please import ipex_llm before importing intel_extension_for_pytorch!
>>>
```
@qiyuangong, I was just using this example on an Arc GPU: https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/PyTorch-Models/Model/llama2
If both 2.1.0b20240605 and the latest version still raise errors, please set this environment variable:

```shell
export BIGDL_IMPORT_IPEX=0
```

Or maybe you can try 2.1.0b20240603.
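For reference, a kill switch like this is typically read once at import time. The sketch below shows how such a gate might work; `should_auto_import` is a hypothetical name for illustration, and the actual check inside ipex-llm may differ:

```python
import os

def should_auto_import() -> bool:
    # Hypothetical sketch of an env-var gate like BIGDL_IMPORT_IPEX:
    # any value other than "0" leaves the automatic import enabled.
    return os.environ.get("BIGDL_IMPORT_IPEX", "1") != "0"

os.environ["BIGDL_IMPORT_IPEX"] = "0"
print(should_auto_import())  # False: auto-import disabled

os.environ["BIGDL_IMPORT_IPEX"] = "1"
print(should_auto_import())  # True: default behavior
```

Because the gate runs during `import ipex_llm`, the variable has to be set before the Python process starts (e.g. via `export` in the shell), not from inside an already-running interpreter after the import.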
I am using the ipex-llm Docker image for inference, but at inference time it hits errors from the util files.

Below is the log: