intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

Unable to import LlamaCpp #11467

Open · abhishekkagautam opened this issue 5 days ago

abhishekkagautam commented 5 days ago

Hi, I am unable to import LlamaCpp in IPEX

Code:

```python
from ipex_llm.langchain.llms import LlamaCpp
```

Error:

```
Cell In[5], line 1
----> 1 from ipex_llm.langchain.llms import LlamaCpp

File ~/Simics_AI/xpu_env/lib/python3.10/site-packages/ipex_llm/utils/ipex_importer.py:76, in custom_ipex_import(name, globals, locals, fromlist, level)
     72 """
     73 Custom import function to avoid importing ipex again
     74 """
     75 if fromlist is not None or '.' in name:
---> 76     return RAW_IMPORT(name, globals, locals, fromlist, level)
     77 # Avoid errors in submodule import
     78 calling = get_calling_package()

File ~/Simics_AI/xpu_env/lib/python3.10/site-packages/ipex_llm/langchain/llms/__init__.py:26
     23 from typing import Dict, Type
     24 from langchain.llms.base import BaseLLM
---> 26 from .bigdlllm import *
     27 from .transformersllm import TransformersLLM
     28 from .transformerspipelinellm import TransformersPipelineLLM

File ~/Simics_AI/xpu_env/lib/python3.10/site-packages/ipex_llm/utils/ipex_importer.py:76, in custom_ipex_import(name, globals, locals, fromlist, level)
     72 """
     73 Custom import function to avoid importing ipex again
     74 """
...
    248 def dec(f: Callable[..., Any] | classmethod[Any, Any, Any] | staticmethod[Any, Any]) -> Any:

PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify
`skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with
`@model_validator`.

For further information visit https://errors.pydantic.dev/2.8/u/root-validator-pre-skip
```
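For context, the final `PydanticUserError` is Pydantic 2.x rejecting a legacy v1-style `@root_validator` somewhere in the imported LangChain wrapper code. The migration the error message asks for can be sketched as below; the `Settings` model and its fields are purely illustrative and not taken from either library:

```python
from pydantic import BaseModel, model_validator


class Settings(BaseModel):
    """Illustrative model; replaces a v1-style @root_validator(pre=False)."""
    n_threads: int = 4
    n_ctx: int = 2048

    @model_validator(mode="after")
    def check_positive(self) -> "Settings":
        # Runs after field validation, like root_validator(skip_on_failure=True)
        if self.n_threads <= 0 or self.n_ctx <= 0:
            raise ValueError("n_threads and n_ctx must be positive")
        return self


print(Settings(n_threads=8).n_ctx)  # 2048
```

When the failing validator lives in library code you cannot edit, pinning an older `pydantic<2` (or upgrading the library to a Pydantic-2-compatible release) is a common workaround.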

shane-huang commented 2 days ago

Our `ipex_llm.langchain.llms` module does not support LlamaCpp integration. You can use TransformersLLM instead. Refer to our LangChain examples for CPU at https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/CPU/LangChain and for GPU at https://github.com/intel-analytics/ipex-llm/tree/main/python/llm/example/GPU/LangChain
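A minimal sketch of the suggested TransformersLLM path, assuming the `from_model_id` constructor pattern used in the linked LangChain examples; the model id and generation kwargs below are placeholders, not a recommendation:

```python
# Sketch only: requires ipex-llm installed; model id and kwargs are placeholders.
from ipex_llm.langchain.llms import TransformersLLM

llm = TransformersLLM.from_model_id(
    model_id="meta-llama/Llama-2-7b-chat-hf",  # any HF model you have locally
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True},
)

print(llm("What is AI?"))
```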