intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Apache License 2.0

AssertionError: daemonic processes are not allowed to have children #10541

Open ywang30intel opened 5 months ago

ywang30intel commented 5 months ago

Trying to get RAG working on PVC using https://github.com/intel-analytics/Langchain-Chatchat. I didn't see instructions for Linux, and got the following errors with the steps below on Ubuntu 22.04 + Intel Max 1550.

git clone https://github.com/intel-analytics/Langchain-Chatchat
cd Langchain-Chatchat
pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu
pip install --pre --upgrade torchaudio==2.1.0a0 -f https://developer.intel.com/ipex-whl-stable-xpu

pip install -r requirements_bigdl.txt
pip install -r requirements_api_bigdl.txt
pip install -r requirements_webui.txt

Reinstall with the right torch version for IPEX:

pip install --pre --upgrade bigdl-llm[xpu] -f https://developer.intel.com/ipex-whl-stable-xpu

python copy_config_example.py

Then edit configs/model_config.py and change MODEL_ROOT_PATH.
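For reference, the MODEL_ROOT_PATH change is just a path assignment in the config file. A minimal sketch, assuming the variable lives at the top level of configs/model_config.py (the actual file contains many more settings, and the path below is only an example):

```python
# configs/model_config.py (sketch; hypothetical path, adjust to your setup)
# MODEL_ROOT_PATH points at the directory that holds the downloaded model folders,
# e.g. bge-large-en-v1.5 and Llama-2-7b-chat-hf.
MODEL_ROOT_PATH = "/home/ywang30/models"
```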

source /opt/intel/oneapi/setvars.sh
python warmup.py
python startup.py -a
...
Current Embeddings model: bge-large-en-v1.5 @ xpu
==============================Langchain-Chatchat Configuration==============================

2024-03-24 23:06:03,983 - startup.py[line:700] - INFO: Starting services:
2024-03-24 23:06:03,983 - startup.py[line:701] - INFO: To view the llm_api logs, go to /home/ywang30/Langchain-Chatchat/logs
/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The model startup feature will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related features in 0.2.x will be deprecated
  warn_deprecated(
2024-03-24 23:06:07 | ERROR | stderr | INFO: Started server process [1080194]
2024-03-24 23:06:07 | ERROR | stderr | INFO: Waiting for application startup.
2024-03-24 23:06:07 | ERROR | stderr | INFO: Application startup complete.
2024-03-24 23:06:07 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000/ (Press CTRL+C to quit)
2024-03-24 23:06:07 | ERROR | stderr | /home/ywang30/miniconda3/envs/chatchat/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '' If you don't plan on using image functionality from torchvision.io, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have libjpeg or libpng installed before building torchvision from source?
2024-03-24 23:06:07 | ERROR | stderr |   warn(
2024-03-24 23:06:08 | ERROR | stderr | Process model_worker - Llama-2-7b-chat-hf:
2024-03-24 23:06:08 | ERROR | stderr | Traceback (most recent call last):
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2024-03-24 23:06:08 | ERROR | stderr |     self.run()
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/multiprocessing/process.py", line 108, in run
2024-03-24 23:06:08 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/Langchain-Chatchat/startup.py", line 434, in run_model_worker
2024-03-24 23:06:08 | ERROR | stderr |     app = create_model_worker_app(log_level=log_level, **kwargs)
2024-03-24 23:06:08 | ERROR | stderr |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/Langchain-Chatchat/startup.py", line 176, in create_model_worker_app
2024-03-24 23:06:08 | ERROR | stderr |     from bigdl.llm.serving.fastchat.bigdl_worker import app, BigDLLLMWorker
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/site-packages/bigdl/llm/__init__.py", line 32, in <module>
2024-03-24 23:06:08 | ERROR | stderr |     ipex_importer.import_ipex()
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/site-packages/bigdl/llm/utils/ipex_importer.py", line 59, in import_ipex
2024-03-24 23:06:08 | ERROR | stderr |     import intel_extension_for_pytorch as ipex
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/site-packages/intel_extension_for_pytorch/__init__.py", line 122, in <module>
2024-03-24 23:06:08 | ERROR | stderr |     from . import _dynamo
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/site-packages/intel_extension_for_pytorch/_dynamo/__init__.py", line 5, in <module>
2024-03-24 23:06:08 | ERROR | stderr |     from torch._inductor import codecache  # noqa
2024-03-24 23:06:08 | ERROR | stderr |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 1437, in <module>
2024-03-24 23:06:08 | ERROR | stderr |     AsyncCompile.warm_pool()
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/site-packages/torch/_inductor/codecache.py", line 1376, in warm_pool
2024-03-24 23:06:08 | ERROR | stderr |     pool._adjust_process_count()
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/concurrent/futures/process.py", line 767, in _adjust_process_count
2024-03-24 23:06:08 | ERROR | stderr |     self._spawn_process()
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/concurrent/futures/process.py", line 785, in _spawn_process
2024-03-24 23:06:08 | ERROR | stderr |     p.start()
2024-03-24 23:06:08 | ERROR | stderr |   File "/home/ywang30/miniconda3/envs/chatchat/lib/python3.11/multiprocessing/process.py", line 118, in start
2024-03-24 23:06:08 | ERROR | stderr |     assert not _current_process._config.get('daemon'),
2024-03-24 23:06:08 | ERROR | stderr | AssertionError: daemonic processes are not allowed to have children
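For reference, the assertion at the bottom of the traceback is a generic Python multiprocessing restriction rather than anything IPEX-specific: torch._inductor's compile-worker pool tries to spawn processes from inside the daemonic model_worker process. A minimal standalone repro sketch (a hypothetical script, unrelated to the Chatchat code):

```python
# repro_daemon_children.py -- minimal repro of the AssertionError above.
# A daemonized multiprocessing.Process may not start children of its own,
# which is effectively what torch._inductor's AsyncCompile.warm_pool() does
# when it is imported inside the daemonic model_worker process.
import multiprocessing as mp

def spawn_child():
    child = mp.Process(target=print, args=("hello from grandchild",))
    child.start()   # AssertionError: daemonic processes are not allowed to have children
    child.join()

if __name__ == "__main__":
    worker = mp.Process(target=spawn_child, daemon=True)  # analogous to the model_worker
    worker.start()
    worker.join()
```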

shane-huang commented 5 months ago

Currently our guide only covers the Windows install; running on Linux is a bit different. We're preparing a Linux guide that should solve your problem, and we'll let you know once it's ready.

ywang30intel commented 5 months ago

I confirm the issue has now been fixed with the latest code. There is a remaining top_k issue, expected to be fixed in today's nightly build, which produces the error below:

raise ValueError(f"top_k has to be a strictly positive integer, but is {top_k}")
ValueError: top_k has to be a strictly positive integer, but is -1
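For context, this message comes from the sampling-parameter validation in the generation stack, which rejects top_k=-1 (sometimes used elsewhere to mean "disabled"). A minimal sketch of that kind of check, not the actual library source:

```python
# Sketch of the validation that produces the error above
# (mirrors HuggingFace-style top-k checks; hypothetical helper, not library code).
def validate_top_k(top_k: int) -> int:
    if not isinstance(top_k, int) or top_k <= 0:
        raise ValueError(f"`top_k` has to be a strictly positive integer, but is {top_k}")
    return top_k

validate_top_k(50)    # OK: a strictly positive integer
# validate_top_k(-1)  # raises ValueError, matching the error above
```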

One observation: now that oneAPI 2024.1 has been released, it is installed by default, but it is not compatible with the current IPEX 2.1.10 release. oneAPI 2024.0 should be installed instead, and the 2024.0 environment should be set for the current IPEX 2.1.10. The command lines to launch the startup:

source /opt/intel/oneapi/compiler/2024.0/env/vars.sh
source /opt/intel/oneapi/mkl/2024.0/env/vars.sh
export USE_XETLA=OFF
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
export no_proxy='localhost,127.0.0.1'
export BIGDL_IMPORT_IPEX=0
python startup.py -a

Otherwise, if the 2024.1 release is used, the following error is reported: libtorch_cpu.so: undefined symbol: iJIT_NotifyEvent
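As a quick sanity check before launching, one can verify that the sourced oneAPI environment really points at 2024.0. A minimal sketch, assuming the compiler and MKL vars.sh scripts export CMPLR_ROOT and MKLROOT with paths containing "2024.0" (adjust if your install layout differs):

```python
# check_oneapi_env.py -- hedged sanity check; run after sourcing the oneAPI 2024.0
# scripts and before `python startup.py -a`, to catch a mismatched oneAPI environment
# early instead of hitting the iJIT_NotifyEvent import error later.
import os
import sys

ok = True
for var in ("CMPLR_ROOT", "MKLROOT"):
    value = os.environ.get(var, "")
    print(f"{var}={value or '<unset>'}")
    if "2024.0" not in value:
        print(f"  warning: {var} does not look like a oneAPI 2024.0 path")
        ok = False

sys.exit(0 if ok else 1)
```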

Oscilloscope98 commented 5 months ago

Both the daemonic-process issue and the top_k issue have been fixed in our latest Langchain-Chatchat repo. Please rebuild the environment based on the updated requirements list, with ipex-llm>=2.1.0b20240327 :)

We will be adding the startup guide for Linux users very soon.

Please let us know for any further problems :)