intel / AI-Playground

AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU.
MIT License

Development environment fails to use LLM: ipex_llm can't import xe_addons #71

Closed · dnoliver closed this issue 1 month ago

dnoliver commented 1 month ago

Describe the bug

Trying to use the LLM functionality from the tip of the main branch fails. The issue I am seeing in the log is similar to https://github.com/intel-analytics/ipex-llm/issues/12143. The maintainer's answer at https://github.com/intel-analytics/ipex-llm/issues/12143#issuecomment-2381073069 is that ipex-llm currently supports PyTorch 2.1, but commit https://github.com/intel/AI-Playground/commit/25ef470fa12dcf7a3b752c08992445651919cb22#diff-8c59cb1e702fdc89c58380621c48e48ae5b7afc1b7ddde1c0b54d030d979f016R27 updated the PyTorch version to 2.3.
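This kind of mismatch can be caught early with a plain version check at startup instead of surfacing later as an opaque `ImportError`. The sketch below assumes ipex-llm only supports the PyTorch 2.1.x line, per the linked maintainer comment; the helper and its name are illustrative, not part of AI-Playground.

```python
# Illustrative helper: flag a PyTorch version that ipex-llm does not support.
# Assumption (from the linked ipex-llm issue): only the 2.1.x line works.
def is_supported_torch(version: str) -> bool:
    """Return True if `version` is a PyTorch 2.1.x release."""
    # Strip local/build suffixes such as "+cxx11.abi" before parsing.
    numeric = version.split("+")[0]
    major, minor = numeric.split(".")[:2]
    return (int(major), int(minor)) == (2, 1)

print(is_supported_torch("2.1.0a0+cxx11.abi"))  # True
print(is_supported_torch("2.3.1"))              # False
```

In practice the backend could compare `torch.__version__` against this check when it starts and fail with a clear message, rather than crashing deep inside `phi3_rms_norm_forward`.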

To Reproduce

Steps to reproduce the behavior:

  1. Deploy the development environment
  2. Go to the Answer tab, use the default model (Microsoft Phi-3)
  3. Send a query to the model

Expected behavior

Model should load and reply

Screenshots

(screenshot omitted)

Log:

[ai-backend]: 2024-10-02 10:45:41,977 - INFO - Converting the current model to sym_int4 format......

[ai-backend]: C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\init.py:452: UserWarning: Initializing zero-element tensors is a no-op
  warnings.warn("Initializing zero-element tensors is a no-op")

[ai-backend]: Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

[ai-backend]: Traceback (most recent call last):
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\service\llm_biz.py", line 69, in stream_chat_generate
    model.generate(**args)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\ipex_llm\transformers\lookup.py", line 92, in generate
    return original_generate(self,
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\ipex_llm\transformers\speculative.py", line 109, in generate
    return original_generate(self,
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\ipex_llm\transformers\pipeline_parallel.py", line 281, in generate
    return original_generate(self,
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\transformers\generation\utils.py", line 1575, in generate
    result = self._sample(
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\transformers\generation\utils.py", line 2697, in _sample
    outputs = self(
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)

[ai-backend]:   File "C:\Users\Nicolas Oliver\.cache\huggingface\modules\transformers_modules\microsoft___Phi_3_mini_4k_instruct\modeling_phi3.py", line 1243, in forward
    outputs = self.model(
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\ipex_llm\transformers\models\phi3.py", line 282, in model_forward
    return origin_model_forward(
  File "C:\Users\Nicolas Oliver\.cache\huggingface\modules\transformers_modules\microsoft___Phi_3_mini_4k_instruct\modeling_phi3.py", line 1121, in forward
    layer_outputs = decoder_layer(
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\.cache\huggingface\modules\transformers_modules\microsoft___Phi_3_mini_4k_instruct\modeling_phi3.py", line 839, in forward
    hidden_states = self.input_layernorm(hidden_states)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Nicolas Oliver\Downloads\AI-Playground\env\lib\site-packages\ipex_llm\transformers\models\phi3.py", line 339, in phi3_rms_norm_forward
    import xe_addons
ImportError: DLL load failed while importing xe_addons: The specified procedure could not be found.


Nuullll commented 1 month ago

Thanks for reporting this. Could you please try the dev branch? https://github.com/intel/AI-Playground/commit/40009650c7dde9d95f84514c1f8910adc8876d28 downgraded IPEX to 2.1.40 for MTL.
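After switching branches, the failing import from the traceback can be smoke-tested outside the app. This is a diagnostic sketch only; `xe_addons` is the native extension that ipex-llm loads at runtime, and the helper name here is invented for illustration.

```python
# Diagnostic sketch: retry the import that fails in the traceback above.
# xe_addons fails to load when the installed PyTorch build does not match
# what the ipex-llm wheels were compiled against.
def check_xe_addons() -> str:
    """Attempt the xe_addons import and return a status string."""
    try:
        import xe_addons  # noqa: F401  (native extension shipped with ipex-llm)
        return "xe_addons imported OK"
    except ImportError as err:
        return f"xe_addons import failed: {err}"

print(check_xe_addons())
```

Running this inside the AI-Playground env quickly shows whether the downgrade resolved the DLL load failure without going through the full Answer-tab flow.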

dnoliver commented 1 month ago

Yeah, same as in https://github.com/intel/AI-Playground/issues/70. Using the dev branch with the downgraded versions worked fine.

Nuullll commented 1 month ago

Fixed by #68