intel / intel-extension-for-transformers

⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Platforms⚡
Apache License 2.0

Error: Python setup.py egg_info did not run successfully #1656

Closed: rohitpreddy07 closed this issue 4 days ago

rohitpreddy07 commented 5 days ago

I am following the tutorial https://intel.github.io/intel-extension-for-pytorch/llm/llama3/xpu/ to run Llama 3 models locally; however, I am getting the following error while setting up the environment and running the command "pip install -v .":

Traceback (most recent call last):
    File "<string>", line 2, in <module>
    File "<pip-setuptools-caller>", line 34, in <module>
    File "C:\Users\rohit\intel-extension-for-transformers\setup.py", line 14, in <module>
      from intel_extension_for_transformers.tools.utils import get_gpu_family
    File "C:\Users\rohit\intel-extension-for-transformers\intel_extension_for_transformers\tools\utils.py", line 21, in <module>
      import torch
    File "C:\Users\rohit\miniconda3\envs\llm\lib\site-packages\torch\__init__.py", line 139, in <module>
      raise err
  OSError: [WinError 126] The specified module could not be found. Error loading "C:\Users\rohit\miniconda3\envs\llm\lib\site-packages\torch\lib\backend_with_compiler.dll" or one of its dependencies.
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
  full command: 'C:\Users\rohit\miniconda3\envs\llm\python.exe' -c '
  exec(compile('"'"''"'"''"'"'
  # This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
  #
  # - It imports setuptools before invoking setup.py, to enable projects that directly
  #   import from `distutils.core` to work with newer packaging standards.
  # - It provides a clear error message when setuptools is not installed.
  # - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
  #   setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
  #     manifest_maker: standard file '"'"'-c'"'"' not found".
  # - It generates a shim setup.py, for handling setup.cfg-only projects.
  import os, sys, tokenize

  try:
      import setuptools
  except ImportError as error:
      print(
          "ERROR: Can not execute `setup.py` since setuptools is not available in "
          "the build environment.",
          file=sys.stderr,
      )
      sys.exit(1)

  __file__ = %r
  sys.argv[0] = __file__

  if os.path.exists(__file__):
      filename = __file__
      with tokenize.open(__file__) as f:
          setup_py_code = f.read()
  else:
      filename = "<auto-generated setuptools caller>"
      setup_py_code = "from setuptools import setup; setup()"

  exec(compile(setup_py_code, filename, "exec"))
  '"'"''"'"''"'"' % ('"'"'C:\\Users\\rohit\\intel-extension-for-transformers\\setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' egg_info --egg-base 'C:\Users\rohit\AppData\Local\Temp\pip-pip-egg-info-upvjd4iu'
  cwd: C:\Users\rohit\intel-extension-for-transformers\
  Preparing metadata (setup.py) ... error
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

I'm not sure if it's an error with pip, as I ran pip install --upgrade setuptools (a popular suggested solution when researching this issue) to no avail. Please look into this issue.

a32543254 commented 5 days ago

I believe torch is not installed correctly in your environment.

You can simply run "import torch" in your Python environment; then you will see the error.
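
A quick way to confirm this is to import torch directly in the same environment; a minimal check, assuming the conda env is named llm as in the traceback above:

    # Run inside the "llm" conda environment. If one of torch's native DLLs
    # cannot find a dependency, this import raises the same "WinError 126"
    # that appeared in the pip output.
    import torch
    print(torch.__version__)   # e.g. 2.1.0a0+git04048c2
    print(torch.__file__)      # shows which site-packages install the DLLs load from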

rohitpreddy07 commented 5 days ago

Issue resolved; it seems it was a problem with setting an environment variable.
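
(The exact variable is not named above. As a hedged illustration only: a WinError 126 when loading torch's DLLs on Windows usually means a native dependency is not on the DLL search path, which can be worked around from Python before importing torch; the oneAPI path below is an assumption and should be adjusted to the local install.)

    # Hypothetical sketch: make the Intel oneAPI runtime DLLs visible to this
    # process before importing torch (the path is an assumed default install location).
    import os
    os.add_dll_directory(r"C:\Program Files (x86)\Intel\oneAPI\compiler\latest\bin")
    import torch  # should no longer raise WinError 126 if the missing DLLs live there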

However, I am still unable to run the run_generation_gpu_woq_for_llama.py script:

2024-07-05 17:35:38,878 - datasets - INFO - PyTorch version 2.1.0a0+git04048c2 available.
C:\Users\rohit\miniconda3\envs\llm\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 911/911 [00:00<00:00, 181kB/s]
C:\Users\rohit\miniconda3\envs\llm\lib\site-packages\huggingface_hub\file_download.py:157: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\rohit\.cache\huggingface\hub\models--Qwen--Qwen-7B-Chat. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
  warnings.warn(message)
Traceback (most recent call last):
  File "C:\Users\rohit\intel-extension-for-pytorch\examples\gpu\inference\python\llm\run_generation_gpu_woq_for_llama.py", line 132, in <module>
    config = AutoConfig.from_pretrained(
  File "C:\Users\rohit\miniconda3\envs\llm\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1051, in from_pretrained
    trust_remote_code = resolve_trust_remote_code(
  File "C:\Users\rohit\miniconda3\envs\llm\lib\site-packages\transformers\dynamic_module_utils.py", line 620, in resolve_trust_remote_code
    raise ValueError(
ValueError: Loading Qwen/Qwen-7B-Chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.

Do you have an idea of which config file I have to execute to be able to run the script, or am I misinterpreting the issue?
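
For reference, the error is not asking you to run a particular file yourself; it is asking you to opt in to executing the custom model code that Qwen/Qwen-7B-Chat ships on the Hugging Face Hub. A minimal sketch of the opt-in (model id taken from the traceback above; whether the example script exposes its own flag for this is not shown here):

    # Passing trust_remote_code=True tells transformers it may download and run
    # the repo's custom configuration/modeling code (read that code first to be safe).
    from transformers import AutoConfig, AutoTokenizer

    config = AutoConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True)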

rohitpreddy07 commented 4 days ago

Issue resolved.