haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

[Usage] Installation on Windows - async_io dependency #529

Open 9of9 opened 11 months ago

9of9 commented 11 months ago

Describe the issue

Issue: When installing the requirements on Windows, installation fails because pip cannot pre-compile DeepSpeed's async_io extension.

Command:

pip install -e .

Log:

Obtaining file:///D:/AI/LLaVA
  Installing build dependencies ... done
  Checking if build backend supports build_editable ... done
  Getting requirements to build editable ... done
  Installing backend dependencies ... done
  Preparing editable metadata (pyproject.toml) ... done
Collecting einops (from llava==1.1.0)
  Obtaining dependency information for einops from https://files.pythonhosted.org/packages/29/0b/2d1c0ebfd092e25935b86509a9a817159212d82aa43d7fb07eca4eeff2c2/einops-0.7.0-py3-none-any.whl.metadata
  Using cached einops-0.7.0-py3-none-any.whl.metadata (13 kB)
Collecting fastapi (from llava==1.1.0)
  Obtaining dependency information for fastapi from https://files.pythonhosted.org/packages/4d/d2/3ad038a2365fefbac19d9a046cab7ce45f4c7bfa81d877cbece9707de9ce/fastapi-0.103.2-py3-none-any.whl.metadata
  Using cached fastapi-0.103.2-py3-none-any.whl.metadata (24 kB)
Collecting gradio==3.35.2 (from llava==1.1.0)
  Obtaining dependency information for gradio==3.35.2 from https://files.pythonhosted.org/packages/50/70/ed0ba0fb5c3b1cb2e481717ad190056a4c9a0ef2f296b871e10375b2ab83/gradio-3.35.2-py3-none-any.whl.metadata
  Using cached gradio-3.35.2-py3-none-any.whl.metadata (15 kB)
Collecting markdown2[all] (from llava==1.1.0)
  Obtaining dependency information for markdown2[all] from https://files.pythonhosted.org/packages/f1/98/61276a753f078dd2f3171c9a69fd3f451d220e806b2b1cdca41b8e368b0f/markdown2-2.4.10-py2.py3-none-any.whl.metadata
  Using cached markdown2-2.4.10-py2.py3-none-any.whl.metadata (2.0 kB)
Requirement already satisfied: numpy in d:\anaconda3\envs\llava\lib\site-packages (from llava==1.1.0) (1.26.0)
Requirement already satisfied: requests in d:\anaconda3\envs\llava\lib\site-packages (from llava==1.1.0) (2.31.0)
Collecting sentencepiece (from llava==1.1.0)
  Using cached sentencepiece-0.1.99-cp310-cp310-win_amd64.whl (977 kB)
Collecting tokenizers>=0.12.1 (from llava==1.1.0)
  Obtaining dependency information for tokenizers>=0.12.1 from https://files.pythonhosted.org/packages/92/02/15556b80450301d2ef014bc598df4352bfb39631c5fcff758d8e0ac9f065/tokenizers-0.14.1-cp310-none-win_amd64.whl.metadata
  Using cached tokenizers-0.14.1-cp310-none-win_amd64.whl.metadata (6.8 kB)
Collecting torch==2.0.1 (from llava==1.1.0)
  Using cached torch-2.0.1-cp310-cp310-win_amd64.whl (172.3 MB)
Collecting torchvision==0.15.2 (from llava==1.1.0)
  Using cached torchvision-0.15.2-cp310-cp310-win_amd64.whl (1.2 MB)
Collecting uvicorn (from llava==1.1.0)
  Obtaining dependency information for uvicorn from https://files.pythonhosted.org/packages/79/96/b0882a1c3f7ef3dd86879e041212ae5b62b4bd352320889231cc735a8e8f/uvicorn-0.23.2-py3-none-any.whl.metadata
  Using cached uvicorn-0.23.2-py3-none-any.whl.metadata (6.2 kB)
Collecting wandb (from llava==1.1.0)
  Obtaining dependency information for wandb from https://files.pythonhosted.org/packages/1c/5e/0362fa88679852c7fd3ac85ee5bd949426c4a51a61379010d4089be6d7ac/wandb-0.15.12-py3-none-any.whl.metadata
  Using cached wandb-0.15.12-py3-none-any.whl.metadata (9.8 kB)
Collecting shortuuid (from llava==1.1.0)
  Using cached shortuuid-1.0.11-py3-none-any.whl (10 kB)
Collecting httpx==0.24.0 (from llava==1.1.0)
  Using cached httpx-0.24.0-py3-none-any.whl (75 kB)
Collecting deepspeed==0.9.5 (from llava==1.1.0)
  Using cached deepspeed-0.9.5.tar.gz (809 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [15 lines of output]
      test.c
      LINK : fatal error LNK1181: cannot open input file 'aio.lib'
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\valkozin\AppData\Local\Temp\pip-install-s9q4kp7v\deepspeed_1f2978aea66844a2af764a83fd024764\setup.py", line 163, in <module>
          abort(f"Unable to pre-compile {op_name}")
        File "C:\Users\valkozin\AppData\Local\Temp\pip-install-s9q4kp7v\deepspeed_1f2978aea66844a2af764a83fd024764\setup.py", line 51, in abort
          assert False, msg
      AssertionError: Unable to pre-compile async_io
      DS_BUILD_OPS=1
       [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
       [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
       [WARNING]  One can disable async_io with DS_BUILD_AIO=0
       [ERROR]  Unable to pre-compile async_io
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

Since async_io/libaio are Linux-only, this dependency appears to make it impossible to install LLaVA natively on Windows.
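DeepSpeed's own warning above suggests a possible workaround: skip building the async_io op entirely. A rough sketch (untested on my machine; DS_BUILD_AIO and DS_BUILD_OPS are standard DeepSpeed build flags, but the exact combination needed here may differ), run from a Windows command prompt before the install:

set DS_BUILD_OPS=0
set DS_BUILD_AIO=0
pip install -e .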

haotian-liu commented 10 months ago

Please check the latest Windows doc. I tested on my Windows 11 PC and 16-bit inference now works. Quantization will be supported later.

https://github.com/haotian-liu/LLaVA/blob/main/docs/Windows.md
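(Not quoted from the doc, just a rough outline of the kind of setup it describes; see the link above for the exact package versions and CUDA wheel index:)

git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
conda create -n llava python=3.10 -y
conda activate llava
pip install --upgrade pip
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install -e .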