Closed: jackylee1 closed this issue 1 year ago
I have the same issue. More information is as follows:
openmim 0.3.7 depends on Click
nltk 3.8.1 depends on click
typer 0.3.0 depends on click<7.2.0 and >=7.1.1
black 23.3.0 depends on click>=8.0.0
Which Python version are you using?
We use Python 3.8 and spacy==3.5.1
In ChatVideo, typer is 0.7.0
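If the conflict comes from an old typer against black's click>=8 pin, one possible way forward is a sketch like this (0.7.0 is just the version mentioned above; newer typer releases no longer pin click below 7.2):
pip install typer==0.7.0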
3.10.6
Ubuntu 22.04 LTS (newly installed), Python 3.10.6. I modified requirements.txt: spacy==3.0.9 -> spacy==3.5.1,
and hit a new issue: en_core_web_sm 3.0.0 depends on spacy>=3.0.0,<3.1.0.
I uploaded my conda environment to https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat/environment.yaml @Pseudoking @jackylee1
(base) PS D:\Ask-Anything\video_chat_with_StableLM> conda env create -f environment.yaml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
ERROR: Ignored the following versions that require a different python version: 0.0.28 ... 0.0.146 (all: Requires-Python >=3.8.1,<4.0)
ERROR: Could not find a version that satisfies the requirement langchain==0.0.101 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.0.20, 0.0.21, 0.0.22, 0.0.23, 0.0.24, 0.0.25, 0.0.26, 0.0.27)
ERROR: No matching distribution found for langchain==0.0.101
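For reference, a minimal way to set up a matching interpreter before running pip again (a sketch; the environment name is arbitrary and 3.8.16 mirrors the environment shared below):
conda create -n py38 python=3.8.16
conda activate py38
pip install -r requirements.txt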
Here is my conda environment on Windows 11 with Python 3.8 @jackylee1:
name: py38
channels:
- msys2
- defaults
dependencies:
- ca-certificates=2023.01.10=haa95532_0
- libffi=3.4.2=hd77b12b_6
- libpython=2.1=py38_0
- m2w64-binutils=2.25.1=5
- m2w64-bzip2=1.0.6=6
- m2w64-crt-git=5.0.0.4636.2595836=2
- m2w64-gcc=5.3.0=6
- m2w64-gcc-ada=5.3.0=6
- m2w64-gcc-fortran=5.3.0=6
- m2w64-gcc-libgfortran=5.3.0=6
- m2w64-gcc-libs=5.3.0=7
- m2w64-gcc-libs-core=5.3.0=7
- m2w64-gcc-objc=5.3.0=6
- m2w64-gmp=6.1.0=2
- m2w64-headers-git=5.0.0.4636.c0ad18a=2
- m2w64-isl=0.16.1=2
- m2w64-libiconv=1.14=6
- m2w64-libmangle-git=5.0.0.4509.2e5a9a2=2
- m2w64-libwinpthread-git=5.0.0.4634.697f757=2
- m2w64-make=4.1.2351.a80a8b8=2
- m2w64-mpc=1.0.3=3
- m2w64-mpfr=3.1.4=4
- m2w64-pkg-config=0.29.1=2
- m2w64-toolchain=5.3.0=7
- m2w64-tools-git=5.0.0.4592.90b8472=2
- m2w64-windows-default-manifest=6.4=3
- m2w64-winpthreads-git=5.0.0.4634.697f757=2
- m2w64-zlib=1.2.8=10
- msys2-conda-epoch=20160418=1
- openssl=1.1.1t=h2bbff1b_0
- pip=23.0.1=py38haa95532_0
- python=3.8.16=h6244533_3
- sqlite=3.41.2=h2bbff1b_0
- vc=14.2=h21ff451_1
- vs2015_runtime=14.27.29016=h5e58377_2
- wheel=0.38.4=py38haa95532_0
- pip:
- absl-py==1.4.0
- accelerate==0.18.0
- addict==2.4.0
- aiofiles==23.1.0
- aiohttp==3.8.4
- aiosignal==1.3.1
- altair==4.2.2
- antlr4-python3-runtime==4.9.3
- anyio==3.6.2
- async-timeout==4.0.2
- attrs==23.1.0
- bitsandbytes==0.38.1
- blis==0.7.9
- boto3==1.26.117
- botocore==1.29.117
- braceexpand==0.1.7
- cachetools==5.3.0
- catalogue==2.0.8
- certifi==2022.12.7
- charset-normalizer==3.1.0
- click==7.1.2
- colorama==0.4.6
- contourpy==1.0.7
- cycler==0.11.0
- cymem==2.0.7
- cython==0.29.34
- dataclasses-json==0.5.7
- decord==0.6.0
- detectron2==0.6
- einops==0.6.1
- en-core-web-sm==3.0.0
- entrypoints==0.4
- fairscale==0.4.4
- fastapi==0.95.1
- ffmpy==0.3.0
- filelock==3.12.0
- fonttools==4.39.3
- frozenlist==1.3.3
- fsspec==2023.4.0
- future==0.18.3
- google-auth==2.17.3
- google-auth-oauthlib==1.0.0
- gradio==3.27.0
- gradio-client==0.1.3
- greenlet==2.0.2
- grpcio==1.54.0
- h11==0.14.0
- httpcore==0.17.0
- httpx==0.24.0
- huggingface-hub==0.13.4
- idna==3.4
- imageio==2.27.0
- imageio-ffmpeg==0.4.8
- importlib-resources==5.12.0
- jinja2==3.1.2
- jmespath==1.0.1
- joblib==1.2.0
- jsonschema==4.17.3
- kiwisolver==1.4.4
- langchain==0.0.101
- linkify-it-py==2.0.0
- lvis==0.5.3
- markdown==3.4.3
- markdown-it-py==2.2.0
- markupsafe==2.1.2
- marshmallow==3.19.0
- marshmallow-enum==1.5.1
- matplotlib==3.7.1
- mdit-py-plugins==0.3.3
- mdurl==0.1.2
- mmcv==2.0.0
- mmengine==0.7.2
- model-index==0.1.11
- multidict==6.0.4
- murmurhash==1.0.9
- mypy-extensions==1.0.0
- nltk==3.8.1
- numpy==1.24.2
- oauthlib==3.2.2
- omegaconf==2.3.0
- openai==0.27.4
- opencv-python==4.7.0.72
- openmim==0.3.7
- ordered-set==4.1.0
- orjson==3.8.10
- packaging==23.1
- pandas==2.0.0
- pathy==0.10.1
- pillow==9.5.0
- pkgutil-resolve-name==1.3.10
- preshed==3.0.8
- protobuf==4.22.3
- psutil==5.9.5
- pyasn1==0.5.0
- pyasn1-modules==0.3.0
- pycocotools-windows==2.0.0.2
- pydantic==1.8.2
- pydeprecate==0.3.1
- pydub==0.25.1
- pyparsing==3.0.9
- pyrsistent==0.19.3
- python-multipart==0.0.6
- pytorch-lightning==1.5.10
- pytz==2023.3
- pyyaml==6.0
- regex==2023.3.23
- requests==2.28.2
- requests-oauthlib==1.3.1
- rich==13.3.4
- rsa==4.9
- s3transfer==0.6.0
- sacremoses==0.0.53
- scipy==1.10.0
- semantic-version==2.10.0
- sentencepiece==0.1.98
- setuptools==59.5.0
- simplet5==0.1.4
- six==1.16.0
- smart-open==6.3.0
- sniffio==1.3.0
- spacy==3.0.9
- spacy-legacy==3.0.12
- sqlalchemy==1.4.47
- srsly==2.4.6
- starlette==0.26.1
- tabulate==0.9.0
- tenacity==8.2.2
- tensorboard==2.12.2
- tensorboard-data-server==0.7.0
- tensorboard-plugin-wit==1.8.1
- termcolor==2.2.0
- thinc==8.0.17
- timm==0.4.12
- tokenizers==0.13.3
- tomli==2.0.1
- toolz==0.12.0
- torch==1.13.1
- torchmetrics==0.11.4
- torchvision==0.14.1
- tqdm==4.65.0
- transformers==4.16.2
- typer==0.3.2
- typing-extensions==4.5.0
- typing-inspect==0.8.0
- tzdata==2023.3
- uc-micro-py==1.0.1
- urllib3==1.26.15
- uvicorn==0.21.1
- wasabi==0.10.1
- webdataset==0.2.48
- websockets==11.0.2
- werkzeug==2.2.3
- wget==3.2
- yapf==0.33.0
- yarl==1.8.2
- zipp==3.15.0
prefix: C:\Users\pjlab\anaconda3\envs\py38
I am using Windows 11 and Python 3.8.16; langchain==0.0.101 exists.
Same here, all kinds of dependency conflicts.
I created a Python 3.8.16 environment and installed everything except the last two, en_core_web_sm and detectron2.
Thanks for your feedback! I fixed requirements.txt and added instructions for detectron2 and en_core_web_sm in the installation section.
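For reference, the two packages that usually need manual steps can be installed roughly like this (a sketch; follow the exact commands now documented in the installation section):
pip install git+https://github.com/facebookresearch/detectron2.git
python -m spacy download en_core_web_sm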
INFO: pip is looking at multiple versions of wget to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of fairscale to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of transformers to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of timm to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of
INFO: pip is looking at multiple versions of langchain to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install -r requirements.txt (line 6) and transformers==4.28.1 because these package versions have conflicting dependencies.
The conflict is caused by:
    The user requested transformers==4.28.1
    simplet5 0.1.4 depends on transformers==4.16.2
    The user requested transformers==4.28.1
    simplet5 0.1.3 depends on transformers==4.10.0
    The user requested transformers==4.28.1
    simplet5 0.1.2 depends on transformers==4.6.1
    The user requested transformers==4.28.1
    simplet5 0.1.1 depends on transformers==4.8.2
    The user requested transformers==4.28.1
    simplet5 0.1.0 depends on transformers==4.6.1
    The user requested transformers==4.28.1
    simplet5 0.0.9 depends on transformers==4.6.1
    The user requested transformers==4.28.1
    simplet5 0.0.7 depends on transformers==4.6.1
To fix this you could try to:
- loosen the range of package versions you've specified
- remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM>
Sigh.
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
Collecting git+https://github.com/facebookresearch/detectron2.git
  Cloning https://github.com/facebookresearch/detectron2.git to c:\users\administrator\appdata\local\temp\pip-req-build-x6pu3ck0
  Running command git clone --filter=blob:none --quiet https://github.com/facebookresearch/detectron2.git 'C:\Users\Administrator\AppData\Local\Temp\pip-req-build-x6pu3ck0'
  fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': OpenSSL SSL_read: Connection was reset, errno 10054
  fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': Failed to connect to github.com port 443 after 21084 ms: Timed out
  error: unable to read sha1 file of .clang-format (39b1b3d603ed0cf6b7f94c9c08067f148f35613f)
  fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': OpenSSL SSL_read: Connection was reset, errno 10054
  error: unable to read sha1 file of .github/CONTRIBUTING.md (9bab709cae689ba3b92dd52f7fbcc0c6926f4a38)
  fatal: unable to access 'https://github.com/facebookresearch/detectron2.git/': Failed to connect to github.com port 443 after 21122 ms: Timed out
Try exporting your proxy for git.
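A minimal sketch of what that looks like (the address 127.0.0.1:7890 is only an example; substitute your own proxy):
git config --global http.proxy http://127.0.0.1:7890
git config --global https.proxy http://127.0.0.1:7890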
You can loosen the transformers version in requirements.txt if you only install video_chat.
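For example, a sketch of the change in requirements.txt (the exact line number varies by branch): replace the strict pin
transformers==4.28.1
with an unpinned entry
transformers
so pip is free to fall back to transformers==4.16.2, which simplet5 0.1.4 requires.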
Try to export your proxy for git
It doesn't work. Ugh, this is so infuriating.
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> python setup.py build --force develop
C:\Users\Administrator\miniconda3\envs\videochat\python.exe: can't open file 'D:\askmeany\ask-anything\video_chat_with_StableLM\setup.py': [Errno 2] No such file or directory
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM> cd detectron2
(videochat) PS D:\askmeany\ask-anything\video_chat_with_StableLM\detectron2> python setup.py build --force develop
Traceback (most recent call last):
  File "D:\askmeany\ask-anything\video_chat_with_StableLM\detectron2\setup.py", line 11, in <module>
    from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\utils\__init__.py", line 4, in <module>
    from .throughput_benchmark import ThroughputBenchmark
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\utils\throughput_benchmark.py", line 2, in <module>
    import torch._C
ModuleNotFoundError: No module named 'torch._C'
This really makes me want to spit blood.
Reinstall your PyTorch. Do you have a GPU in your machine?
Yes
[INFO] initialize InternVideo model success!
Traceback (most recent call last):
File "D:\askmeany\ask-anything\video_chat_with_StableLM\app.py", line 33, in
I uninstalled and reinstalled as above: pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Traceback (most recent call last):
File "D:\askmeany\ask-anything\video_chat_with_StableLM\app.py", line 36, in
raise KeyError(key)
KeyError: 'gpt_neox'
Which transformers version are you using? In chat_video_with_stablelm and chat_video_with_moss, you should install the latest version, 4.28.1.
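For example (a sketch):
pip install transformers==4.28.1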
I put the project on the D drive; why did the model download to C?
(videochat) PS D:\askmeany\Ask-Anything\video_chat_with_StableLM> python app.py
load checkpoint from pretrained_models/tag2text_swin_14m.pth
[INFO] initialize caption model success!
Drop path rate: 0.0 / No L_MHRA: True / Double L_MHRA: True   (repeated 12 times)
Use checkpoint: False
Checkpoint number: [0]
Drop path rate: 0.0   (repeated 4 times)
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Downloading (…)l-00003-of-00004.bin: 100% 9.75G/9.75G [14:30<00:00, 11.2MB/s]
Downloading (…)l-00004-of-00004.bin: 100% 2.45G/2.45G [03:37<00:00, 11.3MB/s]
Downloading shards: 100% 4/4 [18:09<00:00, 272.36s/it]
Loading checkpoint shards: 25% 1/4 [00:16<00:48, 16.12s/it]
Traceback (most recent call last):
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.py", line 442, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\serialization.py", line 797, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\serialization.py", line 283, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\modeling_utils.py", line 446, in load_state_dict
    if f.read(7) == "version":
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 128: invalid start byte
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in <module>
In Hugging Face, the default cache directory is ~/.cache/huggingface/. Change the cache location by setting the shell environment variable TRANSFORMERS_CACHE to another directory:
export TRANSFORMERS_CACHE="/path/to/another/directory"
or
change https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat_with_StableLM/stablelm.py#L30 to
self.m = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16, cache_dir='./').cuda()
self.tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b", cache_dir='./')
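Since the logs above come from PowerShell, the equivalent there would be to set the variable before launching (the directory is only an example):
$env:TRANSFORMERS_CACHE = "D:\hf_cache"
python app.py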
The model size seems different from the one downloaded from the official project's Hugging Face page.
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [01:07<00:00, 16.99s/it]
Downloading (…)neration_config.json: 100%|████████████████████████████████████████████| 111/111 [00:00<00:00, 55.5kB/s]
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in <module>
  ...
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in <lambda>
How much memory do I need? Is 8 GB not enough?
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:48<00:00, 12.04s/it]
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in <module>
    bot = StableLMBot()
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py", line 30, in __init__
    self.m = AutoModelForCausalLM.from_pretrained(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  (the frame above repeats while _apply recurses into child modules)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in <lambda>
    return self._apply(lambda t: t.cuda(device))
OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB (GPU 0; 8.00 GiB total capacity; 6.93 GiB already allocated; 0 bytes free; 7.13 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Even 8 GB isn't enough?
In chat_video, GPU memory should be at least 12G. StableLM and MOSS may use more GPU memory. We only test them in 80G A100.
In our test, you can try commenting out T5 in video_chat for easy use within 8 GB of GPU memory.
According to https://github.com/Stability-AI/StableLM/issues/17, StableLM with 3B parameters needs a GPU with at least 12 GB of memory. MOSS, with 16B parameters, needs even more GPU memory.
Some of these projects need an API key, and some need high-end hardware. Does minigpt4 also need an API key? What does MOSS need to run? I was able to run StableLM locally with my setup; does adding video retrieval really require such high specs?
minigpt4 does not require an API key. How much GPU memory is occupied by StableLM? The video plugins take about 7 GB.
My GPU is an RTX 3060 Ti with 8 GB. I am able to run the base version, but I did not succeed with your project.
StableLM_base_7B uses 13 GB of VRAM if run in 8-bit quantization. Maybe you can consider changing https://github.com/OpenGVLab/Ask-Anything/blob/main/video_chat_with_StableLM/stablelm.py#L33 to
self.generator = pipeline('text-generation', model=self.m, tokenizer=self.tok, device=-1)
Strange. I remember I could run it with 8 GB.
What does that mean? It still failed.
same
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:56<00:00, 14.05s/it]
Traceback (most recent call last):
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\app.py", line 36, in <module>
  ...
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 905, in <lambda>
Line 31:    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda()
In this line, you could try to put StableLM on the CPU by removing .cuda().
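A minimal sketch of stablelm.py lines 30-32 with that change (everything else unchanged):
self.m = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16)  # no .cuda(): keep the model on the CPU
self.tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
Note that forcing float16 on the CPU can itself fail, since fp16 LayerNorm is not implemented there, in which case the dtype also needs adjusting (see below).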
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100% 4/4 [01:05<00:00, 16.47s/it]
Sucessfully loaded the model to the memory
Running on local URL: http://0.0.0.0:7860
To create a public link, set share=True in launch().
But I cannot open the link.
I opened it at 127.0.0.1:7860, but when I ask a question it shows the error below.
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100% 4/4 [00:58<00:00, 14.64s/it]
Sucessfully loaded the model to the memory
Running on local URL: http://0.0.0.0:7860
To create a public link, set share=True in launch().
[04/24 02:13:41] asyncio ERROR: Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\asyncio\events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\Administrator\miniconda3\envs\videochat\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
Second 1:a close up image of a man playing an instrument.
Second 2:a close up image of a man playing an instrument.
Second 3:a close up image of a man playing an instrument.
Second 4:a close up image of a man playing an instrument.
Second 5:a close up image of a man playing an instrument.
Second 6:a close up image of a man playing an instrument.
Second 7:a close up image of a man playing an instrument.
Second 8:a close up image of a man playing an instrument.
Second 9:a close up image of a man playing an instrument.
Second 10:a close up of a person playing a guitar.
Second 1 : the hand of a woman,a woman in the photo.
Second 6 : the hand of a person,a black shirt on a person,a white bracelet.
Setting pad_token_id to eos_token_id:0 for open-end generation.
Traceback (most recent call last):
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\gradio\routes.py", line 401, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\gradio\blocks.py", line 1302, in process_api
    result = await self.call_function(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\gradio\blocks.py", line 1025, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py", line 98, in run_text
    output = self.generate(history)
  File "D:\askmeany\Ask-Anything\video_chat_with_StableLM\stablelm.py", line 75, in generate
    result = self.generator(text, max_new_tokens=1024, num_return_sequences=1, num_beams=1, do_sample=True,
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\text_generation.py", line 209, in __call__
    return super().__call__(text_inputs, **kwargs)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\base.py", line 1109, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\base.py", line 1116, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\base.py", line 1015, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\pipelines\text_generation.py", line 251, in _forward
    generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\generation\utils.py", line 1485, in generate
    return self.sample(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\generation\utils.py", line 2524, in sample
    outputs = self(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\gpt_neox\modeling_gpt_neox.py", line 662, in forward
    outputs = self.gpt_neox(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\gpt_neox\modeling_gpt_neox.py", line 553, in forward
    outputs = layer(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\transformers\models\gpt_neox\modeling_gpt_neox.py", line 321, in forward
    self.input_layernorm(hidden_states),
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
    return F.layer_norm(
  File "C:\Users\Administrator\miniconda3\envs\videochat\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
This is because this operator does not support fp16 in PyTorch on the CPU. It is recommended that you change line 31,
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda()
to use torch_dtype='auto'.
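A minimal sketch of the resulting line (keeping the earlier .cuda() removal; torch_dtype='auto' lets transformers choose the dtype instead of forcing float16):
self.m = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype='auto')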
Drop path rate: 0.0
[INFO] initialize InternVideo model success!
[INFO] initialize dense caption model success!
Starting to load the model to memory
Loading checkpoint shards: 100% 4/4 [01:48<00:00, 27.02s/it]
Sucessfully loaded the model to the memory
Running on local URL: http://0.0.0.0:7860
To create a public link, set share=True in launch().
Second 1:a close up image of a man playing an instrument.
Second 2:a close up image of a man playing an instrument.
Second 3:a close up image of a man playing an instrument.
Second 4:a close up image of a man playing an instrument.
Second 5:a close up image of a man playing an instrument.
Second 6:a close up image of a man playing an instrument.
Second 7:a close up image of a man playing an instrument.
Second 8:a close up image of a man playing an instrument.
Second 9:a close up image of a man playing an instrument.
Second 10:a close up of a person playing a guitar.
Second 1 : the hand of a woman,a woman in the photo.
Second 6 : the hand of a person,a black shirt on a person,a white bracelet.
Setting pad_token_id to eos_token_id:0 for open-end generation.
I waited quite a long time without it returning anything; still waiting.
Because this runs StableLM on your CPU, it is slow.
So will it answer eventually? Do I need to make any more changes?
Yes, but I haven't run StableLM on the CPU, so I don't know how slow it is.
It succeeded, but the result is not good. Thank you.
The problem keeps showing up.