This seems to be a common problem when the initial install didn't complete for whatever reason. Recommended solutions are to run "conda env update -f environment.yaml" or "pip install -e ." If neither of these works, then the hack is to take dream.py out of the scripts directory, put it in the top-level stable-diffusion directory, and run it from there.
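For reference, a minimal sketch of those recovery steps, run from the top-level stable-diffusion directory (adjust paths to your own checkout):

```sh
# Refresh the conda environment defined by the repo
conda env update -f environment.yaml

# Or, for a plain pip setup, do an editable install so the ldm package resolves
pip install -e .
```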
It's possible that something did get broken in the latest update, and I'd be interested in hearing from other Colab and Docker users whether this is a new problem that appeared. There have been so many updates over the past 48 hours (in a mostly good feature-adding way) that I wouldn't be surprised.
I had previously tried pip install -e . in my docker container, but it didn't help 😢
The fix did work for Colab, though. I'll try my docker container on one of the existing releases.
I retried -e on my docker container just to make sure I didn't do something big dumb, but same result.
Well, it got you 50% of the way there...
Some other things to try:
sys.path.append('/path/to/stable-diffusion') (replace with the actual path to the stable-diffusion directory)
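If you'd rather not edit the script, roughly the same effect from the shell (a sketch; substitute your actual checkout path):

```sh
# Hypothetical path; point PYTHONPATH at your stable-diffusion checkout instead
PYTHONPATH=/path/to/stable-diffusion python scripts/dream.py
```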
Let me know if you find a working solution. This has probably been the most elusive of bugs in the fork, and I wish I could get to the bottom of it.
Lincoln
Using the latest release tag allowed the initializing message to show in my docker container, but then it failed with ModuleNotFoundError: No module named 'clip', which is very strange too. I can see it in the src folder. I tried rerunning pip install -r requirements.txt but no dice. Tried adding the sd path to my PYTHONPATH, but no luck either for the clip issue. Adding sys.path.append('/path/to/stable-diffusion') to the top of dream.py didn't help with the clip issue.
Adding export PYTHONPATH="/app" helped me get it to show the initialization message on main! Progress!
Btw thanks for all your work on this. Very cool enhancements to the original repo
Doing export PYTHONPATH="/app:/app/src" fixed the clip import issue!
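For anyone following along, this is roughly what that looks like inside the container (a sketch, assuming the repo is copied to /app in the image, as the /app/... paths in the traceback below suggest):

```sh
# Make the repo root and its vendored src/ checkouts importable
export PYTHONPATH="/app:/app/src"
python scripts/dream.py
```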
Hit the same issue with:
Loading model from models/ldm/stable-diffusion-v1/model.ckpt
Traceback (most recent call last):
File "scripts/dream.py", line 545, in <module>
main()
File "scripts/dream.py", line 91, in main
t2i.load_model()
File "/app/ldm/simplet2i.py", line 536, in load_model
model = self._load_model_from_config(config, self.weights)
File "/app/ldm/simplet2i.py", line 589, in _load_model_from_config
model = instantiate_from_config(config.model)
File "/app/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config['target'])(
File "/app/ldm/util.py", line 99, in get_obj_from_str
return getattr(importlib.import_module(module, package=None), cls)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/app/ldm/models/diffusion/ddpm.py", line 39, in <module>
from ldm.models.autoencoder import (
File "/app/ldm/models/autoencoder.py", line 6, in <module>
from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
ModuleNotFoundError: No module named 'taming'
Had to add /app/src/taming-transformers to the PYTHONPATH too. This is such a weird bug.
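Putting the pieces from this thread together, the working setting ends up something like the following (a sketch, assuming the /app layout above; the image name is hypothetical):

```sh
# Inside the container, or baked into the image via ENV in the Dockerfile
export PYTHONPATH="/app:/app/src:/app/src/taming-transformers"

# Or pass it at launch time from the host
docker run --gpus all \
  -e PYTHONPATH="/app:/app/src:/app/src/taming-transformers" \
  my-stable-diffusion
```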
Something which should be automatically modifying the path isn't. Unfortunately, given the complexity of your environment, it's really hard to say what (candidates are: conda, pip, pipenv, Colab itself... I don't really understand how docker is involved in this, but any paths set in the container's Dockerfile might have an issue).
The environment isn't actually that complex. Not using conda and not using pipenv (that's part of why I use docker). Not using Colab; Colab was just a separate environment I tried running to see if it was an env issue.
I'll push to my fork later tonight to let you guys see the Dockerfile and what commands I run to use it.
In case this helps anyone else, I resolved this on an M1 mac by just installing Anaconda from https://www.anaconda.com/
Yeah, that makes some sense; I had a hunch that it had something to do with the way modules in Anaconda environments are handled. The reason I'm still confused is that the Colab notebook just uses pipenv, not conda, and it works fine after running 'pip install -e .'
I use pip-only on baremetal Win 11/RTX 2070 8G VRAM in a pew environment (which under the covers is a standard virtualenv) - I have no issues.
@vanakema; Can you explain your exact environment? You've mentioned Google Colab, Docker, WSL2, GPU passthrough, conda, pip... What is your host environment (OS & ver, GPU & VRAM)? In which exact environment are you trying to actually run the 'dream' script?
Sure. It's just what I said in my first post, though I realize describing two separate environments muddied things. Sorry for the confusion.
Primary environment tested in: Windows 10, 2080 Ti 12 GB -> Docker for Windows w/ WSL2 integration and GPU passthrough -> Ubuntu Docker container. No special Python environment manager, just straight pip. Python 3.8.13, pip 22.2.2.
Secondary environment tested in as a sanity check: Google Colab Free
I am not using conda. I called that out because the readme specifically mentions conda, so the lack of it was a suspect of mine, but the Colab environment (and yours) proves otherwise.
However, ultimately the host env should have no effect on Python module import issues inside a docker container.
Let me know if there are any other pieces of my env you want more details on.
Hey @tildebyte, I was reviewing your "easy peasy" Windows guide just now and am wondering whether the requirements.txt in the distribution should just be replaced by the one in your guide?
Lincoln
Honestly, I think you’d be better off running on native Windows using the instructions from the wiki.
The only "invasive" (i.e., outside of a Python virtual environment) things you'd have to install would be Python 3.10, Git for Windows, Windows Terminal, and PowerShell 7. The latter two are things Windows should have anyway, IMO (and probably will, before too long).
whether the requirements.txt in the distribution should just be replaced by the one in your guide?
I have no objection, but I did (slightly sneakily) base it on Python 3.10 (I had to update torch and numpy, and maybe one other package; a diff should tell).
Point being: if we replace the main reqs file, we should probably require 3.10 for all platforms (it seems like the Mac users are already on 3.9 or 3.10), and for conda (which I personally have not tested past 3.8).
The original problem happens because conda adds the current path to PYTHONPATH. You can fix the error with:
PYTHONPATH=. python scripts/dream.py --full_precision
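Equivalently, a sketch that sets the variable once per shell session instead of per command (run from the repository root):

```sh
# From the top-level stable-diffusion directory
export PYTHONPATH="$(pwd)"
python scripts/dream.py --full_precision
```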
🤨
Has anyone gotten this to work in the current Colab? Whenever I try to run it, it acts like various parts aren't installed, such as pytorch_lightning, even though I install that manually. It seems inconsistent too. I'm new to Colab and was just curious whether it's in a working state right now, because if I follow it straight through it's always missing something. (Sometimes it seems to be missing conda; I can install that, but then manually running conda env update -f environment.yaml takes forever.)
I got it to work just the other day. The key was to run the pip install commands again from the xterm in the last step if you have issues with missing dependencies.
Also, I don't think the Colab environment should be using conda; it uses pipenv by default.
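Concretely, re-running the installs from that xterm looks something like this (a sketch; /content/stable-diffusion is assumed from the paths in the logs further down):

```sh
# Inside the xterm opened by the notebook's last step
cd /content/stable-diffusion
pip install -r requirements.txt
pip install -e .
```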
Thanks, I do that but it still seems to break; I just can't get pytorch running.
If I rerun the requirements install and look at the output, it gives:
[pipenv.exceptions.InstallError]: Building wheels for collected packages: pytorch
[pipenv.exceptions.InstallError]: Building wheel for pytorch (setup.py): started
[pipenv.exceptions.InstallError]: Building wheel for pytorch (setup.py): finished with status 'error'
[pipenv.exceptions.InstallError]: Running setup.py clean for pytorch
[pipenv.exceptions.InstallError]: Failed to build pytorch
[pipenv.exceptions.InstallError]: Installing collected packages: pytorch, pyasn1, einops, commonmark, antlr4-python3-runtime, watchdog, tzdata, tornado, toolz, toml, tensorboard-data-server, semver, rsa, python-dateutil, pympler, pygments, pyDeprecate, pyasn1-modules, protobuf, pillow, omegaconf, oauthlib, numpy, multidict, MarkupSafe, importlib-metadata, imageio-ffmpeg, grpcio, future, fsspec, frozenlist, filelock, entrypoints, decorator, charset-normalizer, cachetools, blinker, backports.zoneinfo, async-timeout, absl-py, yarl, werkzeug, validators, torchmetrics, rich, requests-oauthlib, pytz-deprecation-shim, pyarrow, pudb, pandas, opencv-python-headless, opencv-python, markdown, kornia, jinja2, imageio, huggingface-hub, google-auth, aiosignal, tzlocal, transformers, torch-fidelity, pydeck, google-auth-oauthlib, altair, aiohttp, tensorboard, streamlit, imgaug, test-tube, pytorch-lightning, albumentations
[pipenv.exceptions.InstallError]: Running setup.py install for pytorch: started
[pipenv.exceptions.InstallError]: Running setup.py install for pytorch: finished with status 'error'
[pipenv.exceptions.InstallError]: error: subprocess-exited-with-error
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × python setup.py bdist_wheel did not run successfully.
[pipenv.exceptions.InstallError]: │ exit code: 1
[pipenv.exceptions.InstallError]: ╰─> See above for output.
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This error originates from a subprocess, and is likely not a problem with pip.
[pipenv.exceptions.InstallError]: ERROR: Failed building wheel for pytorch
[pipenv.exceptions.InstallError]: error: subprocess-exited-with-error
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × Running setup.py install for pytorch did not run successfully.
[pipenv.exceptions.InstallError]: │ exit code: 1
[pipenv.exceptions.InstallError]: ╰─> See above for output.
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This error originates from a subprocess, and is likely not a problem with pip.
[pipenv.exceptions.InstallError]: error: legacy-install-failure
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: × Encountered error while trying to install package.
[pipenv.exceptions.InstallError]: ╰─> pytorch
[pipenv.exceptions.InstallError]:
[pipenv.exceptions.InstallError]: note: This is an issue with the package mentioned above, not pip.
[pipenv.exceptions.InstallError]: hint: See above for output from the failure.
If I manually try to install pytorch_lightning, which is the part the script thinks is missing, it seems to install fine (pip install pytorch_lightning==1.4.2). However, the script still doesn't see it.
Regarding pytorch itself, I can't get past the wheel error, and I wonder if it is because the Colab is on Python 3.7 instead of 3.8:
pip install pytorch
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting pytorch
Downloading pytorch-1.0.2.tar.gz (689 bytes)
Building wheels for collected packages: pytorch
Building wheel for pytorch (setup.py) ... error
ERROR: Failed building wheel for pytorch
Running setup.py clean for pytorch
Failed to build pytorch
Installing collected packages: pytorch
Running setup.py install for pytorch ... error
ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-gd6gxjtg/pytorch_001a255cf34a4a138c34df1be6554342/setup.py'"'"'; file='"'"'/tmp/pip-install-gd6gxjtg/pytorch_001a255cf34a4a138c34df1be6554342/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-aqmnn8ab/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/pytorch Check the logs for full command output.
/content/stable-diffusion#
I just haven't been able to figure it out yet, but if anyone has run this Colab successfully, let me know what commands you had to put in to do so. Thanks.
Oh, interesting. I got that legacy install error before, and it was when I tried to use Python 3.10 instead of 3.8. That was in my own local docker env, though, so I'm not sure if that's your problem here. I'm pretty sure my Colab was running 3.8.x and it worked fine, and I'm not quite sure why yours is running 3.7, but I suspect it has something to do with your Python version.
Hey @Bendito999 @vanakema, I made the Colab notebook, and I just noticed that it is failing. This is because the repo owners changed where the release-1.09 tag was pointing after I made the notebook. I had it check out that tag because a lot of changes were being made in real time and sometimes it worked correctly and other times it didn't, so to avoid errors I decided to pin to a stable tag.
Based on my analysis, this is where it is pointing now, and where it was then:

| Tag | release-1.09 |
| --- | --- |
| Current pointer | 81cbcb9 |
| Previous pointer | 4d72644 |
Let me know if I'm right @tildebyte @lstein.
In the previous pointer commit there were changes to the requirements.txt file that installed all dependencies correctly. Newer tags have dependency issues which are still not resolved. You could check out the previous commit that the release-1.09 tag was pointing to.
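For anyone doing that by hand rather than via the fork below, a sketch using the hashes from the table above:

```sh
git clone https://github.com/lstein/stable-diffusion.git
cd stable-diffusion
# 4d72644 is the commit release-1.09 previously pointed to (per the table above)
git checkout 4d72644
```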
Use this notebook in my fork, where I do what I explained above; it is working as it used to: artmen1516/stable-diffusion/blob/colab-notebook-fix/Stable_Diffusion_AI_Notebook.ipynb
I don't want to open a PR with this as the solution, since pinning to a specific commit isn't ideal. I will work on making it run with the latest release, but in the meantime this is a workaround for using the notebook.
Note: I didn't use conda, since I understand conda takes a lot more memory (and hence installation time), and the point of the notebook is to have something relatively fast for testing the repo.
@artmen1516;
In the previous pointer commit there were changes made to the requirements.txt file which installed all dependencies correctly
Yes, that is the case - I don't know how that happened. Your advice for checking out a different commit is sound. Note that we're still trying to figure out what 'requirements.txt' should contain, and recently, in f4004f6 (merged to 'development'), we split it per platform (so, e.g., for Colab it would be 'requirements-lin.txt').
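Once that split lands, the Colab/Linux install would presumably look something like this (a sketch, assuming the filename stays requirements-lin.txt):

```sh
pip install -r requirements-lin.txt
pip install -e .
```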
@tildebyte Nice to hear, that's a good approach. I'm going to test the notebook with requirements-lin and give you feedback; hope it merges to main soon.
Also, it would be nice to have static releases instead of tags that can change.
https://github.com/lstein/stable-diffusion/releases/tag/release-1.13
That's the plan going forward.
My PR #422 was just merged to the development branch. Until development is merged to main, the Colab shortcut will redirect to the previous version; please use this link in the meantime.
The fixed Colab works well, thank you!
When trying to run the dream.py file on the current main branch, I get the following error:
Environment: Google Colab & Docker (WSL2 + GPU passthrough), Python 3.8.13
The error output above came from the Google Colab environment on the main branch (I removed the line that checked out a specific tag).
I've been working in a docker container and thought maybe it had something to do with not using conda, but I tried the Colab notebook (with the line that checks out the specific tag removed) and encountered the same error, so I am fairly certain it's an issue with the current branch rather than an env issue.
When I ran the same line on tags/release-1.09, it showed the initialization message.