Closed uselessgames closed 1 year ago
I would say to try using 'cmd' instead of Powershell and see if that helps.
What are you using to run the commands? Powershell? cmd?
I just tried here with Powershell 7.3.0-preview (download link), and it worked.
What are you using to run the commands? Powershell? cmd?
PowerShell 7. I have also tried cmd and Git Bash before.
Please try the Powershell 7.3 preview?
Tried with the 7.3 preview, to no avail. I still get the error:
C:\Program: The term 'C:\Program' is not recognized as a name of a cmdlet, function, script file, or executable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
You can see the log here: https://pastebin.com/T95keFWA. I added the Python and shell versions at the end for your confirmation.
@uselessgames;
(This is mostly copy-pasta from #748)
I downloaded a Windows 11 Developer VM from Microsoft, turned on virtualization on my laptop, fired up the VM (this is the closest I can get to a truly "clean" install), and ran through the install myself. I did need to make a couple of tweaks (mostly the repo name thing, but nothing which could cause or fix what you're seeing; also added the correct up-to-date requirements back in), but ultimately I did not encounter the path issue you're seeing.
For what it's worth -- I installed this via python/pip directly.
EDIT: I'm using cmd; you can use PowerShell if you like. Also, I've done this exact sequence with Python 3.9 and 3.10 on two different systems with identical hardware. It should really Just Work(TM)
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
python -m venv invokeai-venv
invokeai-venv\Scripts\Activate.bat
pip install -r requirements-win.txt
python scripts\preload_models.py
mkdir models\ldm\stable-diffusion-v1
mklink /h models\ldm\stable-diffusion-v1\model.ckpt C:\Code\git\models\sd-1.4\model.ckpt
python scripts\dream.py
The mklink command is probably just me - I use NTFS hard links (same as a Linux hard link) so I can play with six or so different SD repos without having six or so different checkpoint files on my disk. Native Windows stuff, don't be scared. ANYWAY, even using copy instead of mklink: install Python, install Git, run those commands, done.
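If you'd rather script the link creation than type `mklink /h` by hand, the same effect is available from Python via `os.link`. A sketch with throwaway files (the real checkpoint paths are machine-specific, so the paths below are illustrative only):

```python
import os
import tempfile

def hardlink_checkpoint(src: str, dst: str) -> None:
    """Create a hard link so several repos share one checkpoint file
    without duplicating its bytes on disk (same effect as `mklink /h`)."""
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    os.link(src, dst)  # both names now refer to the same file data

# Demo with throwaway files; real checkpoint paths are machine-specific.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "model.ckpt")
    with open(src, "wb") as f:
        f.write(b"fake checkpoint")
    dst = os.path.join(tmp, "repo-a", "model.ckpt")
    hardlink_checkpoint(src, dst)
    linked = os.path.samefile(src, dst)  # one file, two names
    nlinks = os.stat(dst).st_nlink       # link count is now 2
    print(linked, nlinks)  # True 2
```

Note that hard links only work within a single volume; linking across drives needs a copy (or a symlink, which has its own Windows permission quirks).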
To start it again later;
cd <repodir>
invokeai-venv\Scripts\Activate.bat
python scripts\dream.py
I installed this via python/pip directly
Yup, this is literally exactly what pew does under the covers - it's just a shell-script wrapper around the standard stuff (except that it always puts the venvs in a standard place: ~/.virtualenvs under Windows, ~/.local/share/virtualenvs under Linux).
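A sketch of that default layout (pew also honors a `WORKON_HOME` environment variable, so treat these paths as defaults, not guarantees):

```python
import os
import sys

def pew_venv_home() -> str:
    """Default directory where pew keeps its venvs, per the note above;
    an explicit WORKON_HOME environment variable overrides the default."""
    if "WORKON_HOME" in os.environ:
        return os.environ["WORKON_HOME"]
    if sys.platform == "win32":
        return os.path.expanduser("~/.virtualenvs")
    return os.path.expanduser("~/.local/share/virtualenvs")

print(pew_venv_home())
```

This is why `pew workon invokeai` can find the env without you telling it a path: everything lives under one well-known directory.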
+1 for mklink, I have one copy in Google Drive so I can use the same one locally and in Colab :D
Not sure if this is related to my install method but I get an error when attempting Textual Inversion per Textual Inversion
Summoning checkpoint.
Traceback (most recent call last):
File "C:\Code\git\InvokeAI\main.py", line 944, in <module>
trainer.fit(model, data)
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 553, in fit
self._run(model)
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 864, in _run
self.accelerator.setup_environment()
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\pytorch_lightning\accelerators\gpu.py", line 30, in setup_environment
super().setup_environment()
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 76, in setup_environment
self.training_type_plugin.setup_environment()
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\pytorch_lightning\plugins\training_type\ddp.py", line 166, in setup_environment
self.setup_distributed()
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\pytorch_lightning\plugins\training_type\ddp.py", line 249, in setup_distributed
self.init_ddp_connection()
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\pytorch_lightning\plugins\training_type\ddp.py", line 319, in init_ddp_connection
torch.distributed.init_process_group(
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 602, in init_process_group
default_pg = _new_process_group_helper(
File "C:\Code\git\InvokeAI\invokeai-venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 727, in _new_process_group_helper
raise RuntimeError("Distributed package doesn't have NCCL " "built in")
RuntimeError: Distributed package doesn't have NCCL built in
Fixed by switching to the gloo backend; run this before invoking the app:
set PL_TORCH_DISTRIBUTED_BACKEND=gloo
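Equivalently, the variable can be set from inside a Python launcher, as long as it happens before pytorch_lightning starts distributed setup (a sketch; Lightning 1.x reads this variable when it initializes torch.distributed):

```python
import os

# NCCL is only built into Linux GPU builds of torch.distributed;
# gloo works on Windows, so point Lightning at it before DDP starts.
os.environ.setdefault("PL_TORCH_DISTRIBUTED_BACKEND", "gloo")

print(os.environ["PL_TORCH_DISTRIBUTED_BACKEND"])  # gloo
```

Using `setdefault` means a value already exported in the shell (as in the `set` command above) still wins.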
Not sure if this is related to my install method but I get an error when attempting Textual Inversion per Textual Inversion
Known issue on Windows when training
will try again in the coming days, cheers.
Hi @uselessgames , did you get this sorted?
@psychedelicious no such luck.
tried following instruction as provided above https://github.com/invoke-ai/InvokeAI/issues/745#issuecomment-1257289283
the screen says numpy is not installed, even though it is... log for reference https://pastebin.com/1VVtaRr2
@uselessgames
It looks like you aren't activating the environment - nothing will work without doing that. I think it may help you to understand a bit more about how this all works.
The standard way of installing a complex project like InvokeAI is to use a virtual environment. It's like an isolated container for all of the different parts that make up this fairly complex project. There are many versions of those parts, but for InvokeAI, you need specific versions. Other apps on your computer may need different versions of those same parts. Using a virtual environment lets you safely install just the parts and versions needed, without overwriting any other versions of those parts on your computer, which could prevent other apps from working.
Once you install the environment, you need to activate it every time you want to do anything in it - like run the InvokeAI web app or command line interface. Activating the env tells the computer "hey, I want to run InvokeAI, please load up all the components for it". Then InvokeAI has everything it needs ready to go. If you do not activate, InvokeAI can't run because its dependencies are not loaded.
Hopefully this helps make sense of the process of installing and running invokeai.
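A quick way to see whether any environment is currently active, from inside Python itself (a sketch; this works for venv-style environments):

```python
import sys

def in_virtualenv() -> bool:
    """True when this interpreter belongs to a venv: sys.prefix points
    into the venv, while sys.base_prefix points at the base install."""
    return sys.prefix != sys.base_prefix

print(in_virtualenv())  # True when run from an activated venv's python
```

If this prints False while you think you've activated, the activation step didn't take effect in your current shell.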
So, to fix things.
The log indicates you did successfully install the environment, but you missed at least the activation step. That step needs pew to work - it's this command: pew workon invokeai
I suggest doing the guide again, exactly as it is written. pew is definitely still supported, the guide is correct.
One tip: Press the Tab key when entering commands, it will try to autocomplete the command for you. So you can write "pyt" and press tab until it autocompletes "python". It works for file paths, too. And you can press shift + tab to go backwards if you pass up the thing you are looking for. This helps prevent spelling mistakes.
Hope this helps.
It looks like you aren't activating the environment - nothing will work without doing that. I think it may help you to understand a bit more about how this all works.
Yes, I know about Tab and have basic knowledge of venvs. I also thought the comment (https://github.com/invoke-ai/InvokeAI/issues/745#issuecomment-1257289283, which I said I was following) didn't show how to activate, but assumed it was different; after all, there are multiple sets of instructions.
The log indicates you did successfully install the environment, but you missed at least the activation step. That step needs pew to work - it's this command: pew workon invokeai
I suggest doing the guide again, exactly as it is written. pew is definitely still supported, the guide is correct.
Yes, to confirm: I am now following these instructions https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install (step 7.iv). I still cannot activate pew; I even attempted to start it manually, as originally reported in this bug.
log from this most recent attempt. https://pastebin.com/mWaWQ4wR
Ok, thanks. The new log you provided shows that the command to activate the env is not working:
PS C:\Users\popo\Documents\InvokeAI> pew workon invokeai
C:\Program: The term 'C:\Program' is not recognized as a name of a cmdlet, function, script file, or executable program.
This usually means that a space character was not escaped, or the path is not in quotes. I'm not sure what could cause this.
I do not have a windows machine to test on, so I am paging @tildebyte for help with sorting the activation issue.
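For what it's worth, the `C:\Program` symptom is what word-splitting an unquoted path containing a space looks like (`C:\Program Files\...` split at the space). From Python, passing the command as an argument list sidesteps the shell's splitting entirely (an illustrative sketch, not a fix for pew itself):

```python
import subprocess
import sys

# sys.executable stands in for any path that may contain spaces.
# As a list, each argument is passed whole; no shell splits it at spaces.
result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # ok
```

A string command passed through a shell would need explicit quoting around the path to survive.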
I've seen this from other users in the past - I can't repro.
I've tried using every combination of shell (cmd, Powershell, etc.) and terminal (conhost, Windows Terminal, etc., even Git Bash) I can think of, and I can't produce this error...
The only "workaround" I can think of is to use the "manual" install method from #745 (with fixes below - some things have changed)
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
python -m venv invokeai-venv
invokeai-venv/Scripts/activate.bat
pip install -r requirements-lin-win-colab-CUDA.txt
python scripts/preload_models.py
mkdir models/ldm/stable-diffusion-v1
REM Copy the downloaded stable-diffusion model to
REM models/ldm/stable-diffusion-v1/, and rename it to 'model.ckpt'
python scripts/invoke.py
@tildebyte @psychedelicious
Am I supposed to see something after running invokeai-venv/Scripts/activate.bat? Previously (https://github.com/invoke-ai/InvokeAI/issues/745#issuecomment-1288134341) I was accused of not starting the venv, yet on PowerShell 7 (as suggested previously) nothing shows, unlike other terminals that give a response indicating I am in the venv. See image.
If I continue with python scripts/preload_models.py
then I still receive RuntimeError: Numpy is not available
same as noted in https://github.com/invoke-ai/InvokeAI/issues/745#issuecomment-1288049349
cheers
Is there a way to wipe clean all the previous attempts I have done? Could that be causing a problem?
I'm not sure if you are supposed to see something after activating the pew venv, but I would hope so. I don't have Windows, so I can't test for you.
You can delete the venv with pew rm invokeai to have a clean slate to start over.
@psychedelicious the new instructions @tildebyte is giving don't use pew, which is different from the originally reported issue.
@tildebyte
according to this page https://docs.python.org/3/library/venv.html, to activate a venv with PowerShell the command is PS C:\> <venv>\Scripts\Activate.ps1
while what you have is PS C:\> <venv>\Scripts\Activate.bat
... once I activate that way, the installation continues; however, a new issue arises:
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'C:\\Users\\popo\\AppData\\Local\\Temp\\pip-uninstall-6bzd6962\\pip.exe'
Check the permissions.
I can confirm I am running PowerShell as admin and pip is the newest version, 22.3.
full log https://pastebin.com/qnyizrNr
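To see why the extension matters: a fresh venv ships shell-specific activation scripts side by side. A sketch that lists them (exact script names vary by Python version and platform):

```python
import pathlib
import subprocess
import sys
import tempfile

# Create a throwaway venv and list its activation-related scripts.
# On Windows, Scripts/ holds Activate.ps1 (PowerShell) and activate.bat
# (cmd); on POSIX, bin/ holds `activate` for sh-like shells.
with tempfile.TemporaryDirectory() as tmp:
    subprocess.run([sys.executable, "-m", "venv", tmp], check=True)
    bindir = pathlib.Path(tmp) / ("Scripts" if sys.platform == "win32" else "bin")
    scripts = sorted(p.name for p in bindir.iterdir() if "ctivate" in p.name)
    print(scripts)
```

Each shell only understands its own script, which is why running the .bat from PowerShell appears to do nothing.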
I have manually removed the pip uninstall folders from the AppData folder and now it is successfully preloading models... my slow internet may take a while for an update.
The install finishes; sadly the program doesn't run. From what I gather I should run with --precision=float32
because my GPU is older, but to no avail. Any thoughts?
(invokeai-venv) PS C:\Users\popo\Documents\fun\InvokeAI> python scripts/invoke.py --precision=float32
* Initializing, be patient...
NOTE: Redirects are currently not supported in Windows or MacOs.
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> Loading stable-diffusion-1.4 from models/ldm/stable-diffusion-v1/model.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using more accurate float32 precision
** model stable-diffusion-1.4 could not be loaded: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
** restoring None
** "None" is not a known model name. Please check your models.yaml file
** Model switch failed **
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
invoke> q
goodbye!
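The fragmentation hint in that error message can be tried by setting the allocator config before PyTorch is imported (a sketch; max_split_size_mb:128 is an illustrative value, not a recommendation from the InvokeAI docs):

```python
import os

# Must be set before `import torch`; the CUDA caching allocator
# reads this configuration once at startup.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # max_split_size_mb:128
```

This only helps when the failure is due to fragmentation rather than a genuine shortage of VRAM.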
Ok good, I think you are very close to having it working. I am not sure what needs to happen here regarding precision and the model load failing.
Paging @tildebyte @lstein to review the last error report by @uselessgames - do we have a bug with manually specified precision and model switching or something else?
@uselessgames;
Which GPU do you have? An NVIDIA...
Did you try without --precision=float32
?
GPU is listed above... I did try without precision and same error happens.
@uselessgames What GPU do you have? Not just that it is CUDA. I don't see it listed anywhere, but it's a long thread now.
Apologies, it's the NVIDIA GeForce GTX 1650.
Confirming results by running python scripts/invoke.py only. @tildebyte
PS C:\Users\popo\Documents\fun\InvokeAI> invokeai-venv/Scripts/activate.ps1
(invokeai-venv) PS C:\Users\popo\Documents\fun\InvokeAI> python scripts/invoke.py
* Initializing, be patient...
NOTE: Redirects are currently not supported in Windows or MacOs.
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cuda
>> Loading stable-diffusion-1.4 from models/ldm/stable-diffusion-v1/model.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using more accurate float32 precision
** model stable-diffusion-1.4 could not be loaded: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
** restoring None
** "None" is not a known model name. Please check your models.yaml file
** Model switch failed **
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
invoke> a giant mushroom in a forest
>> Loading stable-diffusion-1.4 from models/ldm/stable-diffusion-v1/model.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using more accurate float32 precision
** model stable-diffusion-1.4 could not be loaded: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
** restoring None
** "None" is not a known model name. Please check your models.yaml file
** Model switch failed **
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\Users\popo\Documents\fun\InvokeAI\scripts\invoke.py:708 in <module> │
│ │
│ 705 │ completer.set_line(cmd) │
│ 706 │
│ 707 if __name__ == '__main__': │
│ ❱ 708 │ main() │
│ 709 │
│ │
│ C:\Users\popo\Documents\fun\InvokeAI\scripts\invoke.py:99 in main │
│ │
│ 96 │ │ ) │
│ 97 │ │
│ 98 │ try: │
│ ❱ 99 │ │ main_loop(gen, opt, infile) │
│ 100 │ except KeyboardInterrupt: │
│ 101 │ │ print("\ngoodbye!") │
│ 102 │
│ │
│ C:\Users\popo\Documents\fun\InvokeAI\scripts\invoke.py:303 in main_loop │
│ │
│ 300 │ │ │ if operation == 'generate': │
│ 301 │ │ │ │ catch_ctrl_c = infile is None # if running interactively, we catch keybo │
│ 302 │ │ │ │ opt.last_operation='generate' │
│ ❱ 303 │ │ │ │ gen.prompt2image( │
│ 304 │ │ │ │ │ image_callback=image_writer, │
│ 305 │ │ │ │ │ step_callback=step_callback, │
│ 306 │ │ │ │ │ catch_interrupts=catch_ctrl_c, │
│ │
│ c:\users\popo\documents\fun\invokeai\ldm\generate.py:351 in prompt2image │
│ │
│ 348 │ │ width = width or self.width │
│ 349 │ │ height = height or self.height │
│ 350 │ │ │
│ ❱ 351 │ │ for m in model.modules(): │
│ 352 │ │ │ if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)): │
│ 353 │ │ │ │ m.padding_mode = 'circular' if seamless else m._orig_padding_mode │
│ 354 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'NoneType' object has no attribute 'modules'
(invokeai-venv) PS C:\Users\popo\Documents\fun\InvokeAI>
** model stable-diffusion-1.4 could not be loaded: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Your video card is right on the edge of being able to load the model but it doesn't have quite enough memory. Try closing all running applications then run it again. Maybe @tildebyte can offer other windows instructions.
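As a sanity check on those numbers, the figures in the OOM message itself tell the story. A throwaway sketch to pull them out (not part of InvokeAI):

```python
import re

msg = (
    "CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total "
    "capacity; 3.41 GiB already allocated; 0 bytes free; 3.46 GiB reserved "
    "in total by PyTorch)"
)

def parse_gib(label: str, text: str) -> float:
    """Extract an 'X.XX GiB <label>' figure from a CUDA OOM message."""
    return float(re.search(r"([\d.]+) GiB " + re.escape(label), text).group(1))

total = parse_gib("total capacity", msg)
reserved = parse_gib("reserved", msg)
# Headroom left after PyTorch's reservation, in GiB:
print(round(total - reserved, 2))  # 0.54
```

Roughly half a gigabyte of headroom on a 4 GB card is simply not enough for the full float32 model, which matches the conclusion below.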
No, it's pretty much that your total VRAM available is < 4G
Issue is a lack of VRAM, nothing we can do, sorry.
Describe your environment
[Run `git show` and paste the line that starts with "Merge" here] - there is no line which starts with "Merge".
Describe the bug
Hello - trying to install on Windows 10, but I can't seem to get any results. Using these instructions https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install, step 7.iv at the end gives the following error. When I try to start the env manually with
pew workon stable-diffusion
the same error is returned. It seems that pew is no longer supported, based on their repo https://github.com/berdario/pew, and I'm not the only one with the error: https://github.com/berdario/pew/issues/231
Are there any alternatives or fixes to get the env started on Windows?
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I guess to finish the installation properly according to the given instructions.
Screenshots
n/a
Additional context
Level of programming expertise: able to follow instructions.