MountaintopLotus / braintrust

A Dockerized platform for running Stable Diffusion, on AWS (for now)
Apache License 2.0
1 star, 2 forks

InvokeAI #24

Open JohnTigue opened 1 year ago

JohnTigue commented 1 year ago

Invoke has been around since 2022-11 but seemingly has improved a bunch of late to where it currently has the most sophisticated UI.

See also:

We have a Discord thread on Invoke: #invoke-ai.

InvokeAI team docs:

Videos by the team

Videos by Olivio Sarikas

JohnTigue commented 1 year ago

Some example InvokeAI work:

JohnTigue commented 1 year ago

Supposedly InvokeAI has some of the nicest outpainting machinery:

JohnTigue commented 1 year ago

Invoke does not have a separate webUI input for negative prompts. To add negative prompts, enter them in the regular prompt input field but put the words in square brackets, [like this].
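That bracket convention is easy to apply mechanically when building prompts in code. A minimal sketch (the helper name is mine, not part of InvokeAI):

```python
def with_negatives(prompt, negatives):
    """Append negative terms to a prompt using InvokeAI's
    square-bracket syntax (one bracketed group per term)."""
    bracketed = " ".join(f"[{term}]" for term in negatives)
    return f"{prompt} {bracketed}" if bracketed else prompt

# with_negatives("a castle at dusk", ["blurry", "low quality"])
# -> "a castle at dusk [blurry] [low quality]"
```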

JohnTigue commented 1 year ago

We got Invoke running on Jody's Windows machine, as documented in the Hypnowerk Primer gDoc.

JohnTigue commented 1 year ago

From the horse's mouth: Getting Started with InvokeAI

JohnTigue commented 1 year ago

That's a bingo.

Running InvokeAI in the cloud with Docker

We offer an optimized Ubuntu-based image that has been well-tested in cloud deployments. Note: it also works well locally on Linux x86_64 systems with an Nvidia GPU. It may also work on Windows under WSL2 and on Intel Mac (not tested).

An advantage of this method is that it does not need any local setup or additional dependencies.

JohnTigue commented 1 year ago

Sounds like Invoke can be compiled on M1 Apple Silicon MacBooks, one of which I am typing on: "Similarly, specify full-precision mode on Apple M1 hardware."

But don't use it with Docker on an M1:

Developers on Apple silicon (M1/M2): You can't access your GPU cores from Docker containers and performance is reduced compared with running it directly on macOS but for development purposes it's fine. Once you're done with development tasks on your laptop you can build for the target platform and architecture and deploy to another environment with NVIDIA GPUs on-premises or in the cloud.
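That caveat could be surfaced at startup. A stdlib-only sketch (function name and the `/.dockerenv` heuristic are my assumptions, not InvokeAI code):

```python
import os
import platform

def docker_gpu_warning(machine=None, in_container=None):
    """Return a warning string when running inside a container on an arm64
    host (the Docker-on-Apple-silicon case, where GPU cores are invisible),
    or None otherwise."""
    machine = machine or platform.machine()
    if in_container is None:
        # Common heuristic: Docker creates /.dockerenv inside containers.
        in_container = os.path.exists("/.dockerenv")
    if in_container and machine in ("arm64", "aarch64"):
        return ("Docker on Apple silicon cannot access GPU cores; "
                "performance will be reduced vs. running natively on macOS.")
    return None
```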

JohnTigue commented 1 year ago

https://invoke-ai.github.io/InvokeAI/

It runs on Windows, Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.

JohnTigue commented 1 year ago

Again, don't use Docker on Mac if you want to access the GPUs.

InvokeAI Stable Diffusion Toolkit Docs, Docker:

Developers on Apple silicon (M1/M2): You can't access your GPU cores from Docker containers and performance is reduced compared with running it directly on macOS

JohnTigue commented 1 year ago

Embedding machinery of Invoke 2.2

(Screenshot: embedding UI in Invoke 2.2, 2023-01-15.)
JohnTigue commented 1 year ago

For Embiggen upscaling, see https://github.com/ManyHands/hypnowerk/issues/34#issuecomment-1383238177.

JohnTigue commented 1 year ago

Invoke 2.2 can already do textual inversion training of custom embeddings, but it is not part of the webUI yet; rather, it is invoked via the CLI of the main script.

JohnTigue commented 1 year ago

What's New in 2.2.5:

Allow usage of GPUs in Docker.

Woot.

JohnTigue commented 1 year ago

For the grins, I've been trying to install Invoke on my MacBook Pro. Failing repeatedly:

```
Configuring InvokeAI
Loading Python libraries...

Traceback (most recent call last):
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1093, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/opt/homebrew/Cellar/python@3.9/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 27, in <module>
    from ...modeling_utils import PreTrainedModel
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/transformers/modeling_utils.py", line 78, in <module>
    from accelerate import __version__ as accelerate_version
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/accelerate/__init__.py", line 7, in <module>
    from .accelerator import Accelerator
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/accelerate/accelerator.py", line 33, in <module>
    from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/accelerate/tracking.py", line 45, in <module>
    import wandb
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/wandb/__init__.py", line 26, in <module>
    from wandb import sdk as wandb_sdk
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/wandb/sdk/__init__.py", line 5, in <module>
    from . import wandb_helper as helper  # noqa: F401
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/wandb/sdk/wandb_helper.py", line 6, in <module>
    from .lib import config_util
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/wandb/sdk/lib/config_util.py", line 9, in <module>
    from wandb.util import load_yaml
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/wandb/util.py", line 51, in <module>
    import sentry_sdk  # type: ignore
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/sentry_sdk/__init__.py", line 1, in <module>
    from sentry_sdk.hub import Hub, init
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/sentry_sdk/hub.py", line 9, in <module>
    from sentry_sdk.scope import Scope
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/sentry_sdk/scope.py", line 7, in <module>
    from sentry_sdk.utils import logger, capture_internal_exceptions
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/sentry_sdk/utils.py", line 966, in <module>
    HAS_REAL_CONTEXTVARS, ContextVar = _get_contextvars()
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/sentry_sdk/utils.py", line 936, in _get_contextvars
    if not _is_contextvars_broken():
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/sentry_sdk/utils.py", line 897, in _is_contextvars_broken
    from eventlet.patcher import is_monkey_patched  # type: ignore
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/eventlet/__init__.py", line 17, in <module>
    from eventlet import convenience
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/eventlet/convenience.py", line 7, in <module>
    from eventlet.green import socket
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/eventlet/green/socket.py", line 21, in <module>
    from eventlet.support import greendns
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/eventlet/support/greendns.py", line 66, in <module>
    setattr(dns, pkg, import_patched('dns.' + pkg))
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/eventlet/support/greendns.py", line 61, in import_patched
    return patcher.import_patched(module_name, **modules)
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/eventlet/patcher.py", line 132, in import_patched
    return inject(
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/eventlet/patcher.py", line 109, in inject
    module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/dns/zone.py", line 86, in <module>
    class Zone(dns.transaction.TransactionManager):
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/dns/zone.py", line 757, in Zone
    ) -> dns.rdtypes.ANY.SOA.SOA:
AttributeError: module 'dns.rdtypes' has no attribute 'ANY'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/jft/at/stable/invokeai/./.venv/bin/configure_invokeai.py", line 24, in <module>
    from transformers import CLIPTokenizer, CLIPTextModel
  File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1084, in __getattr__
    value = getattr(module, name)
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1083, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/Users/jft/at/stable/invokeai/.venv/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1095, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):
module 'dns.rdtypes' has no attribute 'ANY'
jft@Manbair repos
```
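The last frames before the AttributeError are eventlet monkey-patching dnspython, which suggests an eventlet/dnspython version mismatch in the venv rather than anything InvokeAI-specific. A quick stdlib check of which versions actually got installed (the names are PyPI distribution names):

```python
from importlib import metadata

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return None

for name in ("eventlet", "dnspython", "sentry-sdk", "wandb"):
    print(name, installed_version(name))
```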

JohnTigue commented 1 year ago

Disk: At least 18 GB of free disk space for the machine learning model, Python, and all its dependencies.
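That requirement can be checked before installing; a small stdlib sketch (the 18 GB figure is the one from the docs, the helper name is mine):

```python
import shutil

REQUIRED_GB = 18  # figure from the InvokeAI docs: model + Python + deps

def enough_disk(path=".", required_gb=REQUIRED_GB):
    """True if the filesystem holding `path` has at least `required_gb` free."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    return free_gb >= required_gb

print("OK to install" if enough_disk() else "Need more free disk")
```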

JohnTigue commented 1 year ago

From the Invoke docs, sounds like SD is trained on AWS (*):

Hardware Type: A100 PCIe 40GB
Hours used: 150000
Cloud Provider: AWS
Compute Region: US-east
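For scale, that model-card figure works out as follows (the dollar rate is a made-up assumption for illustration, not from the docs):

```python
A100_HOURS = 150_000            # from the model card quoted above
HOURS_PER_YEAR = 24 * 365

gpu_years = A100_HOURS / HOURS_PER_YEAR
assumed_usd_per_hour = 3.0      # hypothetical on-demand A100 price
est_cost = A100_HOURS * assumed_usd_per_hour

print(f"{gpu_years:.1f} GPU-years, ~${est_cost:,.0f} at ${assumed_usd_per_hour}/hr")
```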
JohnTigue commented 1 year ago

How to run Stable Diffusion on an M1 Mac (2022-12-12):

Apple has also released a native implementation of StableDiffusion, and StringMeteor wrote up a nice guide on how to run that.

JohnTigue commented 1 year ago

Seems Invoke does not (yet) parse the safetensors file format. A1111 yes, Invoke no.
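For reference, the safetensors container itself is deliberately simple: an 8-byte little-endian unsigned header length, then that many bytes of JSON, then raw tensor data. A minimal header reader as a sketch (not InvokeAI or A1111 code):

```python
import json
import struct

def read_safetensors_header(data: bytes) -> dict:
    """Parse the JSON header of a .safetensors blob: the first 8 bytes
    are a little-endian uint64 header length, followed by that much JSON."""
    (header_len,) = struct.unpack("<Q", data[:8])
    return json.loads(data[8:8 + header_len].decode("utf-8"))
```

Unlike pickle-based .ckpt files, this format cannot execute code on load, which is why the UIs were racing to support it.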

JohnTigue commented 1 year ago

The 2.3.1 update looks like it is going to be very nice: https://github.com/invoke-ai/InvokeAI/releases

JohnTigue commented 1 year ago

The dev community around Automatic1111 seems deeply flawed. This has been going on for months. It is hard to imagine a promising way forward. As such, depending on A1111 as the core engine behind an API is not a wise plan.

There do seem to be some projects that have instead turned to using InvokeAI as the engine behind an API. InvokeAI 2.0 is when web service functionality was added to Invoke. Invoke v2.3.4 was released two days ago. For examples, see https://www.reddit.com/r/StableDiffusion/comments/xrsmjh/api/.

JohnTigue commented 1 year ago

Another feature of InvokeAI that looks promising for using it as the engine for our web service is that it has a mature [CLI](https://invoke-ai.github.io/InvokeAI/features/CLI/). This would be a great and easy way to have an Elastic Load Balancer-driven health check for determining when a specific container has gone off into the weeds.
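Sketch of what the probing side of such a health check could look like, assuming the service exposes some HTTP endpoint (the URL here is a placeholder, not a documented InvokeAI route):

```python
import urllib.error
import urllib.request

# Placeholder: whatever host/port the InvokeAI web service is bound to.
DEFAULT_URL = "http://localhost:9090/"

def is_healthy(url=DEFAULT_URL, timeout=2.0):
    """Return True if the service answers HTTP 200 within the timeout,
    False on any connection error, timeout, or non-200 status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError, ValueError):
        return False
```

An ELB target-group check would hit the same endpoint; this function is just the equivalent probe for local scripting.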

JohnTigue commented 1 year ago

Finally, there simply seems to be a more mature community around InvokeAI, with, for example, a permissive license: MIT, rather than Automatic1111's Affero GPL. It's really down to a team (InvokeAI) versus a single dictator (A1111). That dictator (username AUTOMATIC1111) has put in a lot of effort, but the situation is outgrowing what a single person can handle.

JohnTigue commented 1 year ago

I wonder why StabilityAI has not funded one of these web-UI projects. This seems like a major issue in the SD ecosystem. (I wonder if that might be an opportunity for hypnowerk funding. Boy, I gotta stop going with silly, obscure, too-clever-by-half, hard-to-pronounce project names. Same lesson I learned the hard way with my Burma projects over the last year. Bonehead.)

Nonetheless, given the license and the dysfunctional community, ripping A1111 out seems wise.

JohnTigue commented 1 year ago

Just to be clear, A1111 still has a lot of value to add, but it should be viewed as a prototyping workbench, not the core engine of something that can be built upon for a team service. It seems in its element when used by a single dev/creative on their own private, physically colocated GPU. Cue the sad trombones.

JohnTigue commented 1 year ago

Another promising indicator in InvokeAI's favor: they are currently putting in the effort to migrate to a nodes architecture. ComfyUI also has a nodes-based architecture, but ComfyUI is GPL3-licensed while Invoke is MIT. So they are strategically pivoting to compete with the new entrant while keeping a more commercially friendly license. Most encouraging.

JohnTigue commented 1 year ago

Looks like they started grinding through the architecture overhaul in February, and four days ago said they think they'll have it wrapped up in April.

Screw it. I think I've got enough evidence that I should simply take the hit and move to an InvokeAI-based back end. (Still not sure how to keep Automatic1111 available internally for use without building on it, short of having a dedicated separate server for it. But since it keeps crashing, might just have to take that hit $$.)

JohnTigue commented 1 year ago

Two days ago they released 2.3.5.post2.

(Again, what is with these people? If you've adopted semver, then follow semver.)

JohnTigue commented 1 year ago

I did the upgrade. The UI says 2.3.5, not 2.3.5.post2. It loads straight from git, but pinned to a certain commit. Maybe I just need to update the commit ID, in a fork?

Maybe the UI cannot handle a fourth version component, i.e. the post2 is not shown in the UI but it is in the code?