invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: ./docker/run.sh "banana sushi" -Ak_lms -S42 -s10 == invokeai: error: unrecognized arguments: banana sushi -S42 #2575

Closed · BenGuse219 closed this issue 1 year ago

BenGuse219 commented 1 year ago

Is there an existing issue for this?

OS

Linux

GPU

cuda

VRAM

16 GB GDDR6

What happened?

Following the Docker install documentation at https://invoke-ai.github.io/InvokeAI/installation/040_INSTALL_DOCKER/, I did the following:

Cloned the repository with git clone https://github.com/invoke-ai/InvokeAI.git

Set my environment variable HUGGING_FACE_HUB_TOKEN

Ran ./docker/build.sh (this seems to complete successfully)

Ran ./docker/run.sh "banana sushi" -Ak_lms -S42 -s10, which throws the error "invokeai: error: unrecognized arguments: banana sushi -S42"

I am trying to run this on a GCP VM and attempting to use the CLI interface. Any help would be greatly appreciated!
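
For reference, here is a consolidated sketch of the steps above as one shell session (the token value is a placeholder, and the cd into the cloned directory is assumed from the linked documentation):

# Clone the repository and enter it
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI

# Hugging Face token picked up by the Docker scripts (placeholder value)
export HUGGING_FACE_HUB_TOKEN=hf_xxxxxxxxxxxxxxxx

# Build the image; this step completes without errors
./docker/build.sh

# Run the container with a prompt and generation switches; this is the step that fails
./docker/run.sh "banana sushi" -Ak_lms -S42 -s10
# -> invokeai: error: unrecognized arguments: banana sushi -S42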

Screenshots

No response

Additional context

No response

Contact Details

No response

mauwii commented 1 year ago

@lstein: This is not a problem with the Dockerfile, since running invokeai "banana sushi" -Ak_lms -S42 -s10 directly produces the same error:

invokeai "banana sushi" -Ak_lms -S42 -s10
usage: invokeai [-h] [--laion400m LAION400M] [--weights WEIGHTS] [--version] [--root_dir ROOT_DIR] [--config CONF] [--model MODEL] [--weight_dirs WEIGHT_DIRS [WEIGHT_DIRS ...]]
                [--png_compression {0,1,2,3,4,5,6,7,8}] [-F] [--max_loaded_models MAX_LOADED_MODELS] [--free_gpu_mem] [--xformers | --no-xformers] [--always_use_cpu]
                [--precision PRECISION] [--ckpt_convert | --no-ckpt_convert] [--internet | --no-internet]
                [--nsfw_checker | --no-nsfw_checker | --safety_checker | --no-safety_checker] [--autoconvert AUTOCONVERT] [--patchmatch | --no-patchmatch] [--from_file INFILE]
                [--outdir OUTDIR] [--prompt_as_dir] [--fnformat FNFORMAT] [-s STEPS] [-W WIDTH] [-H HEIGHT] [-C CFG_SCALE] [--sampler SAMPLER_NAME] [--log_tokenization]
                [-f STRENGTH] [-T | -fit | --fit | --no-fit] [--grid | --no-grid | -g] [--embedding_directory EMBEDDING_PATH] [--embeddings | --no-embeddings]
                [--enable_image_debugging] [--karras_max KARRAS_MAX] [--no_restore] [--no_upscale] [--esrgan_bg_tile ESRGAN_BG_TILE] [--gfpgan_model_path GFPGAN_MODEL_PATH]
                [--web] [--web_develop] [--web_verbose] [--cors [CORS ...]] [--host HOST] [--port PORT] [--certfile CERTFILE] [--keyfile KEYFILE] [--gui]
invokeai: error: unrecognized arguments: banana sushi -S42
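
The usage listing above is telling: the invokeai launcher only accepts launch options, and neither a positional prompt nor a -S seed switch is among them, which is why argparse reports "banana sushi" and "-S42" as unrecognized. The usage does list --from_file INFILE, so one possible non-interactive workaround, sketched here against the bare invokeai command and under the assumption that the legacy prompt-line syntax from the original command is accepted inside such a file, would be:

# Put the prompt line, including its per-image switches, into a file
echo 'banana sushi -Ak_lms -S42 -s10' > prompts.txt

# Hand the file to the launcher instead of passing the prompt on the command line
invokeai --from_file prompts.txt
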
mauwii commented 1 year ago

@BenGuse219: To run the CLI, you could use ./docker/run.sh --outdir /data:

./docker/run.sh --outdir /data                                            
You are using these values:

Volumename:     invokeai_data
Invokeai_tag:   ghcr.io/mauwii/invokeai:main-cpu
local Models:   unset

/usr/src/InvokeAI/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 
  warn(f"Failed to load image Python extension: {e}")
* Initializing, be patient...
>> Initialization file /data/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.0-rc5
>> InvokeAI runtime directory is "/data"
>> GFPGAN Initialized
>> CodeFormer Initialized
>> ESRGAN Initialized
>> Using device_type cpu
>> xformers not installed
>> Initializing safety checker
/usr/src/InvokeAI/lib/python3.9/site-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
>> Current VRAM usage:  0.00G
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
  | Using more accurate float32 precision
  | Loading diffusers VAE from stabilityai/sd-vae-ft-mse
  | Using more accurate float32 precision
Fetching 15 files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 91578.69it/s]
  | Default image dimensions = 512 x 512
>> Model loaded in 3.51s
>> Textual inversions available: 
>> Setting Sampler to k_lms (LMSDiscreteScheduler)

* Initialization done! Awaiting your command (-h for help, 'q' to quit)
(stable-diffusion-1.5) invoke>
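
From here, the prompt and its switches would presumably be entered at the interactive invoke> prompt rather than on the launch command line (a sketch, assuming the -A, -S, and -s switches keep their legacy meanings of sampler, seed, and steps at this prompt):

(stable-diffusion-1.5) invoke> banana sushi -Ak_lms -S42 -s10

Note that the startup log above already shows the sampler being set to k_lms, so the -Ak_lms switch is likely redundant in this particular session.
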
BenGuse219 commented 1 year ago

@mauwii thank you!! It seems to be working properly. Did I do something wrong when reading the docs?

mauwii commented 1 year ago

@BenGuse219 Glad that it worked ☺️ And no, I can confirm your reading skills are up to date 😜

That's why I tagged @lstein: to let him know that this is not a problem with the container but with the invokeai CLI, so we can find out whether this will be fixed or whether I need to update the docs 🙈

BenGuse219 commented 1 year ago

@mauwii hahaha, thank you! I didn't know if I had skipped a step or forgotten to do something implicit that most people would already know to do. Thanks again!