pinea00 opened 3 weeks ago
Can you attach full console logs?
Can you attach full console logs?

Thanks. When I selected Restore Details, the second image came out in color, but the first image was still black and white.
venv "S:\venvzluda\Scripts\Python.exe"
ROCm Toolkit was found.
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-1.7.0
Commit hash:
Installing forge_legacy_preprocessor requirement: changing opencv-python version from 4.10.0.82 to 4.8.0
Installing sd-forge-controlnet requirement: changing opencv-python version from 4.10.0.82 to 4.8.0
Total VRAM 8176 MB, total RAM 32606 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 6600 [ZLUDA] : native
VAE dtype: torch.bfloat16
Launching Web UI with arguments: --theme dark --api --autolaunch
CUDA Stream Activated: False
Using pytorch cross attention
ONNX: selected=CUDAExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
ControlNet preprocessor location: S:\stable-diffusion-webui\models\ControlNetPreprocessor
watermark logo: S:\stable-diffusion-webui\extensions\sd-webui-facefusion-dev\watermark.png
[-] sd-webui-facefusion initialized. FaceFusion 2.1.2
2024-06-09 02:21:41,806 - AnimateDiff - INFO - AnimateDiff Hooking i2i_batch
Loading weights [15012c538f] from S:\stable-diffusion-webui\models\Stable-diffusion\realisticVisionV60B1_v51VAE.safetensors
model_type EPS
UNet ADM Dimension 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 8008.3076171875
[Memory Management] Model Memory (MB) = 454.2076225280762
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 6530.099994659424
Moving model(s) has taken 0.07 seconds
Model loaded in 1.5s (forge load real models: 0.9s, calculate empty prompt: 0.5s).
2024-06-09 02:21:43,472 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 17.5s (prepare environment: 10.3s, import torch: 0.3s, import gradio: 0.5s, setup paths: 0.8s, initialize shared: 0.8s, other imports: 0.1s, load scripts: 5.5s, create ui: 2.0s, gradio launch: 0.3s, add APIs: 0.4s).
S:\venvzluda\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(
prompt: cinematic film still, 1 girl, standing under the cherry blossom tree, vignette, highly detailed, high budget, bokeh, cinemascope, moody, epic, gorgeous, film grain, grainy
negative prompt: anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured
spend time: 0.0
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 7660.10107421875
[Memory Management] Model Memory (MB) = 159.55708122253418
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 6476.543992996216
Moving model(s) has taken 0.05 seconds
To load target model AutoencoderKL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 5848.04541015625
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 4824.04541015625
Moving model(s) has taken 0.01 seconds
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 5851.42041015625
[Memory Management] Model Memory (MB) = 1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 3188.0066452026367
Merged with diffusion_model.input_blocks.0.0.weight channel changed to [320, 8, 3, 3]
Moving model(s) has taken 0.58 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:22<00:00, 2.85s/it]
To load target model AutoencoderKL
Begin to load 1 model
Reuse 1 loaded models
[Memory Management] Current Free GPU Memory (MB) = 4140.59912109375
[Memory Management] Model Memory (MB) = 0.0
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 3116.59912109375
Moving model(s) has taken 0.01 seconds
Total progress: 100%|████████████████████████████████████████████████████████████████████| 8/8 [00:25<00:00, 3.25s/it]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 8/8 [00:25<00:00, 2.66s/it]
Can img2img swap the loading positions of the input foreground image and the light-direction image? Otherwise, img2img cannot automatically grab the image size and image prompt from the settings. Thank you very much.
I found the reason. It's not a code problem: I had set the target location and the start location of user-web.bat to be different. Please close this issue, and focus on the development of forge DEV2. Thank you.
txt2img works completely normally, but img2img only produces black-and-white photos. However, when I delete the cache.json in the forge home directory, img2img can generate color images normally. After shutting the UI down and starting it again, IC-Light img2img goes back to black and white. There is no error message; how can this be fixed? Thanks
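Until the root cause is found, the cache.json deletion described above can be automated before each launch. This is only a sketch under the assumptions in the comment: `clear_stale_cache` is a hypothetical helper (not part of Forge), and the install path shown is a placeholder taken from the paths in this log.

```python
from pathlib import Path

def clear_stale_cache(forge_home: Path) -> bool:
    """Delete cache.json under forge_home if present.

    Returns True when a file was actually removed, False otherwise."""
    cache_file = forge_home / "cache.json"
    if cache_file.is_file():
        cache_file.unlink()
        return True
    return False

if __name__ == "__main__":
    # Placeholder path: point this at your actual Forge install directory.
    clear_stale_cache(Path(r"S:\stable-diffusion-webui"))
```

Running this (or an equivalent `del cache.json` line in the launch .bat) before starting the Web UI would reproduce the manual workaround automatically.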