MindOfMatter / Fooocus-MindOfMatter-Edition

Fooocus-MindOfMatter-Edition: an enhanced fork of Fooocus with new features such as custom LoRA configurations, additional presets and styles, and usability improvements. This edition expands the original's versatility, merging the classic functionality with new enhancements.
GNU General Public License v3.0
6 stars · 3 forks

For each model tester #23

Open barepixels opened 9 months ago

barepixels commented 9 months ago

Trying it for the first time, I got this error:

```
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.00 seconds
Total time: 42.06 seconds
use_experimental_async_task_batch: True
enable_test_loras_mode: False
enable_test_base_model_mode: True
enable_test_refiner_model_mode: False
Traceback (most recent call last):
  File "E:\Fooocus-MindOfMatter-Edition\Fooocus-MindOfMatter-Edition\modules\exp_async_worker.py", line 921, in worker
    handler(task)
  File "E:\Fooocus-MindOfMatter-Edition\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Fooocus-MindOfMatter-Edition\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Fooocus-MindOfMatter-Edition\Fooocus-MindOfMatter-Edition\modules\exp_async_worker.py", line 163, in handler
    loras = [[str(args.pop()), float(args.pop()), bool(args.pop())] for _ in range(modules.config.default_loras_max_number)]
AttributeError: module 'modules.config' has no attribute 'default_loras_max_number'
Total time: 0.03 seconds
```
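For reference, the failing line 163 is popping flattened UI arguments off the end of the `args` list, three values per LoRA slot. A minimal self-contained reconstruction (function name, argument order, and sample values are assumptions based on the traceback, not the fork's exact code):

```python
# Sketch of the pattern at exp_async_worker.py line 163, reconstructed from
# the traceback. Each LoRA slot contributes three UI values; the worker pops
# them off the end of the flat args list, so the list must hold the slots in
# reverse, with [enabled, weight, name] order inside each slot.

def pop_loras(args, max_loras):
    """Pop `max_loras` [name, weight, enabled] triples off the end of `args`."""
    return [[str(args.pop()), float(args.pop()), bool(args.pop())]
            for _ in range(max_loras)]

# Two hypothetical LoRA slots, flattened for popping (last slot's name last).
args = [False, 1.0, "None", True, 0.5, "lora_a.safetensors"]
print(pop_loras(args, 2))
# -> [['lora_a.safetensors', 0.5, True], ['None', 1.0, False]]
```

The crash happens before any popping: `range(modules.config.default_loras_max_number)` fails because that attribute is never defined in this build's `modules/config.py`.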

barepixels commented 9 months ago

Tried the dev version:

```
E:\Fooocus-MindOfMatter-Edition-DEV>.\python_embeded\python.exe -s .\Fooocus\entry_with_update.py --preset $presetrnpause
Already up-to-date
Update succeeded.
[System ARGV] ['.\Fooocus\entry_with_update.py', '--preset', '$presetrnpause']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.865
Load preset [E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\presets\$presetrnpause.json] failed
```

```
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 24575 MB, total RAM 32705 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.73 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
use_experimental_async_task_batch: True
enable_test_loras_mode: False
enable_test_base_model_mode: True
enable_test_refiner_model_mode: False
Traceback (most recent call last):
  File "E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\modules\exp_async_worker.py", line 921, in worker
    handler(task)
  File "E:\Fooocus-MindOfMatter-Edition-DEV\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Fooocus-MindOfMatter-Edition-DEV\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Fooocus-MindOfMatter-Edition-DEV\Fooocus\modules\exp_async_worker.py", line 163, in handler
    loras = [[str(args.pop()), float(args.pop()), bool(args.pop())] for _ in range(modules.config.default_loras_max_number)]
AttributeError: module 'modules.config' has no attribute 'default_loras_max_number'
Total time: 0.03 seconds
```
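The proper fix is presumably to define `default_loras_max_number` in `modules/config.py`, but until then the worker could guard the lookup instead of crashing. A sketch of that defensive pattern (the stand-in `config` object and the fallback value of 5, matching the five LoRA slots visible in the log above, are assumptions):

```python
# Defensive fallback for a config attribute that may be missing in this build.
# This is a sketch of the pattern, not the fork's actual patch.
import types

# Stand-in for the real modules.config module, which lacks the attribute.
config = types.SimpleNamespace()

# getattr with a default avoids the AttributeError; 5 matches the five
# LoRA slots shown in the "Request to load LoRAs" log line.
default_loras_max_number = getattr(config, "default_loras_max_number", 5)
print(default_loras_max_number)  # -> 5
```

In `exp_async_worker.py` the same one-liner would replace the bare `modules.config.default_loras_max_number` access at line 163.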