We are glad to introduce DeFooocus – a fork of the Fooocus interface that combines several other forks and adds some convenient features.
This is just a fork (and a fork of forks); we are not the authors of the original work – all thanks go to lllyasviel.
DeFooocus is an image generating software (based on Gradio).
DeFooocus is a rethinking of Stable Diffusion and Midjourney’s designs:
Learned from Stable Diffusion, the software is offline, open source, and free.
Learned from Midjourney, the manual tweaking is not needed, and users only need to focus on the prompts and images.
Fooocus has included and automated many inner optimizations and quality improvements. Users can forget all those difficult technical parameters and just enjoy the interaction between human and computer to "explore new mediums of thought and expanding the imaginative powers of the human species" [1].
Fooocus has simplified the installation. Between pressing "download" and generating the first image, the number of needed mouse clicks is strictly limited to fewer than 3. The minimal GPU memory requirement is 4GB (Nvidia).
[1] David Holz, 2019.
Recently, many fake websites have appeared in Google results when you search for "defooocus". Do not trust them – this repository is the only official source of DeFooocus.
Using Fooocus is as easy as (probably easier than) Midjourney – but this does not mean we lack functionality. Below are the details.
| Midjourney | DeFooocus |
|---|---|
| High-quality text-to-image without needing much prompt engineering or parameter tuning (unknown method) | High-quality text-to-image without needing much prompt engineering or parameter tuning (Fooocus has an offline GPT-2 based prompt processing engine and many sampling improvements, so results are always beautiful, no matter if your prompt is as short as "house in garden" or as long as 1000 words) |
| V1 V2 V3 V4 | Input Image -> Upscale or Variation -> Vary (Subtle) / Vary (Strong) |
| U1 U2 U3 U4 | Input Image -> Upscale or Variation -> Upscale (1.5x) / Upscale (2x) |
| Inpaint / Up / Down / Left / Right (Pan) | Input Image -> Inpaint or Outpaint -> Inpaint / Up / Down / Left / Right (Fooocus uses its own inpaint algorithm and inpaint models, so results are more satisfying than other software that uses the standard SDXL inpaint method/model) |
| Image Prompt | Input Image -> Image Prompt (Fooocus uses its own image prompt algorithm, so result quality and prompt understanding are more satisfying than other software that uses standard SDXL methods such as standard IP-Adapters or Revisions) |
| --style | Advanced -> Style |
| --stylize | Advanced -> Advanced -> Guidance |
| --niji | Multiple launchers: "run.bat", "run_anime.bat", and "run_realistic.bat". Fooocus supports SDXL models on Civitai (you can search for "Civitai" if you do not know about it) |
| --quality | Advanced -> Quality |
| --repeat | Advanced -> Image Number |
| Multi Prompts (::) | Just use multiple lines of prompts |
| Prompt Weights | You can use "I am (happy:1.5)". Fooocus uses A1111's reweighting algorithm, so results are better than ComfyUI's when users copy prompts directly from Civitai. (If prompts are written with ComfyUI's reweighting, users are less likely to copy the prompt text, since they prefer dragging files.) To use an embedding, write "(embedding:file_name:1.1)" |
| --no | Advanced -> Negative Prompt |
| --ar | Advanced -> Aspect Ratios |
| InsightFace | Input Image -> Image Prompt -> Advanced -> FaceSwap |
| Describe | Input Image -> Describe |
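The `(text:weight)` prompt-weight syntax from the table above can be illustrated with a tiny parser. This is a simplified sketch for illustration only, not Fooocus's actual implementation (the real reweighting follows A1111's algorithm and also handles nesting, escaping, and more):

```python
import re

def parse_weighted_prompt(prompt):
    """Split a prompt into (text, weight) pairs using the A1111-style
    "(text:weight)" syntax. Unweighted text gets weight 1.0.
    Simplified sketch: no nesting or escaping, unlike the real parser."""
    pairs = []
    pos = 0
    for m in re.finditer(r"\(([^():]+):([0-9.]+)\)", prompt):
        before = prompt[pos:m.start()].strip()
        if before:
            pairs.append((before, 1.0))
        pairs.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip()
    if tail:
        pairs.append((tail, 1.0))
    return pairs

# parse_weighted_prompt("I am (happy:1.5) today")
# → [("I am", 1.0), ("happy", 1.5), ("today", 1.0)]
```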
We also have a few things borrowed from the best parts of LeonardoAI:
| LeonardoAI | DeFooocus |
|---|---|
| Prompt Magic | Advanced -> Style -> Fooocus V2 |
| Advanced Sampler Parameters (like Contrast/Sharpness/etc.) | Advanced -> Advanced -> Sampling Sharpness / etc. |
| User-friendly ControlNets | Input Image -> Image Prompt -> Advanced |
DeFooocus also includes many "defooocus-only" features for advanced users to get perfect results. Click here to browse the advanced features.
You can directly download Fooocus with:
>>> Click here to download <<<
After you download the file, please uncompress it and then run the "run.bat".
The first time you launch the software, it will automatically download models:
If you already have these files, you can copy them to the above locations to speed up installation.
Note that if you see "MetadataIncompleteBuffer" or "PytorchStreamReader", your model files are corrupted. Please download the models again.
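Those errors typically mean a download was interrupted. As an illustration (an assumption for this sketch, not part of Fooocus itself), a .safetensors file starts with an 8-byte little-endian header length followed by that many bytes of JSON, so a rough corruption check can be written as below; re-downloading is the recommended fix either way:

```python
import json
import struct

def safetensors_header_ok(path):
    """Rough integrity check for a .safetensors file: verify that the
    8-byte little-endian length prefix is followed by that many bytes
    of valid JSON. Truncated or corrupted downloads usually fail here."""
    try:
        with open(path, "rb") as f:
            (header_len,) = struct.unpack("<Q", f.read(8))
            if header_len > 100 * 1024 * 1024:
                return False  # implausibly large header: corrupted prefix
            header = f.read(header_len)
            if len(header) != header_len:
                return False  # file truncated before the header ends
            json.loads(header)
            return True
    except (OSError, ValueError, struct.error):
        return False
```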
Below is a test on a relatively low-end laptop with 16GB system RAM and 6GB VRAM (Nvidia 3060 laptop). The speed on this machine is about 1.35 seconds per iteration. Pretty impressive – nowadays laptops with a 3060 are usually available at a very acceptable price.
Besides, many other software projects recently report that Nvidia drivers above 532 are sometimes 10x slower than Nvidia driver 531. If your generation time is very long, consider downloading Nvidia Driver 531 Laptop or Nvidia Driver 531 Desktop.
Note that the minimal requirement is a 4GB Nvidia GPU (4GB VRAM) and 8GB of system memory (8GB RAM). This requires Microsoft's Virtual Swap technique, which is automatically enabled by your Windows installation in most cases, so you often do not need to do anything about it. However, if you are not sure, if you manually turned it off (would anyone really do that?), or if you see any "RuntimeError: CPUAllocator", you can enable it here:
Please open an issue if you use similar devices but still cannot achieve acceptable performances.
Note that the minimal requirement for different platforms is different.
See also the common problems and troubleshoots here.
Execute `git status`. You should see the following:

```
On branch main
Your branch is up to date with 'origin/main'.

nothing to commit, working tree clean
```

If not, execute `git reset --hard origin/main` and check `git status` again.
```
git remote set-url origin https://github.com/ehristoforu/DeFooocus.git
git pull
```
Activate your venv (not necessary when installed from the 7z archive) and update your Python packages, depending on your environment (7z, venv, conda, etc.).

Example for Windows (7z):

```
..\python_embeded\python.exe -m pip install -r "requirements_versions.txt"
```
OR
Windows: download the 7z file, extract it, and run run.bat. You may want to copy over already-downloaded checkpoints / LoRAs / etc.
| Colab | Info |
|---|---|
| | DeFooocus Official |
If you want to use Anaconda/Miniconda, you can:

```
git clone https://github.com/ehristoforu/DeFooocus.git
cd DeFooocus
conda env create -f environment.yaml
conda activate defooocus
pip install -r requirements_versions.txt
```
Then download the models: download the default models to the folder "DeFooocus\models\checkpoints", or let DeFooocus download them automatically using the launcher:

```
conda activate defooocus
python entry_with_update.py
```
Or, if you want to open a remote port, use

```
conda activate defooocus
python entry_with_update.py --listen
```
Your Linux needs to have Python 3.10 installed. Assuming your Python can be called with the command python3 and your venv system works, you can:

```
git clone https://github.com/ehristoforu/DeFooocus.git
cd DeFooocus
python3 -m venv defooocus_env
source defooocus_env/bin/activate
pip install -r requirements_versions.txt
```
See the above sections for model downloads. You can launch the software with:

```
source defooocus_env/bin/activate
python entry_with_update.py
```
Or, if you want to open a remote port, use

```
source defooocus_env/bin/activate
python entry_with_update.py --listen
```
If you know what you are doing, and your Linux already has Python 3.10 installed with Python callable as python3 (and Pip as pip3), you can:

```
git clone https://github.com/ehristoforu/DeFooocus.git
cd DeFooocus
pip3 install -r requirements_versions.txt
```
See the above sections for model downloads. You can launch the software with:

```
python3 entry_with_update.py
```
Or, if you want to open a remote port, use

```
python3 entry_with_update.py --listen
```
Note that the minimal requirement for different platforms is different.
Same as the above instructions, but you need to change torch to the AMD version:

```
pip uninstall torch torchvision torchaudio torchtext functorch xformers
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6
```
AMD is not intensively tested, however. AMD support is in beta.
Note that the minimal requirement for different platforms is different.
Same as Windows. Download the software and edit the content of run.bat as:

```
.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
.\python_embeded\python.exe -m pip install torch-directml
.\python_embeded\python.exe -s DeFooocus\entry_with_update.py --directml
pause
```

Then run run.bat.
AMD is not intensively tested, however. AMD support is in beta.
Note that the minimal requirement for different platforms is different.
Mac is not intensively tested. Below is an unofficial guideline for using Mac. You can discuss problems here.

You can install DeFooocus on Apple silicon (M1 or M2) Macs running macOS 'Catalina' or newer. DeFooocus runs on Apple silicon via PyTorch MPS device acceleration. Apple silicon computers don't come with a dedicated graphics card, resulting in significantly longer image processing times compared to computers with dedicated graphics cards.
```
git clone https://github.com/ehristoforu/DeFooocus.git
cd DeFooocus
conda env create -f environment.yaml
conda activate defooocus
pip install -r requirements_versions.txt
python entry_with_update.py
```

(Some Mac M2 users may need `python entry_with_update.py --disable-offload-from-vram` to speed up model loading/unloading.) The first time you run DeFooocus, it will automatically download the Stable Diffusion SDXL models, which will take a significant amount of time depending on your internet connection.

See docker.md
Below is the minimal requirement for running Fooocus locally. If your device capability is lower than this spec, you may not be able to use Fooocus locally. (Please let us know, in any case, if your device capability is lower but DeFooocus still works.)
| Operating System | GPU | Minimal GPU Memory | Minimal System Memory | System Swap | Note |
|---|---|---|---|---|---|
| Windows/Linux | Nvidia RTX 4XXX | 4GB | 8GB | Required | fastest |
| Windows/Linux | Nvidia RTX 3XXX | 4GB | 8GB | Required | usually faster than RTX 2XXX |
| Windows/Linux | Nvidia RTX 2XXX | 4GB | 8GB | Required | usually faster than GTX 1XXX |
| Windows/Linux | Nvidia GTX 1XXX | 8GB (* 6GB uncertain) | 8GB | Required | only marginally faster than CPU |
| Windows/Linux | Nvidia GTX 9XX | 8GB | 8GB | Required | faster or slower than CPU |
| Windows/Linux | Nvidia GTX < 9XX | Not supported | / | / | / |
| Windows | AMD GPU | 8GB (updated 2023 Dec 30) | 8GB | Required | via DirectML (* ROCm is on hold), about 3x slower than Nvidia RTX 3XXX |
| Linux | AMD GPU | 8GB | 8GB | Required | via ROCm, about 1.5x slower than Nvidia RTX 3XXX |
| Mac | M1/M2 MPS | Shared | Shared | Shared | about 9x slower than Nvidia RTX 3XXX |
| Windows/Linux/Mac | only use CPU | 0GB | 32GB | Required | about 17x slower than Nvidia RTX 3XXX |
* AMD GPU ROCm (on hold): AMD is still working on supporting ROCm on Windows.
* Nvidia GTX 1XXX 6GB uncertain: Some people report 6GB success on GTX 10XX, but some other people report failure cases.
Note that Fooocus is only for extremely high-quality image generation. We will not support smaller models that reduce the requirements but sacrifice result quality.
See the common problems here.
Given different goals, the default models and configs of Fooocus are different:
| Task | Windows | Linux args | Main Model | Refiner | Config |
|---|---|---|---|---|---|
| General | run.bat | | juggernautXL_v9Rundiffusion | not used | here |
| Realistic | run_realistic.bat | --preset realistic | realisticStockPhoto_v20 | not used | here |
| Anime | run_anime.bat | --preset anime | animaPencilXL_v100 | not used | here |
Note that the download is automatic – you do not need to do anything if the internet connection is okay. However, you can also download the models manually (or move them from somewhere else) if you have your own preparation.
In addition to running on localhost, DeFooocus can also expose its UI in two ways:

- `--listen` (specify the port with e.g. `--port 8888`).
- `--share` (registers an endpoint at `.gradio.live`).

In both cases, access is unauthenticated by default. You can add basic authentication by creating a file called `auth.json` in the main directory, containing a list of JSON objects with the keys `user` and `pass` (see the example in auth-example.json).
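A minimal `auth.json` could look like the following (hypothetical credentials for illustration; see auth-example.json in the repository for the shipped example):

```json
[
    {"user": "admin", "pass": "change-me"}
]
```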
The below things are already inside the software, and users do not need to do anything about these.
After the first time you run Fooocus, a config file will be generated at `DeFooocus\config.txt`. This file can be edited to change the model path or default parameters.

For example, an edited `DeFooocus\config.txt` (this file will be generated after the first launch) may look like this:
```json
{
    "path_checkpoints": "D:\\Fooocus\\models\\checkpoints",
    "path_loras": "D:\\Fooocus\\models\\loras",
    "path_embeddings": "D:\\Fooocus\\models\\embeddings",
    "path_vae_approx": "D:\\Fooocus\\models\\vae_approx",
    "path_upscale_models": "D:\\Fooocus\\models\\upscale_models",
    "path_inpaint": "D:\\Fooocus\\models\\inpaint",
    "path_controlnet": "D:\\Fooocus\\models\\controlnet",
    "path_clip_vision": "D:\\Fooocus\\models\\clip_vision",
    "path_fooocus_expansion": "D:\\Fooocus\\models\\prompt_expansion\\fooocus_expansion",
    "path_outputs": "D:\\Fooocus\\outputs",
    "default_model": "realisticStockPhoto_v10.safetensors",
    "default_refiner": "",
    "default_loras": [["lora_filename_1.safetensors", 0.5], ["lora_filename_2.safetensors", 0.5]],
    "default_cfg_scale": 3.0,
    "default_sampler": "dpmpp_2m",
    "default_scheduler": "karras",
    "default_negative_prompt": "low quality",
    "default_positive_prompt": "",
    "default_styles": [
        "Fooocus V2",
        "Fooocus Photograph",
        "Fooocus Negative"
    ]
}
```
Many other keys, formats, and examples are in `DeFooocus\config_modification_tutorial.txt` (this file will be generated after the first launch).

Think twice before you really change the config. If you find yourself breaking things, just delete `DeFooocus\config.txt`; Fooocus will go back to the defaults.
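Conceptually, `config.txt` is plain JSON overriding built-in defaults, which is why deleting it restores default behavior. Below is a minimal sketch of that merge; the default values and loader logic here are illustrative assumptions, not Fooocus's actual code:

```python
import json
import os

# Illustrative built-in defaults, not the real Fooocus values.
DEFAULTS = {
    "default_cfg_scale": 7.0,
    "default_sampler": "dpmpp_2m",
}

def load_config(path="config.txt"):
    """Return DEFAULTS overridden by any keys present in config.txt.
    A missing (or deleted) config file simply yields the defaults."""
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            config.update(json.load(f))
    return config
```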
A safer way is just to try "run_anime.bat" or "run_realistic.bat" - they should already be good enough for different tasks.
~~Note that `user_path_config.txt` is deprecated and will be removed soon.~~ (Edit: it has already been removed.)
```
entry_with_update.py [-h] [--listen [IP]] [--port PORT]
                     [--disable-header-check [ORIGIN]]
                     [--web-upload-size WEB_UPLOAD_SIZE]
                     [--external-working-path PATH [PATH ...]]
                     [--output-path OUTPUT_PATH] [--temp-path TEMP_PATH]
                     [--cache-path CACHE_PATH] [--in-browser]
                     [--disable-in-browser] [--gpu-device-id DEVICE_ID]
                     [--async-cuda-allocation | --disable-async-cuda-allocation]
                     [--disable-attention-upcast] [--all-in-fp32 | --all-in-fp16]
                     [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2]
                     [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16]
                     [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32]
                     [--directml [DIRECTML_DEVICE]] [--disable-ipex-hijack]
                     [--preview-option [none,auto,fast,taesd]]
                     [--attention-split | --attention-quad | --attention-pytorch]
                     [--disable-xformers]
                     [--always-gpu | --always-high-vram | --always-normal-vram |
                      --always-low-vram | --always-no-vram | --always-cpu]
                     [--always-offload-from-vram] [--disable-server-log]
                     [--debug-mode] [--is-windows-embedded-python]
                     [--disable-server-info] [--share] [--preset PRESET]
                     [--language LANGUAGE] [--disable-offload-from-vram]
                     [--theme THEME] [--disable-image-log]
```
Click here to browse the advanced features.
Fooocus also has many community forks, just like SD-WebUI's vladmandic/automatic and anapnoe/stable-diffusion-webui-ux, for enthusiastic users who want to try!
| Fooocus' forks |
|---|
| fenneishi/Fooocus-Control |
| runew0lf/RuinedFooocus |
| MoonRide303/Fooocus-MRE |
| metercai/SimpleSDXL |
| and so on ... |
See also About Forking and Promotion of Forks.
Special thanks to twri, 3Diva, and Marc K3nt3L for creating additional SDXL styles available in Fooocus. Thanks to daswer123 for contributing the Canvas Zoom!
The log is here.
We need your help! Please help translate Fooocus into international languages.

You can put JSON files in the `language` folder to translate the user interface.

For example, below is the content of `DeFooocus/language/example.json`:
```json
{
    "Generate": "生成",
    "Input Image": "入力画像",
    "Advanced": "고급",
    "SAI 3D Model": "SAI 3D Modèle"
}
```
If you add the `--language example` arg, Fooocus will read `DeFooocus/language/example.json` to translate the UI.
For example, you can edit the ending line of the Windows run.bat as

```
.\python_embeded\python.exe -s DeFooocus\entry_with_update.py --language example
```

Or run_anime.bat as

```
.\python_embeded\python.exe -s DeFooocus\entry_with_update.py --language example --preset anime
```

Or run_realistic.bat as

```
.\python_embeded\python.exe -s DeFooocus\entry_with_update.py --language example --preset realistic
```
For practical translation, you may create your own file like `DeFooocus/language/jp.json` or `DeFooocus/language/cn.json` and then use the flag `--language jp` or `--language cn`. Currently, these files do not exist. We need your help to create them!
Note that if no `--language` is given and `DeFooocus/language/default.json` exists, Fooocus will always load `DeFooocus/language/default.json` for translation. By default, this file does not exist.
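The translation mechanism described above amounts to a key lookup with fallback to the original English label, so partial translation files still work. The sketch below illustrates that assumed behavior; it is not the actual Gradio integration:

```python
import json

def make_translator(language_path=None):
    """Return a function mapping UI labels to translated strings.
    Labels missing from the JSON table fall back to the original text,
    mirroring how an incomplete language file leaves the rest in English."""
    table = {}
    if language_path is not None:
        with open(language_path, encoding="utf-8") as f:
            table = json.load(f)
    return lambda label: table.get(label, label)
```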