lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

Segmentation fault (core dumped) #627

Closed: Vortex1x1x1x closed this issue 10 months ago

Vortex1x1x1x commented 1 year ago

Every time I try to run the program it crashes and says "Segmentation fault (core dumped)"

tkocou commented 1 year ago

Your OS?

Vortex1x1x1x commented 1 year ago

Pop!_OS

Vortex1x1x1x commented 1 year ago

It opens my browser and starts working, but then it stops working as soon as I put in a prompt. This is what my terminal does (I also have an AMD GPU, if that helps):

vortext1x1x1x@byron-pc:~/Fooocus$ python3 entry_with_update.py
Already up-to-date
Update succeeded.
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0]
Fooocus version: 2.1.49
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Gtk-Message: 08:53:26.245: Failed to load module "xapp-gtk3-module"
Gtk-Message: 08:53:26.245: Failed to load module "xapp-gtk3-module"
Gtk-Message: 08:53:26.245: Failed to load module "xapp-gtk3-module"
Gtk-Message: 08:53:26.291: Failed to load module "canberra-gtk-module"
Gtk-Message: 08:53:26.292: Failed to load module "canberra-gtk-module"
Gtk-Message: 08:53:26.705: Failed to load module "xapp-gtk3-module"
Gtk-Message: 08:53:26.706: Failed to load module "xapp-gtk3-module"
Gtk-Message: 08:53:26.706: Failed to load module "xapp-gtk3-module"
Gtk-Message: 08:53:26.738: Failed to load module "canberra-gtk-module"
Gtk-Message: 08:53:26.738: Failed to load module "canberra-gtk-module"
Total VRAM 8176 MB, total RAM 15904 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon Graphics : native
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[Fooocus Smart Memory] Disabling smart memory, vram_inadequate = True, is_old_gpu_arch = False.
model_type EPS
adm 2560
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: /home/vortext1x1x1x/Fooocus/models/checkpoints/sd_xl_refiner_1.0_0.9vae.safetensors
model_type EPS
adm 2816
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Base model loaded: /home/vortext1x1x1x/Fooocus/models/checkpoints/sd_xl_base_1.0_0.9vae.safetensors
LoRAs loaded: [('sd_xl_offset_example-lora_1.0.safetensors', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5), ('None', 0.5)]
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
loading new
App started successful. Use the app with http://127.0.0.1:7860/ or 127.0.0.1:7860
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 7.0
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
Segmentation fault (core dumped)
tkocou commented 1 year ago

Did you load the AMD GPU drivers?

Vortex1x1x1x commented 1 year ago

If you mean the AMD Pro drivers, yes, I do have them installed.

tkocou commented 1 year ago

I meant: did you follow the author's instructions for installing the AMD versions of torch, torchvision, torchaudio, torchtext, functorch, and xformers?
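For reference, those instructions boil down to removing the CUDA wheels and installing the ROCm builds instead, roughly as follows (the exact ROCm version in the index URL depends on which revision of the guide you follow):

pip uninstall torch torchvision torchaudio torchtext functorch xformers
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6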

Vortex1x1x1x commented 1 year ago

Oh. Yeah I also did that.

tkocou commented 1 year ago

Okay. The author did say that the support for AMD was in Beta.

This line in your log probably has something to do with the segfault: missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}

Clip processes the prompt text for the sampler module which generates the image from the latent noise.

Vortex1x1x1x commented 1 year ago

Ok, thanks. I'll do some more troubleshooting to see if I can get it working.

Vortex1x1x1x commented 1 year ago

I did some quick research on the term "Segmentation fault (core dumped)": it means that a program tried to access a memory location it isn't allowed to.
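If you want more detail than the bare "Segmentation fault (core dumped)" message, running it under Python's fault handler or gdb usually shows roughly where it dies (a generic debugging sketch, not Fooocus-specific):

# dump the Python-level stack when a fatal signal arrives
python3 -X faulthandler entry_with_update.py

# or get a native backtrace from gdb after the crash
gdb -q -ex run --args python3 entry_with_update.py
# at the (gdb) prompt after the SIGSEGV, type: bt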

tkocou commented 1 year ago

Try this: delete the Fooocus directory and re-install Fooocus. I have found that sometimes the author of a program will make changes to their program which cause the existing installation to fault. Re-installing the program can fix that class of problems.
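A minimal sketch of that, assuming the default install location under your home directory (moving the old folder aside instead of deleting it lets you copy the downloaded models back afterwards):

cd ~
mv Fooocus Fooocus.bak
git clone https://github.com/lllyasviel/Fooocus.git
cd Fooocus
pip install -r requirements_versions.txt   # the dependency file name may differ between versions
# optionally restore the checkpoints you already downloaded:
# cp ~/Fooocus.bak/models/checkpoints/*.safetensors models/checkpoints/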

Vortex1x1x1x commented 1 year ago

Ok I’ll try that

tkocou commented 1 year ago

I had this sort of problem crop up with Automatic1111 and ComfyUI. Re-installing those programs fixed the fault.

Vortex1x1x1x commented 1 year ago

So I removed and reinstalled the program, and it appears to be working: I'm able to put in a prompt and run it. It is just extremely slow; it's at 3 minutes and hasn't finished yet.

zavalroman commented 11 months ago

I had the same problem with my RX 6500 XT. I solved it with info from https://github.com/comfyanonymous/ComfyUI: I installed nightly ROCm 5.7 (first uninstalling torch torchvision torchaudio torchtext functorch xformers, per that README) and ran Fooocus with the command HSA_OVERRIDE_GFX_VERSION=10.3.0 python entry_with_update.py. The image started to generate (I saw the preview in the browser) but failed due to out of memory (only 4 GB on my card).
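Spelled out, the sequence described above is roughly the following (HSA_OVERRIDE_GFX_VERSION=10.3.0 targets RDNA2 cards like the RX 6x00 series mentioned here; other generations may need a different value):

pip uninstall torch torchvision torchaudio torchtext functorch xformers
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7
HSA_OVERRIDE_GFX_VERSION=10.3.0 python entry_with_update.py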

evebyte commented 11 months ago

Installing the nightly version like @zavalroman mentioned, pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7

and starting it with the command HSA_OVERRIDE_GFX_VERSION=10.3.0 python entry_with_update.py

worked with my 6700 XT.

Vortex1x1x1x commented 11 months ago

I will give this a try later.

Vortex1x1x1x commented 11 months ago

If anyone has any ideas, please let me know.

Vortex1x1x1x commented 11 months ago

ith_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py']
Python 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Fooocus version: 2.1.824
Running on local URL:  http://127.0.0.1:7865

To create a public link, set share=True in launch().
Gtk-Message: 12:10:00.994: Failed to load module "xapp-gtk3-module"
Gtk-Message: 12:10:01.009: Failed to load module "canberra-gtk-module"
Gtk-Message: 12:10:01.010: Failed to load module "canberra-gtk-module"
Opening in existing browser session.
Total VRAM 8176 MB, total RAM 15904 MB
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: cuda:0 AMD Radeon Graphics : native
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Refiner unloaded.
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra keys {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /home/vortext1x1x1x/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/home/vortext1x1x1x/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/home/vortext1x1x1x/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/home/vortext1x1x1x/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Segmentation fault (core dumped)

It's still spitting out the same thing.

evebyte commented 11 months ago

@Vortex1x1x1x

Did you uninstall before installing the nightly torch packages? pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7

Also, I noticed I'm running Python 3.10.13 and you're running 3.10.12.

You might also be having an issue with a GTK module? Not sure what is causing your problem, but maybe someone else can chime in?
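One quick way to check which torch build actually ended up in the environment (a generic diagnostic, not Fooocus-specific; torch.version.hip is only set on ROCm builds):

python3 -c "import torch; print(torch.__version__, torch.version.hip, torch.cuda.is_available())"
# a ROCm build prints a +rocm version string and a HIP version; a CPU or CUDA wheel prints None for the HIP field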

jfrstnc commented 11 months ago

I have the same problem with my 7900 XT. I followed the author's AMD GPU guide; the application segfaults a couple of seconds after clicking the Generate button.

It works with the --cpu argument (python entry_with_update.py --cpu), but that's way too slow.

demaseme commented 11 months ago

with installing the nightly version like @zavalroman mentioned, pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7

and starting it with the command HSA_OVERRIDE_GFX_VERSION=10.3.0 python entry_with_update.py

worked with my 6700 xt

This worked for me on openSUSE Tumbleweed with a 6700 XT. You want to do the uninstall step first.
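Roughly, that means removing all of the old packages before pulling the nightly wheels:

pip uninstall torch torchvision torchaudio torchtext functorch xformers
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7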

jfrstnc commented 11 months ago

So, I did:

pip uninstall torch
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7
HSA_OVERRIDE_GFX_VERSION=10.3.0 python entry_with_update.py

No luck here, Fooocus still crashes soon after launching.

(fooocus_env) j@arch ~/w/a/Fooocus (main)> HSA_OVERRIDE_GFX_VERSION=10.3.0 python entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py']
Python 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801]
Fooocus version: 2.1.824
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
Total VRAM 20464 MB, total RAM 31234 MB
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: cuda:0 AMD Radeon RX 7900 XT : native
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Refiner unloaded.
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra keys {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: /home/j/work/ai/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/home/j/work/ai/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [/home/j/work/ai/Fooocus/models/loras/sd_xl_offset_example-lora_1.0.safetensors] for UNet [/home/j/work/ai/Fooocus/models/checkpoints/juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
fish: Job 1, 'HSA_OVERRIDE_GFX_VERSION=10.3.0…' terminated by signal SIGSEGV (Address boundary error)
(fooocus_env) j@arch ~/w/a/Fooocus (main) [0|SIGSEGV]>
jfrstnc commented 11 months ago

Traceback with HSA_OVERRIDE_GFX_VERSION=11.0.0

Traceback (most recent call last):
  File "/home/j/work/ai/Fooocus/modules/async_worker.py", line 803, in worker
    handler(task)
  File "/home/j/work/ai/Fooocus/fooocus_env/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/work/ai/Fooocus/fooocus_env/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/work/ai/Fooocus/modules/async_worker.py", line 406, in handler
    expansion = pipeline.final_expansion(t['task_prompt'], t['task_seed'])
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/work/ai/Fooocus/fooocus_env/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/work/ai/Fooocus/fooocus_env/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/work/ai/Fooocus/modules/expansion.py", line 117, in __call__
    features = self.model.generate(**tokenized_kwargs,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/work/ai/Fooocus/fooocus_env/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/j/work/ai/Fooocus/fooocus_env/lib/python3.11/site-packages/transformers/generation/utils.py", line 1319, in generate
    and torch.sum(inputs_tensor[:, -1] == generation_config.pad_token_id) > 0
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: HIP error: the operation cannot be performed in the present state
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing HIP_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_HIP_DSA` to enable device-side assertions.

Total time: 25.90 seconds
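As the error text suggests, re-running with blocking kernel launches can make the reported stack more accurate (same command as before, just with the extra environment variable):

HIP_LAUNCH_BLOCKING=1 HSA_OVERRIDE_GFX_VERSION=11.0.0 python entry_with_update.py
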
demaseme commented 11 months ago

My guy, try uninstalling first, as the guide explains:

pip uninstall torch torchvision torchaudio torchtext functorch xformers
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6
mashb1t commented 10 months ago

Continuing in https://github.com/lllyasviel/Fooocus/issues/1288; closing this as a duplicate.