lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

RuntimeError: Currently, AutocastCPU only support Bfloat16 as the autocast_cpu_dtype #759

Closed ksorah1 closed 11 months ago

ksorah1 commented 11 months ago

Hello,

I use your UI because it's actually the best for in/outpainting, but sometimes I get this error and it stops generating. I have to restart everything and I lose all my parameters.

ksorah1 commented 11 months ago

Really annoying.

[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is C:\Users\xxx\Fooocus_portable\Fooocus\models\inpaint\inpaint.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
Requested to load GPT2LMHeadModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.23 seconds
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] New suffix: intricate, elegant, volumetric lighting, digital painting, highly detailed, artstation, sharp focus, illustration, concept art, ruan jia, steve mccurry
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.15 seconds
C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\amp\autocast_mode.py:266: UserWarning: In CPU autocast, but the target dtype is not supported. Disabling autocast.
CPU Autocast only supports dtype of torch.bfloat16 currently.
  warnings.warn(error_message)
Traceback (most recent call last):
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\async_worker.py", line 585, in worker
    handler(task)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\async_worker.py", line 282, in handler
    t['c'] = pipeline.clip_encode(texts=t['positive'], pool_top_k=t['positive_top_k'])
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\default_pipeline.py", line 195, in clip_encode
    cond, pooled = clip_encode_single(final_clip, text)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\default_pipeline.py", line 172, in clip_encode_single
    result = clip.encode_from_tokens(tokens, return_pooled=True)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\backend\headless\fcbh\sd.py", line 120, in encode_from_tokens
    cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\backend\headless\fcbh\sdxl_clip.py", line 56, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\patch.py", line 252, in encode_token_weights_patched_with_a1111_method
    out, pooled = self.encode(to_encode)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\backend\headless\fcbh\sd1_clip.py", line 179, in encode
    return self(tokens)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\backend\headless\fcbh\sd1_clip.py", line 150, in forward
    with precision_scope(model_management.get_autocast_device(device), torch.float32):
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\amp\autocast_mode.py", line 329, in __enter__
    torch.set_autocast_cpu_dtype(self.fast_dtype)  # type: ignore[arg-type]
RuntimeError: Currently, AutocastCPU only support Bfloat16 as the autocast_cpu_dtype
Total time: 1.52 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is C:\Users\xxx\Fooocus_portable\Fooocus\models\inpaint\inpaint.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
Requested to load GPT2LMHeadModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.33 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.13 seconds
[same warning and traceback as above]
Total time: 0.81 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is C:\Users\xxx\Fooocus_portable\Fooocus\models\inpaint\inpaint.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
model_type EPS
adm 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
missing {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: C:\Users\xxx\Fooocus_portable\Fooocus\models\checkpoints\realisticStockPhoto_v10.safetensors
Requested to load GPT2LMHeadModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.24 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.30 seconds
[same warning and traceback as above]
Total time: 15.66 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is C:\Users\xxx\Fooocus_portable\Fooocus\models\inpaint\inpaint.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
LoRAs loaded: [('None', 0.25), ('None', 0.25), ('None', 0.25), ('None', 0.25), ('None', 0.25), ('C:\\Users\\xxx\\Fooocus_portable\\Fooocus\\models\\inpaint\\inpaint.fooocus.patch', 1.0)]
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
unload clone 0
loading in lowvram mode 492.100359916687
[Fooocus Model Management] Moving model(s) has taken 0.77 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.17 seconds
[same warning and traceback as above]
Total time: 2.84 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is C:\Users\xxx\Fooocus_portable\Fooocus\models\inpaint\inpaint_v25.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
LoRAs loaded: [('None', 0.25), ('None', 0.25), ('None', 0.25), ('None', 0.25), ('None', 0.25), ('C:\\Users\\xxx\\Fooocus_portable\\Fooocus\\models\\inpaint\\inpaint_v25.fooocus.patch', 1.0)]
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
unload clone 0
loading in lowvram mode 491.9561290740967
[Fooocus Model Management] Moving model(s) has taken 0.48 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.12 seconds
[same warning and traceback as above]
Total time: 5.69 seconds
lllyasviel commented 11 months ago

`AutocastCPU only support Bfloat16 as the autocast_cpu_dtype` happens when you run multiple Fooocus instances at the same time, or when other software is using the GPU and competing for resources with Fooocus.
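The last two frames of the traceback show the root cause: `sd1_clip.py` enters a CPU autocast scope with `torch.float32`, but older PyTorch builds accept only `torch.bfloat16` there. Below is a minimal, torch-free sketch of that validation step; the function name and dtype strings are illustrative stand-ins, not PyTorch's real internals:

```python
# Hypothetical, simplified sketch of the dtype check that torch.amp.autocast
# performs for the CPU device in older PyTorch builds: any dtype other than
# bfloat16 is rejected, which is exactly the RuntimeError in the logs above.
SUPPORTED_CPU_AUTOCAST_DTYPES = {"bfloat16"}

def set_autocast_cpu_dtype(dtype_name: str) -> str:
    if dtype_name not in SUPPORTED_CPU_AUTOCAST_DTYPES:
        raise RuntimeError(
            "Currently, AutocastCPU only support Bfloat16 as the autocast_cpu_dtype"
        )
    return dtype_name

# bfloat16 is accepted; float32 (what sd1_clip.py passes when the CLIP model
# falls back to CPU) raises at scope entry.
assert set_autocast_cpu_dtype("bfloat16") == "bfloat16"
try:
    set_autocast_cpu_dtype("float32")
    raised = False
except RuntimeError:
    raised = True
assert raised
```

This is why the error only appears under memory pressure: when the GPU is contested, the text encoder gets moved to CPU, and the float32 autocast scope that is harmless on CUDA suddenly hits the CPU-only restriction.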

ksorah1 commented 11 months ago

Only one Fooocus instance, but as I use my computer, Fooocus crashes and never recovers. I have to restart it, which is really annoying.

lllyasviel commented 11 months ago

Share full logs.

lllyasviel commented 11 months ago

Hello, it seems this problem can be fixed by updating PyTorch. If you are using Windows, you can download the latest Fooocus 7z package to update PyTorch. https://github.com/pytorch/pytorch/issues/100565
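Besides updating PyTorch, a defensive pattern (a hypothetical sketch, not the actual Fooocus fix) is to skip autocast entirely when the CPU backend cannot honor the requested dtype, rather than letting scope entry raise mid-encode. `DummyAutocast` below stands in for `torch.autocast` so the example runs without PyTorch installed:

```python
from contextlib import nullcontext

class DummyAutocast:
    """Stand-in for torch.autocast so this sketch runs without PyTorch."""
    def __init__(self, device_type, dtype_name):
        if device_type == "cpu" and dtype_name != "bfloat16":
            raise RuntimeError(
                "Currently, AutocastCPU only support Bfloat16 as the autocast_cpu_dtype"
            )
        self.device_type = device_type
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False

def precision_scope(device_type, dtype_name):
    # Guard pattern: fall back to a no-op context when the CPU backend
    # cannot honor the requested dtype, instead of raising at __enter__.
    if device_type == "cpu" and dtype_name != "bfloat16":
        return nullcontext()
    return DummyAutocast(device_type, dtype_name)

# The guarded scope no longer raises for the float32-on-CPU case from the logs.
with precision_scope("cpu", "float32"):
    pass
with precision_scope("cuda", "float16"):
    pass
```

The trade-off is that the CPU path then runs without mixed precision at all, which is slower but correct; that matches what the `UserWarning` in the logs ("Disabling autocast") already does in newer PyTorch versions.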

lllyasviel commented 11 months ago

try 2.1.735

ksorah1 commented 11 months ago

try 2.1.735

Thank you. Actually I use Fooocus version 2.1.739, and on GitHub only version Fooocus_win64_2-1-60 is downloadable: https://github.com/lllyasviel/Fooocus/releases

lllyasviel commented 11 months ago

run.bat will automatically update

lllyasviel commented 11 months ago

Reopen if the problem happens again.

ksorah1 commented 11 months ago

Thank you for the help. I have the same problem, but with this error instead:

`[Fooocus Model Management] Moving model(s) has taken 0.17 seconds
Traceback (most recent call last):
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\async_worker.py", line 584, in worker
    handler(task)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\async_worker.py", line 281, in handler
    t['c'] = pipeline.clip_encode(texts=t['positive'], pool_top_k=t['positive_top_k'])
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\default_pipeline.py", line 195, in clip_encode
    cond, pooled = clip_encode_single(final_clip, text)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\default_pipeline.py", line 172, in clip_encode_single
    result = clip.encode_from_tokens(tokens, return_pooled=True)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\backend\headless\fcbh\sd.py", line 120, in encode_from_tokens
    cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\backend\headless\fcbh\sdxl_clip.py", line 56, in encode_token_weights
    g_out, g_pooled = self.clip_g.encode_token_weights(token_weight_pairs_g)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\modules\patch.py", line 256, in encode_token_weights_patched_with_a1111_method
    out, pooled = self.encode(to_encode)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\backend\headless\fcbh\sd1_clip.py", line 179, in encode
    return self(tokens)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "C:\Users\xxx\Fooocus_portable\Fooocus\backend\headless\fcbh\sd1_clip.py", line 150, in forward
    with precision_scope(model_management.get_autocast_device(device), torch.float32):
  File "C:\Users\xxx\Fooocus_portable\python_embeded\lib\site-packages\torch\amp\autocast_mode.py", line 241, in __init__
    raise RuntimeError(
RuntimeError: User specified an unsupported autocast device_type 'meta'
Total time: 7.05 seconds
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 3.0
[Parameters] Seed = 5153387860369638722
[Fooocus] Downloading inpainter ...
[Inpaint] Current inpaint model is C:\Users\xxx\Fooocus_portable\Fooocus\models\inpaint\inpaint.fooocus.patch
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 20
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
Requested to load GPT2LMHeadModel
Loading 1 new model
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] New suffix: extremely high quality artwork
[Fooocus] Preparing Fooocus text #2 ...
[Prompt Expansion] New suffix: intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by artgerm and greg rutkowski and alphonse mucha, 8 k
[Fooocus] Encoding positive #1 ...
[same 'meta' traceback as above]
Total time: 0.88 seconds`

When it works, it's an excellent UI, but it's very annoying to have to re-set all the options (prompts, etc.) after each restart.