ehristoforu / DeFooocus

Always focus on prompting and generating
GNU General Public License v3.0

[Bug]: Blue screen error while using DeFooocus on rtx 3050 6gb vram and 16gb ram #28

Open jake200m opened 3 months ago

jake200m commented 3 months ago

Prerequisites

Describe the problem

I was using DeFooocus on my laptop (Ryzen 7 7840HS CPU, RTX 3050 GPU with 6 GB VRAM) and it was working fine, but suddenly the screen froze, a blue screen error appeared, and the machine restarted automatically. The laptop works now, but I am curious why this happened and what I can do to avoid it. Could it damage my hardware (for example, a dead motherboard) if I continue to use DeFooocus on this laptop? Please guide me.

Full console log output

Already up-to-date
Update succeeded.
[System ARGV] ['DeFooocus\\entry_with_update.py', '--attention-split', '--in-browser', '--theme', 'dark', '--preset', 'anime']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 0.2
Loaded preset: D:\Firefox Downloads\defooocus_portable\DeFooocus\presets\anime.json
Total VRAM 6144 MB, total RAM 15655 MB
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3050 6GB Laptop GPU : native
VAE dtype: torch.bfloat16
Using split optimization for cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: D:\Firefox Downloads\defooocus_portable\DeFooocus\models\checkpoints\animaPencilXL_v100.safetensors
Request to load LoRAs [['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [D:\Firefox Downloads\defooocus_portable\DeFooocus\models\checkpoints\animaPencilXL_v100.safetensors].
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 1.31 seconds
Started worker with PID 35700
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
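The log shows only 6144 MB of VRAM with the NORMAL_VRAM / "Always offload VRAM" configuration, so generation runs close to the card's memory limit. A blue screen under sustained GPU load is commonly associated with driver, thermal, or power issues rather than the application itself. As a first diagnostic step, a minimal stdlib-only Python sketch (assuming the NVIDIA driver's `nvidia-smi` tool is on PATH; the helper function and script name are illustrative, not part of DeFooocus) can sample temperature and VRAM usage while generating:

```python
# watch_gpu.py - hedged sketch: sample GPU temperature and VRAM via nvidia-smi.
# Assumes nvidia-smi is installed with the NVIDIA driver and is on PATH.
import shutil
import subprocess


def vram_headroom_mb(total_mb: int, used_mb: int) -> int:
    """Free VRAM in MB; a small or negative value means the GPU is saturated."""
    return total_mb - used_mb


def sample_gpu():
    """Return (temperature_C, used_MB, total_MB), or None if nvidia-smi is absent."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    temp, used, total = (int(x) for x in out.split(", "))
    return temp, used, total


if __name__ == "__main__":
    reading = sample_gpu()
    if reading is None:
        print("nvidia-smi not found; reinstall or repair the NVIDIA driver first.")
    else:
        temp, used, total = reading
        print(f"GPU: {temp} C, {used}/{total} MB VRAM used, "
              f"{vram_headroom_mb(total, used)} MB free")
```

Running this in a second terminal during a generation shows whether the crash coincides with the GPU nearing its memory limit or running very hot, which helps narrow the cause down to drivers, cooling, or the power supply.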

Version

DeFooocus 0.2

Where are you running Fooocus?

Locally

Operating System

Windows 11

What browsers are you seeing the problem on?

Firefox