google-research / torchsde

Differentiable SDE solvers with GPU support and efficient sensitivity analysis.
Apache License 2.0

Issue with my text-to-image AI: Device type privateuseone is not supported for torch.Generator() api. #143

Open qWolfey opened 9 months ago

qWolfey commented 9 months ago

Hi, let me first say I am very new to all of this (AI/Python).

I also don't really know exactly what information is needed, so here are my specs: Radeon RX 5500 XT GPU, AMD Ryzen 5 3600 6-core CPU, 16 GB RAM, 8 GB dedicated GPU memory (VRAM, I think).

The main issues are the error `Device type privateuseone is not supported for torch.Generator() api.`, and that the task took about 16 minutes and still didn't output anything.
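
From what I understand, the error itself can be reproduced outside Fooocus: DirectML registers itself with PyTorch under the generic `privateuseone` device type, which `torch.Generator()` does not accept. A minimal sketch, assuming the torch-directml package is installed:

```python
# Minimal reproduction sketch (assumes torch-directml is installed).
# DirectML shows up in PyTorch as the "privateuseone" device type, which
# torch.Generator() cannot create a generator for.
import torch
import torch_directml

dml = torch_directml.device()       # e.g. privateuseone:0
gen = torch.Generator(device=dml)   # RuntimeError: Device type privateuseone
                                    # is not supported for torch.Generator() api.
```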

I have changed Run.bat to the following, since that was the stated procedure for AMD. It is the only change I have made to the files, and I have the latest install as of December 9th:

```bat
.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
.\python_embeded\python.exe -m pip install torch-directml
.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
pause
```

Thanks in advance to anyone who can help.

This is the log:

```
E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791>.\python_embeded\python.exe -m pip uninstall torch torchvision torchaudio torchtext functorch xformers -y
Found existing installation: torch 2.0.0
Uninstalling torch-2.0.0:
  Successfully uninstalled torch-2.0.0
Found existing installation: torchvision 0.15.1
Uninstalling torchvision-0.15.1:
  Successfully uninstalled torchvision-0.15.1
WARNING: Skipping torchaudio as it is not installed.
WARNING: Skipping torchtext as it is not installed.
WARNING: Skipping functorch as it is not installed.
WARNING: Skipping xformers as it is not installed.
```

```
E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791>.\python_embeded\python.exe -m pip install torch-directml
Requirement already satisfied: torch-directml in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (0.2.0.dev230426)
Collecting torch==2.0.0 (from torch-directml)
  Using cached torch-2.0.0-cp310-cp310-win_amd64.whl (172.3 MB)
Collecting torchvision==0.15.1 (from torch-directml)
  Using cached torchvision-0.15.1-cp310-cp310-win_amd64.whl (1.2 MB)
Requirement already satisfied: filelock in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (3.12.2)
Requirement already satisfied: typing-extensions in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (4.7.1)
Requirement already satisfied: sympy in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (1.12)
Requirement already satisfied: networkx in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (3.1)
Requirement already satisfied: jinja2 in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from torch==2.0.0->torch-directml) (3.1.2)
Requirement already satisfied: numpy in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from torchvision==0.15.1->torch-directml) (1.23.5)
Requirement already satisfied: requests in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from torchvision==0.15.1->torch-directml) (2.31.0)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from torchvision==0.15.1->torch-directml) (9.2.0)
Requirement already satisfied: MarkupSafe>=2.0 in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from jinja2->torch==2.0.0->torch-directml) (2.1.3)
Requirement already satisfied: charset-normalizer<4,>=2 in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (3.4)
Requirement already satisfied: urllib3<3,>=1.21.1 in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (2.0.3)
Requirement already satisfied: certifi>=2017.4.17 in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (2023.5.7)
Requirement already satisfied: mpmath>=0.19 in e:\amain\fooocus ai gen\fooocus_win64_2-1-791\python_embeded\lib\site-packages (from sympy->torch==2.0.0->torch-directml) (1.3.0)
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063
Installing collected packages: torch, torchvision
WARNING: The scripts convert-caffe2-to-onnx.exe, convert-onnx-to-caffe2.exe and torchrun.exe are installed in 'E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\Scripts' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed torch-2.0.0 torchvision-0.15.1
```

```
[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\python.exe -m pip install --upgrade pip
```

```
E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791>.\python_embeded\python.exe -s Fooocus\entry_with_update.py --directml
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\entry_with_update.py', '--directml']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.824
Running on local URL: http://127.0.0.1:7865/
```

```
To create a public link, set share=True in launch().
Using directml with device:
Total VRAM 1024 MB, total RAM 16335 MB
Set vram state to: NORMAL_VRAM
Disabling smart memory management
Device: privateuseone
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Refiner unloaded.
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
extra keys {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}
Base model loaded: E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.53 seconds
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 5027677228259297186
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] Cute girl on sofa, full perfect, fine still, intricate, elegant, highly detailed, delicate, sharp focus, dynamic light, great composition, clear background, scenic, vibrant colors, inspired very strong cinematic chanted epic, professional, winning, fantastic, artistic, positive, emotional, pretty, attractive, cute, enhanced, loving, colorful, beautiful, symmetry, illuminated
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
```
```
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1024, 1024)
Preparation time: 28.32 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Traceback (most recent call last):
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\modules\async_worker.py", line 803, in worker
    handler(task)
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\modules\async_worker.py", line 735, in handler
    imgs = pipeline.process_diffusion(
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\modules\default_pipeline.py", line 354, in process_diffusion
    modules.patch.BrownianTreeNoiseSamplerPatched.global_init(
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\modules\patch.py", line 173, in global_init
    BrownianTreeNoiseSamplerPatched.tree = BatchedBrownianTree(x, t0, t1, seed, cpu=cpu)
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\k_diffusion\sampling.py", line 85, in __init__
    self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\k_diffusion\sampling.py", line 85, in <listcomp>
    self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torchsde\_brownian\derived.py", line 155, in __init__
    self._interval = brownian_interval.BrownianInterval(t0=t0,
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 540, in __init__
    W = self._randn(initial_W_seed) * math.sqrt(t1 - t0)
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 234, in _randn
    return _randn(size, self._top._dtype, self._top._device, seed)
  File "E:\AMain\Fooocus Ai gen\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 32, in _randn
    generator = torch.Generator(device).manual_seed(int(seed))
RuntimeError: Device type privateuseone is not supported for torch.Generator() api.
Total time: 1029.91 seconds
```
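
The failing frame is torchsde seeding a `torch.Generator` on the device of the sampling tensor, which is `privateuseone` under DirectML. The workaround usually suggested for this backend is to keep the Brownian tree on the CPU and move the sampled noise to the device afterwards; in Fooocus that roughly corresponds to picking the non-GPU sampler variant (`dpmpp_2m_sde` instead of `dpmpp_2m_sde_gpu`), which, as far as I can tell, passes `cpu=True` to the noise sampler. A sketch of the idea, with made-up shapes and values:

```python
# Workaround sketch: build the torchsde Brownian tree on the CPU so that
# torch.Generator() is only ever asked for a CPU generator, then move the
# sampled noise to the DirectML device. Shapes/values here are illustrative.
import torch
import torchsde

x = torch.randn(1, 4, 128, 128)                  # latent-shaped tensor, kept on CPU
t0, t1 = torch.tensor(0.0), torch.tensor(1.0)    # interval endpoints (t0 < t1)

tree = torchsde.BrownianTree(t0, torch.zeros_like(x), t1, entropy=12345)
noise = tree(t0, t1) / (t1 - t0).sqrt()          # Brownian increment, unit variance

# If a DirectML device is available, transfer only the result:
# noise = noise.to(torch_directml.device())
```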

roninDday commented 7 months ago

I have the same situation on Mac.

```
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.*; python_version >= "3.7". pip 23.3 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063
```
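
For the pip deprecation warning itself (which, as far as I can tell, is separate from the privateuseone error): the offending metadata in torchsde 0.2.5 uses a `.*` wildcard with a `>=` operator, which PEP 440 only allows with `==` and `!=`. Upgrading to a newer torchsde, or dropping the wildcard, makes the specifier conforming. A hypothetical excerpt of what the fixed declaration would look like:

```python
# Hypothetical setup.py excerpt showing a conforming specifier.
# torchsde 0.2.5 declares 'numpy>=1.19.*; python_version >= "3.7"'; PEP 440
# only permits the ".*" wildcard with == and !=, so dropping it fixes the warning.
install_requires = [
    'numpy>=1.19; python_version >= "3.7"',
    # ...remaining dependencies unchanged...
]
```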