lllyasviel / Omost


Display Bug: 'Render' Button Missing After Loading Prompts #117

Closed · CtrlAiDel closed this issue 1 month ago

CtrlAiDel commented 1 month ago

I'm encountering a problem where the 'render' button doesn't appear after loading prompts. I've reinstalled the software three or four times, including from a completely fresh installation, but the issue persists. The prompts load successfully, so the core functionality works, but the 'render' button is missing from the UI.

```
PowerShell 7.4.5
Loading personal and system profiles took 550ms.
(base) PS V:\Omost Img\Omost> python gradio_app.py
V:\Omost Img\Omost\lib_omost\pipeline.py:64: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  alphas_cumprod = torch.tensor(np.cumprod(alphas, axis=0), dtype=torch.float32)
Unload to CPU: CLIPTextModel
Unload to CPU: CLIPTextModel
Unload to CPU: UNet2DConditionModel
Unload to CPU: AutoencoderKL
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.60s/it]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You shouldn't move a model that is dispatched using accelerate hooks.
Unload to CPU: LlamaForCausalLM
Running on local URL: http://0.0.0.0:7862
```
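As an aside, the UserWarning at the top of the log is harmless but easy to silence. A minimal sketch of the pattern PyTorch itself recommends (the variable names below are hypothetical stand-ins, not Omost's actual pipeline code):

```python
import torch

# Hypothetical stand-in for a scheduler tensor like alphas_cumprod.
alphas = torch.linspace(0.999, 0.001, 1000)

# Triggers the warning: copy-constructing a tensor from an existing tensor.
warned = torch.tensor(alphas, dtype=torch.float32)

# The pattern the warning recommends instead: clone, detach, then cast.
silent = alphas.clone().detach().to(torch.float32)
```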

```
To create a public link, set share=True in launch().
You shouldn't move a model that is dispatched using accelerate hooks.
Load to GPU: LlamaForCausalLM
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
Setting pad_token_id to eos_token_id:128001 for open-end generation.
C:\Users\Utilisateur\miniconda3\Lib\site-packages\transformers\models\llama\modeling_llama.py:649: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  attn_output = torch.nn.functional.scaled_dot_product_attention(
User stopped generation
Last assistant response is not valid canvas: Response does not contain codes!
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
Setting pad_token_id to eos_token_id:128001 for open-end generation.
Last assistant response is not valid canvas: unterminated string literal (detected at line 6) (<string>, line 6)
```
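For the record, the two "not valid canvas" lines are likely the real reason the render button never appears: the log shows both assistant replies failed to parse as canvas code, so there is nothing to render. The repeated attention-mask warning is separate and cosmetic; a minimal sketch of how it is usually silenced with the transformers API (hypothetical model name, not Omost's own generation code):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint; Omost ships its own Llama-based models.
name = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("a ragged group of pirates on a beach", return_tensors="pt")

# Passing attention_mask and an explicit pad_token_id silences both
# warnings and makes open-ended generation deterministic about padding.
output_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```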

Thank you for your help, it would be appreciated :) Such a great tool!

CtrlAiDel commented 1 month ago

(screenshot attached) With an RTX 3060.

CtrlAiDel commented 1 month ago

Seems to be fine, sorry for bothering... I'll get a better CPU and GPU.