carson-katri / dream-textures

Stable Diffusion built-in to Blender
GNU General Public License v3.0

KeyError: 'bpy_prop_collection[key]: key "Principled BSDF" not found' #705

Closed · 0xkl closed this issue 8 months ago

0xkl commented 1 year ago

Description

Python: Traceback (most recent call last):
  File "F:\blender app\blender_script\addons\dream_textures\operators\project.py", line 307, in execute
    material.node_tree.links.new(image_texture_node.outputs[0], material.node_tree.nodes['Principled BSDF'].inputs[0])
KeyError: 'bpy_prop_collection[key]: key "Principled BSDF" not found'
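For context, line 307 looks the node up by its display name, which Blender translates when data-name translation is enabled, so the English key "Principled BSDF" does not exist in localized scenes. A minimal, locale-independent alternative, sketched with the standard bpy node API (find_principled_bsdf is a hypothetical helper, not part of the addon):

```python
import bpy

def find_principled_bsdf(material: bpy.types.Material):
    """Locate the Principled BSDF node by its type enum rather than its
    (possibly localized) display name."""
    for node in material.node_tree.nodes:
        # node.type is a locale-independent identifier; node.name may be
        # translated when Blender's 'New Data' translation is enabled.
        if node.type == 'BSDF_PRINCIPLED':
            return node
    return None

# Hypothetical usage mirroring the failing line:
# bsdf = find_principled_bsdf(material)
# if bsdf is not None:
#     material.node_tree.links.new(image_texture_node.outputs[0], bsdf.inputs[0])
```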

Steps to Reproduce

I don't know what causes this problem.

Expected Behavior

It should run without errors.

Addon Version

Windows (CUDA)

GPU

NVIDIA

0xkl commented 1 year ago

I have found a solution: just change the language to English to avoid this error.

But another error occurred:

An error occurred while generating. Check the issues tab on GitHub to see if this has been reported before: OSError('Error no file named diffusion_pytorch_model.bin found in directory C:\Users\Administrator\.cache\huggingface\hub\models--runwayml--stable-diffusion-v1-5\snapshots\ded79e214aa69e42c24d3f5ac14b76d568679cc2\vae.')

0xkl commented 1 year ago

This issue has also been resolved: the VAE file was not fully downloaded; just download Stable Diffusion v1-5 again.
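For anyone hitting the same OSError, one way to repair an incomplete cache is to force a fresh snapshot download. A minimal sketch, assuming the huggingface_hub package (whose cache layout appears in the error path) is available in Blender's Python environment:

```python
# Minimal sketch: force a fresh download of the model snapshot so the
# truncated VAE weights are replaced. Assumes huggingface_hub is installed;
# the repo id is taken from the cache path in the error message.
from huggingface_hub import snapshot_download

snapshot_download(
    "runwayml/stable-diffusion-v1-5",
    force_download=True,  # ignore the incomplete cached files
)
```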

GottfriedHofmann commented 11 months ago

> I have found a solution: just change the language to English to avoid this error.

It is very likely that the node names changing when the UI is set to a language other than English are causing this. Out of curiosity: which language was your UI set to when the error occurred?

I think adding a simple check for whether the UI is in English, with a warning in the addon preferences if it is not, could act as a workaround; see the sketch below.
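A minimal sketch of such a check, assuming Blender's standard preferences API (the helper name and warning text are illustrative):

```python
import bpy

def ui_translation_warning():
    """Return a warning string if data-name translation could break
    name-based node lookups, else None. Illustrative helper only."""
    view = bpy.context.preferences.view
    # 'New Data' translation renames newly created nodes (including the
    # Principled BSDF), which breaks lookups by the English node name.
    if view.use_translate_new_dataname and bpy.app.translations.locale != "en_US":
        return ("Your UI language translates new data names; switch to "
                "English or disable 'New Data' under "
                "Preferences > Interface > Translation.")
    return None
```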

shiyuze112 commented 10 months ago

An error occurred while generating. Check the issues tab on GitHub to see if this has been reported before:

RuntimeError("OutOfMemoryError('CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 2.00 GiB total capacity; 1.67 GiB already allocated; 0 bytes free; 1.72 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF')")
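This looks unrelated to the original report: per the message, the card has only 2.00 GiB of total capacity. The message's own max_split_size_mb suggestion can be tried by setting PYTORCH_CUDA_ALLOC_CONF before PyTorch first touches the GPU; a minimal sketch (the 64 MiB value is illustrative, not an addon recommendation):

```python
import os

# Must run before the first CUDA allocation, i.e. before torch/diffusers
# initialize the GPU; otherwise the setting is ignored.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"
```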

github-actions[bot] commented 8 months ago

This issue is stale because it has been open for 60 days with no activity.

github-actions[bot] commented 8 months ago

This issue was closed because it has been inactive for 7 days since being marked as stale.