Open nickadkar opened 1 year ago
This is a very important feature which would set SD apart from any other AI, because the eye position in an image controls dopamine secretion in the human brain.
That is a confusing statement in the context of a feature request.
There is a color sketch feature in the UI, and there is also the ability to edit your prompt during inpainting (subject looking up, looking left, etc.).
Draw a draft and send it to img2img if that's really so important.
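The draft-then-img2img workflow suggested above can also be scripted against the webui's HTTP API (available when the server is launched with `--api`). The sketch below only builds the request payload; the endpoint path, default address, and file path are assumptions based on the webui's `sdapi` routes and may differ on your install.

```python
import base64
import json
from urllib import request

# Default local webui address; the server must be started with --api.
API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def build_img2img_payload(image_path, prompt, denoising_strength=0.6):
    """Build the JSON body for an img2img call from a rough draft image."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return {
        "init_images": [encoded],          # the hand-drawn draft, base64-encoded
        "prompt": prompt,                  # e.g. describe gaze direction here
        "denoising_strength": denoising_strength,  # lower = closer to the draft
    }

def send(payload):
    """POST the payload to the running webui and return the parsed response."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A lower `denoising_strength` keeps the result closer to the drafted composition (including where the eyes point), at the cost of less cleanup by the model.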
I don't find it confusing. I understand what he wants. I want to be able to specify "poses" in a more rigorous fashion. I don't believe AI is there yet to have that level of specificity. However, I do have an idea for training specifically targeted at this sort of thing, but that is a broader idea than this one.
There is a color sketch feature in the ui
Just wondering where this is? I just updated and looked everywhere, but I only have the masking brush.
Edit: just found it, you need to add a command line argument to make it appear: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#color-sketch
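For anyone else looking: the flag below is what the linked wiki page documents for enabling the color sketch tool; verify it against your webui version, as launch flags have changed between releases.

```shell
# Windows: add to webui-user.bat
set COMMANDLINE_ARGS=--gradio-img2img-tool color-sketch

# Linux/macOS: add to webui-user.sh
export COMMANDLINE_ARGS="--gradio-img2img-tool color-sketch"
```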
This is a very important feature which would set SD apart from any other AI, because the eye position in an image controls dopamine secretion in the human brain.
That is a confusing statement in the context of a feature request.
There is a color sketch feature in the UI, and there is also the ability to edit your prompt during inpainting (subject looking up, looking left, etc.).
Let me clarify the confusion. When people look at images of faces, the eyes and mouth cause the human brain to release dopamine and have an emotional reaction to the image. Emotions such as happiness, sadness, fear, and joy are transferred from the image to the viewer. That is basically how we are wired.
You can read further about it in the following book: Paul Ekman, Facial Action Coding System.
I have shared my personal copy.
https://drive.google.com/file/d/1QFDDfkxGlC4ZtHUpJR9l0lmgpgRTIjWl/view?usp=sharing
The inpainting prompt ability is not precise. I already tried it and was not happy with the results, which is why I raised this topic.
If there were an option to change the positions of the eyes, mouth, etc., it would greatly enhance SD's image-processing capabilities. It would also improve the ease of use.
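Absent a dedicated eye-position control, the closest approximation today is targeted inpainting: mask only the eye region and re-prompt it (e.g. "looking left"). A minimal sketch of building such a binary mask; the box coordinates here are hypothetical placeholders, which in practice would come from a face-landmark detector rather than being hand-typed.

```python
def eye_region_mask(width, height, box):
    """Build a binary inpaint mask: 255 inside the eye bounding box, 0 elsewhere.

    box is (left, top, right, bottom) in pixel coordinates.
    """
    left, top, right, bottom = box
    return [
        [255 if (left <= x < right and top <= y < bottom) else 0
         for x in range(width)]
        for y in range(height)
    ]

# Hypothetical eye box on a 512x512 portrait.
mask = eye_region_mask(512, 512, (150, 180, 362, 230))
```

Feeding a mask like this into inpainting regenerates only the eyes while leaving the rest of the face untouched, which is more precise than re-prompting the whole image.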
I am still trying to figure out how to enable color-sketch. The following did not work. Any help would be appreciated.
I don't find it confusing. I understand what he wants. I want to be able to specify "poses" in a more rigorous fashion. I don't believe AI is there yet to have that level of specificity. However, I do have an idea for training specifically targeted at this sort of thing, but that is a broader idea than this one.
Sorry, off topic, but I loved your name... lol
Is there an existing issue for this?
What would your feature do ?
It would be really good if Stable Diffusion had a feature to control eye position precisely. This is a very important feature which would set SD apart from any other AI, because the eye position in an image controls dopamine secretion in the human brain. Please check the attached file for details: Eye postion Controller.xlsx
Proposed workflow
Additional information
No response