AhBumm opened this issue 1 week ago
A new problem: with multiple batches, every batch reuses the one prompt from the first batch, so I can't just roll many images in different scenes.
Also, the import-settings feature doesn't seem to work.
And the vision LLM seems not to be working, with this error log:

Maybe reading the prompt input field directly as the LLM prompt would be a good way to handle this. Thanks for your great work!
Do you mean the sd-prompt directly as the llm-user-prompt?
I feel the sd-prompt should be left for PyTorch VAE latent-space keywords (e.g. 1girl), and the llm-user-prompt left for humans to use. Isn't that better?
Yes, the sd-prompt directly as the llm-user-prompt; that would probably make it easier to support wildcards. Latent-space keywords don't seem that important for FLUX; natural language alone can already produce a great image.
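Roughly, wildcard extensions expand `__name__` tokens inside the prompt field before generation, so an LLM fed that same field would see the already-expanded text. A minimal sketch of that expansion, assuming the common `__token__` file convention (names here are illustrative, not the extension's actual code):

```python
import random
import re
from pathlib import Path

def expand_wildcards(prompt: str, wildcard_dir: str = "wildcards") -> str:
    """Replace each __name__ token with a random line from wildcards/name.txt."""
    def pick(match: re.Match) -> str:
        lines = Path(wildcard_dir, match.group(1) + ".txt").read_text().splitlines()
        return random.choice([line for line in lines if line.strip()])
    return re.sub(r"__([\w-]+)__", pick, prompt)

# "a photo of __animal__ in __place__" -> the LLM sees the expanded text
```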
Okie, after FLUX I seem to have the same feeling too. Therefore: add a checkbox like [USE SD-PROMPT AS LLM-PROMPT]?
On the vision issue: sd-web-ui works fine, but Forge shows the error above (TypeError: save_pil_to_file() got an unexpected keyword argument 'name'). Let me install Forge and try it.
Issue 2 should be OK now. [hotfix] Changed the gradio.Image type from path to PIL.Image, to avoid copy-pasted image sources without a real file object.
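A minimal sketch of the hotfix idea (assumed wiring, not the extension's exact code): with type="filepath", Gradio round-trips pasted images through its save_pil_to_file() helper, whose signature apparently differs in the Gradio build Forge bundles, hence the TypeError above; type="pil" hands the handler a PIL.Image directly and skips that temp-file step.

```python
import gradio as gr
from PIL import Image

def describe(image: Image.Image) -> str:
    # placeholder for the vision-LLM call
    return f"got a {image.width}x{image.height} image"

demo = gr.Interface(
    fn=describe,
    inputs=gr.Image(type="pil"),  # was type="filepath"
    outputs="text",
)
```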
Import settings and the vision LLM both work after the update! 🎉🎉
- Okie, after FLUX I seem to have the same feeling too. Therefore: add a checkbox like [USE SD-PROMPT AS LLM-PROMPT]?

Oh! We still need a place to input the LoRA name and LoRA tag, so adding a checkbox like [USE SD-PROMPT AS LLM-PROMPT] doesn't seem like the best way to do it. My bad.
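For illustration, one way to keep LoRA tags while still feeding the sd-prompt to the LLM (illustrative names, not the extension's code): strip the <lora:...> tags out before the LLM call and re-append them afterwards.

```python
import re

LORA_RE = re.compile(r"<lora:[^>]+>")

def split_lora_tags(sd_prompt: str) -> tuple[str, str]:
    """Split an sd-prompt into plain text (for the LLM) and its LoRA tags."""
    tags = " ".join(LORA_RE.findall(sd_prompt))
    text = " ".join(LORA_RE.sub("", sd_prompt).split())  # tidy leftover spaces
    return text, tags

text, tags = split_lora_tags("a knight in the rain <lora:oilPaint:0.8>")
# send `text` to the LLM, then append `tags` to whatever it returns
```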
Oh~ LoRA... I forgot about it too.
You can try the new LLM-Text-Loop feature.
I tried that feature, but I don't understand how it works 🥲, and it seems to break the system prompt of [LLM-text].
- A new problem: with multiple batches, every batch reuses the one prompt from the first batch, so I can't just roll many images in different scenes.
I use a JS snippet in the browser console to do this job 😂
```js
let clickCount = 0;

function clickButton() {
    const generateButton = document.querySelector("#txt2img_generate");
    if (generateButton) {
        generateButton.click();
        console.log('clicked');
        clickCount++;
        // total number of clicks ↓
        if (clickCount < 10) {
            setTimeout(clickButton, 90000); // wait 90s before the next click
        } else {
            console.log('done!');
        }
    } else {
        console.warn('button not found');
    }
}

clickButton();
```
- Batch means the same prompt with different random seeds.
- You can try the Before/After action.
- What is your purpose?
- Why not just right-click on Generate and choose Generate forever?
- batch=2 -> click Generate -> llm-ans-1 -> image1(llm-ans-1), image2(llm-ans-1) | different content, same style.
- next Generate click -> llm-ans-2 -> 2 images, same prompt, same style.
- It works like the above; is that not what you expect?
I expect: set batch count=3 -> click Generate -> llm-ans-1 > batch 1, llm-ans-2 > batch 2, llm-ans-3 > batch 3.
But right now: batch count=3 -> click Generate -> llm-ans-1 > batches 1, 2, 3 (three images with the exact same prompt) // not expected.
I think the advantage of Auto LLM is generating many prompts: more batches should get more different prompts, not many pictures from one prompt. Otherwise it's not much different from running an LLM externally and copying the result into the webui.
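A sketch of the behaviour being requested (hypothetical hook names, not the extension's real API): one fresh LLM answer per image in the batch, instead of one answer per Generate click.

```python
def run_batch(batch_count: int, ask_llm, generate_image) -> list:
    """Generate batch_count images, each from its own LLM-written prompt."""
    images = []
    for _ in range(batch_count):
        prompt = ask_llm()  # llm-ans-1, llm-ans-2, llm-ans-3, ...
        images.append(generate_image(prompt))
    return images
```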