xlinx / sd-webui-decadetw-auto-prompt-llm
sd-webui-auto-prompt-llm
MIT License · 53 stars · 8 forks
Issues (newest first)
(Feature/Enhancement/Suggestion) Automatic Scripts, LoRAs, Embeddings, Textual Inversions, and ControlNet usage; a way to also clear the History/StoryBoard
#32 · by LadyFlames · opened 4 weeks ago · 3 comments
Problem in LLM-text with Ollama
#31 · by marc2608 · opened 4 weeks ago · 6 comments
Unrecognized request argument supplied: top_k
#29 · by tazztone · closed 1 month ago · 3 comments
Weird warnings and errors appear; is this supposed to happen, or might it be because I'm using Stability Matrix? Is there a way to get rid of these warnings?
#28 · by LadyFlames · opened 1 month ago · 4 comments
Warnings on start: GradioDeprecationWarning
#24 · by sandner-art · opened 2 months ago · 1 comment
The extension cannot correctly obtain the response from ollama.
#23 · by tsukimiya · closed 2 months ago · 3 comments
Import settings from older version does not work after last update
#21 · by sandner-art · opened 2 months ago · 1 comment
[BUG] Can't run addon
#20 · by Rogal80 · opened 2 months ago · 1 comment
Problems with the prompt tokens when using Send to txt2image
#19 · by LadyFlames · opened 2 months ago · 2 comments
Install.py - no module named 'launch'
#18 · by Torcelllo · opened 2 months ago · 1 comment
Cloud LLM via API key
#17 · by tazztone · opened 2 months ago · 9 comments
LM Studio: [ERROR] Model does not support images. Please use a model that does.. Error Data: n/a, Additional Data: n/a ]
#16 · by AlexDenthanor · opened 2 months ago · 1 comment
[feature] Support wildcards or dynamic prompts to output more varied results
#13 · by AhBumm · opened 2 months ago · 14 comments
Suggestion: presets of system prompts
#10 · by tazztone · closed 2 months ago · 2 comments
It would be a good idea to allow a slightly longer LLM max length (tokens)
#9 · by LadyFlames · opened 2 months ago · 3 comments
Only User & Support Roles Are Supported
#8
AlexDenthanor
closed
2 months ago
3
[Forge] - Save_pil_to_file() got an unexpected keyword argument 'name'
#7 · by CCpt5 · opened 3 months ago · 5 comments
Prompt formatting issue?
#6 · by tazztone · closed 3 months ago · 1 comment
Develop hot fix
#5 · by xlinx · closed 3 months ago · 0 comments
Process prompts from a file and feed them to the LLM.
#4 · by caustiq · opened 3 months ago · 1 comment
Unload the LLM from VRAM after each call?
#3 · by Pdonor · opened 3 months ago · 2 comments
Fix
#2 · by w-e-w · closed 3 months ago · 3 comments
I think it would be worth adding detection of whether a LoRA is being used and, if so, placing the prompt in front of it or moving it to the end of the prompt (see the sketch below).
#1 · by AndreyRGW · opened 3 months ago · 2 comments
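A minimal sketch of the placement logic suggested in #1, assuming prompts use the standard <lora:name:weight> tag syntax; merge_llm_text is a hypothetical helper shown for illustration only, not the extension's actual implementation.

```python
import re

# Hypothetical helper illustrating the suggestion in issue #1: if the prompt
# contains a <lora:...> tag, insert the LLM-generated text before the first
# tag; otherwise append it to the end of the prompt.
LORA_TAG = re.compile(r"<lora:[^>]+>")

def merge_llm_text(prompt: str, llm_text: str) -> str:
    match = LORA_TAG.search(prompt)
    if match is None:
        # No LoRA in use: simply append the generated text.
        return f"{prompt}, {llm_text}" if prompt else llm_text
    # LoRA detected: place the generated text in front of the first tag.
    head, tail = prompt[:match.start()], prompt[match.start():]
    return f"{head.rstrip().rstrip(',')}, {llm_text}, {tail}"

# Example:
# merge_llm_text("a cat <lora:style:0.8>", "golden hour lighting")
# -> "a cat, golden hour lighting, <lora:style:0.8>"
```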