Closed brendanhoar closed 1 month ago
what the heck is a prompt hash
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#filenames-format
CW: python excerpts :P
...
class FilenameGenerator:
    replacements = {
        ...
        'prompt_hash': lambda self, *args: self.string_hash(self.prompt, *args),
        'negative_prompt_hash': lambda self, *args: self.string_hash(self.p.negative_prompt, *args),
        'full_prompt_hash': lambda self, *args: self.string_hash(f"{self.p.prompt} {self.p.negative_prompt}", *args),  # a space in between to create a unique string
        ...
    }

    ...

    def string_hash(self, text, *args):
        length = int(args[0]) if (args and args[0] != "") else 8
        return hashlib.sha256(text.encode()).hexdigest()[0:length]
...
... so literally a sha256 of random parameter text? that sounds horrible ow
Just the first 8 characters of the sha256 of each of the prompt text(s).
That is an extremely silly feature, but, eh, there ya go
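For anyone curious, the behavior can be reproduced standalone; this is a minimal sketch (not the webui code itself, and the sample prompts are made up):

```python
import hashlib

def string_hash(text: str, length: int = 8) -> str:
    # Truncated hex digest of the SHA-256 of the prompt text,
    # matching the webui's default length of 8 characters.
    return hashlib.sha256(text.encode()).hexdigest()[:length]

prompt = "a cat sitting on a mat"
negative = "blurry, low quality"

print(string_hash(prompt))                  # -> prompt_hash
print(string_hash(negative))                # -> negative_prompt_hash
print(string_hash(f"{prompt} {negative}"))  # -> full_prompt_hash (space-joined)
```

Same prompt text always yields the same 8-character token, so it's stable across runs and filesystem-safe.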
Feature Idea
One A1111-ism I preferred was the ability to build the output path using the shorthash of the prompt/negative prompt/model/etc. This produced less messy output in some cases. Consider supporting, at minimum, hashes of the prompt and negative prompt, as well as the AutoV3 hash of the primary model, in addition to the strings/names.
E.g., my A1111-style paths would, among other parameters, include both the hash of the model and the model filename, but only the hashes of the prompts.
For prompts, many share a significantly long identical preamble, and without the hashes it becomes difficult to bucket them precisely using path semantics alone.
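To illustrate the bucketing problem (hypothetical prompts; the preamble and helper name are made up for the example): two prompts sharing a long identical preamble would truncate to the same plain-text path component, but their short hashes differ immediately.

```python
import hashlib

def prompt_hash(prompt: str, length: int = 8) -> str:
    # Same truncated-SHA-256 scheme as A1111's string_hash.
    return hashlib.sha256(prompt.encode()).hexdigest()[:length]

preamble = "masterpiece, best quality, highly detailed, 8k, "
a = preamble + "a red fox in snow"
b = preamble + "a blue whale at dusk"

# A filename built from the first N characters of the prompt text
# collapses both prompts into one bucket:
print(a[:40] == b[:40])                # True - identical truncated text

# The hashes give distinct, fixed-width buckets:
print(prompt_hash(a), prompt_hash(b))  # two different 8-char tokens
```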
Other
No response