damian0815 / compel

A prompting enhancement library for transformers-type text embedding systems
MIT License

Performance deteriorates at some prompts #77

Closed sk-uma closed 9 months ago

sk-uma commented 9 months ago

A significant time penalty occurs for prompts in which parentheses are not properly closed.

Here is the minimal code that reproduces the problem.

diffusers == 0.23.0
compel == 2.0.2

import time

import torch
from compel import Compel
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

compel = Compel(
    tokenizer=pipeline.tokenizer,
    text_encoder=pipeline.text_encoder,
    truncate_long_prompts=False,
)

prompt = "((((((((cat with red ball"
compel.build_conditioning_tensor(prompt)  # warm-up call, so one-time initialization is excluded from the timing

start = time.time()

compel.build_conditioning_tensor(prompt)

print(f"Time taken: {time.time() - start} seconds")

Output:

Time taken: 20.41498303413391 seconds

By comparison, the same prompt without the unmatched parentheses (cat with red ball) takes about 0.05 seconds.

If you know of a way to solve this problem, please let me know.

Thank you.

damian0815 commented 9 months ago

This is a consequence of the complexity of the parser. Fixing it is non-trivial, sorry.
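Since a parser fix is non-trivial, one possible mitigation on the caller's side is to strip unmatched parentheses from prompts before handing them to Compel. The sketch below is a hypothetical pre-processing helper, not part of the compel API; it assumes stray parentheses carry no intended weighting and can simply be dropped.

```python
def balance_parentheses(prompt: str) -> str:
    """Remove unmatched '(' and ')' characters from a prompt.

    Hypothetical helper, not part of compel: matched pairs are kept so
    intentional attention weighting still works, while stray parentheses
    that would send the parser down a slow path are dropped.
    """
    result = []       # characters kept so far
    open_stack = []   # indices in `result` of '(' awaiting a match
    for ch in prompt:
        if ch == "(":
            open_stack.append(len(result))
            result.append(ch)
        elif ch == ")":
            if open_stack:
                open_stack.pop()   # matches the most recent '('
                result.append(ch)
            # unmatched ')' is silently dropped
        else:
            result.append(ch)
    # drop any '(' that never found a match, from the end so indices stay valid
    for idx in reversed(open_stack):
        del result[idx]
    return "".join(result)

print(balance_parentheses("((((((((cat with red ball"))  # cat with red ball
print(balance_parentheses("a (red ball)++ cat)"))        # a (red ball)++ cat
```

A caller could then write compel.build_conditioning_tensor(balance_parentheses(prompt)) to avoid the pathological input without changing the library.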