p1atdev / LECO

Low-rank adaptation for Erasing COncepts from diffusion models.
https://arxiv.org/abs/2303.07345
Apache License 2.0

Request: ToMe token merging optimization, or just fewer tokens #18

Open torridgristle opened 1 year ago

torridgristle commented 1 year ago

ToMe SD (https://github.com/dbolya/tomesd) has support for Diffusers, so using it should be as simple as:

```python
import tomesd
# Merge redundant spatial tokens in the UNet's attention; ratio=0.5 merges ~50%
tomesd.apply_patch(model, ratio=0.5)
```
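For reference, a rough sketch of how that might slot into a diffusers-based script. The pipeline setup and variable names below are assumptions for illustration, not LECO's actual training code; only `tomesd.apply_patch` / `tomesd.remove_patch` are the library's real entry points:

```python
import tomesd
from diffusers import StableDiffusionPipeline

# Hypothetical setup; LECO builds its UNet/pipeline differently in practice.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Patch the UNet's attention blocks in place. ratio=0.5 merges roughly half
# of the spatial tokens, trading a little quality for speed and memory.
tomesd.apply_patch(pipe, ratio=0.5)

# ... run training / sampling as usual ...

# The patch can be removed later if unmerged behavior is needed.
tomesd.remove_patch(pipe)
```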

Alternatively, since the prompts are typically very short, the text encoder's output could be cropped down to the first 16 or so tokens, or some other user-adjustable number, e.g. cond[:, :8] and uncond[:, :8].
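As a concrete sketch of that cropping, assuming cond and uncond are the usual [batch, 77, hidden_dim] CLIP hidden states (the function name and n_tokens parameter are illustrative, not an existing LECO option):

```python
import torch

def crop_text_embeddings(cond: torch.Tensor, uncond: torch.Tensor, n_tokens: int = 16):
    """Keep only the first n_tokens positions of the text encoder output.

    Cross-attention cost scales with the text sequence length, so shortening
    the key/value sequence from 77 to n_tokens cuts that work proportionally.
    """
    return cond[:, :n_tokens], uncond[:, :n_tokens]
```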

All of the <|endoftext|> padding at the end doesn't matter much; it isn't going to match anything strongly in the cross-attention layers. Keeping some <|endoftext|> padding is still useful, since it carries some meaning from the preceding words in the prompt, but padding out to 75 or however many tokens isn't beneficial.

If the token-cropping approach is used, I believe it should verify that the cropped length is long enough to contain every prompt together with its <|startoftext|> and at least one <|endoftext|> token.
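A sketch of that safety check, assuming a CLIP tokenizer from transformers; the helper names are illustrative:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def min_safe_crop_length(prompts: list[str]) -> int:
    # Tokenizing without padding yields <|startoftext|> + prompt tokens
    # + <|endoftext|>, which is exactly the minimum span we must keep.
    return max(len(tokenizer(p).input_ids) for p in prompts)

def safe_crop(cond, uncond, n_tokens, prompts):
    # Never crop below the longest prompt's required length.
    n = max(n_tokens, min_safe_crop_length(prompts))
    return cond[:, :n], uncond[:, :n]
```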