eps696 / aphantasia

CLIP + FFT/DWT/RGB = text to image/video
MIT License

Alternate Subtraction Method, Faster #11

Closed · torridgristle closed 3 years ago

torridgristle commented 3 years ago

I was trying out ways of manipulating the encoded text, and one that I tried was subtracting the encoded negative text from the encoded text prompt. I ran four renders for each method and they look about the same, except that the one which changes the encoded text shows less of the subtract prompt, which suggests to me that it's more effective at subtracting a prompt. It also ends up using just one txt_enc rather than two, and just one cosine similarity.

Prompt: "a photo of a human face" and Negative: "a photo of a face"

Subtracting the subtract prompt's txt_enc0 from txt_enc resulted in these (enc_sub):

The existing negative method, which uses cosine similarity between the image and the negative prompt for the loss, resulted in these (enc_neg):

And for fun, using subtraction to increase the difference between the two via txt_enc + (txt_enc - txt_enc0) resulted in these (enc_subdiff):

The encoded text and images seem to be explorable like a latent space.
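
For reference, here's a minimal sketch of the three variants, assuming the standard clip package and PyTorch; variable names follow this thread, and img_enc stands in for the image embedding computed from the generator's output inside the training loop:

```python
import torch
import clip

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model, _ = clip.load('ViT-B/32', device=device)

with torch.no_grad():
    txt_enc  = model.encode_text(clip.tokenize('a photo of a human face').to(device))
    txt_enc0 = model.encode_text(clip.tokenize('a photo of a face').to(device))

# variant 1 (enc_sub): subtract the negative embedding once, up front,
# so only one target vector and one cosine similarity remain in the loop
enc_sub = txt_enc - txt_enc0
def loss_sub(img_enc):
    return -torch.cosine_similarity(img_enc, enc_sub, dim=-1).mean()

# variant 2 (enc_neg): the existing method, two similarities per step,
# with the negative prompt's similarity added as a penalty
def loss_neg(img_enc):
    return (-torch.cosine_similarity(img_enc, txt_enc, dim=-1).mean()
            + torch.cosine_similarity(img_enc, txt_enc0, dim=-1).mean())

# variant 3 (enc_subdiff): push past the prompt along the difference
enc_subdiff = txt_enc + (txt_enc - txt_enc0)
def loss_subdiff(img_enc):
    return -torch.cosine_similarity(img_enc, enc_subdiff, dim=-1).mean()

# inside the loop: img_enc = model.encode_image(img_batch)
```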

eps696 commented 3 years ago

1) good point, will move the subtraction out of the training loop.
2) your method of "increasing the difference" in fact just decreases the effect of the subtraction (like applying a weight < 1): here 2x - y ~ x - 0.5y. and the examples did show that: some kind of "faces" appeared with the subtraction weighed down like this.
3) sure; to my understanding, any continuous embedding is a latent vector by definition. we just don't have a decoder for it, like the proper dall-e has (not the stripped-down published version, but the photorealistic one from the paper), so we have to move around with optimization techniques instead.
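
a quick numeric check of point 2, with random vectors standing in for the embeddings; cosine similarity only sees direction, so the factor of 2 drops out and 2x - y behaves like subtracting with weight 0.5:

```python
import torch

x, y = torch.randn(512), torch.randn(512)
a = 2 * x - y    # txt_enc + (txt_enc - txt_enc0)
b = x - 0.5 * y  # plain subtraction, weighed down to 0.5
print(torch.cosine_similarity(a, b, dim=0))  # tensor(1.), i.e. same direction
```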

torridgristle commented 3 years ago

Ha! Whoops, I was so focused on trying to do something with CLIP's tendency to score an image of a face higher for "a photo of a human face" than for "a photo of a face" that I went and did 2*enc1 - enc2. Back to the drawing board.

eps696 commented 3 years ago

regarding the preliminary text subtraction txt_enc - txt_enc0: on second thought, it's not the same. when we compare the losses after cosine similarity, we check how far or close we are to each of those prompts/concepts (which is what we probably want). if we subtract them up front, we instead check how close we are to the difference between the two, essentially losing the position of the "center of mass" of the pair in the embedding space. so the resulting vector may have nothing in common with either prompt, and most likely we'd get something rather different.
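
a small sketch of the distinction, with random vectors standing in for CLIP embeddings: cosine similarity normalizes each prompt separately, so the paired losses pull along normalize(txt_enc) - normalize(txt_enc0), while the single loss on the precomputed difference pulls along normalize(txt_enc - txt_enc0); these coincide only when the two prompt embeddings happen to have equal norms:

```python
import torch
import torch.nn.functional as F

img  = F.normalize(torch.randn(512), dim=0)  # stand-in image embedding
txt  = torch.randn(512) * 3.0                # CLIP embeddings are not unit-norm
txt0 = torch.randn(512)

# paired losses: each prompt normalized on its own
loss_pair = (-F.cosine_similarity(img, txt, dim=0)
             + F.cosine_similarity(img, txt0, dim=0))

# single loss against the precomputed difference vector
loss_diff = -F.cosine_similarity(img, txt - txt0, dim=0)

print(loss_pair.item(), loss_diff.item())  # generally unequal
```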

eps696 commented 3 years ago

just to be sure, i've tried the direct subtraction method on a few meaningful sentences, and it predictably drifted completely away from the main topic. and just to make it clear: encoded embeddings are NOT losses; their summation/subtraction has a different effect. finally, the cosine similarity comparison is just one op, probably a few orders of magnitude faster than encoding (and even slicing), so the "time savings" should not be measurable.
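
for what it's worth, a rough micro-benchmark sketch of that last claim, assuming the standard clip package with ViT-B/32; exact numbers vary with hardware, but the similarity op is trivially cheap next to a text-encoder forward pass:

```python
import time
import torch
import clip

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model, _ = clip.load('ViT-B/32', device=device)
tokens = clip.tokenize('a photo of a face').to(device)
a = torch.randn(1, 512, device=device)
b = torch.randn(1, 512, device=device)

def bench(fn, n=100):
    # average wall-clock time per call, syncing the GPU around the loop
    if device == 'cuda':
        torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(n):
        fn()
    if device == 'cuda':
        torch.cuda.synchronize()
    return (time.time() - t0) / n

with torch.no_grad():
    print('encode_text:', bench(lambda: model.encode_text(tokens)))
    print('cosine sim :', bench(lambda: torch.cosine_similarity(a, b, dim=-1)))
```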