google / prompt-to-prompt

Apache License 2.0

get_replacement_mapper_ in seq_aligner.py #53

Open yuhaoliu7456 opened 1 year ago

yuhaoliu7456 commented 1 year ago

Thanks for this amazing work. I have tested this function many times, and it always returns a diagonal matrix with 1s on the diagonal. If that is the intended result, why not just use the built-in function in torch directly? And if the function needs to be corrected, can you help explain this issue?
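(For reference, if the mapper really were always the identity, the built-in the commenter presumably has in mind is `torch.eye`; `max_len = 77` below is only an assumption matching CLIP's usual text length, not something stated in this thread.)

```python
import torch

max_len = 77  # assumption for illustration: CLIP's default text length
identity_mapper = torch.eye(max_len)  # 1s on the diagonal, 0s elsewhere
```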

HyelinNAM commented 1 year ago

I also have the same question. Can anyone answer this for us? :)

Kenneth-Wong commented 10 months ago

I think this is because your source prompt and target prompt are almost the same, so the tokenizer's output matches the space-split words one-to-one. In those cases the function returns the diagonal matrix. However, if the source text is "a lion is eating an apple" and the edited text is "a lovely-dog is eating an apple", then "lion" will be mapped to "lovely", "-", and "dog", because the tokenizer tokenizes "lovely-dog" into "lovely", "-", "dog".
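Here is a minimal sketch of that idea (not the repository's actual implementation): rows index target-token positions, columns index source-token positions, and `build_mapper` is a hypothetical helper that only handles a single contiguous replaced span. When both prompts tokenize to the same number of tokens the mapper is diagonal; when one word becomes several tokens, the corresponding rows point back to the same source column.

```python
import torch

def build_mapper(src_tokens, tgt_tokens, max_len=10):
    mapper = torch.zeros(max_len, max_len)
    # 1:1 mapping over the common prefix
    i = 0
    while i < min(len(src_tokens), len(tgt_tokens)) and src_tokens[i] == tgt_tokens[i]:
        mapper[i, i] = 1.0
        i += 1
    # 1:1 mapping over the common suffix
    j = 0
    while j < min(len(src_tokens), len(tgt_tokens)) - i and src_tokens[-1 - j] == tgt_tokens[-1 - j]:
        mapper[len(tgt_tokens) - 1 - j, len(src_tokens) - 1 - j] = 1.0
        j += 1
    # replaced span: spread each target token's weight evenly over the source span
    src_span = range(i, len(src_tokens) - j)
    tgt_span = range(i, len(tgt_tokens) - j)
    for t in tgt_span:
        for s in src_span:
            mapper[t, s] = 1.0 / len(tgt_span)
    return mapper

src = ["a", "lion", "is", "eating", "an", "apple"]
tgt = ["a", "lovely", "-", "dog", "is", "eating", "an", "apple"]
print(build_mapper(src, src))  # identical prompts -> purely diagonal mapper
print(build_mapper(src, tgt))  # rows for "lovely", "-", "dog" all map back to "lion"
```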

sucongCJS commented 7 months ago

I have the same question...

sucongCJS commented 7 months ago


@Kenneth-Wong you are right, bro!

ybx193670 commented 4 months ago

I have the same question, too.

ybx193670 commented 4 months ago

I have the same question, too. Reason: the seq_aligner module is not being imported. Action: create a new `unit` folder, add an empty `__init__.py` inside it, move `seq_aligner.py` into that folder, and remember to update the import path accordingly (sketched below).
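A minimal sketch of the layout that comment describes. The folder name `unit` is taken from the comment and is not part of the original repository; any package name works as long as the import path matches it.

```python
# Layout described above (folder name `unit` is the commenter's suggestion):
#
#   project_root/
#   ├── unit/
#   │   ├── __init__.py      # empty file so Python treats `unit` as a package
#   │   └── seq_aligner.py
#   └── your_script.py
#
# Then, in scripts that previously did `import seq_aligner`, update the path:
from unit import seq_aligner
```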