zqhang / AnomalyCLIP

Official implementation for AnomalyCLIP (ICLR 2024)
MIT License
285 stars · 30 forks

The learnable token embeddings are attached to the first 9 layers of the text encoder for refining the textual space. #21

Closed · phusinhngay2011 closed this 5 months ago

phusinhngay2011 commented 5 months ago

[Screenshot: 2024-06-08 214758]

First of all, I would like to thank you and your colleagues for your contributions to this domain. I have a question: in the Implementation details you state that "The learnable token embeddings are attached to the first 9 layers of the text encoder for refining the textual space," but in the code I only see that the 2nd (i = 1) through the 8th (i = 7) layers have embeddings attached. Can you explain this for me? Thank you.

zqhang commented 5 months ago

We do not attach text embeddings at the first layer; we attach embeddings for the following 8 layers, i.e., up through the 9th layer.
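To make the indexing concrete, here is a minimal sketch (not the repository's actual code) of how per-layer learnable prompt tokens can be injected into a CLIP-style text encoder: the first layer (i = 0) consumes the original prompt embeddings unchanged, and layers i = 1 through i = 8 (the 2nd through 9th layers) each swap their own learnable tokens into the prompt slots. The class and attribute names (`DeepPromptTextEncoder`, `resblocks`, `n_ctx`) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepPromptTextEncoder(nn.Module):
    # Sketch only: assumes a CLIP-style text transformer that exposes its
    # residual blocks as an iterable `resblocks` and operates on
    # (seq_len, batch, width) tensors.
    def __init__(self, text_transformer, n_ctx=4, width=512, depth=9):
        super().__init__()
        self.layers = text_transformer.resblocks  # assumed attribute path
        self.n_ctx = n_ctx
        # One learnable prompt per refined layer: indices 1..depth-1,
        # i.e. the 2nd through 9th layers when depth = 9.
        self.deep_prompts = nn.ParameterList(
            [nn.Parameter(torch.empty(n_ctx, width)) for _ in range(depth - 1)]
        )
        for p in self.deep_prompts:
            nn.init.normal_(p, std=0.02)

    def forward(self, x):
        # x: (seq_len, batch, width) token embeddings, with the prompt
        # tokens occupying positions 1 .. n_ctx (position 0 is the SOT token).
        for i, layer in enumerate(self.layers):
            if 1 <= i <= len(self.deep_prompts):
                # Layer i = 0 sees the original embeddings; layers i = 1..8
                # replace the prompt slots with their own learnable tokens.
                p = self.deep_prompts[i - 1].unsqueeze(1).expand(-1, x.shape[1], -1)
                x = torch.cat([x[:1], p, x[1 + self.n_ctx:]], dim=0)
            x = layer(x)
        return x
```

With depth = 9, learnable tokens are attached at layers 2 through 9 (loop indices i = 1 to i = 8), which matches the explanation above: the first layer is skipped and the following eight layers, up through the 9th, are refined.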


phusinhngay2011 commented 5 months ago

Thank you for clarifying. I just figured out that the 9th layer is also attached.