Closed ztb-35 closed 4 months ago
It seems like you've caught on quite well. The authors state in the paper, "A simple solution is to maintain a small collection of text prototypes by linearly probing E, denoted as E'." The mapping_layer you mentioned likely corresponds to this.
I think they import the whole model's word-embedding weights and then obtain the text prototypes through this linear layer ("self.mapping_layer").
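For concreteness, here is a minimal sketch of how a linear probe over the vocabulary dimension could map the frozen word-embedding matrix E to a small set of prototypes E'. The name `mapping_layer` follows the discussion above; the sizes (`vocab_size`, `d_model`, `num_prototypes`) are illustrative assumptions, not the repo's actual settings.

```python
import torch
import torch.nn as nn

# Assumed sizes for illustration only
vocab_size, d_model = 32000, 768
num_prototypes = 1000  # small collection of text prototypes

# Stand-in for the LLM's frozen word-embedding matrix E: (vocab_size, d_model)
E = torch.randn(vocab_size, d_model)

# Linear probe along the vocabulary dimension, as described in the paper
mapping_layer = nn.Linear(vocab_size, num_prototypes)

# (d_model, vocab_size) -> (d_model, num_prototypes) -> (num_prototypes, d_model)
E_prime = mapping_layer(E.permute(1, 0)).permute(1, 0)
print(E_prime.shape)  # torch.Size([1000, 768])
```

The key point is that the linear layer acts across the vocabulary axis, so each prototype in E' is a learned linear combination of all word embeddings rather than a projection of each embedding individually.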
Yes, your understanding is correct. We also provide a detailed description in the "Patch Reprogramming" section of our paper, which you can refer to.
Hi there. Thanks for publishing your code. I'm interested in your patch reprogramming, but I couldn't find the text prototypes in your code (TimeLLM.py); there is only a linear projection. I'm not sure if I misunderstood the code. Thanks for your reply.