KimMeen / Time-LLM

[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming Large Language Models"
https://arxiv.org/abs/2310.01728
Apache License 2.0

Hi! I have a question about Reprogramming Interpretation #132

Jeong-Eul opened 3 months ago

Jeong-Eul commented 3 months ago

How did you obtain the word set, and how did you create the visualization that shows how closely each word is related to each prototype?

In the code, the vocabulary is obtained by taking the weights of the LLM's embedding layer, but I found it difficult to interpret which word-combination prototype and which time series patch produced the high attention scores.
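
To make the question concrete, here is a rough sketch of how I read the reprogramming step (all names below are my own placeholders and the details may differ from the repository code): the vocabulary embeddings are condensed into a small set of text prototypes by a learned linear layer, and the patch embeddings then cross-attend to those prototypes.

```python
# Minimal sketch only -- hypothetical names, not the actual Time-LLM API.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import GPT2Model, GPT2Tokenizer

# Frozen LLM vocabulary embeddings E of shape (V, D).
llm = GPT2Model.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
word_embeddings = llm.get_input_embeddings().weight.detach()        # (V, D)
vocab_size, d_llm = word_embeddings.shape

# Learned linear mapping that condenses the V word vectors into a small
# set of text prototypes E' of shape (num_prototypes, D).
num_prototypes = 1000
mapping_layer = nn.Linear(vocab_size, num_prototypes)
prototypes = mapping_layer(word_embeddings.T).T                      # (num_prototypes, D)

# 1) Which words does each prototype "stand for"?
#    Each prototype is a linear combination of word embeddings, so one way to
#    interpret it is to look at the largest mapping weights per prototype.
topk_weights, topk_word_ids = mapping_layer.weight.topk(k=5, dim=1)  # (num_prototypes, 5)
words_for_prototype_0 = tokenizer.convert_ids_to_tokens(topk_word_ids[0].tolist())
print("prototype 0 ~", words_for_prototype_0)

# 2) Which prototype does each time series patch attend to?
#    The reprogramming layer is a cross-attention with patch embeddings as
#    queries and prototypes as keys/values; the softmax scores are what a
#    patch-by-prototype heat map would show.
batch, num_patches, d_model = 4, 16, 32
patch_embeddings = torch.randn(batch, num_patches, d_model)          # toy patch tokens

q_proj = nn.Linear(d_model, d_llm)
k_proj = nn.Linear(d_llm, d_llm)
Q = q_proj(patch_embeddings)                                         # (B, P, D)
K = k_proj(prototypes)                                               # (num_prototypes, D)
scores = torch.einsum("bpd,sd->bps", Q, K) / d_llm ** 0.5
attn = F.softmax(scores, dim=-1)                                     # (B, P, num_prototypes)
# attn[b, p, s] is how strongly patch p of sample b attends to prototype s.
```

What I cannot see is how to go from these two pieces, the mapping weights and the attention map, to the word-level visualization in the paper, i.e. which words a given prototype actually represents.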

Thank you for reading my question.

diaosilei commented 1 month ago

Hi there, I had the same confusion when I read the paper. Have you found any cues to resolve this question?