
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Apache License 2.0

Details of tokenization, feedforward, and backward processes #104

Closed: abizard closed this issue 1 week ago

abizard commented 1 month ago

Hello,

I hope this message finds you well. I am currently exploring the Lag-Llama model and would like to gain a deeper understanding of how it works internally. Specifically, how are the tokenization, feedforward, and backward passes handled in the implementation?
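For context, my current understanding of the tokenization step from the paper is that each time step is mapped to a vector of its lagged values rather than a discrete token. Here is a minimal sketch of that idea; the lag set below is one I made up for illustration, not the one the repository actually ships with:

```python
import numpy as np

# Illustrative lag set for, e.g., hourly data; the real lag indices used by
# Lag-Llama come from several frequencies (see the paper and the repo).
LAGS = [1, 2, 3, 24, 168]

def tokenize(series: np.ndarray) -> np.ndarray:
    """Map each time step to a vector of its lagged values.

    Returns shape (T - max(LAGS), len(LAGS)); the first max(LAGS) steps are
    dropped because they lack a full lag history.
    """
    max_lag = max(LAGS)
    return np.stack(
        [series[max_lag - lag : len(series) - lag] for lag in LAGS], axis=-1
    )

x = np.arange(200, dtype=np.float32)
print(tokenize(x).shape)  # (32, 5): 200 - 168 tokens, one column per lag
```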

Are there any specific layers or methods in the model where these processes can be observed or logged?
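In case it helps clarify what I mean by "observed or logged": this is the kind of instrumentation I have in mind, using standard PyTorch forward hooks on a stand-in network. With Lag-Llama one would iterate over the loaded model's `named_modules()` and filter on its actual submodule names instead:

```python
import torch
import torch.nn as nn

# Stand-in network; inspect the real module tree with print(model) to find
# the layer names worth hooking.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()  # detach so logging stays out of autograd
    return hook

handles = [
    m.register_forward_hook(make_hook(n)) for n, m in model.named_modules() if n
]

model(torch.randn(4, 8))
print({name: tuple(t.shape) for name, t in activations.items()})

for h in handles:  # remove hooks once you are done logging
    h.remove()
```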

Thank you for your assistance!

abizard commented 1 month ago

Hi, I just wanted to follow up on the issue I raised a few days ago. I would appreciate any feedback or guidance on this matter when you have a moment. Thank you for your time and the great work you do on this project!

ashok-arjun commented 1 month ago

Hi, please see the paper (https://arxiv.org/abs/2310.08278) and the code for the implementation details. I unfortunately don't have the capacity to discuss it further, sorry.
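Very briefly, the high-level pattern described in the paper is: the transformer's hidden states are projected to the parameters of a Student's t-distribution, and training minimizes the negative log-likelihood of the targets. A minimal PyTorch sketch of that feedforward/backward pattern; the projection, shapes, and constraints here are illustrative, not the repo's actual distribution head:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentTHead(nn.Module):
    """Projects hidden states to (df, loc, scale) of a Student's t."""

    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, 3)

    def forward(self, h: torch.Tensor) -> torch.distributions.StudentT:
        df, loc, scale = self.proj(h).unbind(-1)
        # Softplus keeps df and scale positive; the 2.0 offset keeps the
        # variance finite (an illustrative choice, not the repo's).
        return torch.distributions.StudentT(
            df=2.0 + F.softplus(df), loc=loc, scale=F.softplus(scale) + 1e-6
        )

head = StudentTHead(d_model=32)
h = torch.randn(4, 16, 32)   # (batch, time, hidden): stand-in transformer output
target = torch.randn(4, 16)  # next-step values to score

dist = head(h)                        # feedforward: hidden states -> distribution
loss = -dist.log_prob(target).mean()  # negative log-likelihood
loss.backward()                       # backward: gradients flow to the projection
print(float(loss))
```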