lucidrains / iTransformer

Unofficial implementation of iTransformer - SOTA Time Series Forecasting using Attention networks, out of Tsinghua / Ant group
MIT License

Typo in Description under usage in Readme.md #3

Closed meteoDaniel closed 1 year ago

meteoDaniel commented 1 year ago

Dear creators of iTransformer,

I am really looking forward to testing it in the next few days.

In your usage example, there might be a typo in the comment describing the dimensions of the output. The dimensions pred_length and variate need to be swapped, right?

Best regards

lucidrains commented 1 year ago

the pred length is the forecast for all variates

lucidrains commented 1 year ago

I could have misunderstood the paper too, open to discussion

I'm not an author of the paper, btw

meteoDaniel commented 1 year ago

I mean this line in the Readme:

# preds -> Dict[int, Tensor[batch, variate, pred_length]]
#       -> (12: (2, 12, 137), 24: (2, 24, 137), 36: (2, 36, 137), 48: (2, 48, 137))

I think it should be:

# preds -> Dict[int, Tensor[batch, pred_length, variate]]
#       -> (12: (2, 12, 137), 24: (2, 24, 137), 36: (2, 36, 137), 48: (2, 48, 137))
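The corrected comment can be checked with a small shape sketch. This is not the iTransformer library itself, just a hypothetical stand-in dict of arrays mirroring the claimed output convention: one entry per forecast horizon, each shaped (batch, pred_length, variate).

```python
import numpy as np

batch, variates = 2, 137          # values from the README example
pred_lengths = (12, 24, 36, 48)   # the four forecast horizons

# Hypothetical stand-in for the model output: a dict mapping each
# horizon to an array of shape (batch, pred_length, variate).
preds = {L: np.zeros((batch, L, variates)) for L in pred_lengths}

for L, arr in preds.items():
    # matches the corrected comment: (2, 12, 137), (2, 24, 137), ...
    assert arr.shape == (batch, L, variates)
```

Under this convention, indexing along the second axis walks forward in time, and the last axis stays the variate axis, consistent with the tuples (2, 12, 137), (2, 24, 137), etc. shown in the README.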
lucidrains commented 1 year ago

@meteoDaniel thanks!