Closed: spsanderson closed this issue 3 years ago
Hi Steven,
Thanks for this.
Regarding your prediction, I see an inconsistency between your prediction_length and your definition of testing(splits), which is defined by a proportion of 0.8.
The prediction length should be the same as the number of timestamps in your forecast data (test data). Count the timestamps and use that value as the prediction length. You will then see that your forecast goes the full length.
Regarding the CI, I'd start by fixing the prediction length and go from there.
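As a quick sketch of that check (the split object name dfsplits is taken from later in the thread; rsample is assumed to be the splitting package, since testing(splits) is referenced above):

library(rsample)

# Count the timestamps in the hold-out (test) portion of the split;
# for a single series, this row count is the value prediction_length
# should be set to
nrow(testing(dfsplits))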
Ok, I think I know what you mean. I will revisit this soon on my PC.
Ok, I got it now. Is there a way to auto-select the prediction length from the split object? Maybe something like:
dfsplits[3] %>%
  as_tibble() %>%
  nrow()
Or maybe a little nicer:
nrow(training(dfsplits))
Yes, this is how I would do it for a single time series group. For multiple groups, you will need to select based on a horizon.
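A minimal sketch of the grouped case (the long-format id column and the horizon value of 24 are assumptions for illustration, not from the thread):

library(dplyr)
library(rsample)

# With multiple series, nrow(testing(dfsplits)) sums rows across all
# groups and overstates the horizon; count hold-out timestamps per group
testing(dfsplits) %>%
  count(id)

# So choose the forecast horizon directly and use it as prediction_length
horizon <- 24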
data_tbl.zip
I am using the attached dataset to play around with the package, following the base example provided in the README.
Session Info:
I run the following:
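The code block itself was not captured in this extraction; what follows is a sketch along the lines of the modeltime.gluonts README example. The column names (id, date, value), the frequency "D", the lookback and epoch settings, and the name data_tbl for the unzipped attachment are all assumptions:

library(modeltime.gluonts)
library(tidymodels)
library(tidyverse)

# Assumed: split the attached data 80/20 by time, matching the
# proportion mentioned earlier in the thread
dfsplits <- initial_time_split(data_tbl, prop = 0.8)

# Assumed DeepAR spec following the package README; note that
# prediction_length is tied to the size of the test split, per the
# advice above
model_fit_deepar <- deep_ar(
  id                = "id",
  freq              = "D",
  prediction_length = nrow(testing(dfsplits)),
  lookback_length   = 2 * nrow(testing(dfsplits)),
  epochs            = 5
) %>%
  set_engine("gluonts_deepar") %>%
  fit(value ~ date + id, data = training(dfsplits))

# Standard modeltime workflow: calibrating on the test data is what
# produces the confidence intervals in the forecast output
modeltime_table(model_fit_deepar) %>%
  modeltime_calibrate(new_data = testing(dfsplits)) %>%
  modeltime_forecast(
    new_data    = testing(dfsplits),
    actual_data = data_tbl
  ) %>%
  plot_modeltime_forecast()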
And I get a result showing that the prediction does not extend to the end of the data as expected, and there are no confidence intervals.
Here is the resulting data: