Open Akhim-yun opened 4 months ago
@Akhim-yun Thank you for your interest. If you use the `TimeSeriesForecastingPipeline`, you can denormalize the outputs. I need to add a better example notebook that shows this, but you can look at: https://github.com/ibm-granite/granite-tsfm/blob/main/tests/toolkit/test_time_series_forecasting_pipeline.py#L113
to get an idea of the process.
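The denormalization step the pipeline performs amounts to inverting the per-channel scaling that was applied during preprocessing. As a minimal numpy sketch of that operation (the channel means/stds below are illustrative values, not ones from the toolkit):

```python
import numpy as np

# Illustrative per-channel statistics, as a preprocessor would compute
# them from the training split (hypothetical values).
channel_mean = np.array([10.0, 5.0])  # one entry per target channel
channel_std = np.array([2.0, 0.5])

# Normalized model output with shape (n, prediction_length, num_channels).
pred_scaled = np.random.default_rng(0).normal(size=(4, 96, 2))

# Denormalize: invert the z-score scaling channel by channel
# (broadcasting applies the per-channel stats along the last axis).
pred = pred_scaled * channel_std + channel_mean

print(pred.shape)
```

Using the pipeline saves you from tracking these statistics yourself, since it keeps the preprocessor's scaling parameters alongside the model.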
I am not sure whether the following code retrieves all of the predicted values in the test set, because when I use these predicted values to compute metrics, I often get poor performance, and the metrics obtained with 100% few-shot are even worse. Can you help me take a look? Thank you very much.

```python
output = zeroshot_trainer.predict(dset_test)
pred = output[0]
pred = pred[0]
pred[:, -1, -1]
```
@Akhim-yun can you share a small, working example of what you are trying to do?
@Akhim-yun starting from your code above:
```python
output = zeroshot_trainer.predict(dset_test)
pred = output[0]
```
The predictions are given by `pred`. It will have shape (n, prediction_length, num_channels), where n is the number of examples in `dset_test` (i.e., `len(dset_test)`). The easiest way to compute a metric would be to compare `pred` to all of the `"future_values"` in `dset_test`. A tensor of these values from `dset_test` could be created with:

```python
ground_truth = torch.stack([v["future_values"] for v in dset_test])
```
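With `pred` and `ground_truth` both shaped (n, prediction_length, num_channels), a metric such as MSE is then a direct elementwise comparison. A numpy sketch, with dummy arrays standing in for the trainer output and the stacked future values:

```python
import numpy as np

# Dummy stand-ins for pred (trainer output) and the stacked
# "future_values" tensor; shapes are (n, prediction_length, num_channels).
n, prediction_length, num_channels = 8, 96, 1
rng = np.random.default_rng(0)
pred = rng.normal(size=(n, prediction_length, num_channels))
ground_truth = pred + rng.normal(scale=0.1, size=pred.shape)  # noisy "truth"

# Mean squared error over all examples, horizon steps, and channels.
mse = np.mean((pred - ground_truth) ** 2)
print(float(mse))
```

Note that comparing the full arrays this way evaluates the whole forecast horizon for every channel, rather than only the last step of the last channel as `pred[:, -1, -1]` does.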
```python
output = zeroshot_trainer.predict(dset_test)
pred = output[0]
```
I got the predicted values for my dataset with this code. I would like to ask how to denormalize the predicted values; I did not find an example in the code.