arnoudbuzing closed this 2 weeks ago
Thank you for your question, and sorry for the late reply. I am no expert in the Wolfram Language, so I had to find some time to take a look at the link you provided, and at the Wolfram Language in general. For this reason, please take this answer with a big grain of salt!
First of all, and broadly speaking, Timeseria is a young open source library that can be used freely, not only as a tool for interactive analysis but also as a building block in larger projects (e.g. web applications and data processing pipelines). The Wolfram Language is instead a much more mature and well-established solution which, on the other hand, relies on proprietary software that requires a license and is harder to integrate. They fulfil different needs, which makes it difficult to compare them fairly from either side.
That said, if we accept their different origins and connotations, I would say that the main difference lies in the object-oriented nature of Timeseria and in the different abstraction levels: in Timeseria everything is an object, including the base data structures and the data values, and there are no "floating" functions, since everything is abstracted (and encapsulated).
For example, from the link you provided I can see that in order to get the average value of a time series you would do something like Mean[ts], while in Timeseria you would do ts.avg(): it is the TimeSeries object that provides the avg() method, so that the underlying logic is encapsulated in the data structure itself. This also means that Timeseria's ts.avg() supports multivariate time series, while the Wolfram Language's Mean[ts] does not.
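To make the encapsulation point concrete, here is a minimal, purely illustrative sketch (not Timeseria's actual implementation, and the class and field names are made up) of how putting the averaging logic inside the data structure lets the same avg() call handle both univariate and multivariate series, with no extra logic on the caller's side:

```python
class TimeSeriesSketch:
    """Minimal stand-in for a time series whose points carry {label: value} data."""

    def __init__(self, points):
        # points: list of dicts, e.g. [{'temp': 21.0, 'hum': 58.0}, ...]
        self.points = points

    def avg(self):
        """Per-label average; returned as a plain float if the series is univariate."""
        labels = self.points[0].keys()
        averages = {label: sum(p[label] for p in self.points) / len(self.points)
                    for label in labels}
        if len(averages) == 1:
            return next(iter(averages.values()))
        return averages


# Univariate: behaves like a plain mean
uni = TimeSeriesSketch([{'temp': 20.0}, {'temp': 22.0}])
print(uni.avg())  # 21.0

# Multivariate: one average per variable, same call
multi = TimeSeriesSketch([{'temp': 20.0, 'hum': 50.0},
                          {'temp': 22.0, 'hum': 60.0}])
print(multi.avg())  # {'temp': 21.0, 'hum': 55.0}
```

With a free-standing Mean-style function, the caller would instead have to know the series layout and loop over the variables itself.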
Similarly to the data structures, all models in Timeseria expose methods for common functionalities, which drastically reduces the amount of code to be written. I asked ChatGPT to write Wolfram Language code defining an LSTM neural network model for next-step forecasting, to be fitted on a ts_fit
time series and then tested on a ts_test
time series, computing the MAPE and plotting the actual and predicted values for each data point. It provided a 65-line snippet [1], while in Timeseria it would be as simple as:
# Define an LSTM-based forecaster
forecaster = LSTMForecaster(window=10, neurons=20)
# Fit the forecaster with normalisation
forecaster.fit(ts_fit, normalize=True, epochs=100)
# Evaluate the forecaster while plotting the evaluation series with actual and predicted values
forecaster.evaluate(ts_test, error_metrics=['MAPE'], plot_evaluation_series=True)
This is just an example, but you get the idea. Other tasks, such as model cross-validation, inspection, and application, as well as resampling, aggregation, and other operations, are also implemented in Timeseria with similarly high-level abstractions.
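To illustrate what such a model-level abstraction hides in the cross-validation case, here is a generic sketch of consecutive-fold time series cross validation in plain Python. This is illustrative code, not Timeseria's actual API; the function names and the toy mean-predictor model are made up:

```python
def cross_validate(series, fit_evaluate, rounds=3):
    """Split `series` into `rounds` consecutive folds; for each round, fit on
    all the other folds and evaluate on the held-out one, returning the mean error."""
    fold_size = len(series) // rounds
    errors = []
    for i in range(rounds):
        held_out = series[i * fold_size:(i + 1) * fold_size]
        training = series[:i * fold_size] + series[(i + 1) * fold_size:]
        errors.append(fit_evaluate(training, held_out))
    return sum(errors) / len(errors)


def mean_model(training, held_out):
    """Toy model: predict the training mean, score with mean absolute error."""
    prediction = sum(training) / len(training)
    return sum(abs(v - prediction) for v in held_out) / len(held_out)


series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(cross_validate(series, mean_model, rounds=3))
```

In a library exposing this behind a single model method, the fold bookkeeping above disappears from user code entirely.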
Also, thanks to its object-oriented and encapsulated nature, the base data types can be easily extended as well. A feature coming shortly re-implements Python's base floating point with a probability density-based floating point, so that each value in a time series will be able to carry both measurement and predictive uncertainty. This kind of extension is usually only possible with an approach similar to the one taken in Timeseria.
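As a rough illustration of the idea (this is not the upcoming Timeseria feature itself, just a sketch with made-up names), a float subclass can carry an uncertainty alongside its value while still behaving as a number everywhere else, which is exactly the kind of swap that is only possible when values are objects:

```python
class UncertainFloat(float):
    """A float that also carries a standard deviation."""

    def __new__(cls, value, stdev=0.0):
        obj = super().__new__(cls, value)
        obj.stdev = stdev
        return obj

    def __add__(self, other):
        other_stdev = getattr(other, 'stdev', 0.0)
        # Assuming independent errors, the variances add
        return UncertainFloat(float(self) + float(other),
                              (self.stdev ** 2 + other_stdev ** 2) ** 0.5)

    def __repr__(self):
        return f'{float(self)} ± {self.stdev}'


a = UncertainFloat(10.0, stdev=3.0)
b = UncertainFloat(20.0, stdev=4.0)
print(a + b)  # 30.0 ± 5.0
```

Because UncertainFloat subclasses float, existing code that sorts, averages, or plots plain values keeps working unchanged while the uncertainty travels along.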
I am not sure if I have answered your question, but in any case I would like to point out that each solution has its pros and cons, and which one is more suitable depends on the use case: I am sure there are several aspects not covered here where Timeseria would fall short in comparison to the Wolfram Language.
[1] Wolfram Language code snippet (as generated by ChatGPT, lightly fixed) for fitting and evaluating an LSTM neural network model for next-step forecasting.
(* tsFit and tsTest hold the fit and test TimeSeries objects; note that names
   like ts_fit are not valid symbols in the Wolfram Language, since the
   underscore is pattern syntax *)
(* Normalize the data for better training and comparison; the test data is
   rescaled with the fit data's range, to match the denormalization below *)
normalizedTrainData = Rescale[tsFit["Values"]];
normalizedTestData = Rescale[tsTest["Values"], MinMax[tsFit["Values"]]];
(* Set up the sequence (window) length for next-step forecasting *)
sequenceLength = 10;
(* Prepare training data with sliding windows *)
windows = Partition[normalizedTrainData, sequenceLength + 1, 1];
inputData = Map[List, windows[[All, ;; -2]], {2}]; (* Inputs: sequences of 1-vectors *)
outputData = windows[[All, -1]]; (* Outputs: the value following each window *)
(* Pair inputs with expected outputs for training *)
trainingSet = Thread[inputData -> outputData];
(* Define the LSTM model architecture *)
sequenceNet = NetChain[{
LSTMLayer[20], (* LSTM layer with 20 units *)
SequenceLastLayer[], (* Keep only the last element of the output sequence *)
LinearLayer[1] (* Fully connected layer to output a single value *)
},
"Input" -> {sequenceLength, 1}, (* Input: sequence of scalar values *)
"Output" -> "Scalar" (* Output is a single scalar value *)
];
(* Train the model on the fit data *)
trainedNet = NetTrain[sequenceNet, trainingSet,
BatchSize -> 32, MaxTrainingRounds -> 100, LearningRate -> 0.001
];
(* Forecast the next-step value for each data point in the test data,
   sliding the window forward with the actual test values *)
initialWindow = Take[normalizedTrainData, -sequenceLength]; (* Last training window *)
predictions = Reap[
Fold[
Function[{window, actualValue},
Sow[trainedNet[Map[List, window]]]; (* Collect this step's prediction *)
Append[Rest[window], actualValue] (* Slide the window forward *)
],
initialWindow,
normalizedTestData
]
][[2, 1]];
(* Denormalize the predictions back to the original scale *)
minMaxFit = MinMax[tsFit["Values"]];
denormalizedPredictions = Rescale[predictions, {0, 1}, minMaxFit];
(* Calculate the MAPE between the denormalized predictions and the test data *)
actualValuesTest = tsTest["Values"];
mape = Mean[Abs[(actualValuesTest - denormalizedPredictions)/actualValuesTest]]*100;
(* Display the MAPE *)
Print["Mean Absolute Percentage Error (MAPE): ", mape, "%"];
(* Plot the actual test values and the predicted values *)
DateListPlot[{
tsTest,
TimeSeries[denormalizedPredictions, {tsTest["Dates"]}]
},
PlotLegends -> {"Actual Test Values", "Predicted Values"},
PlotLabel -> "Next-Step Forecast on Test Data with LSTM Model"
]
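For reference, two of the steps from the snippet above, the sliding-window preparation and the MAPE computation, can be sketched in plain Python as follows (illustrative helper names, not any library's API):

```python
def sliding_windows(values, window):
    """Return (inputs, targets): each input is `window` consecutive values,
    and its target is the value that immediately follows it."""
    inputs = [values[i:i + window] for i in range(len(values) - window)]
    targets = [values[i + window] for i in range(len(values) - window)]
    return inputs, targets


def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - p) / a)
                     for a, p in zip(actual, predicted)) / len(actual)


inputs, targets = sliding_windows([1, 2, 3, 4, 5], window=2)
print(inputs)   # [[1, 2], [2, 3], [3, 4]]
print(targets)  # [3, 4, 5]
print(mape([100, 200], [110, 190]))  # 7.5
```

These few lines correspond to the Partition/Thread preparation and the Mean[Abs[...]] expression in the Wolfram Language version; the rest of that snippet is model definition, training, and plotting.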
Appreciate your thoughts, reading through them now... Hence closing the ticket.
(Original question) More a question than an issue: how does Timeseria compare to the TimeSeries functionality in the Wolfram Language? (https://reference.wolfram.com/language/ref/TimeSeries.html)