End-to-end PyTorch implementation of the paper *Deep and Confident Prediction for Time Series at Uber*. We use the Metro Interstate Traffic Volume multivariate time series dataset for training and, eventually, predicting traffic volume.
For the LSTM layers with dropout we use the variational dropout implementation from keitakurita/Better_LSTM_PyTorch.
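The idea behind variational dropout is to sample one dropout mask per sequence and reuse it at every timestep, rather than resampling per step as standard dropout does. The sketch below is illustrative only (it is not the Better_LSTM_PyTorch code), assuming batch-first input of shape `(batch, seq_len, features)`:

```python
import torch
import torch.nn as nn

class VariationalDropout(nn.Module):
    """Illustrative sketch of variational (locked) dropout: a single
    Bernoulli mask is sampled per sequence and applied at every timestep."""

    def __init__(self, p: float = 0.5):
        super().__init__()
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, features)
        if not self.training or self.p == 0.0:
            return x
        # One mask per sequence, broadcast over the time dimension,
        # rescaled so the expected activation is unchanged.
        mask = x.new_empty(x.size(0), 1, x.size(2)).bernoulli_(1 - self.p)
        return x * mask / (1 - self.p)
```

Applying the same mask across time is what makes MC dropout over recurrent layers a valid approximate Bayesian inference scheme, which the full inference algorithm later relies on.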
Code is prototyped in the notebooks before being transferred into cleaned-up Python scripts for reuse:

- `notebooks/01_dataset_creation.ipynb` → `src/data.py`
- `notebooks/02_encoder_decoder.ipynb` → `models/encoder_decoder.py`, `src/utils.py`
- `notebooks/03_encoder_decoder_dropout.ipynb` → `models/encoder_decoder_dropout.py`
- `notebooks/04_pretraining_hyperparam.ipynb` — uses Ax for guided hyperparameter search in the pretraining of the encoder-decoder; we use GCE compute for GPU acceleration → `src/utils.py`
- `notebooks/05_pretraining_embedding.ipynb` → `src/utils.py`
- `notebooks/06_prediction_network.ipynb` — uses Ax for hyperparameter search → `src/utils.py`, `src/data.py`, `models/prediction.py`
- `notebooks/07_full_inference.ipynb` → `src/inference.py`, `src/evaluation.py`
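The full inference step rests on Monte Carlo dropout: dropout is kept active at test time, the network is run many times, and the sample variance (model uncertainty) is combined with an inherent-noise estimate obtained from a held-out set, as described in the paper. A minimal sketch, in which the helper `mc_dropout_predict` and the toy network are illustrative assumptions rather than this repo's actual API:

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor,
                       n_samples: int = 100, noise_var: float = 0.0):
    """Run `n_samples` stochastic forward passes with dropout active and
    return the predictive mean and standard deviation, where the variance
    is the MC sample variance plus an inherent-noise term."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    mean = samples.mean(dim=0)
    model_var = samples.var(dim=0)
    std = (model_var + noise_var).sqrt()
    return mean, std

# Hypothetical usage: a tiny dropout network stands in for the trained
# prediction network; noise_var would come from a validation set.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                    nn.Dropout(0.3), nn.Linear(16, 1))
mean, std = mc_dropout_predict(net, torch.randn(5, 8),
                               n_samples=50, noise_var=0.1)
```

The returned standard deviation can then be turned into approximate prediction intervals, e.g. `mean ± 1.96 * std` for a 95% bound.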
In `notebooks/08_evaluation.ipynb`, prediction results on the test set are compared to those made with facebook/prophet.
Results on the classical time series prediction evaluation metrics are presented below:
| Metric | Uber | Prophet |
|---|---|---|
| Mean absolute error | 280.47 | 680.98 |
| Root mean squared error | 490.92 | 955.85 |
| Mean absolute percentage error | 0.13 | 0.41 |
| Symmetric mean absolute percentage error | 0.029 | 0.024 |
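For reference, the metrics in the table can be computed with standard formulas, sketched below in NumPy. Note these are common textbook definitions, not necessarily the exact ones in `src/evaluation.py`; SMAPE in particular has several variants in the literature:

```python
import numpy as np

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def mape(y, yhat):
    # Undefined where y == 0; traffic volume is assumed strictly positive here.
    return np.mean(np.abs((y - yhat) / y))

def smape(y, yhat):
    # One common SMAPE variant, bounded in [0, 2].
    return np.mean(2 * np.abs(y - yhat) / (np.abs(y) + np.abs(yhat)))
```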
Time series predictions using the full inference algorithm, including uncertainty bounds:
Dependencies:

- numpy
- pandas
- torch
- tqdm
- matplotlib
- ax-platform
- fb-prophet