colemanhindes opened 3 months ago
@colemanhindes Thanks for your interest. We are planning to release the training scripts soon but due to some other engagements there's no ETA yet. In the meantime, @canerturkmen and @shchur are working towards integrating Chronos into AutoGluon-TimeSeries (https://github.com/autogluon/autogluon/pull/3978) and they're also planning to offer ways of fine-tuning the models.
+1 for this, if possible please mind #22 too for some custom data. Thanks!
+1, looking forward to releasing the scripts of training and fine-tuning!
I caught a glimpse of it and noticed it's utilizing a torch.nn model. I've put together this notebook for training/finetuning. Could someone verify if it's set up correctly? The losses seem unusual, but I suspect it's due to the dataset being quite small and my use of:
sequence_length = 10
prediction_length = 5
notebook: here
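Not the notebook's actual code, but a minimal sketch of how a single series can be windowed into training pairs with `sequence_length = 10` and `prediction_length = 5` (hypothetical helper, plain NumPy). With windows this short and a small dataset, only a handful of pairs come out, which can make losses look noisy:

```python
import numpy as np

def make_windows(series, context_length=10, prediction_length=5):
    """Slide over a 1-D series, yielding (context, target) training pairs."""
    total = context_length + prediction_length
    pairs = []
    for start in range(len(series) - total + 1):
        window = series[start:start + total]
        pairs.append((window[:context_length], window[context_length:]))
    return pairs

series = np.arange(20, dtype=float)  # a toy 20-step series
pairs = make_windows(series)
print(len(pairs))  # → 6 context/target pairs
```

A 20-point series yields only 6 training pairs at these lengths, so unusual losses on a small dataset are not surprising.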
Training and fine-tuning script was added in #63, together with the configurations that were used for pretraining the models on HuggingFace. We still need to add proper documentation, but roughly speaking:

- Install with training extras: `pip install ".[training]"` (or `pip install "chronos[training] @ git+https://github.com/amazon-science/chronos-forecasting.git"`)
- `python scripts/training/train.py --help` lists all available options
- Fine-tune a pretrained model by setting `random_init: false` in your config, and adjusting learning rate and number of steps for fine-tuning

Happy training! cc @colemanhindes @Saeufer @HALF111 @TPF2017 @0xrushi @iganggang
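Until the documentation lands, a hypothetical fine-tuning config fragment may help as a starting point. Only `random_init` is confirmed above; the other key names and all values are assumptions to be checked against the pretraining configs under `scripts/training/`:

```yaml
# Hypothetical fine-tuning overrides; start from a pretraining config and adapt
random_init: false        # initialize from the pretrained checkpoint, not random weights
learning_rate: 0.001      # typically lowered relative to pretraining (assumed key name)
max_steps: 1000           # far fewer steps than pretraining (assumed key name)
```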
More detailed examples at: https://github.com/amazon-science/chronos-forecasting/tree/main/scripts
I get this error when training chronos-t5-small: ValueError: --tf32 requires Ampere or a newer GPU arch, cuda>=11 and torch>=1.7
@Alonelymess that means your GPU does not support the TF32 floating point format. Please run training/fine-tuning with the `--no-tf32` flag, or set `tf32` to `false` in your yaml config.
I only have a single time series, and I want to do forecasting on it. Does it make sense to do fine-tuning in this case?
I was thinking maybe I could split the data chronologically (use data from 2022 to 2023 for training and data from 2023 to 2024 for testing), but I'm not sure if that makes sense.
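A chronological split like that is the standard way to evaluate on a single series; a minimal pandas sketch with illustrative dates (the series and its values are made up):

```python
import pandas as pd

# Hypothetical single daily series covering 2022-2023
idx = pd.date_range("2022-01-01", "2023-12-31", freq="D")
series = pd.Series(range(len(idx)), index=idx)

# Fine-tune on the earlier period, hold out the later one for testing
train = series[:"2022-12-31"]
test = series["2023-01-01":]

print(len(train), len(test))  # → 365 365
```

The key point is that nothing after the split date leaks into fine-tuning, so the held-out period gives an honest estimate of forecast quality.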
@teshnizi answered you in #98
Really cool project! Enjoyed the paper and have had fun testing it out. Will instructions on fine-tuning be released?
Thanks for your time