gustavz opened this issue 5 years ago
Those are great questions. The STS fitting API hides its trainable variables behind the parameters of the variational distributions, where they're not easy to access directly (this also makes it hard to do fitting idiomatically in Eager mode). As a hack you can always get at them with tf.trainable_variables(), but that's obviously not ideal, and it'd still be up to you to reconstruct the variational distributions and training bound (the code to do this in tfp.sts.fitting.py is actually pretty simple and should be easy to adapt, but there's no off-the-shelf solution). We're hoping to revise this API in the coming months.
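For concreteness, here's a minimal sketch of that hack in graph mode, assuming `model` is an STS model and `observed_time_series` is your training series (this uses the graph-mode variational API from TFP ~0.7; names other than the library calls are placeholders):

```python
import tensorflow.compat.v1 as tf
import tensorflow_probability as tfp

# Building the variational loss creates the variational parameters as
# ordinary TF variables under the hood.
variational_loss, variational_distributions = (
    tfp.sts.build_factored_variational_loss(
        model=model, observed_time_series=observed_time_series))

# The hack: grab those hidden variables from the global collection...
variational_vars = tf.trainable_variables()

# ...and checkpoint them like any other variables.
saver = tf.train.Saver(var_list=variational_vars)
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  # ... run your optimizer on variational_loss here ...
  saver.save(sess, '/tmp/sts_variational_params')
```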
For now, the easiest approach to saving a model for later prediction is just to save the posterior parameter samples. This is a dict of relatively small Tensors, so the basic NumPy machinery, np.save(filename, q_samples_) and q_samples_ = np.load(filename, allow_pickle=True).item(), should work fine. As a bonus, this same approach works whether you use VI or MCMC to fit the model -- this is the underlying reason why the downstream methods take only samples as their interface.
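Spelled out, with the one wrinkle that np.load returns a 0-d object array when it unpickles a dict, so you need .item() to unwrap it (the filename is arbitrary, and you still have to rebuild the `model` object itself in the new process by re-running your model-building code):

```python
import numpy as np
import tensorflow_probability as tfp

# q_samples_ maps each parameter name to an array of posterior samples.
np.save('sts_posterior_samples.npy', q_samples_, allow_pickle=True)

# np.load wraps the pickled dict in a 0-d object array; .item() unwraps it.
q_samples_ = np.load('sts_posterior_samples.npy', allow_pickle=True).item()

# The restored samples feed the downstream methods directly, e.g.:
forecast_dist = tfp.sts.forecast(
    model, observed_time_series=observed_time_series,
    parameter_samples=q_samples_, num_steps_forecast=24)
```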
Re continuing to train on new data, the short answer is 'there be dragons'. If it's feasible to just retrain a model on your old+new data, I'd recommend that (it'd likely help to initialize the combined optimization at the previous optimum, and you could do this by hacking the code in tfp.sts.fitting.py, but unfortunately there's no nicely exposed approach right now).

If not, one strategy that can work is to use the posterior from fitting a model on data from time 0:K as the prior for a new model for time K:T; all STS components take parameter prior distributions as arguments, so you can do this by just passing in the posterior distribution objects. If exact Bayesian inference were possible in these models, this would 'do the right thing' and you'd get out the true posterior incorporating data from 0:T. With approximate variational posteriors there are no guarantees, but it might do something reasonable.
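A hedged sketch of that posterior-as-prior strategy, assuming a simple local-linear-trend model; the dict `q_dists` and its key names are illustrative, so match them against `[p.name for p in old_model.parameters]` from your own fit:

```python
import tensorflow_probability as tfp

# Fit on the first chunk of data (time 0:K), obtaining variational
# distributions keyed by parameter name, e.g. from
# tfp.sts.build_factored_variational_loss.
old_model = tfp.sts.LocalLinearTrend(observed_time_series=series_0_to_K)
# ... optimize, yielding q_dists = {param_name: distribution, ...} ...

# Reuse the approximate posteriors as priors for a model on the new
# chunk (time K:T); every STS component exposes *_prior arguments.
new_model = tfp.sts.LocalLinearTrend(
    level_scale_prior=q_dists['level_scale'],
    slope_scale_prior=q_dists['slope_scale'],
    observed_time_series=series_K_to_T)
```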
On Tue, May 28, 2019 at 4:44 AM Gustav notifications@github.com wrote:
I followed the example https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Structural_Time_Series_Modeling_Case_Studies_Atmospheric_CO2_and_Electricity_Demand.ipynb to build and fit an STS model for time series prediction, which is easy and works pretty well as long as one sticks to the tutorial.
What I do not understand is how to save a model that has been fitted on some training data, the way it is done with neural networks in TensorFlow or high-level APIs like Keras.
The stretched-out process of model fitting, ELBO loss minimization, and prediction makes it hard to see where the optimized model parameters live.
Finally, I would like to know if and how it is possible to continue training a previously fitted STS model for time-series prediction on new data.
I am also trying to save fitted posteriors. Since a lot of work has been done on TF and TFP, are there any new methods for saving a fitted STS model?
I don't know of recent changes that would be directly relevant to this issue. If others have found better solutions, feel free to comment!
Hello everyone, is there any update or solution to this now, please? I would love to be able to save fitted STS models. Many thanks.
Have you checked this?
thank you @bavincen