Open amol447 opened 4 years ago
Is your variational optimization converging? (You can see this from the loss curve returned by fit_surrogate_posterior.) If not, you may want to increase the number of steps.
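One quick way to judge this from the loss curve alone is to compare the mean loss over the last window of steps to the window before it. This is a heuristic sketch, not part of TFP; `has_stalled`, the window size, and the tolerance are all hypothetical choices, and `losses` stands in for the array returned by fit_surrogate_posterior:

```python
import numpy as np

def has_stalled(losses, window=50, tol=1e-2):
    """Heuristic: the fit has stalled if the mean loss over the last
    `window` steps is not meaningfully below the previous window's mean."""
    losses = np.asarray(losses)
    if len(losses) < 2 * window:
        return False
    prev = losses[-2 * window:-window].mean()
    last = losses[-window:].mean()
    return (prev - last) < tol * abs(prev)

# A curve that is still descending vs. one oscillating around a plateau.
descending = np.linspace(1000.0, 100.0, 400)
plateau = 766.0 + np.random.default_rng(2).normal(scale=4.0, size=400)
```

On the plateau curve (which looks like the loss values posted below) this flags a stall, while the descending curve suggests more steps would still help.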
Generally I would expect a normalized design matrix to lead to a better-conditioned optimization problem that would be faster to converge, all else being equal.
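The conditioning effect is easy to see directly. This is a minimal NumPy sketch with a hypothetical design matrix whose columns sit on very different scales; the condition number of the Gram matrix drops dramatically after standardizing the columns:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical design matrix: three columns on wildly different scales.
X = rng.normal(size=(200, 3)) * np.array([1.0, 1e3, 1e-3])
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

cond_raw = np.linalg.cond(X.T @ X)    # enormous: columns differ by ~6 orders of magnitude
cond_norm = np.linalg.cond(X_norm.T @ X_norm)  # close to 1 for near-independent columns
```

A badly conditioned Gram matrix means gradient-based optimization must trade off step sizes across directions with very different curvature, which is consistent with the stalled loss curve reported here.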
Then I run into the issue I have over here (see the last comment): if I normalize, the weights_prior and weights_constraining_bijector I would need to specify would be different for each batch, and there seems to be no easy way to create that type of bijector.
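To make the per-batch coupling concrete: dividing each column by its standard deviation rescales the corresponding weight, so a fixed prior scale in the normalized parameterization maps back to a per-column (and hence per-batch) scale on the original weights. A minimal sketch of the identity involved, with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical batch: columns on very different scales.
X = rng.normal(size=(100, 3)) * np.array([1.0, 50.0, 0.01])
w = np.array([2.0, -0.5, 3.0])
y = X @ w

col_std = X.std(axis=0)
X_norm = X / col_std
w_norm = w * col_std  # weights in the normalized parameterization

# Same predictions either way: X @ w == X_norm @ (w * col_std).
assert np.allclose(X_norm @ w_norm, y)
```

Since col_std differs per batch, a single shared weights_prior in the original parameterization would indeed need a batch-dependent rescaling once the design matrix is normalized.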
I also checked the loss curve, and the optimization seems to have stalled in the unnormalized case, so I don't think increasing num_steps will help. Here are the last 50 loss_curve values for the unnormalized case:
<tf.Tensor: shape=(50,), dtype=float64, numpy= array([763.93081719, 766.10331749, 765.11964954, 764.46272873, 765.89419736, 770.15077838, 768.31471942, 768.91955323, 770.62193118, 765.33542602, 765.71370294, 769.89131876, 767.46364822, 768.86250098, 769.78790337, 767.90190939, 764.57111531, 766.99044293, 765.28328907, 761.63563885, 778.600896 , 775.45398601, 775.37556489, 764.92138231, 764.49788996, 762.41033604, 764.07139634, 769.58919769, 769.97874594, 766.57655588, 769.63247877, 768.3141973 , 765.39523154, 765.3549978 , 764.26272891, 765.89452207, 767.82033705, 764.80364979, 768.06425015, 770.65213456, 767.87662662, 771.56713973, 768.50769508, 764.37388141, 763.52981121, 760.54508523, 778.12263336, 786.72497995, 784.78095079, 763.13842348])>
I just tried fitting with 500 steps and there was no improvement in the fit; error_rms is 3.1.
Hi, I am trying to fit a time series model with an exogenous variable. The linear regression part doesn't seem to be able to find the weights correctly. Here is a simple example to reproduce the problem:
If I normalize the design_matrix, the numbers look much better.
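For anyone following along, here is a minimal NumPy sketch of that normalization and of mapping the fitted weights back to the original scale. Ordinary least squares is used here only as a stand-in for the STS linear-regression component, and the data, scales, and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical design matrix with columns on different scales.
design_matrix = rng.normal(size=(120, 2)) * np.array([100.0, 0.5])
true_weights = np.array([0.03, 4.0])
y = design_matrix @ true_weights + rng.normal(scale=0.1, size=120)

# Standardize each column, keeping the statistics to undo the transform later.
mean, std = design_matrix.mean(axis=0), design_matrix.std(axis=0)
dm_norm = (design_matrix - mean) / std

# Fit in normalized space (intercept absorbs the subtracted means).
dm1 = np.column_stack([np.ones(len(y)), dm_norm])
coef, *_ = np.linalg.lstsq(dm1, y, rcond=None)
weights_orig = coef[1:] / std  # map fitted weights back to the original scale
```

The division by std at the end recovers weights on the original scale, which is the step that would otherwise require the batch-dependent prior/bijector discussed above.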