Closed by wupeifan 2 years ago
So I think we should basically take https://github.com/HighDimensionalEconLab/DifferentiableStateSpaceModels.jl/blob/main/src/generate_perturbation_derivatives.jl#L306-L309 and https://github.com/HighDimensionalEconLab/DifferentiableStateSpaceModels.jl/blob/main/src/generate_perturbation_derivatives.jl#L387-L391 and turn them into something like
```julia
sol = generate_perturbation(m, p_d, p_f, Val(2); cache, settings)
if sol.retcode == :Success
    grad_ret = generate_perturbation_derivatives!(m, p_d, p_f, cache)
    # if the solve succeeded but the gradient computation did not,
    # flag the whole solution as a failure
    if grad_ret != :Success
        sol = FirstOrderPerturbationSolution(:Failure, m, cache)
    end
end
```
or something like that. If this causes type-stability issues, then you could instead do it at the bottom, so that https://github.com/HighDimensionalEconLab/DifferentiableStateSpaceModels.jl/blob/main/src/generate_perturbation_derivatives.jl#L378 becomes
```julia
return (grad_ret == :Success ? sol : FirstOrderPerturbationSolution(:Failure, m, cache)), generate_perturbation_pb
```
or something like that. And if this isn't type stable, I think there is something else we could try later.
Right. I totally forgot that `sol` is just fetching elements from `cache`.
@jlperla I'll change the `:Failure` to `:GradientFailure`.
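For reference, applying that rename to the earlier snippet would look roughly like this (same hypothetical placement as above; `:GradientFailure` is the proposed new retcode):

```julia
grad_ret = generate_perturbation_derivatives!(m, p_d, p_f, cache)
if grad_ret != :Success
    # distinguish a gradient failure from a solve failure downstream
    sol = FirstOrderPerturbationSolution(:GradientFailure, m, cache)
end
```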
Currently, the `solution` object returns a flag indicating whether the solution computations were successful. If not, for B-K and other linear-algebra-related errors, the exact error message is returned; otherwise just a general failure flag. This flag is used in the downstream sampling: the log-posterior is automatically set to `-Inf` when the solution doesn't return a success flag. But there are cases where the solution is fine and the gradients are not. With the current patch in the code, it skips the gradient computation step. I think what should be done is to add the gradient failure flag to the solutions and check it downstream in sampling.
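A minimal sketch of what that downstream check might look like. The `log_posterior`, `loglikelihood`, and `logprior` names here are placeholders, not the package's or sampler's actual API; only the single `sol.retcode` check mirrors the behavior described above:

```julia
# A sketch only: any retcode other than :Success (a solve failure, a B-K /
# linear-algebra error symbol, or the proposed :GradientFailure) rejects the
# draw by returning a log-posterior of -Inf.
function log_posterior(sol, loglikelihood, logprior)
    sol.retcode == :Success || return -Inf
    return loglikelihood(sol) + logprior(sol)
end

# For example, a failed gradient computation would give -Inf:
log_posterior((; retcode = :GradientFailure), sol -> 0.0, sol -> 0.0)  # -> -Inf
```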