Closed — davharris closed this issue 8 years ago
Note that `all_error_grads` says

`# Currently assuming scalar values for adjustable [error distribution] parameters!`

and always sums up all of the gradients. The next step is to add the correct logic for when to `sum`, when to `rowSums`, and when to `colSums`.
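A minimal sketch of that dispatch logic, assuming a hypothetical helper `reduce_error_grad` (not part of the current code base) that collapses a matrix of per-element gradients to match the shape of the adjustable parameter:

```r
# Hypothetical sketch: reduce a matrix of per-element gradients to the
# shape of the adjustable error-distribution parameter.
reduce_error_grad = function(grad_matrix,
                             param_shape = c("scalar", "row", "column")) {
  param_shape = match.arg(param_shape)
  switch(
    param_shape,
    scalar = sum(grad_matrix),      # current behavior: one shared parameter
    row    = rowSums(grad_matrix),  # one parameter per row
    column = colSums(grad_matrix)   # one parameter per column
  )
}

grads = matrix(1:6, nrow = 2)
reduce_error_grad(grads, "scalar")  # 21
reduce_error_grad(grads, "column")  # c(3, 7, 11)
```

The parameter's shape would presumably be stored alongside it, so each prior could declare whether it acts per-element, per-row, or per-column.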
I'm not sure this is actually a real problem. The cases where we'd want the prior to act by row may not have any adjustable parameters that we'd want to target with the optimizer.
Except maybe kernel parameters in a GP prior? But that's such a special case that it might be better to handle it inside the prior itself than to make the rest of the code base more complicated.
Current coverage is 96.63%.

Uncovered suggestions:

- +0.84% via R/predict.R#17...20
- +0.42% via R/fit.R#59...60
- +0.21% via R/mistnet.R#98...98