davharris / mistnet2

Neural Networks with Latent Random Variables in R

Add parameter expansion (fix #28) #57

Closed davharris closed 8 years ago

codecov-io commented 8 years ago

Current coverage is 96.63%

Merging #57 into master will increase coverage by +0.11% as of b43e551

@@            master     #57   diff @@
======================================
  Files           19      20     +1
  Stmts          461     475    +14
  Branches         0       0       
  Methods          0       0       
======================================
+ Hit            445     459    +14
  Partial          0       0       
  Missed          16      16       

Review entire Coverage Diff as of b43e551


Uncovered Suggestions

  1. +0.84% via R/predict.R#17...20
  2. +0.42% via R/fit.R#59...60
  3. +0.21% via R/mistnet.R#98...98
  4. See 7 more...


davharris commented 8 years ago

Note that all_error_grads says

# Currently assuming scalar values for adjustable [error distribution] parameters!

and always sums all the gradients. The next step is to add the correct logic for deciding when to `sum`, when to `rowSums`, and when to `colSums`.
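The distinction above can be sketched as follows. This is a hypothetical helper (not mistnet2's actual API; the function name and `shape` argument are made up for illustration): given an N-by-D matrix of per-element gradients, the right aggregation depends on whether the adjustable parameter is shared, per-row, or per-column.

```r
# Hypothetical sketch: collapse an N-by-D matrix of per-element gradients
# to match the shape of the adjustable parameter.
aggregate_grad <- function(grad_matrix, shape = c("scalar", "row", "column")) {
  shape <- match.arg(shape)
  switch(shape,
    scalar = sum(grad_matrix),      # one parameter shared by all elements
    row    = rowSums(grad_matrix),  # one parameter per observation (row)
    column = colSums(grad_matrix)   # one parameter per output (column)
  )
}

g <- matrix(1:6, nrow = 2)       # 2 observations, 3 outputs
aggregate_grad(g, "scalar")      # 21
aggregate_grad(g, "row")         # 9 12
aggregate_grad(g, "column")      # 3 7 11
```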

davharris commented 8 years ago

Not sure this is actually a real problem. The cases where we'd want the prior to act by row may not actually have adjustable parameters that we want to target with the optimizer.

Except maybe kernel parameters in a GP prior? But that's such a special case that it might be better to handle it inside the prior rather than making the rest of the code base more complicated...
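To illustrate the "handle it inside the prior" option: a minimal sketch, assuming a squared-exponential kernel and a finite-difference gradient (none of this is mistnet2 code; the names `gp_prior`, `log_density`, and `grad_lengthscale` are invented for the example). The point is that the prior object computes its own kernel-parameter gradient, so the shared gradient-aggregation code never needs a special case for it.

```r
# Hypothetical GP prior that keeps its kernel-parameter gradient internal.
gp_prior <- function(lengthscale) {
  log_dens <- function(z, x, l) {
    # Squared-exponential kernel with a small jitter for stability
    K <- exp(-as.matrix(dist(x))^2 / (2 * l^2)) + diag(1e-6, nrow(x))
    as.numeric(-0.5 * t(z) %*% solve(K, z) -
                 0.5 * determinant(K)$modulus)
  }
  list(
    log_density = function(z, x) log_dens(z, x, lengthscale),
    # Gradient w.r.t. the lengthscale via central finite differences,
    # computed entirely inside the prior
    grad_lengthscale = function(z, x, eps = 1e-5) {
      (log_dens(z, x, lengthscale + eps) -
         log_dens(z, x, lengthscale - eps)) / (2 * eps)
    }
  )
}

p <- gp_prior(lengthscale = 1)
x <- matrix(seq(0, 1, length.out = 4))
z <- c(0.1, -0.2, 0.3, 0)
p$log_density(z, x)
p$grad_lengthscale(z, x)
```

Whether this beats threading kernel gradients through the optimizer is exactly the trade-off discussed above: the special case stays local, at the cost of each such prior doing its own differentiation.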