dlill opened 6 years ago
Is there really a way around it? The problem is how to define `obj_a(p_a) + obj_b(p_b)`. Usually, the objective functions return gradient/Hessian for common and for distinct parameters. This means we have

`grad_a(p_pop, p_indiv_a) + grad_b(p_pop, p_indiv_b) = c(grad_a(p_pop) + grad_b(p_pop), grad_a(p_indiv_a), grad_b(p_indiv_b))`

i.e. the derivatives of some parameters are added, while others are sorted into different sections of the gradient. I don't know whether coercion via `c()` (gradient) or `cbind()` (Hessian) is faster than creating the full gradient/Hessian up front, initialized with 0.
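To make the two assembly strategies concrete, here is a minimal language-agnostic sketch (in Python, since the package itself is R; the sizes and values are made up) of building the summed gradient either by concatenation, as in the `c()` expression above, or by preallocating a zero gradient and scatter-adding each contribution:

```python
# Hypothetical sizes: 2 shared population parameters, 3 individual
# parameters for obj_a, 2 for obj_b. Values are purely illustrative.
grad_a = [1.0, 2.0, 3.0, 4.0, 5.0]   # d obj_a / d (p_pop, p_indiv_a)
grad_b = [10.0, 20.0, 30.0, 40.0]    # d obj_b / d (p_pop, p_indiv_b)
n_pop, n_a, n_b = 2, 3, 2

# Variant 1: concatenation in the style of the c() expression above:
# the shared block is summed, the individual blocks are appended.
g_concat = ([ga + gb for ga, gb in zip(grad_a[:n_pop], grad_b[:n_pop])]
            + grad_a[n_pop:] + grad_b[n_pop:])

# Variant 2: preallocate the full gradient with zeros and add each
# contribution into its parameter slots (scatter-add).
g_full = [0.0] * (n_pop + n_a + n_b)
for i, ga in enumerate(grad_a):      # obj_a owns slots 0 .. n_pop+n_a-1
    g_full[i] += ga
slots_b = list(range(n_pop)) + list(range(n_pop + n_a, n_pop + n_a + n_b))
for i, gb in zip(slots_b, grad_b):
    g_full[i] += gb

print(g_concat == g_full)   # True: both layouts agree
```

The same two options exist for the Hessian, with `cbind()`-style block assembly versus a preallocated zero matrix; which is faster presumably depends on the number of conditions and parameters.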
If there are many individual parameters, it is unnecessary to construct zero derivatives for these parameters in all the conditions where the individual parameter is not needed. One solution is to construct individual `fn`s and add their results rather than the `fn`s themselves.
I would prefer a solution in which the `+.fn` operator checks whether an individual parameter is required in a given condition and uses only the required parameters.
Example: `obj = obj_a(p_a) + obj_b(p_b)` currently does `obj(c(p_a, p_b)) = obj_a(c(p_a, p_b)) + obj_b(c(p_a, p_b))`. It would be better to have `obj(c(p_a, p_b)) = obj_a(p_a) + obj_b(p_b)`.
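One way the proposed `+` semantics could work: each summand declares the parameter names it needs, and the sum extracts exactly those before calling it, then scatters the gradients back by name. A minimal Python sketch (the `pars` attribute and `make_sum` are hypothetical, not the package's API):

```python
# Hypothetical sketch: each objective carries a `pars` list of the
# parameter names it actually uses; the sum never passes the rest.
def make_sum(*objs):
    def obj(p):                      # p: dict name -> value, all parameters
        value, grad = 0.0, {name: 0.0 for name in p}
        for f in objs:
            sub = {n: p[n] for n in f.pars}   # only required parameters
            v, g = f(sub)
            value += v
            for n, gn in g.items():           # scatter gradient by name
                grad[n] += gn
        return value, grad
    obj.pars = sorted(set().union(*(f.pars for f in objs)))
    return obj

def obj_a(p):   # toy objective: returns value and gradient dict
    return p["pop"] ** 2 + p["a"], {"pop": 2 * p["pop"], "a": 1.0}
obj_a.pars = ["pop", "a"]

def obj_b(p):
    return 3 * p["pop"] + p["b"] ** 2, {"pop": 3.0, "b": 2 * p["b"]}
obj_b.pars = ["pop", "b"]

obj = make_sum(obj_a, obj_b)
v, g = obj({"pop": 1.0, "a": 2.0, "b": 3.0})
print(v, g)   # 15.0 {'pop': 5.0, 'a': 1.0, 'b': 6.0}
```

With this layout, `obj_a` never sees `b` and never has to produce a zero derivative for it; the shared `pop` derivative is accumulated across summands.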
The problem is function concatenation via `*.fn`, which requires the zero derivatives to be present, since a parameter could show up in a later function, such as `g`. Any ideas here?
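To illustrate why composition forces the zero derivatives to be kept: for `obj(g(p))` the chain rule contracts the outer gradient with the full Jacobian of `g`, so every parameter needs a column, zeros included, or the index bookkeeping breaks. A hedged Python sketch (the names `compose`, `g`, `J` are illustrative, not the package's `*.fn`):

```python
# Chain rule for obj(g(p)): grad_p[i] = sum_j grad_x[j] * J[j][i].
# J must carry a column for EVERY outer parameter, even where a given
# inner output has zero derivative, or the slots misalign.
def compose(obj, g):
    def composed(p):
        x, J = g(p)                 # J[j][i] = d x_j / d p_i
        v, grad_x = obj(x)
        grad_p = [sum(grad_x[j] * J[j][i] for j in range(len(x)))
                  for i in range(len(p))]
        return v, grad_p
    return composed

def g(p):                           # x = (p0^2, p0 + p1)
    return [p[0] ** 2, p[0] + p[1]], [[2 * p[0], 0.0],   # zero kept!
                                      [1.0, 1.0]]

def obj(x):                         # v = x0 * x1
    return x[0] * x[1], [x[1], x[0]]

v, grad = compose(obj, g)([1.0, 2.0])
print(v, grad)   # 3.0 [7.0, 1.0]
```

Dropping the explicit `0.0` in `J` (because `x_0` does not depend on `p_1`) would shift the remaining entries and corrupt the gradient, which is exactly the tension with the "only required parameters" proposal above.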