According to the function documentation, the `delta` argument provides, "indirectly, the difference between consecutive iterations to compare with the error tolerance". In contrast, if the user calls the function with `print.iter = TRUE`, they will see a "delta" number. Internally, that number actually comes from a variable called `delta.L2`, which is calculated as follows (see lines 95 and 101):
https://github.com/ocbe-uio/TruncExpFam/blob/af75716e40ae486b5dc34c0179edf2f27d7c72d6/R/mlEstimationTruncDist.R#L95-L101
I find this potentially confusing, since a user could set their own `delta` and wonder why the "delta" values in the output are completely different. Therefore, I wonder if we should change the name of the argument (or, alternatively, the name of the output) to something more meaningful or clear.
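For example (a minimal sketch of the confusion; the `rtrunc()` call and its parameters are only assumed here to produce sample data, and the printed output format may differ):

```r
library(TruncExpFam)

# Generate a truncated Poisson sample (assumed rtrunc() interface)
sample <- rtrunc(n = 100, lambda = 2, family = "Poisson")

# The user passes delta = 0.5, but the "delta" printed on each
# iteration is the internal delta.L2, so the printed numbers never
# match the value supplied above.
ml <- mlEstimationTruncDist(sample, delta = 0.5, print.iter = TRUE)
```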
One suggestion would be to rename the argument to `delta.step.size`, which is much more descriptive (though I don't really like how long that name is).