Axom currently handles normalizing the zero vector (and tiny vectors) by returning `{1, 0, 0}`. Is this really the behavior we want?
I'll add a few thoughts to start:
- Should we check `norm() >= PRIMAL_TINY` instead of `squared_norm() >= PRIMAL_TINY`? The squared norm underflows for much larger inputs, so checking it misclassifies more vectors as degenerate.
- If the first check fails and we have a "tiny" vector, should we find the largest-magnitude component and set that entry of the returned unit vector to 1 or -1 (depending on the sign of the component), with all other entries 0? (See the first sketch after this list.)
- Should we return the zero vector in the case that we really are trying to normalize the zero vector (i.e., the largest component is zero)?
- Are we OK with doing a little extra work for extra accuracy? These degenerate cases are probably infrequent, so I would suggest doing the extra work only when the checks indicate a degenerate case.
- Alternatively, we could add a small number (`PRIMAL_TINY`) to the denominator just to avoid dividing by zero (see the second sketch below).
- Another option is to do nothing special and let the user handle normalization of small or tiny vectors. This is less appealing since other parts of primal (e.g., `Plane`) use this normalization, and the user may not be able to control how it is handled.
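To make the branching option concrete, here is a minimal sketch of what the degenerate-case handling could look like. The `Vector3` type, its layout, and the `PRIMAL_TINY` value are illustrative placeholders standing in for Axom's actual types and constant, not the real API:

```cpp
#include <cmath>
#include <cstddef>

// Illustrative threshold; Axom's actual PRIMAL_TINY may differ.
constexpr double PRIMAL_TINY = 1e-50;

// Placeholder standing in for primal's 3D vector type.
struct Vector3
{
  double c[3];
  double norm() const
  {
    return std::sqrt(c[0] * c[0] + c[1] * c[1] + c[2] * c[2]);
  }
};

Vector3 normalized(const Vector3& v)
{
  // Check norm() rather than squared_norm(): the squared norm underflows
  // for much larger inputs, misclassifying them as degenerate.
  const double n = v.norm();
  if(n >= PRIMAL_TINY)
  {
    return {v.c[0] / n, v.c[1] / n, v.c[2] / n};
  }

  // Degenerate case: pick out the largest-magnitude component.
  std::size_t imax = 0;
  for(std::size_t i = 1; i < 3; ++i)
  {
    if(std::abs(v.c[i]) > std::abs(v.c[imax]))
    {
      imax = i;
    }
  }

  // Tiny but nonzero: unit vector along the dominant axis, signed to match.
  // Truly zero: the zero vector is returned unchanged.
  Vector3 result{0.0, 0.0, 0.0};
  if(v.c[imax] != 0.0)
  {
    result.c[imax] = (v.c[imax] > 0.0) ? 1.0 : -1.0;
  }
  return result;
}
```

Note that the extra work (the component scan) only runs when the cheap `norm()` check has already flagged a degenerate input.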
@agcapps suggested that adding `PRIMAL_TINY` to the denominator would follow the principle of least surprise and would match the expectations of our users.
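For comparison, here is a sketch of that padded-denominator approach, using the same placeholder `Vector3` and `PRIMAL_TINY` as above. It is branch-free and maps the zero vector to the zero vector:

```cpp
Vector3 normalized_padded(const Vector3& v)
{
  // The denominator is never zero, so there is no divide-by-zero and no
  // branch. The zero vector maps to {0, 0, 0}; for any reasonably sized
  // input the perturbation from PRIMAL_TINY is far below double precision.
  const double n = v.norm() + PRIMAL_TINY;
  return {v.c[0] / n, v.c[1] / n, v.c[2] / n};
}
```

The trade-off is that tiny-but-nonzero vectors come back with norm well below 1 rather than as unit vectors, which callers like `Plane` would need to tolerate.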