I'm guessing there may be a reason rooted deep in the algorithm for using single-precision floats instead of doubles, which are often the default in many applications and libraries today.
Would it be possible to template the algorithm's fundamental storage type, so that it could use single- or double-precision floating-point numbers (or perhaps even some more exotic types, should there be a reason for those)?
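To illustrate the kind of change I have in mind, here's a rough sketch (the `Algorithm` class name and its members are hypothetical, just standing in for the library's actual types):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: the algorithm's internal storage parameterized
// on the scalar type instead of being hard-coded to float.
template <typename Scalar = float>
class Algorithm {
public:
    explicit Algorithm(std::size_t n) : data_(n, Scalar{0}) {}

    // All internal arithmetic would then run in the chosen precision.
    Scalar sum() const {
        Scalar acc{0};
        for (const Scalar& v : data_) acc += v;
        return acc;
    }

private:
    std::vector<Scalar> data_;  // was: std::vector<float>
};

int main() {
    Algorithm<>       a(100);  // defaults to float, preserving current behaviour
    Algorithm<double> b(100);  // opt-in double precision
    return static_cast<int>(a.sum() + b.sum());
}
```

With `float` as the default template argument, existing users wouldn't need to change anything, while others could opt into `double` (or another type) where precision matters.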
Thanks