Closed by rotu 2 years ago
IMO this is a really bad idea. Not only do I think that implicit conversions in general are a bad idea (the conversion may happen at any point and you have to remember that it may happen), but this also introduces a hidden allocation.
Very fair. I think implicit conversions are a bad idea in general, too. But as a general rule, Nim allows implicit widening conversions. Any objections to implicit widening for binary arithmetical operations?
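To make the proposal (and the hidden-allocation concern above) concrete, here is a minimal sketch of what such implicit widening could look like if done with a `converter`. This is illustration only, not library code: `toBigInt` and `double` are hypothetical names, and an `initBigInt` overload taking `int64` is assumed to exist in the package.

```nim
import bigints

# Sketch only: a `converter` is one way Nim could widen `int` to `BigInt`
# implicitly. Every call site that triggers it allocates a temporary BigInt.
converter toBigInt(x: int): BigInt =
  initBigInt(x.int64)   # assumes the package's int64 `initBigInt` overload

proc double(a: BigInt): BigInt = a + a

let n = 21
echo double(n)   # `n` is widened silently; a fresh BigInt is allocated here
```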
> Any objections to implicit widening for binary arithmetical operations?
If you mean operations that take `BigInt` and `int`, no, not from me (although there isn't really any implicit conversion then), but they should use an optimized implementation (i.e. not just convert the `int` to a `BigInt`), e.g. `additionInt`, `subtractionInt`.
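A hedged sketch of the distinction being drawn here; `addViaConversion` is a hypothetical name, and the `additionInt` signature shown in the comment is an assumption, not the library's actual API.

```nim
import bigints

# The "just convert" shape: wraps the int operand in a temporary BigInt,
# paying an allocation on every call.
proc addViaConversion(a: BigInt, b: int64): BigInt =
  a + initBigInt(b)

# The optimized shape suggested above would skip that temporary and fold the
# machine word directly into the result's limbs, roughly:
#
#   func `+`(a: BigInt, b: int): BigInt =
#     additionInt(result, a, b)   # internal helper; signature assumed
```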
@konsumlamm What does the overhead actually look like for a `BigInt` when a normal `int` would suffice? Is the backend compiler really not smart enough to optimize out a `BigInt` operand?
Converting the `int` to a `BigInt` would allocate a new `seq`, which can be avoided. I highly doubt this is optimized away.
> Converting the `int` to a `BigInt` would allocate a new `seq`, which can be avoided. I highly doubt this is optimized away.
I'm no master at reverse engineering, but I think I confirmed that the allocation is not optimized away on my machine. Still no idea of the performance impact though.
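For a first idea of the performance impact, a rough micro-benchmark along these lines could work (a sketch only: `bench` is a hypothetical name, the timing approach is nothing rigorous, and the int64 `initBigInt` overload is assumed).

```nim
import std/[monotimes, times]
import bigints

# Rough sketch: time repeated additions where the int operand is converted on
# every iteration, versus converting it once up front. The difference gives a
# crude upper bound on the per-operation conversion/allocation overhead.
proc bench() =
  const rounds = 100_000
  var acc = initBigInt(0'i64)

  let t0 = getMonoTime()
  for _ in 1 .. rounds:
    acc = acc + initBigInt(1'i64)   # temporary BigInt allocated every time
  echo "converted per operation: ", getMonoTime() - t0

  acc = initBigInt(0'i64)
  let one = initBigInt(1'i64)       # converted (and allocated) once
  let t1 = getMonoTime()
  for _ in 1 .. rounds:
    acc = acc + one
  echo "converted once:          ", getMonoTime() - t1

bench()
```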