ghost opened this issue 7 years ago
```
var a: Numeric
a = 1.0
print a + 1
```
This adds a float and an integer. Automatic conversions, as found in C and Java, often hide complexities: do we really want to convert the integer into a float and compute a float result, or would we prefer to convert the float into an integer and compute an integer result? I think it is best to force the programmer to make conversions explicit, since such conversions can change the affected values (due to the imprecision of a float).
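A quick way to see why the direction of the conversion matters (illustrated in Python, since the point is language-independent): converting a large integer to a float can silently change its value, and converting a float to an integer silently drops the fractional part, so making the programmer write the conversion makes the loss visible.

```python
big = 2**53 + 1                       # exactly representable as an integer, not as an IEEE 754 double
print(float(big) == float(2**53))     # True: the int-to-float conversion silently dropped the +1
print(int(2.7) + 1)                   # 3: converting the float to an int truncates the fraction
print(2.7 + float(1))                 # 3.7: converting the int to a float keeps the fraction
```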
I'm personally in favor of providing, by default, unbounded rational arithmetic. In other words, `1` and `1.0` would be the exact same value, and `0.1` would be internally represented as (numerator 1, denominator 10). Floats/int32/etc. would need an explicit suffix; for example, `32.5f` (IEEE 754) would be different from `32.5` (numerator 65, denominator 2). This would be intuitive for novice programmers and would eliminate any loss of precision. `(1.0/3.0)*3.0` would be exactly equal to `1.0` and `1`.
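As a rough sketch of those semantics, Python's `fractions.Fraction` can stand in for the proposed default numeric type (this is only an illustration of the idea, not of any existing Nit feature): exact rationals behave as described above, while IEEE 754 floats do not.

```python
from fractions import Fraction

one_tenth = Fraction(1, 10)               # "0.1" stored exactly as numerator/denominator
print(one_tenth * 3 == Fraction(3, 10))   # True: no rounding anywhere
print(0.1 + 0.1 + 0.1 == 0.3)             # False with IEEE 754 doubles

third = Fraction(1) / Fraction(3)
print(third * 3 == 1)                     # True: (1.0/3.0)*3.0 would be exactly 1
print(Fraction("32.5"))                   # 65/2, the exact value of the literal 32.5
```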
Without some automatic conversion the "Numeric" type is rather dangerous. Consider

```
var c: Numeric = 233
var b: Numeric = 1.0
print c + b
```
I agree an unbounded rational would be a very nice default. What do you think about automatic conversion in cases like taking `sqrt()` of such a rational?
Also curious about things like: is `1u32` of type `Int`? How does `1u32 + 1i16` work?
`sqrt()` does not stay within the rationals: the square root of a rational is in general irrational. One would have to first convert a rational to a float, double, or some other approximate fixed-width representation to apply functions such as `sqrt()`, `sin()`, `cos()`, etc.
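A sketch of that workflow (again with Python's `Fraction` standing in for the exact rational type): the exact value has to be converted to an approximate float before an irrational function can be applied, and that explicit conversion is exactly where precision is lost.

```python
from fractions import Fraction
import math

x = Fraction(1, 3)
approx = math.sqrt(float(x))        # explicit, lossy step: exact rational -> IEEE 754 double
print(approx)                       # about 0.57735026918962...
print(Fraction(approx) ** 2 == x)   # False: squaring the float result does not give back exactly 1/3
```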
As for fixed-width integers (int16, int32, ...), they would need explicit conversions (sign-extend, zero-extend, or truncate) before being added together (using modulo arithmetic, of course).
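A sketch of what those explicit conversions could look like, modelled in Python with two helper functions (the names `truncate_u` and `sign_extend`, and the u32/i16 widths, are purely illustrative and not part of any existing Nit API):

```python
def truncate_u(x: int, bits: int) -> int:
    """Keep the low `bits` bits, i.e. reduce modulo 2**bits (unsigned view)."""
    return x & ((1 << bits) - 1)

def sign_extend(x: int, bits: int) -> int:
    """Interpret the low `bits` bits of x as a signed two's-complement value."""
    x = truncate_u(x, bits)
    return x - (1 << bits) if x >= (1 << (bits - 1)) else x

# 1u32 + 1i16: the programmer decides how to reconcile the widths before adding.
a_u32 = 1
b_i16 = -1                                              # 0xFFFF viewed as a signed 16-bit value

# Option 1: sign-extend the i16 to 32 bits, then add modulo 2**32.
print(truncate_u(a_u32 + sign_extend(b_i16, 16), 32))   # 0

# Option 2: truncate the u32 to 16 bits, then add as signed 16-bit values.
print(sign_extend(truncate_u(a_u32, 16) + b_i16, 16))   # 0
```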
Some difficult cases in the language need explanation; http://nitlanguage.org/manual/basic_type.html leaves much open.

```
print 1000*1000*1000
```

results in -73741824 on a 32-bit system, which will be surprising for many programmers (see the sketch after this comment). I see 8-, 16- and 32-bit integers, but nothing like bignums or 64-bit integers?

works as expected, whereas
gives an error. Is that intended? Is there some clever trick to achieve behaviour similar to that of most other languages?
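For what it is worth, the reported -73741824 is exactly what a signed 30-bit wraparound of 1000*1000*1000 produces, which would be consistent with tagged integers on a 32-bit build; that interpretation is an assumption on my part, not something stated in the manual. A small Python sketch to reproduce the number:

```python
def wrap_signed(x: int, bits: int) -> int:
    """Reduce x modulo 2**bits and reinterpret it as a signed two's-complement value."""
    m = x % (1 << bits)
    return m - (1 << bits) if m >= (1 << (bits - 1)) else m

print(wrap_signed(1000 * 1000 * 1000, 30))   # -73741824, the reported output
print(wrap_signed(1000 * 1000 * 1000, 32))   # 1000000000: a full 32-bit signed int would not overflow here
```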
`1.0/0.0` and `1/0` are handled differently. This is so in most other languages, but does it make much sense? In a typed language, a numeric expression returning a "not-a-number" value is a paradox, and maybe https://github.com/nitlang/nit/issues/2314 could go.
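For context, the asymmetry comes from IEEE 754: floating-point division by zero has defined default results (infinities, and NaN for 0.0/0.0), while integer division by zero does not, so most languages trap only the integer case. A small Python illustration of why NaN sits awkwardly inside a numeric type (Python itself raises on both `1/0` and `1.0/0.0`, so the special values are constructed directly here):

```python
import math

inf = math.inf        # the IEEE 754 default result for 1.0/0.0
nan = math.nan        # the IEEE 754 default result for 0.0/0.0

print(inf > 1e308)    # True: infinity is an ordinary, ordered float value
print(nan == nan)     # False: "not a number" breaks even reflexivity of equality
```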