Dear Mr. Crockford,
I like your idea of combining floats and ints into one number type. One unified number type makes sense for most math operations and saves a lot of the headaches that come with handling two number types. However, when we are doing array indexing, how can we handle floating-point indices? I know that array indices in JavaScript are converted to string property keys, so you can store values at floating-point indices:
let a = []
a[1.5] = 10
console.log(a[1.5]) // outputs: 10
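To make the confusion concrete, here is what that snippet actually leaves behind (standard JavaScript behavior, easy to verify in any engine): the assignment does not create an array element at all, it stores the value under the string property key "1.5".
console.log(a.length) // 0, because no array element was created
console.log(Object.keys(a)) // [ '1.5' ], the index was converted to the string key "1.5"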
To me, JS's behavior here is both strange and confusing, because many people, including me, regard array indices as whole numbers and expect JS to floor or round a floating-point index to a whole number.
What approach do you recommend to deal with floating-point indices? Is this a necessary evil for adopting a unified number type?
I agree that JS is much too permissive here. Strings should not be used as array indexes.
On reading, a[1.5] should produce null because there is no such element.
On writing, it should fail.
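For anyone who wants something closer to that stricter behavior in today's JavaScript, a Proxy can approximate it. This is only an illustrative sketch, not part of Mr. Crockford's proposal; the strictArray wrapper name is made up for the example.
// A sketch only: wrap an array so that fractional indices read as null
// and writes to them throw, approximating the stricter semantics above.
function strictArray(arr) {
    // A property key names a fractional index if it is the canonical
    // string form of a finite, non-integer number (e.g. "1.5").
    const isFractionalIndex = (key) =>
        typeof key === "string" &&
        Number.isFinite(Number(key)) &&
        String(Number(key)) === key &&
        !Number.isInteger(Number(key))
    return new Proxy(arr, {
        get(target, key, receiver) {
            if (isFractionalIndex(key)) {
                return null // reading a[1.5] produces null
            }
            return Reflect.get(target, key, receiver)
        },
        set(target, key, value, receiver) {
            if (isFractionalIndex(key)) {
                throw new TypeError("array index must be a whole number")
            }
            return Reflect.set(target, key, value, receiver)
        }
    })
}

let b = strictArray([])
b[0] = 10 // whole-number index, allowed
console.log(b[1.5]) // null
b[1.5] = 10 // throws TypeError
The Proxy only intercepts canonical numeric string keys, so ordinary properties such as length pass through untouched.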