ruma / js_int

JavaScript-interoperable integer types for Rust
MIT License

Implement operations with the underlying primitive integer types. #28

Closed · FSMaxB closed this 2 years ago

FSMaxB commented 2 years ago

This implements common operations between Int/UInt and their underlying primitive integer types (i64 for Int, u64 for UInt), in both directions. So for example PartialEq<i64> for Int and PartialEq<Int> for i64.

Comparison:

Arithmetic:

Assigning arithmetic:
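
The general shape of the added impls is roughly this (sketched here on a stand-in newtype, not the actual code in this PR, which operates on js_int's real Int and keeps its range guarantees):

```rust
use std::ops::{Add, AddAssign};

// Stand-in for js_int::Int: a plain newtype over i64 (range checks omitted).
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct Int(i64);

// Comparison with the underlying primitive, in both directions.
impl PartialEq<i64> for Int {
    fn eq(&self, other: &i64) -> bool {
        self.0 == *other
    }
}

impl PartialEq<Int> for i64 {
    fn eq(&self, other: &Int) -> bool {
        *self == other.0
    }
}

// Arithmetic with the primitive.
impl Add<i64> for Int {
    type Output = Int;

    fn add(self, rhs: i64) -> Int {
        Int(self.0 + rhs)
    }
}

// Assigning arithmetic with the primitive.
impl AddAssign<i64> for Int {
    fn add_assign(&mut self, rhs: i64) {
        self.0 += rhs;
    }
}
```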

jplatte commented 2 years ago

I don't think that these impls should exist, to be honest, especially not the std::ops ones. Different integer types from the Rust standard library also don't implement any of these traits.

FSMaxB commented 2 years ago

Sorry about the rebase, I accidentally put my IDE project files in there.

I don't think that these impls should exist, to be honest, especially not the std::ops ones. Different integer types from the Rust standard library also don't implement any of these traits.

You've got a point there. I just think these implementations would do a great service to the ergonomics of using the Int and UInt types, especially since they don't get the benefit of a literal's type being inferred that the standard types get (like 1234 being automatically inferred as whatever type it is used with).

Also, with the standard library types you can easily cast them into each other or use the From implementation (or .into()) to convert into the bigger version of a type. You can't do this with Int and UInt for i64 and u64; you need to use TryFrom or try_into() in this case and then unwrap() it manually. This leaves a large gap for numbers between 32 and 53 bits that can't easily be used.
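
To illustrate the difference (rough example, relying on the TryFrom impls that already exist):

```rust
use std::convert::TryFrom;
use js_int::Int;

fn main() {
    // Standard library: widening conversions are infallible.
    let a = u64::from(40_u32);
    let b = i64::from(-40_i32);

    // js_int: anything coming in as i64/u64 has to go through TryFrom and be
    // unwrapped, even when the value (here 2^40) is well inside the 53-bit range.
    let c = Int::try_from(1_099_511_627_776_i64).unwrap();

    println!("{:?} {:?} {:?}", a, b, c);
}
```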

You might also argue that things like Add<i64> would take away some of the compile-time checks that Int provides and shift them to runtime errors, but I would argue that this doesn't change much: you can already use two Ints in an arithmetic operation in such a way that the allowed range is exceeded, and the same thing happens.

If this isn't convincing, would you consider just adding the comparisons PartialEq and PartialOrd?

Otherwise feel free to just close this pull request if this doesn't align with your goals for this library!

jplatte commented 2 years ago

I just think these implementations would do a great service to the ergonomics of using the Int and UInt types, especially since they don't get the benefit of a literal's type being inferred that the standard types get.

I get where you're coming from and would love to have a solution for this, but unfortunately I think the only right fix for this is extending Rust such that literals can be coerced to user-defined types.

Also, with the standard library types you can easily cast them into each other or use the From implementation (or .into()) to convert into the bigger version of a type. You can't do this with Int and UInt for i64 and u64; you need to use TryFrom or try_into() in this case and then unwrap() it manually. This leaves a large gap for numbers between 32 and 53 bits that can't easily be used.

There is the possibility of making the uint! and int! macros work for literals > 32-bit, it would just require swapping out the macro_rules! implementation with a proc-macro to check the bounds.
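
For illustration (made-up values): today only the first line compiles, and the proc-macro version would make the second (commented-out) one work as well.

```rust
use js_int::uint;

fn main() {
    let small = uint!(42); // fine today
    // let big = uint!(5_000_000_000); // > 32-bit, would need the proc-macro version
    println!("{:?}", small);
}
```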


Can you describe the use cases where the additional impls (especially the std::cmp ones) would be useful to you?

FSMaxB commented 2 years ago

Can you describe the use cases where the additional impls (especially the std::cmp ones) would be useful to you?

Thank you for that question, it helped me recognize the XY problem here. Actually looking at what I was doing, I noticed that most of my use cases revolve around the "laziness" of not wanting to use the int! and uint! macros, especially when writing tests, or when doing things like incrementing a number by 1.

Most of the other use cases actually go away by properly separating the API boundary between the "JavaScript world" and the "Rust world" (in my case most operations would then be performed on Duration types instead of the UInt/Int millisecond values that the API uses).
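
Roughly like this at the boundary (function and parameter names are made up, and it assumes the existing From<UInt> for u64 conversion):

```rust
use std::time::Duration;
use js_int::UInt;

// Convert once at the API boundary; everything further in works with Duration.
fn timeout_from_api(timeout_ms: UInt) -> Duration {
    Duration::from_millis(u64::from(timeout_ms))
}
```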


What I originally wrote before actually reconsidering use cases:

I get where you're coming from and would love to have a solution for this, but unfortunately I think the only right fix for this is extending Rust such that literals can be coerced to user-defined types.

AFAIK this is in planning, but only for string literals right now?

There is the possibility of making the uint! and int! macros work for literals > 32-bit, it would just require swapping out the macro_rules! implementation with a proc-macro to check the bounds.

This would address the inference of literal types. It also shouldn't be too hard, even without dependencies like quote and syn, since it's just a single token that needs to be parsed. Still, it would require an additional crate for the procedural macro. I'm up for implementing this if you want, although I have to admit that such large integer literals are probably more of an exception than the norm.
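
Roughly what I have in mind (untested sketch; everything js_int-specific in it, like the emitted expression, is a placeholder):

```rust
// This would live in a separate crate with `proc-macro = true` in Cargo.toml.
use proc_macro::TokenStream;

#[proc_macro]
pub fn uint(input: TokenStream) -> TokenStream {
    // js_int's UInt covers 0 ..= 2^53 - 1.
    const MAX_SAFE_UINT: u64 = (1 << 53) - 1;

    let mut tokens = input.into_iter();
    let literal = tokens.next().expect("expected an integer literal");
    assert!(tokens.next().is_none(), "expected exactly one integer literal");

    // A bare integer literal token stringifies to its digits (plus optional `_`);
    // suffixed literals like `5u64` are not handled in this sketch.
    let value: u64 = literal
        .to_string()
        .replace('_', "")
        .parse()
        .expect("expected an unsigned integer literal");
    assert!(value <= MAX_SAFE_UINT, "literal out of range for js_int::UInt");

    // The bound was checked at compile time above, so the emitted runtime
    // conversion can never fail (a real version could emit compile_error! for
    // nicer diagnostics and use whatever constructor js_int prefers).
    format!(
        "<::js_int::UInt as ::std::convert::TryFrom<u64>>::try_from({}u64).unwrap()",
        value
    )
    .parse()
    .unwrap()
}
```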

What this doesn't address are integers that come from other APIs. The Rust ecosystem out there doesn't make use of UInt and Int, so other crates will use u64 and i64 for numbers in that range, requiring a conversion every time.

FSMaxB commented 2 years ago

As for tests: it mostly boils down to it being quite tedious to write uint!(...) and int!(...) in all the assert_eq!(..., ...) assertions.
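
A toy example of what I mean (the helper and values are made up; the commented line is what this PR's PartialEq<u64> impl would enable):

```rust
use js_int::{uint, UInt};

// Hypothetical helper returning a UInt, just to have something to assert on.
fn item_count() -> UInt {
    uint!(3)
}

#[test]
fn count_is_three() {
    assert_eq!(item_count(), uint!(3)); // today: the expected value has to be wrapped too
    // assert_eq!(item_count(), 3u64);  // with PartialEq<u64> for UInt this would suffice
}
```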

jplatte commented 2 years ago

Thank you for that question, it helped me recognize the XY problem here. Actually looking at what I was doing, I noticed that most of my use cases revolve around the "laziness" of not wanting to use the int! and uint! macros, especially when writing tests, or when doing things like incrementing a number by 1.

Most of the other use cases actually go away by properly separating the API boundary between the "JavaScript world" and the "Rust world" (in my case most operations would then be performed on Duration types instead of the UInt/Int millisecond values that the API uses).

So do you agree that this PR can be closed?


AFAIK this is in planning, but only for string literals right now?

I've heard people talk about this before, but haven't heard of actual planning going on.

This would address the inference of literal types. It also shouldn't be too hard, even without dependencies like quote and syn, since it's just a single token that needs to be parsed. Still, it would require an additional crate for the procedural macro. I'm up for implementing this if you want, although I have to admit that such large integer literals are probably more of an exception than the norm.

If you feel like it, send a PR. I think this should go behind a feature flag though, even w/o syn / quote dependencies.

FSMaxB commented 2 years ago

So do you agree that this PR can be closed?

Yes.