Open bdurbrow opened 5 months ago
Short answer: historically it was done for simplicity, but yes, it should probably use `size_t`/`usize` for index parameters. This is a breaking change, but I think it makes sense to do.
Long answer:
Having the C FFI ABI change between 32-bit and 64-bit platforms causes annoyances/paper cuts in some cases, because it forces every consumer to deal with it. For example, historically in .NET that meant you couldn't just use a fixed-size integer; you had to use `IntPtr` or `UIntPtr` (despite the name, you'd use it as an integer). They have since added the aliases `nint` and `nuint` (https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/integral-numeric-types#native-sized-integers), but this is just one example.
With a fixed-width integer type defined in the FFI (rather than `size_t`/`usize`), any consumer can depend on that one type regardless of 32 vs 64 bit. When redefining the structures/functions for the interface in another language, you can just use that integer type rather than dealing with two possible widths (which means different things depending on the language, since not all languages have a direct `size_t`/`usize` equivalent).
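As a sketch of that alternative (the function and parameter names here are hypothetical, not the real FFI surface): the exported signature uses a fixed-width `u64` everywhere, so bindings on any platform declare the same type, and the one checked narrowing to `usize` happens in a single place inside the library.

```rust
// Hypothetical FFI entry point with a fixed-width index type.
// `u64 -> usize` can fail on a 32-bit target, so the check lives here,
// inside the library, instead of in every consumer.
#[no_mangle]
pub extern "C" fn demo_vec_get(data: *const f32, len: u64, index: u64) -> f32 {
    match (usize::try_from(len), usize::try_from(index)) {
        (Ok(len), Ok(index)) if index < len => unsafe { *data.add(index) },
        _ => f32::NAN, // out of bounds, or not representable on this platform
    }
}
```

The trade-off versus `size_t` is that on 32-bit platforms the library must reject values above `usize::MAX`, but consumers never see a platform-dependent type.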
I'm very much NOT a Rust programmer, so if for some reason this is boneheaded, please enlighten me.
I noticed that the C FFI functions that take or return either the count of a vector or the index of an item are declared as 32-bit integers, and there's explicit overflow checking for the 32-to-64-bit conversion.
Wouldn't it be better to use size_t and its Rust counterpart usize instead? This would eliminate the "impedance mismatch", to borrow a term from an unrelated field.
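To illustrate the mismatch (names here are hypothetical, not the actual API): `count_u32` mirrors the current style, where the internal length is a `usize` and returning it through a `u32` needs an explicit overflow check; `count_usize` is the `size_t`/`usize` alternative, where no conversion exists to fail.

```rust
// Current style (hypothetical name): usize length reported through a u32,
// with an explicit overflow check at the boundary.
#[no_mangle]
pub extern "C" fn count_u32(internal_len: usize) -> u32 {
    u32::try_from(internal_len).expect("vector length exceeds u32::MAX")
}

// Proposed style: the C side sees size_t, the Rust side sees usize,
// and there is no conversion to check.
#[no_mangle]
pub extern "C" fn count_usize(internal_len: usize) -> usize {
    internal_len
}
```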
My particular application shouldn't ever need more than four-billion-odd vertices, but wouldn't it be better in principle to support the capability?
If you agree, I'll change the types in the pull request I send for adding the Shape functionality to the C FFI.