kmitchener opened this issue 2 years ago
@kmitchener @alamb @andygrove I think this is an issue about behavior on overflow. We can do more investigation and decide on the default behavior. We could also make the behavior configurable with a config setting or option.
I don't have a huge preference -- when in doubt I think we have tried to follow the Postgres semantics, for consistency.
In terms of checking for overflows, etc., I would also say we should try to avoid slowing things down too much, if possible.
> My proposal would be to make 2 changes:
I think those proposals are very reasonable
> My proposals are based on years of Oracle and Postgres use, though; I have no Spark experience. What other thoughts and opinions are out there? How does Spark behave in these cases?
Like cast: if we convert a value to another type and it overflows, the default result is NULL.
For the mathematical operations, we should add an option for that.
I think both behaviors are OK, but we should make them consistent.
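To illustrate the NULL-on-overflow idea, here is a minimal plain-Rust sketch (not DataFusion's actual kernel code): Rust's checked arithmetic returns `None` on overflow, which maps naturally onto SQL NULL.

```rust
// Minimal sketch: map arithmetic overflow to None, the analog of SQL NULL.
// Illustrative plain Rust only, not DataFusion's actual implementation.
fn add_nullable(a: i32, b: i32) -> Option<i32> {
    // checked_add returns None on overflow instead of panicking or wrapping
    a.checked_add(b)
}

fn main() {
    assert_eq!(add_nullable(1, 2), Some(3));
    // i32::MAX + 1 overflows, so the result is None (i.e. SQL NULL)
    assert_eq!(add_nullable(i32::MAX, 1), None);
}
```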
@kmitchener
In Spark, if we don't set the special ANSI parameter (spark.sql.ansi.enabled), Spark will not throw an error and just returns the wrapped value.
You can try it.
The doc refs in Spark: https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html and https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html#arithmetic-operations
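For reference, the wrapping behavior the Spark docs describe for non-ANSI mode looks like Rust's `wrapping_add` (a small plain-Rust illustration, not Spark itself):

```rust
fn main() {
    // 2147483647 + 1 wraps around to -2147483648, which is the kind of
    // "wrapping value" Spark returns when ANSI mode is disabled
    let wrapped = i32::MAX.wrapping_add(1);
    assert_eq!(wrapped, i32::MIN);
    println!("{} + 1 wraps to {}", i32::MAX, wrapped);
}
```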
cc @alamb
An option to control the behavior also seems reasonable to me (though I suspect it would add some non-trivial complexity to the implementation, so perhaps we should only do this if/when a user has a compelling use case 🤔)
@kmitchener thanks for creating this issue!
btw can we perhaps consider labelling it as a bug too?
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
Describe the solution you'd like
I'm opening this issue to get consensus on what the desired DataFusion behavior should be when overflowing numeric types in DataFusion. All tests below done on master as of time of issue creation.
I think the behavior between casting a "too big" number and overflowing should be the same.
My proposal would be to make 2 changes:
My proposals are based on years of Oracle and Postgres use, though; I have no Spark experience. What other thoughts and opinions are out there? How does Spark behave in these cases?
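As a sketch of what "the same behavior" could look like (hypothetical plain Rust, not DataFusion's actual cast or arithmetic code), both a failed downcast and an overflowing operation could map to `None`, i.e. SQL NULL:

```rust
// Hypothetical sketch: treat an out-of-range cast and an overflowing
// operation identically, mapping both to None (SQL NULL).
fn cast_i64_to_i32(v: i64) -> Option<i32> {
    // try_from fails when the value does not fit in the target type
    i32::try_from(v).ok()
}

fn mul_nullable(a: i32, b: i32) -> Option<i32> {
    // checked_mul returns None on overflow
    a.checked_mul(b)
}

fn main() {
    // a "too big" value: the cast fails the same way overflow does
    assert_eq!(cast_i64_to_i32(3_000_000_000), None);
    // an overflowing multiplication: same None / NULL result
    assert_eq!(mul_nullable(100_000, 100_000), None);
    // in-range values pass through unchanged
    assert_eq!(cast_i64_to_i32(42), Some(42));
}
```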