seppestas opened this issue 1 month ago
Hi @seppestas,
This is due to a long-standing issue with BigDecimal, where precision was defaulted to the maximum value (38). This was fixed in 12.5 and is available in stable versions 12.6 and above, though we recommend you update to the latest stable version, 12.8.1, instead.
The fix, however, has been known to create performance issues, so it is off (`false`) by default and needs to be enabled (set to `true`) through the connection string property `calcBigDecimalPrecision`. Please see the connection string properties page for more information.
See using basic data types for more information.
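For reference, a minimal connection string with the property enabled might look like this (server name, database, and credentials below are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;

// Placeholder host, database, and credentials; the only non-default part is
// calcBigDecimalPrecision=true, which switches on the precision calculation fix.
String url = "jdbc:sqlserver://myserver;databaseName=MyDb;encrypt=false;"
        + "calcBigDecimalPrecision=true";
try (Connection con = DriverManager.getConnection(url, "user", "password")) {
    // use the connection as usual
}
```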
Please let us know if this works for you, and if you have any further questions.
Hi @Jeffery-Wasty, thanks a lot for your quick answer.
I don't exactly follow why these known issues with "BigDecimal" would apply to my case, where only Java double types are used. Does the issue with BigDecimal also have an effect on these types (i.e. would floats and doubles also default to a precision of 38)?
Small sidenote: upon closer inspection, no floats are actually used in my current configuration / test. All "decimal" values are passed to the prepared statement as doubles.
Even so, my issue seems to be different from the "BigDecimal" issue: instead of the precision being limited to 38, something is causing it to be 43. Some example numbers:
5.050000000000000710542735760100185871124268
0.2000000000000000111022302462515654042363167
0.1000000000000000055511151231257827021181583
0.4500000000000000111022302462515654042363167
0.1000000000000000055511151231257827021181583
0.35000000000000003330669073875469621270895 (!)
It looks like the value with 41 digits of precision is being stored in a decimal(8,2) column type. I still have to verify the others are all stored in a decimal(9,2) column type.
I would argue limiting the precision to 38 is exactly what I would want / expect. Even better would be to limit the precision to what the decimal type expects. Is this what the `calcBigDecimalPrecision` option achieves?
As far as I know, the expected mapping is: `java.lang.Double` (1)→ `java.sql.Types.DOUBLE` (2)→ SQL Server `float` (3)→ `decimal(9,2)` or `decimal(8,2)`.
Somewhere, in either step 1 or 2, this is done incorrectly, resulting in 43-digit-precision numbers. This then causes SQL Server to be unable to parse these numbers as a float.
I don't know if this is limited to prepared statements calling stored procedures.
Another open question I still have is why this issue affects my test environment, but seemingly not the production environment, which runs the exact same software (and thus the same MSSQL-JDBC version). I think either:
For option 1, would there be some database / SQL Server level configuration that affects the `calcBigDecimalPrecision` setting (or something else that is relevant)? I don't see any differences in the connection string; the `calcBigDecimalPrecision` option is not set in the production connection string.
Hi @seppestas,
Sorry to hear this didn't work:
> I don't exactly follow why these known issues with "BigDecimal" would apply to my case, where only Java double types are used. Does the issue with BigDecimal also have an effect on these types (i.e. would floats and doubles also default to a precision of 38)?
The changes also had an effect on other decimal types, which is why I thought they might work in your case as well.
> I would argue limiting the precision to 38 is exactly what I would want / expect. Even better would be to limit the precision to what the decimal type expects. Is this what the `calcBigDecimalPrecision` option achieves?
All parameters should be converted to have a maximum precision of 38, to ensure compatibility across SQL Server versions; I have to look into why this is not happening here. What the feature did specifically was pass the actual precision values for decimals, as opposed to passing along the maximum of 38, which we did previously. Passing 38 worked for the majority of customer use cases, but when users performed operations in SQL Server whose results required a higher precision than the inputs, the results would be truncated, as the SQL Server maximum of 38 had already been reached.
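As a rough illustration of the difference (my paraphrase of the behavior described above, not the driver's actual code):

```java
import java.math.BigDecimal;

BigDecimal v = new BigDecimal("1.5");
int sentWithFlagOn = v.precision(); // 2 — the value's actual precision
int sentWithFlagOff = 38;           // the SQL Server maximum, which was always sent previously
```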
> For option 1, would there be some database / SQL Server level configuration that affects the `calcBigDecimalPrecision` setting (or something else that is relevant)?
Some clarification on the different environments, please. You are comparing the test environment against the production environment, with the same driver version, and see 43-digit precision in test but not in production? If this is the case, I would need full details on all differences, including the SQL Server version, configuration, and Java version and vendor for each, as well as any other processes the application interacts with in addition to the driver. It's unusual that you're seeing these differences with the same version.
Additionally, did anything change for you when you updated the version from 8.2.2 to whatever you're using now? There have been a lot of changes between these versions, so I'm not sure if there is some other change that may affect the results you are seeing. I would like to work off the most recent version (12.8.1), if that is possible for you.
To clarify my environment:
I encountered this issue while working with a legacy version of an application used to ingest data into a DB. This application uses version 8.2.2.jre8 of the MSSQL-JDBC driver. My goal is to test the compatibility of some of the interfaces of this old version of the application, which is currently in use in production.
The production environment uses SQL Server:
SELECT @@VERSION
Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64) Jan 12 2022 22:30:08 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor)
I have now also changed my test environment to use this older version:
SELECT @@VERSION
Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64) Jan 12 2022 22:30:08 Copyright (C) 2019 Microsoft Corporation Developer Edition (64-bit) on Linux (Ubuntu 20.04.3 LTS) <X64>
Everything else should be the same between my test environment and the production environment, as the application runs in a Docker container, but the issue persists.
I'm updating my test version of the application to use version 12.8.1.jre8; I'll report back.
With version 12.8.1.jre8, I still get the same result (I have not checked the full SQL Server event log, but at least the error is the same).
Backtrace:
```
java.sql.BatchUpdateException: Error converting data type float to decimal.
    at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeBatch(SQLServerPreparedStatement.java:2273)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
    at com.sun.proxy.$Proxy34.executeBatch(Unknown Source)
```
I now realise I messed up the configuration of `calcBigDecimalPrecision` in the connection string before, so I never enabled it.
The issue seems to persist when `calcBigDecimalPrecision` is enabled. I also verified the issue is still exactly the same in the SQL Server event logs.
I have not been able to figure out yet how to get the SQL Server JDBC trace logs working. I'll have another attempt at this tomorrow. Any hints on how to get this working with Log4j?
I hope this will allow me to verify whether `calcBigDecimalPrecision` is in fact enabled.
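What I plan to try (a sketch; as far as I know the driver logs through java.util.logging, which would then still need bridging to Log4j):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

// Turn on the driver's own trace logging; com.microsoft.sqlserver.jdbc is the
// java.util.logging namespace the driver documentation describes for tracing.
Logger driverLogger = Logger.getLogger("com.microsoft.sqlserver.jdbc");
driverLogger.setLevel(Level.FINEST);
ConsoleHandler handler = new ConsoleHandler();
handler.setLevel(Level.FINEST);
driverLogger.addHandler(handler);
```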
I also plan to test with Java 17. We have many more (recent-version) instances of the same application running, and I have never seen this issue before. They do not use the exact same data insertion methods, but it's similar enough that I would expect to have seen this issue before. TL;DR: I think it might just affect Java 1.8.
Signing out for now, as I'm in UTC+1.
Thank you for your replies, I'll look into this further and get back to you with ideas and next steps.
Can you share some example code of how you are using the Double values with your PreparedStatement? A full repro would be best, but I understand that can be a time-intensive process.
> Can you share some example code of how you are using the Double values with your PreparedStatement? A full repro would be best, but I understand that can be a time-intensive process.
I wish I could. The issue seems to be hard to isolate. More or less what my application is doing:
Create a test table and a stored procedure, e.g.:
```sql
create table t (
    [ID] [bigint] IDENTITY(1,1) NOT NULL,
    [Col1] [decimal](9, 2) NULL
)

IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME = 'NSP_InsertData' AND ROUTINE_TYPE = 'PROCEDURE' AND SPECIFIC_SCHEMA = 'dbo')
BEGIN
    EXEC ('CREATE PROCEDURE dbo.NSP_InsertData AS BEGIN RETURN(1) END;')
END
GO

ALTER PROCEDURE [dbo].[NSP_InsertData](@Col1 decimal(9,2) = NULL)
AS BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.t (Col1)
    VALUES (@Col1)
END
```
The values are parsed, typically from integer types. In the case of the "5.05" value, it is parsed from a byte with value 101, to which a multiplier of 0.5 is applied.
Relevant bits of code:
```java
import java.sql.CallableStatement;
import java.sql.Connection;

final Double scalingFactor = 0.5;
Object value = 101; // Actually parsed from a java.nio.HeapByteBuffer
value = ((Number) value).doubleValue() * scalingFactor;
// While debugging, IntelliJ suggests `value` is of type Double here. Maybe some weird things
// happen re-using the same object originally returned by `java.nio.HeapByteBuffer.get(int index)`?

// StringBuilder is actually used to add ~1000 columns, depending on a config.
// My test config matches the prod config.
StringBuilder queryBuilder = new StringBuilder("{call NSP_InsertData (?)}");

Connection con = null;
CallableStatement cstmt = null;
try {
    // getJdbcTemplate returns a Spring template, which is a Tomcat JDBC pool.
    // Connection string: "jdbc:sqlserver://host.docker.internal;databaseName=TEST_DB;encrypt=false;calcBigDecimalPrecision=true"
    con = getJdbcTemplate().getDataSource().getConnection();
    // The statement is actually cached in a HashMap with a statement for each thread. In testing,
    // I only use 1 thread. I think it possible a bug exists here where an incorrect cached
    // statement is used, though it is unlikely this is the problem (the configs impacting this
    // rarely change).
    cstmt = con.prepareCall(queryBuilder.toString());
    cstmt.clearBatch();
    // Simulate batch size of 20
    for (int i = 0; i < 20; i++) {
        // Again, we actually process many, many columns here
        cstmt.setObject(1, value);
        cstmt.addBatch();
    }
    // Throws "Error converting data type float to decimal"
    int[] updateResult = cstmt.executeBatch();
}
catch (Exception e) {
    LOG.error(e);
}
```
I am still trying to get an isolated reproduction of the bug working based on the above.
A small update on this: it seems like my project always sends these overly precise double values to the DB (at least, assuming the output of the SQL Server Profiler is correct), but this does not always cause problems.
In the same project, I have now changed the configuration used. This means (among other things) that results are now written to a different (test) database. This test database seems to handle the overly precise doubles just fine.
E.g., I can see a test value of 9.3 being sent to the DB as 9.300000000000000710542735760100185871124268 (43 digits of precision).
This likely explains why in my prod environment I do not see the same problem.
Both my test databases are pretty much exactly the same (same collation etc), the only difference is the columns in my telemetry tables.
My guess is that there is a mismatch somewhere between the column type and the argument type of the stored procedure, though I have not been able to find it yet.
Still, based on my understanding, MSSQL-JDBC seems to use a precision of 43 (not the documented 38).
I double-checked the signature of the stored procedure I use to insert data against the data types of the columns it inserts into, and it all seems to match.
I think there are 2 parts to this issue:
I have looked at different versions of the application, and I pretty much always saw Java double types being sent with 43-digit precision.
The only possibly significant thing I could find that makes the "problematic DB" special is that, at some point, `Decimal(18, 0)` types were used for most of the decimal values. These columns still exist for backwards compatibility, but are no longer used.
@Jeffery-Wasty I finally figured out what my actual problem is: somewhere in my long, long list of test variables is the value 252904979. In the SQL Server Profiler event this just looks like one of many innocent integers, but it is being inserted into a decimal(9, 3) type, which does not work.
This is what SQL Server is complaining about, not the many doubles sent with 43-digit precision, as I first assumed. SQL Server seems to be able to just round those down to its 38-digit maximum.
I.e. another instance of a test case failing without anyone noticing (for almost 4 years), the problem not showing up in production, and SQL Server generating terrible error messages.
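For reference, the reason 252904979 cannot fit: decimal(9, 3) reserves 3 of its 9 digits of precision for the fraction, leaving only 6 digits before the decimal point. A quick way to check a value against that limit (a sketch, not what the driver does):

```java
import java.math.BigDecimal;

BigDecimal v = new BigDecimal("252904979");
int integerDigits = v.precision() - v.scale(); // 9 integer digits
// decimal(9, 3) allows at most 9 - 3 = 6 integer digits, so this insert fails.
```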
My original point still stands though: for some reason, (nearly?) all my doubles seem to be sent with a precision of 43 digits. This does not make a lot of sense to me; according to the documentation, MSSQL-JDBC should send them with a precision of 38 digits.
Looking a bit deeper into the codebase, my assumption is that some sort of `toString` method is used to send doubles (and floats) to the DB. As far as I can see, the precision of 38 is only enforced for `BigDecimal` types.
Additionally, the value is sometimes "rounded" incorrectly (before it reaches the DB). E.g. the value of 5.05 should have been sent as 5.04999999999999982236, not 5.05000000000000071054. This causes a further error when SQL Server performs its rounding.
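For what it's worth, the two renderings above are the exact decimal expansions of two adjacent 64-bit doubles, which can be seen with `BigDecimal(double)` (a quick sketch, not the driver's code path):

```java
import java.math.BigDecimal;

System.out.println(new BigDecimal(5.05));
// begins 5.04999999999999982236... — the double nearest to 5.05
System.out.println(new BigDecimal(Math.nextUp(5.05)));
// begins 5.05000000000000071054... — one ULP above, the value seen in the Profiler
```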
Any idea where the 43-digit precision could come from?
Hi @seppestas,
Apologies, I haven't been able to look into this issue over the past few days; I'm still catching up on your replies. At the moment, I don't know where the 43-digit precision is coming from. Logically, this should never be the case, as the SQL Server maximum is 38. It makes no sense to send a precision that is not supported (or that involves additional work on the SQL Server end to make work).
Do I understand your current situation as follows: the exception `java.sql.BatchUpdateException: Error converting data type float to decimal.` was due to `252904979` being inserted into a column of definition `decimal(9, 3)`. After resolving this, the error is now gone? That is, you are able to use your batch insert without errors popping up; however, the issue of 43-digit precision remains, and it is still a question we need to answer (which I'm currently investigating)?
> Do I understand your current situation as follows: the exception `java.sql.BatchUpdateException: Error converting data type float to decimal.` was due to `252904979` being inserted into a column of definition `decimal(9, 3)`. After resolving this, the error is now gone? That is, you are able to use your batch insert without errors popping up; however, the issue of 43-digit precision remains, and it is still a question we need to answer (which I'm currently investigating)?
Correct. Though I'm now also not 100% certain whether the doubles are actually being sent as 43-digit-precision decimals, or if something (e.g. SQL Server Profiler) is just making it appear as such.
The more I look into this, the more I think the doubles are actually being sent as IEEE 754 double-precision floating points. My guess is SQL Server Profiler just chooses to represent them in a different way than Java does (seemingly always using 43 digits instead of whatever precision would be needed to accurately represent the double).
Would there be some way to confirm this?
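One possible way to check this client-side (a sketch): print the same double a few ways before it is handed to the driver; all three lines below describe one and the same 64-bit pattern.

```java
import java.math.BigDecimal;

double d = 9.3;
System.out.println(d);                 // 9.3 — Java's shortest round-trip rendering
System.out.println(new BigDecimal(d)); // begins 9.30000000000000071054... — the exact expansion
System.out.println(Long.toHexString(Double.doubleToLongBits(d))); // the underlying bit pattern
```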
A key bit of information I needed to find the actual problem is that float literals in SQL Server always need to be suffixed with an exponent (E) to be interpreted as floats.
When I copied the T-SQL from the SQL Server Profiler into SSMS, the doubles did not have the correct notation to be interpreted as doubles by SQL Server. Instead, they were interpreted as decimals. This caused an error message for the 43-digit decimals that was similar to what I saw in my application logs, causing me to incorrectly assume it was the same problem.
Driver version
8.2.2.jre8
SQL Server version
Microsoft SQL Server 2019 (RTM-CU28) (KB5039747) - 15.0.4385.2 (X64) Jul 25 2024 21:32:40 Copyright (C) 2019 Microsoft Corporation Developer Edition (64-bit) on Linux (Ubuntu 20.04.6 LTS)
(mcr.microsoft.com/mssql/server:2019-latest docker image)
Client Operating System
Ubuntu in WSL (same as DB docker host)
Docker Host: Microsoft Windows 11 Pro version 10.0.22631 Build 22631
JAVA/JVM version
Table schema
Example:
In reality, 2 tables are used, with > 1000 columns each, of different data types. These tables are used to hold time-series telemetry data. The data is inserted with a stored procedure, which handles splitting up the data between the 2 tables.
Problem description
When using a prepared statement, passing double values (using `setObject` without a specified target type) to a batch insert causes some double values to be sent ~as the exact Base-10 representation~ with a precision of 43 digits, instead of as a valid MSSQL float type. This causes the error: "Error converting data type float to decimal".

Expected behavior
Double and float values should be sent to SQL Server ~in a way that ensures they can be processed by SQL Server~ with a precision of 38 digits. According to Conversions on setObject in the documentation, my Java Double type values should be passed as "DOUBLE (float)" values. I would then expect SQL Server to perform the required conversion to the relevant decimal type.
Actual behavior
Using the SQL Server Profiler, I can see the following event on CallableStatement.executeBatch:
Note the floating-point values have far too much precision (even for a Java double, afaik). The expected values in this case are 5.05 and 0.2. I think this is the source of the issue.
This event is followed by similar calls for the rest of the batch, though seemingly with an "indirection" for the stored procedure.
Error message/stack trace
Any other details that can be helpful
I encountered this issue while debugging a legacy codebase. Interestingly, this issue never showed up in production, which uses the following SQL Server version:
Microsoft SQL Server 2019 (RTM-CU15) (KB5008996) - 15.0.4198.2 (X64) Jan 12 2022 22:30:08 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor)
I think a solution to this problem (as well as a far more efficient approach) is to explicitly pass the storage type to setObject, as sketched below.
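Something like the following (an untested sketch; the scale of 2 is chosen to match the decimal(9,2) column from the example above):

```java
// Pass the target SQL type and scale explicitly, instead of letting the
// driver infer a float from the Java Double:
cstmt.setObject(1, value, java.sql.Types.DECIMAL, 2);
cstmt.addBatch();
```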
In other words, I likely have a solution / workaround to the problem, but I would like to understand this issue better. My current efforts are ensuring compatibility of newer versions of the software.