ActianCorp / spark-vector

Repository for the Spark-Vector connector
Apache License 2.0

QA - SPARK CONNECTOR - money datatype not correctly implemented #34

Closed: Pyrobal closed this issue 8 years ago

Pyrobal commented 8 years ago

I get this value for the unload of any money value: 46367372913546362.88. vwload even reloads it without error, and a subsequent select * then says the money value is out of range.

create table vwload_reg02_unload_tbl (col_int int, col_float4 float4, col_money money,col_decimal382 decimal(38,2),col_decimal102 decimal(10,2), col_char20 char(20), col_varchar20 varchar(20), col_nchar20 nchar(20),col_nvarchar nvarchar(20), col_ansidate ansidate, col_timestamp timestamp);\g

$ hadoop fs -cat /Actian/VectorK1/vwload_reg02_unload_tbl_02.csv/part*

1,1.0,46367372913546362.88,1.00,1.00,a,a,a,a,0001-01-03,0001-01-03 00:00:00.0

The third value (col_money) should be "1".

That's the source data:

1 1 1 1 1 a a a a 0001-01-01 0001-01-01 0:0:0

select * from vwload_reg02_unload_tbl_02\g
Executing . . .

E_US1131 exceeded the maximum money value allowed. (Wed May 18 01:52:51 2016)
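For context, the reload failure is expected once the bad value is in the CSV: Ingres/Vector money is limited to ±$999,999,999,999.99, which the unloaded value exceeds by several orders of magnitude. A minimal check in Scala (the limit is taken from the Ingres documentation, not from this thread):

```scala
object MoneyRangeCheck extends App {
  // money tops out at ±$999,999,999,999.99 (assumed from the Ingres docs)
  val moneyMax = BigDecimal("999999999999.99")
  val unloaded = BigDecimal("46367372913546362.88") // value from the bad unload
  // The unloaded value is far past the limit, hence E_US1131 on select.
  println(unloaded > moneyMax)                      // true
}
```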

cbarca commented 8 years ago

The 'money' type is mapped to 'decimal(14,2)' in the spark-vector connector, but is stored as a 'double' in Vector.

It's strange how Vector interprets this data type: it keeps the number in a double (even though the value would actually fit in a long), scaled by 100 (i.e., in cents), and when printing it uses x100_snprintf(buf, len, "$ %.2lf", val*1e-2) to restore the two decimal places.
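That mismatch explains the exact garbage value. $1.00 is kept by Vector as the double 100.0 (cents); if the raw eight bytes of that double are reinterpreted as a 64-bit integer and treated as the unscaled value of a scale-2 decimal, the result is precisely the number from the report, which is consistent with the connector reading the column straight off the wire under its decimal(14,2) mapping. A minimal sketch in plain Scala (an illustration of the failure mode, not the connector's actual code):

```scala
object MoneyBitsDemo extends App {
  val cents: Double = 100.0                          // $1.00, kept in cents as a double
  // Reinterpret the double's IEEE-754 bit pattern as a 64-bit integer...
  val bits: Long = java.lang.Double.doubleToLongBits(cents)
  println(bits)                                      // 4636737291354636288
  // ...and treat it as the unscaled value of a scale-2 decimal.
  println(BigDecimal(bits, 2))                       // 46367372913546362.88, as reported
  // Correct decoding: take the double as-is and divide by 100.
  println(cents / 100)                               // 1.0
}
```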

cbarca commented 8 years ago

Fixed in pull request #35 .