scylladb / scylla-tools-java

Apache Cassandra, supplying tools for Scylla
Apache License 2.0

After loading table that contains timestamp data type with millisec numbers, the millisec numbers are not presented in Scylla 1.7.1 cqlsh (though binary value is the same) #36

Open tomer-sandler opened 7 years ago

tomer-sandler commented 7 years ago

Created the following schema and data on Cassandra 3.10:

CREATE KEYSPACE mykeyspace WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };

USE mykeyspace;

CREATE TYPE mykeyspace.udt_info (birthday date, nationality text, height int);

CREATE TABLE all_types_no_counter_no_duration (aascii ascii, abigint bigint, ablob blob, aboolean boolean, adecimal decimal, adouble double, afloat float, ainet inet, aint int, atext text, atimestamp timestamp, atimeuuid timeuuid, auuid uuid, avarchar varchar, avarint varint, alist list<int>, amap map<int,int>, aset set<int>, atinyint tinyint, asmallint smallint, adate date, atime time, afrozen_udt frozen<udt_info>, atuple tuple<int,text>, PRIMARY KEY (aascii, abigint) );

INSERT INTO all_types_no_counter_no_duration (aascii, abigint, ablob, aboolean, adecimal, adouble, afloat, ainet, aint, atext, atimestamp, atimeuuid, auuid, avarchar, avarint, alist, amap, aset, atinyint, asmallint, adate, atime, afrozen_udt, atuple) VALUES ('tzach', 1999, bigintAsBlob(3), true, 10 , 10.10, 11.11 , '204.202.130.223', 17, 'text' ,'2016-08-30 14:01:55', maxTimeuuid('2013-01-01 00:05+0000'), 123e4567-e89b-12d3-a456-426655440000, 'tzachvarchar', 17, [1,2,3], {1 : 2}, {1,2,3,4}, 8, 16, '2016-09-30', '12:14:56.789', { birthday : '1993-06-18', nationality : 'New Zealand', height : 180 }, (111, 'aaa') );

INSERT INTO all_types_no_counter_no_duration (aascii, abigint, ablob, aboolean, adecimal, adouble, afloat, ainet, aint, atext, atimestamp, atimeuuid, auuid, avarchar, avarint, alist, amap, aset, atinyint, asmallint, adate, atime, afrozen_udt, atuple) VALUES ('tzach', 2000, bigintAsBlob(3), true, 10 , 10.10, 11.11 , '204.202.130.223', 17, 'text' ,'2016-08-30 07:01:00', maxTimeuuid('2013-01-01 00:05+0000'), 123e4567-e89b-12d3-a456-426655440000, 'tzachvarchar', 17, [1,2,3], {1 : 2}, {1,2,3,4}, 32, 64, '2016-08-30', '12:24:56.789', { birthday : '1994-06-18', nationality : 'Israel', height : 185 }, (222, 'bbb') );

INSERT INTO all_types_no_counter_no_duration (aascii, abigint, ablob, aboolean, adecimal, adouble, afloat, ainet, aint, atext, atimestamp, atimeuuid, auuid, avarchar, avarint, alist, amap, aset, atinyint, asmallint, adate, atime, afrozen_udt, atuple) VALUES ('livyatan', 2001, bigintAsBlob(3), true, 10 , 10.10, 11.11 , '204.202.130.223', 17, 'text' ,'2016-08-30 07:01:00', maxTimeuuid('2013-01-01 00:05+0000'), 123e4567-e89b-12d3-a456-426655440000, 'tzachvarchar', 17, [1,2,3], {1 : 2}, {1,2,3,4}, 16, 512, '2016-08-30', '12:34:56.789', { birthday : '1995-06-18', nationality : 'Spain', height : 190 }, (333, 'ccc') );

INSERT INTO all_types_no_counter_no_duration (aascii, abigint, ablob, aboolean, adecimal, adouble, afloat, ainet, aint, atext, atimestamp, atimeuuid, auuid, avarchar, avarint, alist, amap, aset, atinyint, asmallint, adate, atime, afrozen_udt, atuple) VALUES ('tzach', 2002, bigintAsBlob(3), true, 10 , 10.10, 11.11 , '204.202.130.223', 17, 'text' ,'2016-08-30 12:34:56.789', maxTimeuuid('2013-01-01 00:05+0000'), 123e4567-e89b-12d3-a456-426655440000, 'tzachvarchar', 17, [1,2,3], {1 : 2}, {1,2,3,4}, 8, 16, '2016-09-30', '12:14:56.789', { birthday : '1993-06-18', nationality : 'New Zealand', height : 180 }, (111, 'aaa') );

INSERT INTO all_types_no_counter_no_duration (aascii, abigint, ablob, aboolean, adecimal, adouble, afloat, ainet, aint, atext, atimestamp, atimeuuid, auuid, avarchar, avarint, alist, amap, aset, atinyint, asmallint, adate, atime, afrozen_udt, atuple) VALUES ('livyatan', 2003, bigintAsBlob(3), true, 10 , 10.10, 11.11 , '204.202.130.223', 17, 'text' ,'2016-08-30 12:23:34.567', maxTimeuuid('2013-01-01 00:05+0000'), 123e4567-e89b-12d3-a456-426655440000, 'tzachvarchar', 17, [1,2,3], {1 : 2}, {1,2,3,4}, 16, 512, '2016-08-30', '12:34:56.789', { birthday : '1995-06-18', nationality : 'Spain', height : 190 }, (333, 'ccc') );

Loaded the table into Scylla 1.7.1 using sstableloader -> the timestamp values don't show the millisecond digits.

Cassandra 3.10:

cqlsh:mykeyspace> select atimestamp,abigint from all_types_no_counter_no_duration ;

 atimestamp                      | abigint
---------------------------------+---------
 2016-08-30 14:01:55.000000+0000 |    1999
 2016-08-30 07:01:00.000000+0000 |    2000
 2016-08-30 12:34:56.789000+0000 |    2002
 2016-08-30 07:01:00.000000+0000 |    2001
 2016-08-30 12:23:34.567000+0000 |    2003

(5 rows)

Scylla 1.7.1 -> milliseconds are not displayed in our CQL client, though the binary value is still the same:

cqlsh:mykeyspace> select atimestamp,abigint from all_types_no_counter_no_duration ;

 atimestamp               | abigint
--------------------------+---------
 2016-08-30 14:01:55+0000 |    1999
 2016-08-30 07:01:00+0000 |    2000
 2016-08-30 12:34:56+0000 |    2002
 2016-08-30 07:01:00+0000 |    2001
 2016-08-30 12:23:34+0000 |    2003

(5 rows)

Cassandra 3.10:

cqlsh:mykeyspace> select timestampasblob(atimestamp) from all_types_no_counter_no_duration ;

 system.timestampasblob(atimestamp)
-------------------------------------
 0x00000156dbc1a038
 0x00000156da4043e0
 0x00000156db720095
 0x00000156da4043e0
 0x00000156db6797a7

(5 rows)

Scylla 1.7.1:

cqlsh:mykeyspace> select timestampasblob(atimestamp) from all_types_no_counter_no_duration ;

 system.timestampasblob(atimestamp)
-------------------------------------
 0x00000156dbc1a038
 0x00000156da4043e0
 0x00000156db720095
 0x00000156da4043e0
 0x00000156db6797a7

(5 rows)
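The identical blobs show the stored values are the same; only the client-side rendering differs. As a sanity check, a CQL timestamp is serialized as a signed 64-bit big-endian count of milliseconds since the Unix epoch, so the blob for the abigint = 2002 row can be decoded by hand (a minimal Python sketch, independent of cqlsh):

```python
import struct
from datetime import datetime, timezone

# CQL serializes a timestamp as a signed 64-bit big-endian integer:
# milliseconds since the Unix epoch. timestampasblob() exposes those bytes.
blob = bytes.fromhex("00000156db720095")  # row with abigint = 2002
(millis,) = struct.unpack(">q", blob)

# Split into whole seconds and leftover milliseconds to avoid float rounding.
secs, ms = divmod(millis, 1000)
dt = datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=ms * 1000)

print(millis)          # 1472560496789
print(dt.isoformat())  # 2016-08-30T12:34:56.789000+00:00
```

So the .789 milliseconds are present on disk, matching the Cassandra cqlsh output above.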

Using the Cassandra 3.10 cqlsh client with ScyllaDB allows you to see the millisec numbers

penberg commented 7 years ago

This is a cqlsh issue, which was fixed in Cassandra 3.4:

https://issues.apache.org/jira/browse/CASSANDRA-10428

We should backport the fix.
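The difference comes down to the strftime-style display format cqlsh applies before printing; CASSANDRA-10428 added sub-second precision to the default. A small sketch of the effect (the two format strings here are illustrative, not cqlsh's exact defaults):

```python
from datetime import datetime, timezone

# The timestamp from the abigint = 2002 row.
ts = datetime(2016, 8, 30, 12, 34, 56, 789000, tzinfo=timezone.utc)

# Without a sub-second field, the milliseconds silently disappear on display,
# matching the old-cqlsh behaviour seen in the Scylla 1.7.1 output.
print(ts.strftime("%Y-%m-%d %H:%M:%S%z"))     # 2016-08-30 12:34:56+0000

# With %f (microseconds) in the format, the .789 survives.
print(ts.strftime("%Y-%m-%d %H:%M:%S.%f%z"))  # 2016-08-30 12:34:56.789000+0000
```

Either way the stored value is untouched; only the rendering changes.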

tomer-sandler commented 7 years ago

Yes @penberg, as I wrote originally in the ticket's last line:

"Using the Cassandra 3.10 cqlsh client with ScyllaDB allows you to see the millisec numbers"

tzach commented 6 years ago

We should backport the fix.

@penberg did we? can we?