ffarfan opened 9 years ago
@ffarfan Not really a bug - this is by design. Your last sentence sums up your two options.
There is a third option, but it moves you into having to pack the composites by hand via annotations. This blog post we wrote is a bit dated, but it describes what is going on in detail: http://thelastpickle.com/blog/2013/09/13/CQL3-to-Astyanax-Compatibility.html
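Roughly, that by-hand packing maps the table's clustering columns onto an annotated composite class. A minimal sketch (the table layout and names below are illustrative, assuming a primary key of `(key, event_type, ts)`, not taken from this issue):

```java
import com.netflix.astyanax.annotations.Component;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.serializers.AnnotatedCompositeSerializer;
import com.netflix.astyanax.serializers.StringSerializer;

// Composite column name for a CQL3 table with PRIMARY KEY (key, event_type, ts):
// one @Component per clustering column, in declaration order.
public class EventColumn {
    @Component(ordinal = 0)
    public String eventType;

    @Component(ordinal = 1)
    public Long timestamp;

    public EventColumn() {}  // required by the serializer

    public EventColumn(String eventType, Long timestamp) {
        this.eventType = eventType;
        this.timestamp = timestamp;
    }

    // The serializer does the packing/unpacking of the composite prefix.
    public static final AnnotatedCompositeSerializer<EventColumn> SERIALIZER =
            new AnnotatedCompositeSerializer<EventColumn>(EventColumn.class);

    // Column family handle keyed by the annotated composite.
    public static final ColumnFamily<String, EventColumn> CF_EVENTS =
            new ColumnFamily<String, EventColumn>(
                    "events", StringSerializer.get(), SERIALIZER);
}
```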
@zznate thanks for the prompt reply. I'll take a look at the blog post you shared.
@zznate, so if I understood correctly, the rationale for the exception being thrown is to discourage batch inserts in CQL3 because they are often a performance bottleneck?
@Kurt-von-Laven not so much to discourage. It has to do with how CQL table typing works under the hood - specifically the use of composite prefixes as column metadata. Thrift can read composite columns just fine; they just need to be extracted by hand. That particular exception is Thrift complaining because it hit a composite column prefix where it expected the column name to be a single byte stream, since Thrift has no concept of the metadata needed to parse the prefix out.
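To make the "extracted by hand" part concrete, here is a sketch of reading those composite-prefixed cells back through the Thrift path, reusing the illustrative `EventColumn` class from the earlier snippet (the row key and field names are assumptions):

```java
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.model.Column;
import com.netflix.astyanax.model.ColumnList;

// Each logical CQL3 column comes back as one Thrift cell whose name is the
// packed composite prefix; the annotated serializer unpacks it per cell.
void dumpRow(Keyspace keyspace, String rowKey) throws Exception {
    ColumnList<EventColumn> columns = keyspace
            .prepareQuery(EventColumn.CF_EVENTS)
            .getKey(rowKey)
            .execute()
            .getResult();

    for (Column<EventColumn> column : columns) {
        EventColumn name = column.getName();  // the unpacked composite prefix
        System.out.printf("type=%s ts=%d value=%s%n",
                name.eventType, name.timestamp, column.getStringValue());
    }
}
```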
That said...
If what you are looking to do is constant-sized batch inserts (like event streams/time-series workloads, particularly where you pop a consistent number of messages off a queue for insertion), Thrift will be significantly more performant, and COMPACT STORAGE will save storage overhead by removing the composite prefixing. Indeed, this is a use case that brought a lot of us to Cassandra in the first place.
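As a sketch, a time-series table in that vein (table and column names are illustrative) would look something like:

```sql
-- Illustrative time-series table: WITH COMPACT STORAGE drops the CQL3
-- composite prefixing, so Thrift sees plain cells and each column carries
-- less storage overhead. (Compact tables allow only one non-key column.)
CREATE TABLE events (
    sensor_id text,
    ts        timestamp,
    payload   text,
    PRIMARY KEY (sensor_id, ts)
) WITH COMPACT STORAGE;
```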
Maintaining a standard CQL pool for day-to-day CRUD and a small pool of Thrift connections for large, shaped batch inserts (or wide reads for that matter) is a perfectly legitimate setup.
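One possible wiring of that setup, purely as a sketch (the choice of the native-protocol driver for the CQL side, plus all names, seeds, and pool sizes, are assumptions, not from this thread):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

// Day-to-day CRUD over the native CQL protocol.
Cluster cqlCluster = Cluster.builder().addContactPoint("127.0.0.1").build();
Session cqlSession = cqlCluster.connect("my_keyspace");

// A deliberately small Thrift pool reserved for shaped batch inserts / wide reads.
AstyanaxContext<Keyspace> thriftContext = new AstyanaxContext.Builder()
        .forCluster("MyCluster")
        .forKeyspace("my_keyspace")
        .withAstyanaxConfiguration(new AstyanaxConfigurationImpl())
        .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("BatchPool")
                .setPort(9160)
                .setMaxConnsPerHost(2)  // small, dedicated pool
                .setSeeds("127.0.0.1:9160"))
        .buildKeyspace(ThriftFamilyFactory.getInstance());
thriftContext.start();
Keyspace batchKeyspace = thriftContext.getClient();
```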
While trying to apply a batch of inserts using `MutationBatch`, we get the following exception:

This is the definition of our test table created for this minimal example:
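As an illustrative stand-in (not the actual table from this report), any CQL3 table created without COMPACT STORAGE reproduces the situation:

```sql
-- Illustrative stand-in: a CQL3 table without COMPACT STORAGE stores its
-- cells with composite prefixes, which is what trips the Thrift write path.
CREATE TABLE test_keyspace.test_table (
    key     text,
    column1 text,
    value   text,
    PRIMARY KEY (key, column1)
);
```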
This is the code we use to define our context and column family:
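A representative Astyanax setup for this (cluster name, seeds, and pool settings below are placeholders):

```java
import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.serializers.StringSerializer;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
        .forCluster("TestCluster")
        .forKeyspace("test_keyspace")
        .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
                .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE))
        .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("TestPool")
                .setPort(9160)
                .setMaxConnsPerHost(1)
                .setSeeds("127.0.0.1:9160"))
        .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
        .buildKeyspace(ThriftFamilyFactory.getInstance());

context.start();
Keyspace keyspace = context.getClient();

// Column family handle over the test table, with plain string column names.
ColumnFamily<String, String> CF_TEST = new ColumnFamily<String, String>(
        "test_table",
        StringSerializer.get(),   // row key serializer
        StringSerializer.get());  // column name serializer
```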
and the tiny method that we use to add rows to the `MutationBatch`:
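Sketched with a hypothetical helper (`addRow` and the row/column values are placeholder names, reusing `CF_TEST` and `keyspace` from the setup sketch above):

```java
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.MutationBatch;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;

// Hypothetical stand-in for the method: queue one row into the batch.
void addRow(MutationBatch batch, String rowKey, String columnName, String value) {
    batch.withRow(CF_TEST, rowKey)
         .putColumn(columnName, value, null);  // null = no TTL
}

// Executing against the non-compact CQL3 table is where the exception surfaces.
void insertBatch(Keyspace keyspace) throws ConnectionException {
    MutationBatch batch = keyspace.prepareMutationBatch();
    addRow(batch, "row-1", "col-1", "val-1");
    batch.execute();  // throws here
}
```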
After all this, our application throws the exception when we execute the batch.
We can work around the issue by using `ColumnFamilyQuery.useCql` instead, or stick with `MutationBatch` but recreate the table `WITH COMPACT STORAGE`.
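For completeness, the first workaround routed through Astyanax's CQL support looks roughly like this (the statement and values are illustrative):

```java
// Workaround (a): issue the insert as CQL instead of a Thrift mutation.
keyspace.prepareQuery(CF_TEST)
        .withCql("INSERT INTO test_table (key, column1, value) " +
                 "VALUES ('row-1', 'col-1', 'val-1')")
        .execute();

// Workaround (b): recreate the table WITH COMPACT STORAGE and keep using
// MutationBatch as-is.
```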