Based on a quick analysis, I believe this behaviour is consistent with the docs for 1.11.0, @paayers. The current docs indicate that -ttl sets cell-level time-to-live values, which is exactly what you see in your first example. I'll also note that the mapping form described there (i.e. something of the form "ttl(colname)") is exactly what is used in the data file supplied to the first example. Finally, that flag is ignored if -query is supplied, which (presumably) is why you aren't seeing cell-level values in your second example.
In the second example you're explicitly using "USING TIMESTAMP ... AND TTL" in the CQL for that query. Apparently CQL written that way sets a row-level TTL rather than cell-specific values... which makes sense.
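For illustration, here's a minimal sketch of a statement of that shape (hypothetical keyspace, table, and bind-variable names, not the customer's actual query); the single USING clause applies one timestamp and one TTL to every cell the insert writes:

-- Illustrative sketch only: hypothetical keyspace/table/bind-variable names.
-- The single USING clause covers the whole row insert, so every cell written
-- by this statement gets the same timestamp and TTL ("row-level" behaviour).
INSERT INTO ks.tbl (pk, version_key, other_col)
VALUES (:pk, :version_key, :other_col)
USING TIMESTAMP :ts AND TTL :ttl_secs;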
I'm unaware of any change in 1.11.0 which would cause this, but I'll double-check to make sure I didn't miss anything there. That said, if the question is whether this behaviour changed between an older version and the current one, it would be useful to know which older version the customer observed the differing behaviour on.
I think I have a more detailed explanation for you @paayers, which might help clarify things for your customer.
In your first example above the customer explicitly specifies writetime and TTL values for at least some columns via fields in their CSV file. Let's take "version_key" as a simple example; note that the CSV snippet in that first example (what I'm calling the "Failing" example) includes this string:
|version_key|writetime(version_key)|ttl(version_key)
These correspond to per-column writetime and TTL values, as discussed in the dsbulk settings docs. After some digging it became clear to me that when dsbulk is presented with column-specific writetime or TTL values, it actually modifies the CQL it uses to insert values: it creates a distinct INSERT statement for each column, with that column's writetime and/or TTL supplied via the usual "USING" keyword. For reference, the relevant section of the code is here.
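To make that concrete, here's a rough sketch of the shape those generated statements take (hypothetical keyspace, table, and bind-variable names; the exact statements dsbulk builds will differ):

-- Rough sketch only (hypothetical names): one INSERT per regular column, each
-- carrying that column's own writetime and TTL via its USING clause.
INSERT INTO ks.tbl (pk, version_key)
VALUES (:pk, :version_key)
USING TIMESTAMP :version_key_writetime AND TTL :version_key_ttl;

INSERT INTO ks.tbl (pk, other_col)
VALUES (:pk, :other_col)
USING TIMESTAMP :other_col_writetime AND TTL :other_col_ttl;

Each statement stamps only the cells it writes, which is consistent with the cell-level TTL info you see in the sstabledump from the first example.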
A quick look at git history suggests that this code came in with this commit and was first included as part of the 1.8.0 release. I'm not sure which older version the customer is comparing 1.11.0 to, but if it's older than 1.8.0 then there likely is a change in behaviour from what they expect.
Perfect, thanks for the explanation on this. We should be able to modify the inserts accordingly to resolve this one.
Sounds good, thanks @paayers! I'm going to close this issue out for now. We can re-open if new information suggests there's an actual behavioural bug here.
A customer ran into this issue recently: they used a process they had used with a previous version of DSBulk and got different results. When unloading a table with '-ttl true' and then loading it into another cluster with '-ttl true', we weren't seeing the partition-level TTL; instead the columns were being expired and the PK could still be selected after the TTL, so we had to come back and insert with --schema.queryTtl to get rid of the PKs. I'm not sure whether this is expected behavior for DSBulk, so I'll outline the test case below.
Given the following header and row, it's easy to see what happened:
The DSBulk load is:
After flushing, the sstabledump looks like the following, with cell-level TTLs:
If I pull the parentheses out of the header and use the following file and load command instead, I get the expected result, where the 'expires_at' info is at the partition level.
file:
DSBulk load:
sstable dump:
If I run that batch after an initial insert was done using the '-ttl true' flag, then after a compaction we lose the partition-level TTL, so after expiry we can still select the PK and all other columns show null. I was under the impression that using the -ttl flag when unloading/loading an entire table, or even a partition, would apply the partition-level TTL.
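To illustrate the symptom with a hypothetical table (not the customer's actual schema), a post-expiry select looks roughly like this:

-- Hypothetical illustration of the symptom described above: the regular cells
-- have expired via their cell-level TTLs, but the key is still returned.
SELECT pk, col_a, col_b FROM ks.tbl WHERE pk = 1;

--  pk | col_a | col_b
-- ----+-------+-------
--   1 |  null |  null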
Please let me know if this is a bug with the latest version or if something changed and this is now expected behavior.