olavloite / spanner-jdbc

JDBC Driver for Google Cloud Spanner
MIT License

DELETE FROM WHERE <cond> fails #110

Closed: O1O1O1O closed this issue 5 years ago

O1O1O1O commented 5 years ago

I have code that does the following (from Scala with ScalikeJDBC):

      sql"""DELETE FROM account_owner
              WHERE client_id = $demandbaseAccountId
                AND day_generated < $dayGenerated""".update.apply()

But this fails with an error:

nl.topicus.jdbc.exception.CloudSpannerSQLException: INVALID_ARGUMENT: No matching signature for operator = for argument types: INT64, STRING. Supported signature: ANY = ANY [at 1:196]
...SALESFORCE_USER_ID` FROM `account_owner` WHERE client_id = @p1 AND day_gen...
    at nl.topicus.jdbc.resultset.CloudSpannerResultSet.callNextForInternalReasons(CloudSpannerResultSet.java:112)
    at nl.topicus.jdbc.resultset.CloudSpannerResultSet.<init>(CloudSpannerResultSet.java:64)
    at nl.topicus.jdbc.statement.CloudSpannerPreparedStatement.executeQuery(CloudSpannerPreparedStatement.java:149)
    at nl.topicus.jdbc.statement.AbstractTablePartWorker.isRecordCountGreaterThan(AbstractTablePartWorker.java:147)
    at nl.topicus.jdbc.statement.AbstractTablePartWorker.isExtendedMode(AbstractTablePartWorker.java:167)
    at nl.topicus.jdbc.statement.AbstractTablePartWorker.genericRun(AbstractTablePartWorker.java:80)
    at nl.topicus.jdbc.statement.AbstractTablePartWorker.call(AbstractTablePartWorker.java:67)
    at nl.topicus.jdbc.statement.AbstractCloudSpannerStatement.writeMutations(AbstractCloudSpannerStatement.java:213)
    at nl.topicus.jdbc.statement.CloudSpannerPreparedStatement.executeUpdate(CloudSpannerPreparedStatement.java:368)
    at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:98)
    at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:98)
    at scalikejdbc.DBConnectionAttributesWiredPreparedStatement.executeUpdate(DBConnectionAttributesWiredPreparedStatement.scala:79)
    at scalikejdbc.StatementExecutor$$anonfun$executeUpdate$1.apply$mcI$sp(StatementExecutor.scala:341)
    at scalikejdbc.StatementExecutor$$anonfun$executeUpdate$1.apply(StatementExecutor.scala:341)
    at scalikejdbc.StatementExecutor$$anonfun$executeUpdate$1.apply(StatementExecutor.scala:341)
    at scalikejdbc.StatementExecutor$NakedExecutor.apply(StatementExecutor.scala:18)
    at scalikejdbc.StatementExecutor$$anon$1.scalikejdbc$StatementExecutor$LoggingSQLAndTiming$$super$apply(StatementExecutor.scala:319)
    at scalikejdbc.StatementExecutor$LoggingSQLAndTiming$class.apply(StatementExecutor.scala:263)
    at scalikejdbc.StatementExecutor$$anon$1.scalikejdbc$StatementExecutor$LoggingSQLIfFailed$$super$apply(StatementExecutor.scala:319)
    at scalikejdbc.StatementExecutor$LoggingSQLIfFailed$class.apply(StatementExecutor.scala:296)
    at scalikejdbc.StatementExecutor$$anon$1.apply(StatementExecutor.scala:319)
    at scalikejdbc.StatementExecutor.executeUpdate(StatementExecutor.scala:341)
    at scalikejdbc.DBSession$$anonfun$updateWithFilters$1.apply(DBSession.scala:473)
    at scalikejdbc.DBSession$$anonfun$updateWithFilters$1.apply(DBSession.scala:471)
    at scalikejdbc.LoanPattern$class.using(LoanPattern.scala:18)
    at scalikejdbc.ActiveSession.using(DBSession.scala:705)
    at scalikejdbc.DBSession$class.updateWithFilters(DBSession.scala:470)
    at scalikejdbc.ActiveSession.updateWithFilters(DBSession.scala:705)
    at scalikejdbc.DBSession$class.updateWithFilters(DBSession.scala:445)
    at scalikejdbc.ActiveSession.updateWithFilters(DBSession.scala:705)
    at scalikejdbc.DBSessionWrapper$$anonfun$updateWithFilters$1.apply(DBSessionWrapper.scala:81)
    at scalikejdbc.DBSessionWrapper$$anonfun$updateWithFilters$1.apply(DBSessionWrapper.scala:81)
    at scalikejdbc.DBSessionWrapper$$anonfun$withAttributesSwitchedDBSession$1.apply(DBSessionWrapper.scala:35)
    at scalikejdbc.DBSessionWrapper$$anonfun$withAttributesSwitchedDBSession$1.apply(DBSessionWrapper.scala:34)
    at scalikejdbc.DBSessionAttributesSwitcher.withSwitchedDBSession(DBSessionAttributesSwitcher.scala:31)
    at scalikejdbc.DBSessionWrapper.withAttributesSwitchedDBSession(DBSessionWrapper.scala:34)
    at scalikejdbc.DBSessionWrapper.updateWithFilters(DBSessionWrapper.scala:81)
    at scalikejdbc.SQLUpdate.apply(SQL.scala:676)
    at com.demandbase.aa.dataservices.AccountOwnerSpannerClient$$anonfun$exists$1.apply(AccountOwnerSpannerClient.scala:57)
    at com.demandbase.aa.dataservices.AccountOwnerSpannerClient$$anonfun$exists$1.apply(AccountOwnerSpannerClient.scala:54)
    at scalikejdbc.DBConnection$$anonfun$autoCommit$1.apply(DBConnection.scala:232)
    at scalikejdbc.DBConnection$$anonfun$autoCommit$1.apply(DBConnection.scala:232)
    at scalikejdbc.LoanPattern$class.using(LoanPattern.scala:18)
    at scalikejdbc.NamedDB.using(NamedDB.scala:20)
    at scalikejdbc.DBConnection$class.autoCommit(DBConnection.scala:232)
    at scalikejdbc.NamedDB.autoCommit(NamedDB.scala:20)

Stepping through with the debugger, I found that this happens because, in extended mode, the driver generates a query in isRecordCountGreaterThan to determine the number of affected rows.

That statement looks like this:

SELECT COUNT(*) AS C FROM ((SELECT `account_owner`.`CLIENT_ID`, `account_owner`.`MATCHED_DOMAIN`, `account_owner`.`DAY_GENERATED`, `account_owner`.`SALESFORCE_USER_ID` FROM `account_owner` WHERE client_id = ? AND day_generated < ?)) Q

but for some reason the parameters in the WHERE clause have not been bound yet. I believe this is driver-related and not a ScalikeJDBC issue.
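
To illustrate: the count query itself is fine once it actually sees the same values as the DELETE. A rough sketch of what I'd expect the driver to do internally (simplified, names hypothetical; both columns are INT64, so the values are bound as longs):

    import java.sql.Connection

    // Hypothetical sketch: copy the DELETE's bound values onto the generated
    // count query before running it, so the parameters are typed INT64
    // instead of being left unset.
    def countMatchingRows(connection: Connection,
                          clientId: Long,
                          dayGenerated: Long): Long = {
      val countSql =
        "SELECT COUNT(*) AS C FROM ((SELECT `account_owner`.`CLIENT_ID` " +
          "FROM `account_owner` WHERE client_id = ? AND day_generated < ?)) Q"
      val count = connection.prepareStatement(countSql)
      try {
        count.setLong(1, clientId)     // same value bound to the DELETE
        count.setLong(2, dayGenerated) // same value bound to the DELETE
        val rs = count.executeQuery()
        if (rs.next()) rs.getLong("C") else 0L
      } finally count.close()
    }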

The schema of the table being deleted from is:

CREATE TABLE account_owner (
    client_id INT64 NOT NULL,
    day_generated INT64 NOT NULL,
    matched_domain STRING(255) NOT NULL,
    salesforce_user_id STRING(190),
    account_name STRING(190) NOT NULL,
    ingest_account_id STRING(30) NOT NULL,
) PRIMARY KEY (client_id, matched_domain, day_generated DESC, salesforce_user_id)

In my case I don't really need extended mode, so I turned it off and the DELETE works fine.
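
For anyone else hitting this: I had enabled extended mode through the connection URL, so turning it off was just a matter of flipping that property. A minimal sketch, assuming the driver's AllowExtendedMode connection property (project, instance, database, and key path below are placeholders):

    import java.sql.DriverManager

    // Placeholder project/instance/database/key path. With
    // AllowExtendedMode=false the driver no longer issues the count query
    // before a DML statement.
    val url =
      "jdbc:cloudspanner://localhost;Project=my-project;Instance=my-instance;" +
        "Database=my-db;PvtKeyPath=/path/to/key.json;AllowExtendedMode=false"
    val connection = DriverManager.getConnection(url)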

olavloite commented 5 years ago

DML statements with parameters that were run in extended mode did not work properly, as the parameter values were not applied to the count query that determines whether the statement should be partitioned. This has now been fixed.
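
In essence, the fix makes isRecordCountGreaterThan forward the parameter values of the original DML statement to the count statement. A simplified sketch of the idea (not the actual driver code; the shape of the parameter store is hypothetical):

    import java.sql.PreparedStatement

    // Simplified sketch of the fix: before executing the count query, every
    // parameter value bound on the original DML statement is copied onto the
    // count statement at the same index.
    def isRecordCountGreaterThan(parameters: Seq[(Int, AnyRef)],
                                 countStatement: PreparedStatement,
                                 limit: Long): Boolean = {
      parameters.foreach { case (index, value) =>
        countStatement.setObject(index, value) // forward the DML's parameters
      }
      val rs = countStatement.executeQuery()
      rs.next() && rs.getLong(1) > limit
    }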

olavloite commented 5 years ago

Fix released in version 1.1.1.