scorsi opened this issue 4 years ago
Redshift was never tested with Exposed and is not listed as a supported dialect. Which connection URL prefix do you use?
Hello, to be able to use Exposed with Redshift I used the postgresql prefix, since Redshift is based on Postgres 8 or 9.
The problem is that I currently have no way to run tests against a Redshift setup in AWS. Maybe that will be possible a bit later.
What I don't understand is that the ? placeholder appears to be the "thing not supported" in the request
INSERT INTO some.table (app_id, birthdate, birthdate_raw, contact_id, firstname, gender, income, income_raw, lastname, phone) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Yet the following code works fine with ? placeholders:
```kotlin
DriverManager.getConnection(
    "jdbc:redshift://some-redshift-cluster.eu-central-1.redshift.amazonaws.com:5439/somedb",
    "someuser",
    "somepassword"
).use { redshiftClient ->
    try {
        redshiftClient
            .prepareStatement("CREATE TABLE public.some_table (some_field VARCHAR(128), some_other_field INT);")
            .execute()
            .also { println("CREATE TABLE : $it") }
        redshiftClient
            .prepareStatement("INSERT INTO public.some_table VALUES (?, ?);")
            .apply {
                setString(1, "somevalue")
                setInt(2, 42)
            }
            .execute()
            .also { println("INSERT : $it") }
        redshiftClient
            .prepareStatement("SELECT * FROM public.some_table WHERE some_field = ?;")
            .apply {
                setString(1, "somevalue")
            }
            .executeQuery()
            .use { rs ->
                while (rs.next()) {
                    println("SELECT DATA: ${rs.getString("some_field")} ${rs.getString("some_other_field")}")
                }
            }
    } catch (e: Exception) {
        e.printStackTrace()
    } finally {
        redshiftClient
            .prepareStatement("DROP TABLE IF EXISTS public.some_table;")
            .execute()
            .also { println("DROP TABLE : $it") }
    }
}
```
Maybe the error message just isn't explicit enough and the "not supported feature" is actually something else? Does it come from the Redshift JDBC driver or from Exposed? I don't know. I tried to figure it out myself by reading the Exposed source code, but it's too heavy for a newcomer to that codebase. I don't fully understand how Exposed works and what it does with the JDBC driver and the SQL queries. Do you have any idea what is causing the problem here?
I switched from the Redshift JDBC driver to the official PostgreSQL JDBC driver compatible with Pgsql 8.0.2 (see: https://docs.aws.amazon.com/fr_fr/redshift/latest/dg/c_redshift-and-postgres-sql.html). And I got a better stack trace and errors!
[main] WARN Exposed - Transaction attempt #0 failed: org.postgresql.util.PSQLException: Returning autogenerated keys is only supported for 8.2 and later servers.. Statement(s): INSERT INTO reveal.contacts_first (app_id, birthdate, birthdate_raw, contact_id, firstname, gender, income, income_raw, lastname, phone) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
org.jetbrains.exposed.exceptions.ExposedSQLException: org.postgresql.util.PSQLException: Returning autogenerated keys is only supported for 8.2 and later servers.
SQL: [INSERT INTO reveal.contacts_first (app_id, birthdate, birthdate_raw, contact_id, firstname, gender, income, income_raw, lastname, phone) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)]
at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:50)
at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:122)
at org.jetbrains.exposed.sql.Transaction.exec(Transaction.kt:108)
at org.jetbrains.exposed.sql.statements.Statement.execute(Statement.kt:29)
at org.jetbrains.exposed.sql.QueriesKt.insert(Queries.kt:45)
at io.adfinitas.prometer.importProcess.Import$insertFirstContacts$1.invoke(Import.kt:127)
at io.adfinitas.prometer.importProcess.Import$insertFirstContacts$1.invoke(Import.kt:27)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$1.invoke(ThreadLocalTransactionManager.kt:156)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$2.invoke(ThreadLocalTransactionManager.kt:197)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:205)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.inTopLevelTransaction(ThreadLocalTransactionManager.kt:196)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$transaction$1.invoke(ThreadLocalTransactionManager.kt:134)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:205)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:106)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:104)
at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction$default(ThreadLocalTransactionManager.kt:103)
at io.adfinitas.prometer.importProcess.Import.insertFirstContacts(Import.kt:114)
at io.adfinitas.prometer.importProcess.Import.access$insertFirstContacts(Import.kt:27)
at io.adfinitas.prometer.importProcess.Import$doImport$1.invokeSuspend(Import.kt:78)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at arrow.typeclasses.MonadContinuation$bind$$inlined$suspendCoroutineUninterceptedOrReturn$lambda$1.invoke(MonadContinuations.kt:36)
at arrow.typeclasses.MonadContinuation$bind$$inlined$suspendCoroutineUninterceptedOrReturn$lambda$1.invoke(MonadContinuations.kt:15)
at arrow.fx.IO$flatMap$1.invoke(IO.kt:620)
at arrow.fx.IO$flatMap$1.invoke(IO.kt:50)
at arrow.fx.IORunLoop.loop(IORunLoop.kt:295)
at arrow.fx.IORunLoop.access$loop(IORunLoop.kt:21)
at arrow.fx.IORunLoop$RestartCallback.signal(IORunLoop.kt:414)
at arrow.fx.IORunLoop$RestartCallback.resumeWith(IORunLoop.kt:445)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
at kotlin.coroutines.ContinuationKt.startCoroutine(Continuation.kt:114)
at arrow.fx.IORunLoop$RestartCallback.start(IORunLoop.kt:402)
at arrow.fx.IORunLoop.loop(IORunLoop.kt:227)
at arrow.fx.IORunLoop.access$loop(IORunLoop.kt:21)
at arrow.fx.IORunLoop$RestartCallback.signal(IORunLoop.kt:414)
at arrow.fx.IORunLoop$RestartCallback.resumeWith(IORunLoop.kt:445)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
at kotlin.coroutines.ContinuationKt.startCoroutine(Continuation.kt:114)
at arrow.fx.IORunLoop$RestartCallback.start(IORunLoop.kt:402)
at arrow.fx.IORunLoop.loop(IORunLoop.kt:227)
at arrow.fx.IORunLoop.access$loop(IORunLoop.kt:21)
at arrow.fx.IORunLoop$RestartCallback.signal(IORunLoop.kt:414)
at arrow.fx.IORunLoop$RestartCallback.resumeWith(IORunLoop.kt:445)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:46)
at kotlin.coroutines.ContinuationKt.startCoroutine(Continuation.kt:114)
at arrow.fx.IORunLoop$RestartCallback.start(IORunLoop.kt:402)
at arrow.fx.IORunLoop.loop(IORunLoop.kt:227)
at arrow.fx.IORunLoop.access$loop(IORunLoop.kt:21)
at arrow.fx.IORunLoop$suspendAsync$1.invoke(IORunLoop.kt:145)
at arrow.fx.IORunLoop$suspendAsync$1.invoke(IORunLoop.kt:21)
at arrow.fx.IORunLoop$RestartCallback.start(IORunLoop.kt:397)
at arrow.fx.IORunLoop.loop(IORunLoop.kt:218)
at arrow.fx.IORunLoop.start(IORunLoop.kt:24)
at arrow.fx.IO.unsafeRunAsync(IO.kt:796)
at arrow.fx.internal.Platform.unsafeResync(Utils.kt:156)
at arrow.fx.IO$Async.unsafeRunTimedTotal$arrow_fx(IO.kt:1017)
at arrow.fx.IO.unsafeRunTimed(IO.kt:862)
at arrow.fx.IO.unsafeRunSync(IO.kt:851)
at io.adfinitas.prometer.importProcess.MainKt.handleOrder(Main.kt:59)
at io.adfinitas.prometer.importProcess.MainKt.main(Main.kt:68)
at io.adfinitas.prometer.importProcess.MainKt.main(Main.kt)
Caused by: org.postgresql.util.PSQLException: Returning autogenerated keys is only supported for 8.2 and later servers.
at org.postgresql.jdbc3.AbstractJdbc3Statement.addReturning(AbstractJdbc3Statement.java:151)
at org.postgresql.jdbc3.AbstractJdbc3Connection.prepareStatement(AbstractJdbc3Connection.java:362)
at com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:323)
at com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
at org.jetbrains.exposed.sql.statements.jdbc.JdbcConnectionImpl.prepareStatement(JdbcConnectionImpl.kt:54)
at org.jetbrains.exposed.sql.statements.InsertStatement.prepared(InsertStatement.kt:137)
at org.jetbrains.exposed.sql.statements.Statement.executeIn$exposed_core(Statement.kt:48)
... 59 more
As we can see, Exposed uses the RETURNING keyword/feature, which is not supported in Pgsql before 8.2 (and since Redshift is based on 8.0.2, it doesn't support it either).
Do you know if it's possible to disable that feature to make Redshift compatible with Exposed? :)
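To make the difference between the working vanilla-JDBC snippet and the failing Exposed insert concrete, here is a minimal stdlib-only sketch (table name and values are just reused from the snippet above). The one-argument prepareStatement overload never asks the server for generated keys, while the overload taking Statement.RETURN_GENERATED_KEYS is what a pre-8.2 pgjdbc driver implements by appending a RETURNING clause, producing the exact error from the stack trace:

```kotlin
import java.sql.Connection
import java.sql.Statement

// The plain overload: no generated keys requested, so the driver sends the
// INSERT as-is. This is what the working vanilla-JDBC snippet above does.
fun insertPlain(conn: Connection) {
    conn.prepareStatement("INSERT INTO public.some_table VALUES (?, ?)").use { ps ->
        ps.setString(1, "somevalue")
        ps.setInt(2, 42)
        ps.execute()
    }
}

// The key-returning overload: on a pre-8.2 PostgreSQL server this triggers
// "Returning autogenerated keys is only supported for 8.2 and later servers",
// because the driver implements key retrieval via an appended RETURNING clause.
fun insertRequestingKeys(conn: Connection) {
    conn.prepareStatement(
        "INSERT INTO public.some_table VALUES (?, ?)",
        Statement.RETURN_GENERATED_KEYS
    ).use { ps ->
        ps.setString(1, "somevalue")
        ps.setInt(2, 42)
        ps.execute()
    }
}
```

So the `?` placeholders were never the problem; the request for generated keys was.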
I think I found where the INSERT queries are built with the RETURNING keyword/feature. In org.jetbrains.exposed.sql.statements.InsertStatement:137, we have:
```kotlin
override fun prepared(transaction: Transaction, sql: String): PreparedStatementApi = when {
    // https://github.com/pgjdbc/pgjdbc/issues/1168
    // Column names always escaped/quoted in RETURNING clause
    autoIncColumns.isNotEmpty() && currentDialect is PostgreSQLDialect ->
        transaction.connection.prepareStatement(sql, true)
    autoIncColumns.isNotEmpty() ->
        // http://viralpatel.net/blogs/oracle-java-jdbc-get-primary-key-insert-sql/
        transaction.connection.prepareStatement(sql, autoIncColumns.map { it.name.inProperCase() }.toTypedArray())
    else ->
        transaction.connection.prepareStatement(sql, true)
}
```
The else branch of the when passes true as the returnKeys parameter (which is the feature missing in Redshift).
We have the following possibilities:
- Add a branch autoIncColumns.isEmpty() -> transaction.connection.prepareStatement(sql, false), or directly modify the else branch to else -> transaction.connection.prepareStatement(sql, false)
- Add a RedshiftDialect copied from PostgreSQLDialect (I have tested it and it works like a charm: Database.registerDialect("redshift") { RedshiftDialect() }) and then add a condition currentDialect is RedshiftDialect -> transaction.connection.prepareStatement(sql, false)
So here is what I'm proposing:
```kotlin
override fun prepared(transaction: Transaction, sql: String): PreparedStatementApi = when {
    // https://github.com/JetBrains/Exposed/issues/711
    // Redshift does not support the RETURNING keyword/feature
    currentDialect is RedshiftDialect ->
        transaction.connection.prepareStatement(sql, false) /// MODIFICATION HERE
    // https://github.com/pgjdbc/pgjdbc/issues/1168
    // Column names always escaped/quoted in RETURNING clause
    autoIncColumns.isNotEmpty() && currentDialect is PostgreSQLDialect ->
        transaction.connection.prepareStatement(sql, true)
    autoIncColumns.isNotEmpty() ->
        // http://viralpatel.net/blogs/oracle-java-jdbc-get-primary-key-insert-sql/
        transaction.connection.prepareStatement(sql, autoIncColumns.map { it.name.inProperCase() }.toTypedArray())
    else ->
        transaction.connection.prepareStatement(sql, false) /// MODIFICATION HERE
}
```
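To make the proposed branching easier to reason about, here is a stdlib-only sketch that extracts the decision into a pure function. Dialect and PrepareMode are illustrative stand-ins I made up for this sketch, not Exposed's real types:

```kotlin
// Stand-in for Exposed's dialect classes, for illustration only.
enum class Dialect { POSTGRESQL, REDSHIFT, OTHER }

// Stand-in for the three ways a statement can be prepared.
sealed class PrepareMode {
    object NoKeys : PrepareMode()                        // returnKeys = false
    object ReturnAllKeys : PrepareMode()                 // returnKeys = true
    data class ReturnColumns(val names: List<String>) : PrepareMode()
}

fun prepareMode(dialect: Dialect, autoIncColumns: List<String>): PrepareMode = when {
    // Redshift (Postgres 8.0-based) rejects RETURNING, so never request keys.
    dialect == Dialect.REDSHIFT -> PrepareMode.NoKeys
    // pgjdbc quotes column names in RETURNING, so request all generated keys.
    autoIncColumns.isNotEmpty() && dialect == Dialect.POSTGRESQL -> PrepareMode.ReturnAllKeys
    // Other dialects can name the generated-key columns explicitly.
    autoIncColumns.isNotEmpty() -> PrepareMode.ReturnColumns(autoIncColumns)
    // With no auto-increment columns there is nothing to return anyway.
    else -> PrepareMode.NoKeys
}
```

The Redshift branch is checked first, so it wins even when auto-increment columns are present, which is the point of the proposed modification.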
Might that fix the problem? Did I understand it correctly?
I'm not sure it's a good solution, as you then can't use the DAO and insertAndGetId-like functions.
Could you please test your PR locally by running SamplesDao.kt with the redshift connector?
Looking at this, we can see that Redshift is too far from standard SQL and from PostgreSQL, and making it compatible with Exposed may be complicated:
I just tried all the tests in SamplesSQL.kt and SamplesDao.kt, but the more issues I fixed, the more new ones appeared... I made too many changes in the test files and other parts of Exposed without being able to make the tests pass.
Redshift is definitely too hard to make compatible with Exposed without a lot of changes and missing features inside the core of Exposed itself:
- CREATE TABLE has to be overridable by the dialect, so RedshiftDialect could remove ON DELETE (CASCADE/RESTRICT)/ON UPDATE (CASCADE/RESTRICT) for foreign key constraints.
- An alternative to RETURNING is needed (which is too complicated); the only available options are SELECT MAX(id) FROM schema.table; (which is absolutely unsafe) or generating ids inside Exposed instead of letting Redshift auto-generate them, e.g. using UUIDs (which is not a solution but an alternative).

Btw, I think you'll agree with me: supporting Redshift may be too complicated for Exposed, and my PR will not resolve all the issues with Redshift... It's long, hard work that implies a lot of changes... It requires a big workload to support Redshift.
We may close this issue and the PR #714, unless someone has the bandwidth to handle it... I will stop using Exposed with Redshift and go back to standalone/vanilla JDBC.
Thank you
@scorsi thank you for spending time on a deep investigation and attempting to work around the Redshift issues. I will leave this issue as is, in the hope that Amazon will improve Redshift and the JDBC driver as well.
Hello guys,
I'm using Exposed with Amazon Redshift on AWS and got exceptions when inserting data.
The code is as simple as that (I'm using a forEach on insert instead of batchInsert, because I have another issue with batchInsert which I will report on GitHub if this issue gets fixed :) ):
Here is the exception thrown:
My Gradle config (I'm using HikariCP with Exposed):
Do you have any idea how I can bypass that feature or fix that exception?
Thank you