khaledh opened this issue 8 years ago
I guess we need another layer of try-finally in the `finally` block to log and ignore exceptions thrown there, so that we don't mask the original cause/exception.
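Something along these lines, as a rough sketch (the `withCleanup` helper, connection handling, and logger are illustrative, not the actual structure of `RedshiftWriter`):

```scala
import java.sql.{Connection, SQLException}
import org.slf4j.LoggerFactory

// Hypothetical helper: run the staged-write transaction, then drop the
// staging table without letting a cleanup failure mask the real error.
def withCleanup(conn: Connection, tempTable: String)(body: => Unit): Unit = {
  val log = LoggerFactory.getLogger("RedshiftWriter")
  try {
    body // the staged-write transaction
  } finally {
    try {
      conn.prepareStatement(s"DROP TABLE IF EXISTS $tempTable").execute()
    } catch {
      // Log and swallow, so a failed cleanup doesn't replace the
      // original exception thrown from the body above.
      case e: SQLException =>
        log.warn(s"Failed to drop staging table $tempTable; ignoring", e)
    }
  }
}
```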
It's a good idea to log the original exception, for sure. I was also hoping for a solution to the main problem, which is: how can I use spark-redshift to overwrite a table that has dependencies? I guess there's no easy way around this.
One solution would be to not overwrite, but to truncate the table then append to it. But this could lead to data loss if the append doesn't complete. It also leaves a window of time where the table would be unavailable to other queries.
Not a solution to your problem necessarily, but I think the changes in #157 should have addressed the silent exception loss that made this ticket harder to debug.
Hitting same issue here. Is it possible to somehow pass a parameter to automatically do the cascading delete when this situation crops up?
http://docs.aws.amazon.com/redshift/latest/dg/r_DROP_TABLE.html
I tried using the new `preactions` to manually drop the table with CASCADE, but it seems that preactions happen after the table is already created.
So the only solution I found other than forking or managing a separate Redshift connection was to save to a temp table, do the drop cascade as a preaction and a rename of the table as a postaction:
.option("preactions", s"DROP TABLE IF EXISTS $dbtable CASCADE;") .option("dbtable", s"$dbtable$tempTableSuffix") .option("postactions", s"ALTER TABLE $dbtable$tempTableSuffix RENAME TO $dbtable;")
As awesome as this is, I'd love to see either preactions happen before the table is created, or a separate feature for CASCADE. Views can be created by any random person in Redshift, potentially causing this error.
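For completeness, a rough sketch of what the full write looks like with that workaround (the format name, `df`, `jdbcUrl`, `dbtable`, `tempTableSuffix`, and the S3 `tempdir` are placeholders/assumptions):

```scala
import org.apache.spark.sql.SaveMode

val stagingTable = s"$dbtable$tempTableSuffix"

df.write
  .format("com.databricks.spark.redshift")
  .option("url", jdbcUrl)
  .option("tempdir", "s3n://my-bucket/tmp/")
  // Drop the real table (and, with CASCADE, its dependent views) up front...
  .option("preactions", s"DROP TABLE IF EXISTS $dbtable CASCADE;")
  // ...write into a staging table instead...
  .option("dbtable", stagingTable)
  // ...then swap the staging table into place.
  .option("postactions", s"ALTER TABLE $stagingTable RENAME TO $dbtable;")
  .mode(SaveMode.Overwrite)
  .save()
```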
Another way to avoid this problem would be to have spark-redshift truncate the existing table and then load new rows, rather than dropping and re-creating the table. The current behavior in spark-redshift matches how Spark's built-in JDBC data source behaves (i.e. Spark also drops and re-creates the table), but there are multiple JIRA tickets proposing to use TRUNCATE there as well:
(/cc @marmbrus @rxin @dongjoon-hyun)
Thank you for pinging me, @JoshRosen.
SPARK-16410 is trying to add `SaveMode.Truncate`.
SPARK-16463 is trying to add a `truncate` option to `SaveMode.Overwrite`:
https://issues.apache.org/jira/browse/SPARK-16463 (https://github.com/apache/spark/pull/14086)
I think SPARK-16463 is the fastest way to support the TRUNCATE feature with minimal change. Also, the PR is ready.
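If that lands, usage against Spark's built-in JDBC source would presumably look something like the sketch below. This is the JDBC source, not spark-redshift; `df`, `jdbcUrl`, and the credentials are placeholders:

```scala
import java.util.Properties
import org.apache.spark.sql.SaveMode

val props = new Properties()
props.setProperty("user", "my_user")
props.setProperty("password", "my_password")

// With the proposed option, Overwrite issues TRUNCATE instead of
// DROP TABLE / CREATE TABLE, so dependent objects are left alone.
df.write
  .mode(SaveMode.Overwrite)
  .option("truncate", "true")
  .jdbc(jdbcUrl, "my_table", props)
```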
Facing the same issue.
Everything was totally fine until analysts created views on top of the tables; now all Overwrite mode transfers are failing because the table cannot be dropped.
Let's say we have this scenario: there is a materialized view connected to the table. If I try to overwrite the table, I get an error due to the dependencies, because under the hood the table is dropped and re-created. If I drop the table with CASCADE in a preaction, the view will also be dropped, which is not acceptable since the BI tool is connected to the view and it would cause a bad user experience. I need to refresh the data every 15 minutes. Is there any way to write into the tables with Spark without affecting the views?
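One possibility with the existing connector, sketched under the assumption that the table's schema is stable: truncate in a preaction and append instead of overwriting, so the table is never dropped and the view bindings survive. Untested; `df`, `jdbcUrl`, and the table/tempdir names are placeholders:

```scala
import org.apache.spark.sql.SaveMode

df.write
  .format("com.databricks.spark.redshift")
  .option("url", jdbcUrl)
  .option("tempdir", "s3n://my-bucket/tmp/")
  .option("dbtable", "my_table")
  // TRUNCATE empties the table without dropping it, so dependent views
  // (including materialized views) keep their bindings. Note that
  // Redshift's TRUNCATE commits immediately, so readers can briefly
  // see an empty table while the append is in flight.
  .option("preactions", "TRUNCATE my_table;")
  .mode(SaveMode.Append)
  .save()
```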
When using `overwrite` mode to save data to a table, while also leaving `usestagingtable` at its default value of `true`, the operation fails with the following error when the target table already has dependencies (e.g. a view depends on the table):

I tracked this error down to the following code in `RedshiftWriter.scala`:

When trying this transaction manually in SQL Workbench, I get the following error:

I was hoping that `spark-redshift` would let this error (which is the actual culprit) bubble up when it happens, but instead I get the error I mentioned in the beginning. This is happening because the original exception is masked by another exception thrown by the `DROP TABLE IF EXISTS` in the `finally` block, which fails because the transaction is in a bad state at that point, giving the error message `Invalid operation: current transaction is aborted, commands ignored until end of transaction block`.

I'm not sure what the best solution is in this case. I'm open to suggestions.