From browsing the code (but correct me if I'm missing something), it seems spark-redshift assumes the column structure won't change. I would expect spark-redshift to automagically run ALTERs where possible (e.g. for column additions). What do you think?
Yes, I think the connector should get this capability. We are doing some complex ETL on log files and the parameters keep changing from time to time; right now we use external tools to run the ALTER TABLEs and then load.
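For the column-addition case, the core logic could be a simple diff between the incoming schema and the existing table's columns. A minimal sketch (the function name and the `(name, type)` pair representation here are hypothetical, not part of spark-redshift's actual API):

```python
# Hypothetical sketch, not spark-redshift's real implementation: compare an
# incoming DataFrame schema against the columns already in the Redshift
# table and build ALTER TABLE ... ADD COLUMN statements for the additions.
# Drops and renames are deliberately excluded -- they are destructive and
# much harder to automate safely.
def missing_column_alters(table, incoming, existing):
    """incoming/existing: lists of (column_name, redshift_type) pairs."""
    known = {name.lower() for name, _ in existing}
    return [
        f"ALTER TABLE {table} ADD COLUMN {name} {col_type}"
        for name, col_type in incoming
        if name.lower() not in known
    ]

existing = [("id", "BIGINT"), ("event", "VARCHAR(256)")]
incoming = existing + [("user_agent", "VARCHAR(1024)")]
print(missing_column_alters("logs", incoming, existing))
# → ['ALTER TABLE logs ADD COLUMN user_agent VARCHAR(1024)']
```

The connector could run the generated statements before the COPY step, so loads keep working when new fields show up in the logs.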