I have encountered an issue where the schema includes more columns than my actual data, and reading throws an error that says as much:
```
The number of columns in CSV/parquet file is not equal to the number of fields in Spark StructType. Either modify the attributes in manifest to make it equal to the number of columns in CSV/parquet files or modify the csv/parquet file
```
I read through the documentation, including the unsupported scenarios, and as far as I understood, the case where the schema specifies more columns than the actual data contains is not supported. Am I missing something in the documentation, or am I doing something wrong? Is there a workaround?
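For context, the behavior I was expecting is roughly what plain Spark's CSV reader does in PERMISSIVE mode: when a row has fewer columns than the schema declares, the missing trailing fields come back as null. A minimal sketch of that padding logic in plain Python (not the connector's API; `pad_row` and the sample schema are hypothetical names for illustration):

```python
def pad_row(row, schema_fields):
    """Pad a parsed CSV row with None so it matches the schema width,
    mimicking Spark's PERMISSIVE handling of short rows."""
    if len(row) > len(schema_fields):
        raise ValueError("row has more columns than the schema")
    return row + [None] * (len(schema_fields) - len(row))

# Hypothetical example: the manifest declares 3 attributes,
# but the CSV file only carries 2 columns per row.
schema = ["id", "name", "created_on"]
row = ["42", "alice"]

print(pad_row(row, schema))  # missing field padded with None
```

If the connector applied this kind of padding instead of raising, the schema/data width mismatch would be tolerated the way it is in vanilla `spark.read.csv` with an explicit schema.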
Environment:
- spark-cdm-connector 0.19.1
- Databricks 6.4
- Spark 2.4.5