apache / seatunnel

SeaTunnel is a next-generation super high-performance, distributed, massive data integration tool.
https://seatunnel.apache.org/
Apache License 2.0

[Bug] [Sink] Bug Hive insert error org.apache.spark.sql.execution.QueryExecutionException: Parquet column cannot be converted in file hdfs:/db/xx/dt_mon=xxxx/qwxsadas12321321.parquet. Column: [xxx ], Expected: decimal(12,2), Found: FIXED_LEN_BYTE_ARRAY #6750

Open AdkinsHan opened 4 months ago

AdkinsHan commented 4 months ago

What happened

   Hive {
     sql = """
     insert overwrite table datadwd.xxx partition(dt_year)
     select * from xxxx
     """
   }

SeaTunnel Version

2.3.4

SeaTunnel Config

env {
  execution.parallelism = 4
  job.mode = "BATCH"
  # job.name has no effect on the Spark engine; it has to be set separately in the shell script, so it is not used here
  spark.sql.catalogImplementation = "hive"
}

source {
    Jdbc  {
        # in version 2.3.4 this option is named query
        url = "jdbc:oracle:thin:@xxxxxx:1521:xxxxDB"
        driver  = "oracle.jdbc.OracleDriver"
        user = "xxxx"
        password = "xxxx"
        query = """
            select xxxx from dual
        """
        result_table_name = "dddd_mm"
    }
}

transform {

}

sink {
   Hive {
     # standard usage: table_name and metastore_uri are required
     table_name = "data.dddd_mm"
     metastore_uri = "thrift://xxxx:9083"
   }
}

Running Command

sh /data/seatunnel/seatunnel-2.3.4/bin/start-seatunnel-spark-3-connector-v2.sh \
  --master yarn \
  --deploy-mode cluster \
  --queue xxxx \
  --executor-instances 2 \
  --executor-cores 6 \
  --executor-memory 6g \
  --name "h010-xxxxx" \
  --config /data/ghyworkbase/seatunnel/H02-01-ODS_CONF-2.3.4/h010-xxxxx.conf

Error Exception

org.apache.kyuubi.KyuubiSQLException: org.apache.kyuubi.KyuubiSQLException: Error operating ExecuteStatement:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 847.0 failed 4 times, most recent failure: Lost task 3.3 in stage 847.0 (TID 14569) (datalake136.ghy.com.cn executor 1):
org.apache.spark.sql.execution.QueryExecutionException: Parquet column cannot be converted in file hdfs://xxxx/user/hive/warehouse/xxx.db/xxx/dt_mon=2024-04/xx.parquet. Column: [xx], Expected: decimal(12,2), Found: FIXED_LEN_BYTE_ARRAY
    at org.apache.spark.sql.errors.QueryExecutionErrors$.unsupportedSchemaColumnConvertError(QueryExecutionErrors.scala:706)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:278)
    at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:116)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator. ...

Zeta or Flink or Spark Version

spark-3.3.0

Java or Scala Version

/jdk/jdk1.8.0_341

Screenshots

(screenshot attached)

CheneyYin commented 4 months ago

Please provide the schemas of the Oracle table and the Hive table.

AdkinsHan commented 4 months ago

Hive table below; the Oracle table has the same columns:

-- drop table if exists ghydata.ods_webcsmsgl_ecsms_visit_info_mi;
CREATE TABLE IF NOT EXISTS ghydata.ods_webcsmsgl_ecsms_visit_info_mi (
  isvisit string COMMENT '',
  clwz string COMMENT '',
  content string COMMENT '',
  plan string COMMENT ' ',
  cuser string COMMENT 'creator',
  cdate timestamp COMMENT 'time',
  zhanbi decimal(12, 2) COMMENT ' ',
  entertime timestamp COMMENT 'enter time',
  leavetime timestamp COMMENT 'leave time'
) COMMENT 'SFA-'
PARTITIONED BY (dt_mon string COMMENT 'monthly snapshot incremental table')
STORED AS PARQUET;

CheneyYin commented 4 months ago

It's better to provide the definition of the "zhanbi" field in the Oracle table. It seems like there might be an issue with the type conversion of the Oracle connector.

AdkinsHan commented 4 months ago

But it works well on SeaTunnel 1.5.7, so I don't know what causes this.

Oracle below:

Column Name  #   Type          Type Mod  Not Null  Default  Comment
ZHANBI       11  NUMBER(12,2)  [NULL]    false     [NULL]   占比 (proportion)

CheneyYin commented 4 months ago

https://github.com/apache/seatunnel/blob/cd4b30bbc3a1de9a0eb90be029fcc8feba87dde9/seatunnel-connectors-v2/connector-jdbc/src/main/java/org/apache/seatunnel/connectors/seatunnel/jdbc/internal/dialect/oracle/OracleTypeMapper.java#L85-L95

Because the scale of the field ZHANBI is 2, the field type will be set to Decimal(38, 18).

❯ parq ./T_836191201227964417_6d287c425d_0_1_0.parquet -s

# Schema
<pyarrow._parquet.ParquetSchema object at 0x7ff161507b40>
required group field_id=-1 SeaTunnelRecord {
  optional fixed_len_byte_array(16) field_id=-1 f (Decimal(precision=38, scale=18));
}

You can use a SQL transform to cast the field to decimal(12, 2), like this:

source {
  Jdbc {
    result_table_name = tbl
    driver = oracle.jdbc.driver.OracleDriver
    url = "jdbc:oracle:thin:@localhost:49161/xe"
    user = xxxxx
    password = xxx
    query = "select F from tbl"
    properties {
       database.oracle.jdbc.timezoneAsRegion = "false"
    }
  }
}

transform {
  sql {
    source_table_name = tbl
    result_table_name = t_tbl
    query = "select cast(F as decimal(12,2)) as F1 from tbl"
  }
}

sink {
  LocalFile {
    source_table_name = t_tbl
    path = "/tmp/hive/warehouse/test3"
    file_format_type = "parquet"
  }
}
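
For the job in this issue, the same workaround might look like the sketch below. The table and column names are taken from the config and DDL above; the explicit column list (including the dt_mon partition field) is an assumption about what the Oracle query actually returns:

transform {
  sql {
    source_table_name = "dddd_mm"
    result_table_name = "dddd_mm_cast"
    # cast only the decimal column back to the Hive column type;
    # the remaining (assumed) columns are passed through unchanged
    query = """
      select isvisit, clwz, content, plan, cuser, cdate,
             cast(zhanbi as decimal(12,2)) as zhanbi,
             entertime, leavetime, dt_mon
      from dddd_mm
    """
  }
}

sink {
  Hive {
    source_table_name = "dddd_mm_cast"
    table_name = "data.dddd_mm"
    metastore_uri = "thrift://xxxx:9083"
  }
}
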
CheneyYin commented 4 months ago

This issue has been fixed by #5872. If you are willing to upgrade, you can use the latest version, 2.3.5, and then you don't need the SQL transform.
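
In other words, after upgrading, the original Jdbc-to-Hive job from this issue should work without the cast. A minimal sketch, reusing the anonymized connection details from the config above and assuming the fixed mapping turns NUMBER(12,2) into decimal(12,2):

source {
  Jdbc {
    url = "jdbc:oracle:thin:@xxxxxx:1521:xxxxDB"
    driver = "oracle.jdbc.OracleDriver"
    user = "xxxx"
    password = "xxxx"
    # NUMBER(12,2) is now read as decimal(12,2) instead of decimal(38,18)
    query = """
        select xxxx from dual
    """
    result_table_name = "dddd_mm"
  }
}

sink {
  Hive {
    table_name = "data.dddd_mm"
    metastore_uri = "thrift://xxxx:9083"
  }
}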

AdkinsHan commented 4 months ago

Great, the problem is solved, which means that in the future I can directly use decimal(38,18) or decimal(38,0) to handle numbers of any precision.

CheneyYin commented 4 months ago

@hailin0 please close this issue.

github-actions[bot] commented 3 months ago

This issue has been automatically marked as stale because it has not had recent activity for 30 days. It will be closed in next 7 days if no further activity occurs.