apache / hudi

Upserts, Deletes And Incremental Processing on Big Data.
https://hudi.apache.org/
Apache License 2.0

[SUPPORT] Hive Sync tool fails to sync Hudi table written using Flink 1.16 to HMS #8848

Open Riddle4045 opened 1 year ago

Riddle4045 commented 1 year ago

Describe the problem you faced

The Hive Sync tool fails with the following stack trace:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/calcite/plan/RelOptRule
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
        at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
        at java.lang.Class.getMethod0(Class.java:3018)
        at java.lang.Class.getMethod(Class.java:1784)
        at org.apache.hudi.hive.util.IMetaStoreClientUtil.getMSC(IMetaStoreClientUtil.java:40)
        at org.apache.hudi.hive.HoodieHiveSyncClient.<init>(HoodieHiveSyncClient.java:88)
        at org.apache.hudi.hive.HiveSyncTool.initSyncClient(HiveSyncTool.java:122)
        at org.apache.hudi.hive.HiveSyncTool.<init>(HiveSyncTool.java:116)
        at org.apache.hudi.hive.HiveSyncTool.main(HiveSyncTool.java:482)
Caused by: java.lang.ClassNotFoundException: org.apache.calcite.plan.RelOptRule
        at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
        at java.lang.C

To Reproduce

1. Write Hudi tables using Flink 1.16.
2. Run the Hive Sync tool on a local box to sync metadata to HMS, specifying sync-mode as hms and providing the required parameters (a sketch follows).
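
A hedged sketch of the sync-tool invocation (the metastore URI, base path, and table names are placeholders, not the values actually used):

cd hudi-sync/hudi-hive-sync
./run_sync_tool.sh \
  --metastore-uris 'thrift://<hms-host>:9083' \
  --base-path 'hdfs://<namenode>/path/to/hudi/table' \
  --database <db_name> \
  --table <table_name> \
  --sync-mode hms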

Expected behavior

The Hoodie table should be synced to HMS.

Environment Description

Riddle4045 commented 1 year ago

Please note: the run_sync_tool script only adds some of the Hive jars to the classpath, ignoring others like calcite, which is the problem here. I am not sure why that is. What is the expected way to make this work on a local machine?
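
A minimal workaround sketch, not the script's actual behavior: invoke HiveSyncTool directly and append Hive's calcite jar to the classpath yourself (the variable names and jar locations are assumptions for a typical Hive 3 install):

# Hypothetical paths; adjust for your environment.
HUDI_SYNC_BUNDLE=$(ls packaging/hudi-hive-sync-bundle/target/hudi-hive-sync-bundle-*.jar)
CALCITE_JAR=$(ls $HIVE_HOME/lib/calcite-core-*.jar)
java -cp "$HUDI_SYNC_BUNDLE:$CALCITE_JAR:$HIVE_HOME/lib/*:$(hadoop classpath)" \
  org.apache.hudi.hive.HiveSyncTool \
  --metastore-uris 'thrift://<hms-host>:9083' \
  --base-path 'hdfs://<namenode>/path/to/hudi/table' \
  --database <db_name> --table <table_name> --sync-mode hms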

danny0405 commented 1 year ago

Flink supports syncing to Hive with table parameters; did you try that?

CREATE TABLE t1(
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20)
)
PARTITIONED BY (`partition`)
WITH (
  'connector' = 'hudi',
  'path' = 'hdfs://xxx.xxx.xxx.xxx:9000/t1',
  'table.type' = 'COPY_ON_WRITE',
  'hive_sync.enable' = 'true',
  'hive_sync.table' = '${hive_table}',
  'hive_sync.db' = '${hive_db}',
  'hive_sync.mode' = 'hms',
  'hive_sync.metastore.uris' = 'thrift://ip:9083'
);
Riddle4045 commented 1 year ago

> Flink supports syncing to Hive with table parameters; did you try that?

Hi @danny0405 - not yet, unfortunately - my use case currently doesn't allow me to use it. I'd love to have the sync tool do the work here.

xicm commented 1 year ago

As you say, adding calcite-core to the classpath solves the problem. You can raise a PR to improve the script.

Riddle4045 commented 1 year ago

@xicm makes sense, I wanted to confirm I wasn't missing anything.

I am going to add a dev flag, it'll

Let me know if you have objections, cc @danny0405

danny0405 commented 1 year ago

I believe that only Hive 3 needs the calcite-related jars; when you make changes to the run_sync_tool script, keep that in mind.
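
A hedged sketch of the kind of guarded change discussed (the variable names are illustrative, not the script's actual ones): only append calcite jars when the Hive install ships them, so Hive 2 setups are unaffected.

# Only add calcite jars if present (typically a Hive 3 installation).
if ls ${HIVE_HOME}/lib/calcite-*.jar >/dev/null 2>&1; then
  CALCITE_JARS=$(ls ${HIVE_HOME}/lib/calcite-*.jar | tr '\n' ':')
  HUDI_HIVE_SYNC_CP="${HUDI_HIVE_SYNC_CP}:${CALCITE_JARS}"
fi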

aib628 commented 1 year ago

> Flink supports syncing to Hive with table parameters; did you try that?

Hi @danny0405, I have tried it, and the same problem is reproduced:

CREATE TABLE t1(
  uuid VARCHAR(20) PRIMARY KEY NOT ENFORCED,
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20)
)
PARTITIONED BY (`partition`)
WITH (
  'connector' = 'hudi',
  'path' = '/user/hive/warehouse/hudi.db/t1',
  'hoodie.table.base.file.format' = 'PARQUET',
  'hoodie.metadata.enable' = 'false',
  'hive_sync.enable' = 'true',
  'hive_sync.mode' = 'hms',
  'hive_sync.database' = 'default',
  'hive_sync.table' = 't1',
  'hive_sync.metastore.uris' = 'thrift://hivemetastore:9083',
  'table.type' = 'COPY_ON_WRITE' -- COPY_ON_WRITE (the default) or MERGE_ON_READ
);

We can fix it when running the sync tool on a local box, as @xicm said, but how can we fix it when using the Flink connector?

danny0405 commented 1 year ago

@aib628 What is your issue then? Is the Calcite jar also missing?

aib628 commented 1 year ago

> @aib628 What is your issue then? Is the Calcite jar also missing?

Yeah, the same problem is reproduced when using the Flink connector. We can see the error message in the JobManager log as follows: Caused by: java.lang.NoClassDefFoundError: org/apache/calcite/plan/RelOptRule

I then added calcite-core-*.jar to the SQL client with the -j option, and the hive_sync function finished normally:

HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath` bash /opt/hudi/flink-1.17.1/bin/sql-client.sh embedded -j /var/hoodie/ws/packaging/hudi-flink-bundle/target/hudi-flink1.17-bundle-0.14.0-SNAPSHOT.jar -j /opt/hudi/libs/calcite-core-1.16.0.jar -j /opt/hudi/libs/flink-sql-connector-kafka-1.17.1.jar shell
danny0405 commented 1 year ago

I'm pretty sure the Calcite classes should be included in the hive-exec jar; which hive-exec jar did you use for your Flink bundle?

aib628 commented 1 year ago

@danny0405 I'm using Hive 3, and the packaging command is as follows: mvn clean install -DskipTests -Dhadoop.version=3.1.0 -Dhive.version=3.1.2 -Dflink.version=1.17.1 -Drat.skip=true -Pflink-bundle-shade-hive3

I then cannot find any calcite dependency in my Flink bundle jar using the following command; this may be due to the 'provided' scope in pom.xml. Also, none of the Hive libs, including hive-exec.jar, are on the Hadoop classpath: jar -tf hudi-flink1.17-bundle-0.14.0-SNAPSHOT.jar | grep 'RelOptRule'

danny0405 commented 1 year ago

If you specify the Hive profile, the hive-exec scope switches to compile: -Drat.skip=true -Pflink-bundle-shade-hive3
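
One way to sanity-check this (a sketch; the module path is an assumption) is to inspect the effective POM of the Flink bundle module with the profile active and look at the scope hive-exec resolves to:

mvn help:effective-pom -Pflink-bundle-shade-hive3 -Dhive.version=3.1.2 \
  -pl packaging/hudi-flink-bundle | grep -A 3 'hive-exec'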

aib628 commented 1 year ago

@danny0405 Yeah, I do find the hive-exec dependency in my Flink bundle jar, but no calcite dependency.

I then tested and found that hudi-flink1.13-bundle-0.13.1.jar downloaded from Maven Central has the same problem: no calcite dependency.

Maybe there is a problem here?

danny0405 commented 1 year ago

Yeah, maybe it's my fault; we do not exclude calcite when packaging the bundle with hive-exec. Maybe for some Hive versions since 3.x the calcite-related classes are required, but hive-exec itself does not include calcite. Did you package using the same version of hive-exec as your Hive server?
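
A quick way to check that last point (a sketch; the jar path is an assumption for a typical Hive 3 install):

# Count calcite classes shaded into the hive-exec jar you build against;
# 0 means the bundle cannot pick them up from there either.
jar -tf $HIVE_HOME/lib/hive-exec-3.1.2.jar | grep -c 'org/apache/calcite/' || true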

aib628 commented 1 year ago

@danny0405 Yeah, I'm using Hadoop 3.1.0 + Hive 3.1.2, packaging from source and deploying with the 'apachehudi/hudi-hadoop_3.1.0-hive_3.1.2:latest' Docker image.

aib628 commented 1 year ago

@danny0405 Hi, a new problem turned up on Hadoop 3.2.2; build command: mvn clean install -DskipTests -Dhadoop.version=3.2.2 -Dhive.version=3.1.2 -Dflink1.13 -Drat.skip=true -Pflink-bundle-shade-hive3

Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/JobConf
    at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5099) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.hudi.sink.utils.HiveSyncContext.create(HiveSyncContext.java:87) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.initHiveSync(StreamWriteOperatorCoordinator.java:323) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:200) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:198) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:589) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:955) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:873) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:383) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    ... 20 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.JobConf
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_282]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_282]
    at org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:64) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:65) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:48) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_282]
    at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5099) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.hudi.sink.utils.HiveSyncContext.create(HiveSyncContext.java:87) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.initHiveSync(StreamWriteOperatorCoordinator.java:323) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.start(StreamWriteOperatorCoordinator.java:200) ~[blob_p-8fb0a9a43a0b21fb8067cce0a27f3f694247a52c-7896267fa9a5412ec3662db78cba584e:0.14.0-SNAPSHOT]
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder.start(OperatorCoordinatorHolder.java:198) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.scheduler.DefaultOperatorCoordinatorHandler.startAllOperatorCoordinators(DefaultOperatorCoordinatorHandler.java:85) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.scheduler.SchedulerBase.startScheduling(SchedulerBase.java:589) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.jobmaster.JobMaster.startScheduling(JobMaster.java:955) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.jobmaster.JobMaster.startJobExecution(JobMaster.java:873) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.jobmaster.JobMaster.onStart(JobMaster.java:383) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStart(RpcEndpoint.java:181) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StoppedState.start(AkkaRpcActor.java:605) ~[flink-dist_2.12-1.13.1.jar:1.13.1]
    ... 20 more

Same as with calcite-core.jar: hive sync works normally after manually adding hadoop-client-api.jar to the Flink runtime with the --jar option.

As you said, maybe for some Hive versions since 3.x there have been some changes.

danny0405 commented 1 year ago

In principle, we do not package any Hadoop-related jars into the bundle jar; the classpath of the runtime environment should include them.
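
In practice (mirroring the earlier sql-client command) that means exporting the Hadoop classpath before launching the Flink processes rather than bundling hadoop-client-api.jar; a minimal sketch:

# Let the runtime env provide Hadoop classes such as org.apache.hadoop.mapred.JobConf.
export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
# Start the cluster / sql-client afterwards so JobManager and TaskManagers inherit it.
$FLINK_HOME/bin/start-cluster.sh
$FLINK_HOME/bin/sql-client.sh embedded -j hudi-flink1.17-bundle-0.14.0-SNAPSHOT.jar shell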

alberttwong commented 3 months ago

After adding https://mvnrepository.com/artifact/org.apache.calcite/calcite-core, I ran into:

root@spark:/opt/hudi/hudi-sync/hudi-hive-sync# ./run_sync_tool.sh  --metastore-uris 'thrift://hive-metastore:9083' --partitioned-by city --base-path 's3a://warehouse/people' --database hudi_db --table people --sync-mode hms 
setting hadoop conf dir
Running Command : java -cp /hive/lib/hive-metastore-3.1.3.jar::/hive/lib/hive-service-3.1.3.jar::/hive/lib/hive-exec-3.1.3.jar::/hive/lib/hive-jdbc-3.1.3.jar:/hive/lib/hive-jdbc-handler-3.1.3.jar::/hive/lib/jackson-annotations-2.12.0.jar:/hive/lib/jackson-core-2.12.0.jar:/hive/lib/jackson-core-asl-1.9.13.jar:/hive/lib/jackson-databind-2.12.0.jar:/hive/lib/jackson-dataformat-smile-2.12.0.jar:/hive/lib/jackson-mapper-asl-1.9.13.jar:/hive/lib/jackson-module-scala_2.11-2.12.0.jar::/hadoop/share/hadoop/common/*:/hadoop/share/hadoop/mapreduce/*:/hadoop/share/hadoop/hdfs/*:/hadoop/share/hadoop/common/lib/*:/hadoop/share/hadoop/hdfs/lib/*:/root/.ivy2/jars/*:/hadoop/etc/hadoop:/opt/hudi/hudi-sync/hudi-hive-sync/../../packaging/hudi-hive-sync-bundle/target/hudi-hive-sync-bundle-1.0.0-SNAPSHOT.jar org.apache.hudi.hive.HiveSyncTool --metastore-uris thrift://hive-metastore:9083 --partitioned-by city --base-path s3a://warehouse/people --database hudi_db --table people --sync-mode hms
2024-06-03 17:10:42,515 INFO  [main] conf.HiveConf (HiveConf.java:findConfigFile(187)) - Found configuration file null
2024-06-03 17:10:42,707 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(60)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2024-06-03 17:10:42,824 INFO  [main] impl.MetricsConfig (MetricsConfig.java:loadFirst(120)) - Loaded properties from hadoop-metrics2.properties
2024-06-03 17:10:42,858 INFO  [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(378)) - Scheduled Metric snapshot period at 10 second(s).
2024-06-03 17:10:42,858 INFO  [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - s3a-file-system metrics system started
2024-06-03 17:10:43,304 INFO  [main] table.HoodieTableMetaClient (HoodieTableMetaClient.java:<init>(148)) - Loading HoodieTableMetaClient from s3a://warehouse/people
2024-06-03 17:10:43,395 INFO  [main] table.HoodieTableConfig (HoodieTableConfig.java:<init>(309)) - Loading table properties from s3a://warehouse/people/.hoodie/hoodie.properties
2024-06-03 17:10:43,413 INFO  [main] table.HoodieTableMetaClient (HoodieTableMetaClient.java:<init>(169)) - Finished Loading Table of type COPY_ON_WRITE(version=1) from s3a://warehouse/people
2024-06-03 17:10:43,413 INFO  [main] table.HoodieTableMetaClient (HoodieTableMetaClient.java:<init>(171)) - Loading Active commit timeline for s3a://warehouse/people
2024-06-03 17:10:43,431 INFO  [main] timeline.HoodieActiveTimeline (HoodieActiveTimeline.java:<init>(177)) - Loaded instants upto : Option{val=[20240603170053432__commit__COMPLETED]}
Exception in thread "main" java.lang.NoClassDefFoundError: com/facebook/fb303/FacebookService$Iface
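
For context, one plausible way that jar ends up on this classpath (an assumption based on the /root/.ivy2/jars/* entry visible in the generated command above, not a documented mechanism):

# Place the downloaded dependency where the generated classpath already globs jars.
mkdir -p ~/.ivy2/jars
cp calcite-core-<version>.jar ~/.ivy2/jars/
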
alberttwong commented 3 months ago

After adding https://mvnrepository.com/artifact/org.apache.thrift/libfb303, it gets further:

Running Command : java -cp /hive/lib/hive-metastore-3.1.3.jar::/hive/lib/hive-service-3.1.3.jar::/hive/lib/hive-exec-3.1.3.jar::/hive/lib/hive-jdbc-3.1.3.jar:/hive/lib/hive-jdbc-handler-3.1.3.jar::/hive/lib/jackson-annotations-2.12.0.jar:/hive/lib/jackson-core-2.12.0.jar:/hive/lib/jackson-core-asl-1.9.13.jar:/hive/lib/jackson-databind-2.12.0.jar:/hive/lib/jackson-dataformat-smile-2.12.0.jar:/hive/lib/jackson-mapper-asl-1.9.13.jar:/hive/lib/jackson-module-scala_2.11-2.12.0.jar::/hadoop/share/hadoop/common/*:/hadoop/share/hadoop/mapreduce/*:/hadoop/share/hadoop/hdfs/*:/hadoop/share/hadoop/common/lib/*:/hadoop/share/hadoop/hdfs/lib/*:/root/.ivy2/jars/*:/hadoop/etc/hadoop:/opt/hudi/hudi-sync/hudi-hive-sync/../../packaging/hudi-hive-sync-bundle/target/hudi-hive-sync-bundle-1.0.0-SNAPSHOT.jar org.apache.hudi.hive.HiveSyncTool --metastore-uris thrift://hive-metastore:9083 --partitioned-by city --base-path s3a://warehouse/people --database hudi_db --table people --sync-mode hms
2024-06-03 17:15:25,270 INFO  [main] conf.HiveConf (HiveConf.java:findConfigFile(187)) - Found configuration file null
2024-06-03 17:15:25,444 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(60)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2024-06-03 17:15:25,550 INFO  [main] impl.MetricsConfig (MetricsConfig.java:loadFirst(120)) - Loaded properties from hadoop-metrics2.properties
2024-06-03 17:15:25,581 INFO  [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(378)) - Scheduled Metric snapshot period at 10 second(s).
2024-06-03 17:15:25,581 INFO  [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - s3a-file-system metrics system started
2024-06-03 17:15:26,025 INFO  [main] table.HoodieTableMetaClient (HoodieTableMetaClient.java:<init>(148)) - Loading HoodieTableMetaClient from s3a://warehouse/people
2024-06-03 17:15:26,120 INFO  [main] table.HoodieTableConfig (HoodieTableConfig.java:<init>(309)) - Loading table properties from s3a://warehouse/people/.hoodie/hoodie.properties
2024-06-03 17:15:26,140 INFO  [main] table.HoodieTableMetaClient (HoodieTableMetaClient.java:<init>(169)) - Finished Loading Table of type COPY_ON_WRITE(version=1) from s3a://warehouse/people
2024-06-03 17:15:26,140 INFO  [main] table.HoodieTableMetaClient (HoodieTableMetaClient.java:<init>(171)) - Loading Active commit timeline for s3a://warehouse/people
2024-06-03 17:15:26,159 INFO  [main] timeline.HoodieActiveTimeline (HoodieActiveTimeline.java:<init>(177)) - Loaded instants upto : Option{val=[20240603170053432__commit__COMPLETED]}
2024-06-03 17:15:26,229 ERROR [main] utils.MetaStoreUtils (MetaStoreUtils.java:logAndThrowMetaException(166)) - Got exception: java.net.URISyntaxException Illegal character in hostname at index 35: thrift://demo-hive-metastore-1.demo_default:9083
java.net.URISyntaxException: Illegal character in hostname at index 35: thrift://demo-hive-metastore-1.demo_default:9083
danny0405 commented 3 months ago

@alberttwong Did you package the jar manually with the hive profile?

alberttwong commented 3 months ago

@danny0405 I'm documenting my process at https://github.com/apache/incubator-xtable/discussions/457