What type of PR is this:
Which issues does this PR fix:
Fixes #
Problem Summary (Required):
Support loading the array data type into StarRocks. Here is an example.
StarRocks DDL
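A minimal sketch of the kind of table involved; the table and column names (`array_tbl`, `a0`, `a1`) are illustrative, not taken from the original example:

```sql
-- Illustrative table: one plain array column and one nested array column
CREATE TABLE `array_tbl` (
    `id` INT NOT NULL,
    `a0` ARRAY<STRING>,
    `a1` ARRAY<ARRAY<INT>>
) ENGINE = OLAP
DUPLICATE KEY(`id`)
DISTRIBUTED BY HASH(`id`) BUCKETS 1;
```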
Spark DataFrame
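A sketch of writing such a DataFrame with the connector; it assumes the option names from this repo's documentation (`starrocks.fe.http.url`, `starrocks.fe.jdbc.url`, `starrocks.table.identifier`), and the addresses and credentials are placeholders. The point to note is `starrocks.column.types`, which spells out the array types the connector cannot yet infer:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("array-write-demo").getOrCreate()
import spark.implicits._

// Rows whose second and third fields map to ARRAY<STRING> and ARRAY<ARRAY<INT>>
val df = Seq(
  (1, Seq("hello", "starrocks"), Seq(Seq(1, 2), Seq(3, 4)))
).toDF("id", "a0", "a1")

df.write
  .format("starrocks")
  .option("starrocks.fe.http.url", "127.0.0.1:8030")
  .option("starrocks.fe.jdbc.url", "jdbc:mysql://127.0.0.1:9030")
  .option("starrocks.table.identifier", "test.array_tbl")
  .option("starrocks.user", "root")
  .option("starrocks.password", "")
  // Required for now: the connector cannot infer array types (see notes below)
  .option("starrocks.column.types", "a0 ARRAY<STRING>,a1 ARRAY<ARRAY<INT>>")
  .mode("append")
  .save()
```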
Note that
1. Currently we can't infer the Spark type of an array column in StarRocks, because the column type is missing from `information_schema.COLUMNS`, so the user must specify the Spark type of the column via `starrocks.column.types`. The missing column type is being fixed, and after that we can remove this limitation.
2. Stream load will be forced to use the JSON format regardless of what `starrocks.write.properties.format` is set to, because there is currently no standard way to represent an array in CSV, while JSON has a native array type.

Checklist:
- [ ] I have added test cases for my bug fix or my new feature
- [ ] This PR will affect users' behaviors
- [ ] This PR needs user documentation (for new or modified features or behaviors)
- [ ] I have added documentation for my new feature or new function