apache/shardingsphere

Empowering Data Intelligence with Distributed SQL for Sharding, Scalability, and Security Across All Databases.
Apache License 2.0

Support parsing Doris sql #31478

Open iamhucong opened 5 months ago

iamhucong commented 5 months ago

Background

Hi community. The ShardingSphere SQL parser engine helps users parse SQL to create an AST (Abstract Syntax Tree) and visit the AST to get a SQLStatement (Java object). Currently, we are planning to enhance support for Doris SQL parsing in ShardingSphere.

More details: https://shardingsphere.apache.org/document/current/en/reference/sharding/parse/
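
As a quick orientation, here is a minimal sketch of that parse flow, adapted from the reference page above. The class names follow the 5.x parser API and exact packages may vary between versions; the "Doris" database type assumes the Doris dialect module is on the classpath.

    // Minimal sketch of the parse flow; package names follow recent 5.x
    // versions and may differ in yours.
    import org.apache.shardingsphere.sql.parser.api.CacheOption;
    import org.apache.shardingsphere.sql.parser.api.SQLParserEngine;
    import org.apache.shardingsphere.sql.parser.api.SQLStatementVisitorEngine;

    public final class DorisParseExample {

        public static void main(final String[] args) {
            String sql = "SELECT k1 FROM tbl1 UNION SELECT k2 FROM tbl1";
            // Parse the SQL text into an ANTLR AST (false = do not use the parse tree cache).
            SQLParserEngine parserEngine = new SQLParserEngine("Doris", new CacheOption(128, 1024L));
            var parseASTNode = parserEngine.parse(sql, false);
            // Visit the AST to build the SQLStatement (Java object).
            var sqlStatement = new SQLStatementVisitorEngine("Doris").visit(parseASTNode);
            System.out.println(sqlStatement.getClass().getSimpleName());
        }
    }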

Issue Background Explanation

This issue was produced by a custom web scraping script that fetches SQL cases from the official website of the corresponding database and passes them to ShardingSphere's parsing engine. SQL cases that fail to parse are grouped into issues of three to five cases each.

  1. Since the SQL cases are obtained through web scraping, it cannot be guaranteed that they are all correct. Please follow the process below when handling this issue and preparing a pull request (PR).
  2. Some SQL cases may have already been fixed in other PRs. For cases that already parse successfully, simply leave a comment so they can be ignored.
  3. If a SQL case parses successfully without any code changes, there is no need to add a corresponding test assert file.

Task

This issue is to support parsing more Doris SQL, as follows:

SELECT * FROM tbl
INTO OUTFILE "hdfs://path/to/result_"
FORMAT AS CSV
PROPERTIES
(
    "broker.name" = "my_broker",
    "broker.hadoop.security.authentication" = "kerberos",
    "broker.kerberos_principal" = "doris@YOUR.COM",
    "broker.kerberos_keytab" = "/home/doris/my.keytab",
    "column_separator" = ",",
    "line_delimiter" = "\n",
    "max_file_size" = "100MB"
)
SELECT c1, c2, c3 FROM tbl
INTO OUTFILE "hdfs://path/to/result_"
FORMAT AS PARQUET
PROPERTIES
(
    "broker.name" = "my_broker",
    "broker.hadoop.security.authentication" = "kerberos",
    "broker.kerberos_principal" = "doris@YOUR.COM",
    "broker.kerberos_keytab" = "/home/doris/my.keytab"
)
WITH x1 AS (SELECT k1, k2 FROM tbl1), x2 AS (SELECT k3 FROM tbl2)
SELECT k1 FROM x1 UNION SELECT k3 FROM x2
INTO OUTFILE "hdfs://path/to/result_"
PROPERTIES
(
    "broker.name" = "my_broker",
    "broker.username" = "user",
    "broker.password" = "passwd",
    "broker.dfs.nameservices" = "my_ha",
    "broker.dfs.ha.namenodes.my_ha" = "my_namenode1, my_namenode2",
    "broker.dfs.namenode.rpc-address.my_ha.my_namenode1" = "nn1_host:rpc_port",
    "broker.dfs.namenode.rpc-address.my_ha.my_namenode2" = "nn2_host:rpc_port",
    "broker.dfs.client.failover.proxy.provider" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
)
SELECT k1 FROM tbl1 UNION SELECT k2 FROM tbl1
INTO OUTFILE "bos://bucket/result_"
FORMAT AS PARQUET
PROPERTIES
(
    "broker.name" = "my_broker",
    "broker.bos_endpoint" = "http://bj.bcebos.com",
    "broker.bos_accesskey" = "xxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "broker.bos_secret_accesskey" = "yyyyyyyyyyyyyyyyyyyyyyyyy"
)
SELECT * FROM tbl
INTO OUTFILE "hdfs://path/to/result_"
FORMAT AS CSV
PROPERTIES
(
    'fs.defaultFS' = 'hdfs://hacluster/',
    'dfs.nameservices' = 'hacluster',
    'dfs.ha.namenodes.hacluster' = 'n1,n2',
    'dfs.namenode.rpc-address.hacluster.n1' = '192.168.0.1:8020',
    'dfs.namenode.rpc-address.hacluster.n2' = '192.168.0.2:8020',
    'dfs.client.failover.proxy.provider.hacluster' = 'org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
    'dfs.namenode.kerberos.principal' = 'hadoop/_HOST@REALM.COM',
    'hadoop.security.authentication' = 'kerberos',
    'hadoop.kerberos.principal' = 'doris_test@REALM.COM',
    'hadoop.kerberos.keytab' = '/path/to/doris_test.keytab'
)
VARCHAR CONV(VARCHAR input, TINYINT from_base, TINYINT to_base)
VARCHAR CONV(BIGINT input, TINYINT from_base, TINYINT to_base)
SELECT CONV(15,10,2)
CREATE FILE "ca.pem" PROPERTIES (     "url" = "https://test.bj.bcebos.com/kafka-key/ca.pem",     "catalog" = "kafka" )
CREATE FILE "client.key" IN my_database PROPERTIES (     "url" = "https://test.bj.bcebos.com/kafka-key/client.key",     "catalog" = "my_catalog",     "md5" = "b5bb901bf10f99205b39a46ac3557dd9" )

Overall Procedure

If you intend to participate in fixing this issue, please feel free to leave a comment below the issue. Community members will assign the issue accordingly.

For example, you can leave a comment like this: "Hi, please assign this issue to me. Thank you!"

Once you have claimed the issue, please review the syntax of the SQL on the official website of the corresponding database. Execute the SQL on the respective database to ensure the correctness of the SQL syntax.

You can check the corresponding source of each SQL case on the official database website by clicking on the link provided below each case.

Next, execute the problematic SQL cases above in the database to make sure the SQL syntax itself is correct. You can quickly start the corresponding database using its Docker image and then connect to it with a client you are familiar with.
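
For example, a throwaway Doris instance can be started and connected to along these lines. The image tag is a placeholder, so check the Doris quick-start documentation for current image names; the FE is MySQL-protocol compatible and listens on port 9030 by default.

    # Placeholder tag: consult the Doris docs for current images.
    docker run -d --name doris -p 8030:8030 -p 9030:9030 apache/doris:<tag>

    # The FE speaks the MySQL protocol, so any MySQL client can connect:
    mysql -h 127.0.0.1 -P 9030 -u root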

Fixing ANTLR Grammar Parsing Issue

Once you have confirmed the correctness of the SQL syntax, you can validate and fix the grammar parsing issue in ShardingSphere.

If you are using IntelliJ IDEA, you will need to install the ANTLR plugin before proceeding.

If ANTLR reports a parsing error, repair the .g4 file by comparing it with the official database syntax until ANTLR can parse the SQL correctly.

When there is no error message in the ANTLR Preview window, it means that ANTLR can correctly parse the SQL.
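
To make the shape of such a fix concrete, below is a hypothetical .g4 fragment for the INTO OUTFILE clause seen in the cases above. The rule and token names are illustrative only; the real grammar lives in the Doris .g4 files of the parser module and follows ShardingSphere's existing token conventions (LP_, RP_, COMMA_, EQ_, and so on).

    // Hypothetical grammar sketch, not the actual Doris grammar rules.
    intoOutfileClause
        : INTO OUTFILE filePath = STRING_ (FORMAT AS formatType = identifier)? propertiesDefinition?
        ;

    propertiesDefinition
        : PROPERTIES LP_ property (COMMA_ property)* RP_
        ;

    property
        : key = STRING_ EQ_ value = STRING_
        ;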

Visitor problem fix

After ANTLR parses the SQL into an abstract syntax tree, ShardingSphere visits the tree through a Visitor and extracts the required information. If you need to extract Segments, first run:

 mvn -T 2C clean install -DskipTests

Run it under the shardingsphere-parser module to compile the entire parser module. Then override the corresponding visit method in SQLStatementVisitor as needed to extract the corresponding Segment.
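
As an illustration, an override for the hypothetical intoOutfileClause rule sketched earlier might look roughly like this. IntoOutfileClauseContext would be generated by ANTLR from the grammar rule, and OutfileSegment is a made-up segment type standing in for whatever Segment the statement actually needs.

    // Hypothetical sketch inside the Doris statement visitor; the context
    // class is ANTLR-generated and OutfileSegment is illustrative only.
    @Override
    public ASTNode visitIntoOutfileClause(final IntoOutfileClauseContext ctx) {
        // Record the clause's text range and the target file path.
        return new OutfileSegment(ctx.getStart().getStartIndex(), ctx.getStop().getStopIndex(), ctx.filePath.getText());
    }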

Add assertion test file

After the above SQL parsing problems are fixed, corresponding tests need to be added. The steps are as follows:

  1. Add the corresponding sql-case in the sql/supported directory (a sketch follows this list).
  2. Add case assertions in the case directory of the shardingsphere-test-it-parser module.
  3. Run org.apache.shardingsphere.test.it.sql.parser.internal.InternalSQLParserIT. After the SQL Parser IT runs successfully, you can submit a PR.
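
For orientation, a new case entry might look roughly like the sketch below; the file location, case id, and attribute values are illustrative, so follow the naming of the existing Doris cases in the repository.

    <!-- Illustrative entry for a file under sql/supported/ -->
    <sql-cases>
        <sql-case id="select_into_outfile" value="SELECT * FROM tbl INTO OUTFILE 'hdfs://path/to/result_' FORMAT AS CSV" db-types="Doris" />
    </sql-cases>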

Relevant Skills

  1. Master the Java language
  2. Have a basic understanding of ANTLR g4 files
  3. Be familiar with Doris SQL
github-actions[bot] commented 4 months ago

There hasn't been any activity on this issue recently, and in order to prioritize active issues, it will be marked as stale.