risingwavelabs / risingwave

Best-in-class stream processing, analytics, and management. Perform continuous analytics, or build event-driven applications, real-time ETL pipelines, and feature stores in minutes. Unified streaming and batch. PostgreSQL compatible.
https://go.risingwave.com/slack
Apache License 2.0

[Support Databend] sink support databend #17487

Open oslet opened 4 months ago

oslet commented 4 months ago

Is your feature request related to a problem? Please describe.

databend: https://www.databend.com

I have tested the MySQL and ClickHouse compatibility interfaces provided by Databend, but creating a sink through either of them fails. Please support a native Databend sink. Thank you very much.

Describe the solution you'd like

No response

Describe alternatives you've considered

When I create a MySQL sink:

Caused by these errors (recent errors listed first):
  1: gRPC request to meta service failed: Internal error
  2: failed to validate sink
  3: sink cannot pass validation: INVALID_ARGUMENT: failed to connect to target database: HY000
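For context, this is a sketch of the kind of statement that fails validation. It uses RisingWave's generic JDBC sink syntax pointed at Databend's MySQL-compatible endpoint; the hostname, credentials, port, and primary key column are placeholders, not a known-working configuration (the table and database names are taken from the error log below):

```sql
-- Hypothetical attempt: aim RisingWave's JDBC sink at Databend's
-- MySQL-compatible protocol (port 3307 by default in Databend).
-- Validation fails with the HY000 connection error shown above.
CREATE SINK hardware_log_sink
FROM hardware_equipment_control_log
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:mysql://databend-host:3307/dts_hardware?user=root&password=secret',
    table.name = 'hardware_equipment_control_log',
    type = 'upsert',
    primary_key = 'id'  -- placeholder; depends on the actual schema
);
```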

When I create a ClickHouse sink:

Caused by these errors (recent errors listed first):
  1: gRPC request to meta service failed: Internal error
  2: failed to validate sink
  3: ClickHouse error: bad response: {"error":{"code":"400","message":"SemanticError. Code: 1065, Text = error: \n  --> SQL:1:33\n  |\n1 | select distinct `name`,`engine`,`create_table_query` from system.tables where database = 'dts_hardware' and name = 'hardware_equipment_control_log' FORMAT RowBinary\n  |                                 ^^^^^^^^^^^^^^^^^^^^ column create_table_query doesn't exist, do you mean 'create_table_query'?\n\n."}}

Additional context

No response

github-actions[bot] commented 2 months ago

This issue has been open for 60 days with no activity.

If you think it is still relevant today, and needs to be done in the near future, you can comment to update the status, or just manually remove the no-issue-activity label.

You can also confidently close this issue as not planned to keep our backlog clean. Don't worry if you think the issue is still valuable: it remains searchable and can be reopened when the time comes. 😄