databendlabs / databend

Data, Analytics & AI. Modern alternative to Snowflake. Cost-effective and simple for massive-scale analytics. https://databend.com
https://docs.databend.com

Catalog API #867

Closed dantengsky closed 3 years ago

dantengsky commented 3 years ago

Summary

Currently the DataSource API provides the essential functionality for manipulating metadata, but as we move toward v0.5, some extra requirements should be fulfilled, including but not limited to:

  • metadata of databases and tables should be versioned, as we are going to provide time travel features
  • periodically sync metadata with (from) the meta-store

drmingdrmer commented 3 years ago
  • metadata of databases and tables should be versioned, as we are going to provide time travel features

Are the versions of a table a singly linked list, or could they form a DAG?

v0 <- v1 <- v2 ...

or

 .--- v2 <----+- v3
|      |       |
v      v       |
v0 <- v1 <----'
BohuTANG commented 3 years ago

I think it's a single chain, like git.

leiysky commented 3 years ago
  • periodically sync metadata with (from) the meta-store

How does DDL work?

drmingdrmer commented 3 years ago

I think it's a single chain, like git.

You're right. In any case, it does not need a DAG history. I was overthinking.

dantengsky commented 3 years ago
  • metadata of databases and tables should be versioned, as we are going to provide time travel features

Are the versions of a table a singly linked list, or could they form a DAG?

v0 <- v1 <- v2 ...

or

 .--- v2 <----+- v3
|      |       |
v      v       |
v0 <- v1 <----'

"v0 <- v1 <- v2 " is enought for time travel.

 .--- v2 <----+- v3
|      |       |
v      v       |
v0 <- v1 <----'

The above figure may describe:

  • both v2 and v1 concurrently commit on the base of v0, and v1 wins; v2 successfully resolves conflicts and commits later.
  • both v3 and v2 concurrently commit on the base of v1, and v2 wins; v3 successfully resolves conflicts and commits later.

The transaction operation log will contain an entry recording the base version (I might have forgotten to mention this in the last discussion ;-), but that is for post-audit/diagnostics; for time travel, a linear history is enough, I think.
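For illustration only, here is a minimal Rust sketch of that idea: a strictly linear version chain, plus a commit-log entry that records each writer's base version for post-audit. All type and field names are hypothetical, not the actual Databend/DataFuse API.

```rust
// Hypothetical sketch: linear table-version history plus an audit log that
// records the base version each commit was built on.
#[derive(Debug, Clone)]
struct TableVersion {
    version: u64,        // monotonically increasing: v0 <- v1 <- v2 ...
    prev: Option<u64>,   // the immediately preceding version (None for v0)
    schema: Vec<String>, // placeholder for the real schema/partition metadata
}

#[derive(Debug)]
struct CommitEntry {
    committed_version: u64, // the version this commit produced
    base_version: u64,      // the version the writer started from; lags behind
                            // committed_version - 1 when conflicts had to be resolved
}

#[derive(Default)]
struct TableHistory {
    versions: Vec<TableVersion>, // index i holds version i: the linear chain
    log: Vec<CommitEntry>,       // post-audit/diagnostics only; time travel reads `versions`
}

impl TableHistory {
    /// Append a new version on top of the current head; `base` is whatever
    /// version the writer originally read.
    fn commit(&mut self, base: u64, schema: Vec<String>) -> u64 {
        let version = self.versions.len() as u64;
        let prev = version.checked_sub(1);
        self.versions.push(TableVersion { version, prev, schema });
        self.log.push(CommitEntry { committed_version: version, base_version: base });
        version
    }

    /// Time travel: read any historical version directly from the chain.
    fn at(&self, version: u64) -> Option<&TableVersion> {
        self.versions.get(version as usize)
    }
}

fn main() {
    let mut history = TableHistory::default();
    let v0 = history.commit(0, vec!["id INT".into()]);
    let v1 = history.commit(v0, vec!["id INT".into(), "name STRING".into()]);
    // A writer that started from v0 but lost the race resolves conflicts and commits on top of v1:
    let v2 = history.commit(v0, vec!["id INT".into(), "name STRING".into(), "ts INT".into()]);
    assert_eq!(history.at(v1).unwrap().prev, Some(v0));
    println!("head = v{}, audit log = {:?}", v2, history.log);
}
```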

dantengsky commented 3 years ago
  • periodically sync metadata with (from) the meta-store

How does DDL work?

Roughly speaking,

Any specific concerns or suggestions (sincerely welcome)?

leiysky commented 3 years ago

@dantengsky

I've just realized something, and I would appreciate it if you could give some feedback:

  1. Do we distinguish between immutable and mutable tables (or partitions)? It's simpler to do some optimizations on a static dataset, e.g. accurate statistics and an efficient cache policy.

  2. Shall we have more fine-grained item classification, like supporting a Field item and a Function item, so we can implement fine-grained privilege management and security assurance?

dantengsky commented 3 years ago

@dantengsky

I've just realized something, and I would appreciate it if you could give some feedback:

A pleasure to discuss with you (and all the kind people who care about DataFuse).

  1. Do we distinguish between immutable and mutable tables (or partitions)? It's simpler to do some optimizations on a static dataset, e.g. accurate statistics and an efficient cache policy.

Each version of a given table is immutable, including its schema, partitions, and other metadata. When the computation layer accesses a table without specifying a version, the latest version of the table is returned.

There might be some exceptions, like re-organizing a table (background merging); those parts are not fully settled yet, and we need some deeper thought there.

I agree with you that immutability will benefit the cache policy and statistics (with an affordable cost, if the statistics index we are trying to maintain is not "heavy").
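To make that access pattern concrete, here is a minimal sketch of a versioned lookup where omitting the version means "latest". It is only an illustration; the types and method names are made up and do not reflect the actual DataSource/Catalog API.

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Hypothetical, simplified table metadata: each published version is immutable.
#[derive(Debug)]
struct TableMeta {
    version: u64,
    schema: Vec<String>,     // stand-in for the real schema
    partitions: Vec<String>, // stand-in for partition/segment info
}

#[derive(Default)]
struct Catalog {
    // table name -> all published versions, ordered by version number
    tables: HashMap<String, Vec<Arc<TableMeta>>>,
}

impl Catalog {
    /// Read a table: `None` means "latest", `Some(v)` is a time-travel read.
    fn get_table(&self, name: &str, version: Option<u64>) -> Option<Arc<TableMeta>> {
        let versions = self.tables.get(name)?;
        match version {
            None => versions.last().cloned(),
            Some(v) => versions.iter().find(|m| m.version == v).cloned(),
        }
    }
}

fn main() {
    let mut catalog = Catalog::default();
    catalog.tables.insert(
        "t1".into(),
        vec![
            Arc::new(TableMeta { version: 0, schema: vec!["id INT".into()], partitions: vec![] }),
            Arc::new(TableMeta { version: 1, schema: vec!["id INT".into(), "c STRING".into()], partitions: vec![] }),
        ],
    );
    assert_eq!(catalog.get_table("t1", None).unwrap().version, 1);    // latest
    assert_eq!(catalog.get_table("t1", Some(0)).unwrap().version, 0); // time travel
}
```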

  2. Shall we have more fine-grained item classification, like supporting a Field item and a Function item, so we can implement fine-grained privilege management and security assurance?

I think we should have them eventually; at the very least, the ANSI information_schema needs them. I am not sure whether we should provide field/column-level access control, but if we could cover that without sacrificing too much, it would be great.
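Just to sketch the idea (none of these types exist in the codebase, and the fallback rule is only an assumption), fine-grained catalog objects could feed a privilege check roughly like this:

```rust
use std::collections::HashSet;

// Hypothetical fine-grained catalog objects, down to column and function level.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum CatalogObject {
    Database(String),
    Table(String, String),          // (database, table)
    Column(String, String, String), // (database, table, column)
    Function(String),
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Privilege {
    Select,
    Insert,
    Execute,
}

#[derive(Default)]
struct Grants {
    granted: HashSet<(CatalogObject, Privilege)>,
}

impl Grants {
    fn grant(&mut self, obj: CatalogObject, privilege: Privilege) {
        self.granted.insert((obj, privilege));
    }

    /// A column-level check falls back to the enclosing table, then the database.
    fn check(&self, obj: &CatalogObject, privilege: Privilege) -> bool {
        if self.granted.contains(&(obj.clone(), privilege)) {
            return true;
        }
        match obj {
            CatalogObject::Column(db, table, _) => {
                self.check(&CatalogObject::Table(db.clone(), table.clone()), privilege)
            }
            CatalogObject::Table(db, _) => self.check(&CatalogObject::Database(db.clone()), privilege),
            _ => false,
        }
    }
}

fn main() {
    let mut grants = Grants::default();
    grants.grant(CatalogObject::Table("db1".into(), "t1".into()), Privilege::Select);
    // Column-level access is implied by the table-level grant in this sketch.
    assert!(grants.check(
        &CatalogObject::Column("db1".into(), "t1".into(), "c1".into()),
        Privilege::Select,
    ));
    assert!(!grants.check(&CatalogObject::Function("f".into()), Privilege::Execute));
}
```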

As always, suggestions are sincerely welcome.

jyizheng commented 3 years ago

What file format and table format do we want to support? Parquet is a popular file format, and cost-based optimization may rely on it. Is Iceberg the table format we want to support?

leiysky commented 3 years ago

Thanks for the reply.

Each version of a given table is immutable, including its schema, partitions, and other metadata. When the computation layer accesses a table without specifying a version, the latest version of the table is returned.

Correct me if I'm wrong: each write operation, such as inserting a single row with INSERT INTO t VALUES(...), will generate a new version.

This scheme could bring serious overhead; that is, users would have to batch their records and ingest them via bulk load.

How do you handle this case?

leiysky commented 3 years ago

What file format and table format do we want to support? Parquet is a popular file format, and cost-based optimization may rely on it. Is Iceberg the table format we want to support?

Besides, I have the same question.

Maybe we can have our own unified columnar format?

As https://www.firebolt.io/performance-at-scale describes, they transform external data (e.g. JSON, Parquet, Avro, CSV) into their own data format, F3, through an ETL pipeline.

jyizheng commented 3 years ago

@leiysky I agree, and I have thought about this too. Maybe it is a good idea to borrow some ideas from ClickHouse to define our own file format. My impression is that ClickHouse is very good at filtering data so that queries can be fast. A Parquet file has some stats or summaries for the data it stores. We may want to extend a Parquet file's stats or summaries with the ClickHouse tricks.

leiysky commented 3 years ago

@leiysky I agree, and I have thought about this too. Maybe it is a good idea to borrow some ideas from ClickHouse to define our own file format. My impression is that ClickHouse is very good at filtering data so that queries can be fast. A Parquet file has some stats or summaries for the data it stores. We may want to extend a Parquet file's stats or summaries with the ClickHouse tricks.

Yes, I think it's mainly because of the sparse index, which can help skip some data units (depending on the data distribution, the effect can be surprising) using min/max statistics, bloom filters, or other structures. It is also convenient for us to build different indexes on a data file, even on demand, on the fly.
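For example, a toy sketch of that kind of block skipping, with a per-block min/max range and a tiny bloom filter (nothing here is Databend-specific; the sizes and hashing are arbitrary):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const BLOOM_BITS: usize = 1024;

// Hypothetical per-block sparse index: a min/max range plus a toy bloom filter.
struct BlockIndex {
    min: i64,
    max: i64,
    bloom: Vec<bool>, // bitmap of BLOOM_BITS bits
}

// Two hash positions per value, derived from the std hasher.
fn bloom_positions(value: i64) -> [usize; 2] {
    let mut h1 = DefaultHasher::new();
    value.hash(&mut h1);
    let mut h2 = DefaultHasher::new();
    (value ^ 0x5bd1_e995).hash(&mut h2);
    [h1.finish() as usize % BLOOM_BITS, h2.finish() as usize % BLOOM_BITS]
}

impl BlockIndex {
    fn build(values: &[i64]) -> Self {
        let mut bloom = vec![false; BLOOM_BITS];
        for v in values {
            for pos in bloom_positions(*v) {
                bloom[pos] = true;
            }
        }
        BlockIndex {
            min: values.iter().copied().min().unwrap_or(i64::MAX),
            max: values.iter().copied().max().unwrap_or(i64::MIN),
            bloom,
        }
    }

    /// A `col = needle` predicate can skip this block if the value is outside
    /// [min, max] or the bloom filter proves it absent (false positives only).
    fn may_contain(&self, needle: i64) -> bool {
        needle >= self.min
            && needle <= self.max
            && bloom_positions(needle).iter().all(|&pos| self.bloom[pos])
    }
}

fn main() {
    let blocks = vec![BlockIndex::build(&[1, 5, 9]), BlockIndex::build(&[100, 150, 199])];
    // Only blocks that may contain the value need to be read at all.
    let to_read: Vec<usize> = (0..blocks.len()).filter(|&i| blocks[i].may_contain(150)).collect();
    assert_eq!(to_read, vec![1]);
}
```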

And AFAIK, ORC does have a sparse index in its data format, but I haven't done any benchmarks on it, so just FYI.

Anyway, we can choose a major data format for now to bootstrap fuse-store, but eventually there should be a unified data format of our own design.

dantengsky commented 3 years ago

ClickHouse's MergeTree engine, and the way it organizes indexes, is definitely a valuable reference, but IMHO I am afraid it is not "cloud-native" friendly; @zhang2014 @sundy-li may have more insightful comments on this.

We are considering using something like Iceberg's table format, which is more "batch write" friendly, as @leiysky mentioned; i.e. if small parts keep being ingested without batching, something like ClickHouse's "Too many parts" error may happen (and the metadata grows with each small ingestion, which puts a burden on the meta layer, as Iceberg/Uber mentioned).

Totally agreed with you that the statistics of Parquet and ORC are a kind of sparse index.

But I am afraid the statistics of Parquet or ORC cannot be used directly, since the data files are supposed to be stored in object storage, which is not that efficient to access; the sparse index should be maintained by the meta service, and ideally it could be accessed by the computation layer as a normal data source (relation), so that we can run computations over the sparse index in parallel (or even distributed, but that might be too much). Also, we are considering utilizing some fancy data structures to accelerate queries, like BloomRF, etc.
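For what it's worth, a toy sketch of that "sparse index as a relation" idea: the per-block statistics live as rows served by the meta service, and the computation layer prunes them in parallel before touching object storage. The row layout, block locations, and thread-based split below are all made up for illustration.

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical "sparse index as a relation": one row of statistics per data block,
// maintained by the meta service and filtered by the computation layer like any table.
#[derive(Debug, Clone)]
struct BlockStatsRow {
    block_location: String, // object-storage key of the data block
    min: i64,
    max: i64,
}

/// Prune the block list in parallel: each worker scans a slice of the stats
/// relation and keeps the blocks whose [min, max] range intersects the predicate.
fn prune_blocks(stats: Arc<Vec<BlockStatsRow>>, lo: i64, hi: i64, workers: usize) -> Vec<String> {
    let workers = workers.max(1);
    let chunk = (stats.len() + workers - 1) / workers;
    let mut handles = Vec::new();
    for w in 0..workers {
        let stats = Arc::clone(&stats);
        handles.push(thread::spawn(move || {
            stats
                .iter()
                .skip(w * chunk)
                .take(chunk)
                .filter(|row| row.max >= lo && row.min <= hi)
                .map(|row| row.block_location.clone())
                .collect::<Vec<_>>()
        }));
    }
    handles.into_iter().flat_map(|h| h.join().unwrap()).collect()
}

fn main() {
    let stats = Arc::new(vec![
        BlockStatsRow { block_location: "t1/block_0000".into(), min: 0, max: 99 },
        BlockStatsRow { block_location: "t1/block_0001".into(), min: 100, max: 199 },
        BlockStatsRow { block_location: "t1/block_0002".into(), min: 200, max: 299 },
    ]);
    // A predicate like `c BETWEEN 120 AND 210` only needs the last two blocks.
    let survivors = prune_blocks(stats, 120, 210, 2);
    assert_eq!(survivors, vec!["t1/block_0001".to_string(), "t1/block_0002".to_string()]);
}
```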

May I transfer this issue to a discussion? DataFuse is young, and constructive discussions like this are very welcome.