opensearch-project / OpenSearch

🔎 Open source distributed and RESTful search engine.
https://opensearch.org/docs/latest/opensearch/index/
Apache License 2.0

[RFC] - OpenSearch Table #14524

Open penghuo opened 1 week ago

penghuo commented 1 week ago

Is your feature request related to a problem? Please describe

1. Current status

Currently, users can use the Spark Dataset API to directly read and write OpenSearch indices. The OpenSearch Spark extension internally leverages the Dataset API to access OpenSearch indices. However, we observe several problems and requirements:

Describe the solution you'd like

2. Vision of the future

Our goal is to enable users to work with OpenSearch indices from popular query engines such as Spark. Spark users should be able to register OpenSearch clusters as catalogs and access OpenSearch indices as tables, leveraging OpenSearch's rich query and aggregation capabilities to query data efficiently. Given OpenSearch's rich data type support, we plan to extend Spark's data type system and functions to incorporate more OpenSearch features. We also intend to formally define the OpenSearch Table specification, covering schema and data types, partitioning, and table metadata. Users should be able to define OpenSearch tables in the AWS Glue catalog and use Lake Formation to define ACLs on them.

To improve performance, we will invest in more efficient data storage formats and data transmission protocols for OpenSearch. We are considering Apache Parquet as a storage format instead of `_source` (similar proposal: https://github.com/opensearch-project/OpenSearch/issues/13668) and Apache Arrow for zero-copy data transmission.

To achieve cost savings, we aim to let users query OpenSearch cold indices and snapshots, so they can eagerly move data from hot to cold storage without losing OpenSearch's key features.

In summary, the end-to-end user experience is as follows:

2.1. Directly access

2.1.1. Configure Spark

By default, the OpenSearch domain acts as the catalog.

```
spark.sql.catalog.dev.warehouse=https://my-domain/
spark.sql.catalog.dev=org.apache.opensearch.spark.SparkCatalog
```

2.1.2. Query index as Table

Users can directly query an OpenSearch index without creating a table first.

```sql
SELECT * FROM opensearch.default.index00001
```

2.2. Create Table

2.2.1. Configure Spark and Create Table (Spark)

```
spark.sql.catalog.dev.warehouse=https://my-domain/
spark.sql.catalog.dev=org.apache.opensearch.spark.SparkCatalog
spark.sql.catalog.dev.catalog-impl=org.apache.opensearch.aws.glue.GlueCatalog
spark.hadoop.hive.metastore.client.factory.class=com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory
```

```sql
CREATE TABLE my_table (
  DocId INT,
  Links STRUCT<
    Forward: ARRAY<INT>,
    Backward: ARRAY<INT>
  >,
  Name ARRAY<STRUCT<
    Language: ARRAY<STRUCT<
      Code: STRING,
      Country: STRING
    >>,
    Url: STRING
  >>
)
USING OPENSEARCH
LOCATION "http://my-domain"
```
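The table specification mentioned above would need to pin down how such a schema translates to an OpenSearch index mapping. One plausible (hypothetical, not yet specified) translation: arrays need no dedicated type in OpenSearch since any field may hold multiple values, so `ARRAY<INT>` maps to `integer`, while arrays of structs map to `nested` to preserve per-element matching; `STRING` could map to `keyword` or `text` depending on what the spec decides.

```json
{
  "mappings": {
    "properties": {
      "DocId": { "type": "integer" },
      "Links": {
        "properties": {
          "Forward":  { "type": "integer" },
          "Backward": { "type": "integer" }
        }
      },
      "Name": {
        "type": "nested",
        "properties": {
          "Language": {
            "type": "nested",
            "properties": {
              "Code":    { "type": "keyword" },
              "Country": { "type": "keyword" }
            }
          },
          "Url": { "type": "keyword" }
        }
      }
    }
  }
}
```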

2.2.2. Writes (Spark)

`INSERT INTO dev.default.tbl00001 VALUES (1), (2)`

2.2.3. Query (Spark)

```sql
SELECT * FROM dev.default.tbl00001
```
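Leveraging OpenSearch's query capabilities efficiently implies pushing predicates down to the engine rather than scanning the whole index in Spark. A minimal sketch of that idea (hypothetical helpers, not part of any existing extension), translating simple predicates into OpenSearch query DSL:

```python
# Hypothetical sketch of filter pushdown: a Spark-side predicate is
# translated into OpenSearch query DSL so filtering happens inside
# OpenSearch instead of after a full scan.

def term_filter(column, value):
    # Equality predicate -> OpenSearch "term" query on the field.
    return {"term": {column: value}}

def conjunction(*filters):
    # AND of predicates -> bool query in filter context (no scoring).
    return {"bool": {"filter": list(filters)}}

def search_body(*filters):
    # Full request body that would be sent to the _search endpoint.
    return {"query": conjunction(*filters)}
```

For example, `search_body(term_filter("DocId", 1))` produces the request body for a `WHERE DocId = 1` filter.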

Related component

Search:Query Capabilities

Describe alternatives you've considered

n/a

Additional context

3. Next Steps

We will incorporate the feedback from this RFC into a more detailed proposal and high-level design that integrates the storage-related efforts in OpenSearch. We will create meta-issues to delve deeper into the components involved and continue with the detailed design.

4. How Can You Help?

Any general comments about the overall direction are welcome. Some specific questions:

mch2 commented 1 week ago

Thanks for raising this @penghuo, looking forward to discussion on this. Removed untriaged label.

andrross commented 6 days ago

Some high level comments after a discussion with @penghuo:

msfroh commented 1 day ago

> We are considering Apache Parquet as a storage format instead of _source, (similar proposal https://github.com/opensearch-project/OpenSearch/issues/13668) and Apache Arrow for zero-copy data transmission.

Last year, @noCharger and I built a little prototype that avoided storing _source in OpenSearch, instead keeping document source in DynamoDB. At query time, you could still run queries against indexed fields (and doc values), but a search pipeline with a search response processor would fetch source from DynamoDB.

I wonder if we could do something similar here, where a query against an OpenSearch index retrieves matching doc IDs, sorted or scored as appropriate, then you use those doc IDs to fetch content from Parquet (or DynamoDB or Cassandra or whatever).
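The two-phase flow described above can be sketched end to end. The dicts below are stand-ins: in a real system, phase 1 would query the OpenSearch index and phase 2 would read from Parquet, DynamoDB, or similar.

```python
# Sketch of the two-phase fetch: phase 1 returns ranked doc IDs from
# the index; phase 2 hydrates _source from an external store, as a
# search response processor would.

ranked_ids_by_term = {"laptop": ["doc2", "doc1"]}  # term -> doc IDs, best first
source_store = {                                   # external source-of-truth
    "doc1": {"title": "gaming laptop"},
    "doc2": {"title": "ultralight laptop"},
}

def search(term):
    # Phase 1: retrieve matching doc IDs, sorted/scored as appropriate.
    doc_ids = ranked_ids_by_term.get(term, [])
    # Phase 2: join each hit with its source from the external store.
    return [{"_id": i, "_source": source_store[i]} for i in doc_ids]
```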