penghuo opened this issue 1 week ago
Thanks for raising this @penghuo, looking forward to discussion on this. Removed untriaged label.
Some high-level comments after a discussion with @penghuo:

[…] `_source` fields for large amounts of data). Prototypes or proof-of-concept examples with benchmarking data would really help evaluate these concerns.

> We are considering Apache Parquet as a storage format instead of `_source` (similar proposal: https://github.com/opensearch-project/OpenSearch/issues/13668) and Apache Arrow for zero-copy data transmission.
Last year, @noCharger and I built a little prototype that avoided storing `_source` in OpenSearch, instead keeping document source in DynamoDB. At query time, you could still run queries against indexed fields (and doc values), while a search pipeline with a search response processor would fetch source from DynamoDB.
I wonder if we could do something similar here: a query against an OpenSearch index retrieves matching doc IDs, sorted or scored as appropriate, and then those doc IDs are used to fetch content from Parquet (or DynamoDB, Cassandra, or whatever).
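The two-phase flow described above can be sketched in plain Python. This is only an illustration of the idea, not an OpenSearch API: a dict stands in for the inverted index, and another dict stands in for the external source store (DynamoDB, Parquet, Cassandra, etc.).

```python
# Phase 1: query the index and return ranked doc IDs only (no _source).
# Phase 2: hydrate full documents from an external source store.
# All data structures and names here are illustrative stand-ins.

def search_doc_ids(inverted_index, term, scores):
    """Phase 1: return doc IDs matching `term`, sorted by descending score."""
    matches = inverted_index.get(term, [])
    return sorted(matches, key=lambda doc_id: scores[doc_id], reverse=True)

def hydrate(doc_ids, source_store):
    """Phase 2: fetch full document source for each ID, preserving rank order."""
    return [source_store[doc_id] for doc_id in doc_ids]

# Tiny example corpus.
source_store = {
    "d1": {"title": "parquet storage", "body": "columnar files"},
    "d2": {"title": "arrow transport", "body": "zero-copy parquet"},
}
inverted_index = {"parquet": ["d1", "d2"], "arrow": ["d2"]}
scores = {"d1": 0.4, "d2": 0.9}

ids = search_doc_ids(inverted_index, "parquet", scores)  # ["d2", "d1"]
docs = hydrate(ids, source_store)
```

In a real deployment, phase 2 would be a batch-get against the external store (or a Parquet row-group read), ideally issued by a search response processor so clients see a normal search response.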
### Is your feature request related to a problem? Please describe
#### 1. Current status

Currently, users can use the Spark Dataset API to read and write OpenSearch indices directly, and the OpenSearch Spark extension internally leverages the Dataset API to access OpenSearch indices. However, we observe several problems and requirements:
### Describe the solution you'd like
#### 2. Vision of the future

Our goal is to enable users to utilize OpenSearch indices within popular query engines such as Spark:

- Spark users should be able to use OpenSearch clusters directly as catalogs and access OpenSearch indices as tables, leveraging OpenSearch's rich query and aggregation capabilities to query OpenSearch efficiently.
- Given OpenSearch's rich data type support, we plan to extend Spark's data type system and functions to incorporate more features from OpenSearch.
- We intend to formally define the OpenSearch Table specification, covering schema and data types, partitioning, and table metadata. Users should be able to define OpenSearch tables in the AWS Glue catalog and use Lake Formation to define ACLs on them.
- To improve performance, we will invest in more efficient data storage formats and data transmission protocols for OpenSearch. We are considering Apache Parquet as a storage format instead of `_source` (similar proposal: https://github.com/opensearch-project/OpenSearch/issues/13668) and Apache Arrow for zero-copy data transmission.
- To achieve cost savings, we aim to enable users to query OpenSearch cold indices and snapshots, allowing them to move data eagerly from hot to cold storage without losing OpenSearch's key features.

In summary, the end-to-end user experience is as follows.

#### 2.1. Directly access
##### 2.1.1. Configure Spark

By default, the OpenSearch domain serves as the catalog.
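A configuration sketch of what registering an OpenSearch domain as a Spark catalog could look like. The property keys, connector class, and catalog name `dev` are illustrative assumptions, not a released artifact:

```shell
# Hypothetical: register OpenSearch domain as Spark catalog "dev".
spark-sql \
  --conf spark.sql.catalog.dev=org.opensearch.spark.catalog.OpenSearchCatalog \
  --conf spark.sql.catalog.dev.opensearch.host=https://my-domain.example.com \
  --conf spark.sql.catalog.dev.opensearch.port=9200
```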
##### 2.1.2. Query index as Table

Users can directly access an OpenSearch index as a table without creating the table first.
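For example, querying an existing index directly might look like the following. The catalog name `dev`, schema `default`, and index name `http_logs` are illustrative:

```sql
-- Hypothetical: the index http_logs is exposed as a table with no DDL.
SELECT status, count(*) AS cnt
FROM dev.default.http_logs
WHERE status >= 500
GROUP BY status;
```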
#### 2.2. Create Table
##### 2.2.1. Configure Spark and Create Table (Spark)
Creating the table creates the index `dev.default.tbl00001.metadata` to store metadata and `tbl00001` to store data.

##### 2.2.2. Writes (Spark)
##### 2.2.3. Query (Spark)
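An end-to-end sketch of the create/write/query experience in sections 2.2.1–2.2.3. The table name `tbl00001` comes from the example above; the columns and exact DDL syntax are illustrative assumptions:

```sql
-- 2.2.1: create the table (backed by index tbl00001 plus a metadata index).
CREATE TABLE dev.default.tbl00001 (
  ts   TIMESTAMP,
  host STRING,
  msg  STRING
);

-- 2.2.2: write rows through Spark.
INSERT INTO dev.default.tbl00001
VALUES (current_timestamp(), 'web-01', 'service started');

-- 2.2.3: query the table, pushing filters/aggregations down to OpenSearch.
SELECT host, count(*) AS cnt
FROM dev.default.tbl00001
GROUP BY host;
```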
### Related component
Search:Query Capabilities
### Describe alternatives you've considered
n/a
### Additional context
#### 3. Next Steps
We will incorporate the feedback from this RFC into a more detailed proposal and high-level design that integrates the storage-related efforts in OpenSearch. We will create meta-issues to delve deeper into the components involved and continue with the detailed design.
#### 4. How Can You Help?
Any general comments about the overall direction are welcome. Some specific questions: