microsoft / hyperspace

An open source indexing subsystem that brings index-based query acceleration to Apache Spark™ and big data workloads.
https://aka.ms/hyperspace
Apache License 2.0
423 stars · 115 forks

[PROPOSAL]: support not an implementation of Delta Lake or some other physical manifestation, but Spark's V2 data source API #513

Open MironAtHome opened 2 years ago

MironAtHome commented 2 years ago

Problem Statement

There seems to be a gaping hole in the architectural approach to designing Spark physical data access. It is natural, and entirely in line with how a developer plunges into delivering a working solution, to lose sight of the greater design picture. I believe it is time to take a step back and look at Spark not as a wonder that has capabilities, but as a design, a blueprint (or the sum of its blueprints), which holds the key to its purpose and to how its features are implemented. One of the key blueprints underpinning the entire Spark physical data access path (and indexing is nothing but the point of the razor: not only accessing physical data, but optimizing that access to the finest degree) is the V2 Data Source interface, the collection of interfaces defining Spark's physical data access layer. It has been lurking under copious amounts of code ever since version 2.3.1. I believe it is time to dust away the obfuscating details and implement this V2 interface, albeit in a limited form.

In doing so, the indexing subsystem becomes an integral part of the Spark query subsystem: rather than having to invoke the ".enable" stanza, the user can start benefiting from indexes through both the legacy API and ANSI SQL, such as CREATE INDEX. And, provided the interface develops iteratively, in an evolutionary approach, the functionality will remain valid and current, and will apply transparently to more than Parquet. In fact, it would likely open a window of opportunity to make this indexing product open-ended and allow extensions in its own right, enabling the Hyperspace team to define interfaces for extensions and then, potentially, to benefit from domain-specific implementations by partnering with people working on related products.
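To make the proposal concrete, here is a toy sketch (all names hypothetical, not from Spark or Hyperspace) of the kind of minimal index catalog a SQL-level CREATE INDEX / DROP INDEX front end could manage, independent of any one physical format:

```python
# Toy sketch (all names hypothetical): a minimal index catalog of the kind a
# SQL-level CREATE INDEX / DROP INDEX front end could manage, which a query
# planner could then consult when rewriting scans.
from dataclasses import dataclass


@dataclass
class IndexSpec:
    name: str
    table: str
    indexed_columns: tuple      # columns the index is keyed on
    included_columns: tuple = ()  # extra columns carried for covering scans


class IndexCatalog:
    """Registry the query planner could consult when choosing an index."""

    def __init__(self):
        self._by_table = {}

    def create_index(self, spec: IndexSpec):
        self._by_table.setdefault(spec.table, []).append(spec)

    def drop_index(self, table: str, name: str):
        self._by_table[table] = [
            s for s in self._by_table.get(table, []) if s.name != name
        ]

    def candidates(self, table: str, filter_columns):
        # An index is applicable if its leading key column appears in the filter.
        return [
            s for s in self._by_table.get(table, [])
            if s.indexed_columns and s.indexed_columns[0] in set(filter_columns)
        ]


catalog = IndexCatalog()
catalog.create_index(IndexSpec("idx1", "sales", ("region",), ("amount",)))
print([s.name for s in catalog.candidates("sales", {"region"})])  # ['idx1']
```

The point of the sketch is only the shape: index metadata lives beside table metadata, so a CREATE INDEX statement and the optimizer both talk to the same registry rather than a separately ".enable"-d subsystem.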

clee704 commented 2 years ago

I agree that it could be more beneficial for the community in general, in the long run, if Spark supported indexes as a concept in the Data Source V2 API. Actually, there is already work going on in that area (SPARK-36525). If you want to contribute in that direction, it might be better to work on Spark directly.

I think one of the values of Hyperspace is being, or trying to be, data-format agnostic. For example, covering indexes work as long as the data source supports creating, from an existing dataset, a new dataset with fewer columns laid out in certain ways (i.e., bucketed) for efficient scanning by selected columns. Data skipping indexes work as long as the data source stores data in multiple objects (e.g., files) and can compute aggregations grouped by object. So we support not only Parquet, Delta Lake, and Iceberg, but also CSV and other data sources, as long as they are capable of these things.
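The data skipping requirement above can be sketched without reference to any particular format: keep one aggregate (here min/max) per object, and prune objects whose range cannot satisfy the predicate. File names and values are illustrative.

```python
# Illustrative sketch of a data skipping index: per-object (per-file) min/max
# statistics for one column, used to prune objects before scanning them.
files = {
    "part-0": [3, 7, 9],
    "part-1": [12, 15],
    "part-2": [21, 22, 30],
}

# Build the index: one aggregate row per object, which is exactly the
# "aggregations grouped by object" capability described above.
stats = {name: (min(vals), max(vals)) for name, vals in files.items()}


def files_to_scan(predicate_value):
    """Objects whose [min, max] range could contain rows with col == value."""
    return sorted(
        name for name, (lo, hi) in stats.items() if lo <= predicate_value <= hi
    )


print(files_to_scan(15))  # only part-1's range [12, 15] can match
```

Nothing here depends on Parquet: any source that enumerates its objects and can compute the per-object aggregates supports the same pruning.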

Looking at the current code and how it's evolving, it seems the index API in Spark doesn't allow Hyperspace-like indexing subsystems that support more than one data source to be plugged in. If we want to build Hyperspace around the Data Source V2 API, then we should propose a suitable change to the API so that for example we can hook into the index API.
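One possible shape for the kind of hook described above (purely hypothetical; these names are not from SPARK-36525 or Hyperspace): an extension point that a multi-format indexing subsystem registers against, keyed on what a source is capable of rather than on its format.

```python
# Hypothetical sketch of a pluggable index-provider hook; none of these names
# come from Spark or Hyperspace.
from abc import ABC, abstractmethod


class IndexProvider(ABC):
    @abstractmethod
    def supports(self, source_capabilities: set) -> bool:
        """Whether this provider can index a source with these capabilities."""

    @abstractmethod
    def name(self) -> str:
        """Short identifier for the index type this provider supplies."""


class DataSkippingProvider(IndexProvider):
    # Applicable to any source that stores data in multiple objects and can
    # aggregate per object, mirroring the requirement stated in this thread.
    def supports(self, source_capabilities):
        required = {"multi_object_storage", "per_object_aggregation"}
        return required <= source_capabilities

    def name(self):
        return "data-skipping"


registry = [DataSkippingProvider()]
parquet_caps = {"multi_object_storage", "per_object_aggregation", "column_pruning"}
print([p.name() for p in registry if p.supports(parquet_caps)])
```

A capability-keyed hook like this is what would let one indexing subsystem serve Parquet, Delta Lake, Iceberg, and CSV alike, which the current per-format index API does not seem to allow.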