openLooKeng is a drop-in engine that enables in-situ analytics on any data, anywhere, including geographically remote data sources. It provides a global view of all your data via its SQL 2003 interface. With high availability, auto-scaling, and built-in caching and indexing support, openLooKeng is ready for enterprise workloads with the required reliability.
The goal of openLooKeng is to support data exploration, ad hoc queries, and batch processing with near-real-time latency ranging from 100+ ms to minutes, without moving your data around. openLooKeng also supports hierarchical deployment, enabling geographically remote openLooKeng clusters to participate in the same query. With its cross-region query plan optimization capability, queries involving remote data can achieve close to "local" performance.
Data management systems such as RDBMSs (for example, MySQL and Oracle) and NoSQL stores (for example, HBase, Elasticsearch, and Kafka) are widely used in customers' application systems. As data volumes grow and data management matures, customers gradually build data warehouses based on Hive or MPPDB. These storage systems are often isolated from one another, resulting in independent data islands, and data analysts are forced to query each system separately or to move data around before they can analyze it.
openLooKeng can be used to run federated queries across data sources such as RDBMSs, NoSQL stores, Hive, and MPPDB-based data warehouses. With openLooKeng's cross-source heterogeneous query capability, data analysts can quickly analyze massive amounts of data wherever it resides.
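As a sketch of what such a federated query looks like, the following joins a Hive table with a MySQL table in a single statement. The catalog, schema, and table names are illustrative and assume both catalogs are already configured on the cluster:

```sql
-- Join a fact table in Hive with a dimension table in MySQL in one query.
-- Catalog names (hive, mysql) and all schema/table names are illustrative.
SELECT c.customer_name,
       sum(o.order_amount) AS total_amount
FROM hive.sales.orders o
JOIN mysql.crm.customers c
  ON o.customer_id = c.customer_id
WHERE o.order_date >= DATE '2023-01-01'
GROUP BY c.customer_name
ORDER BY total_amount DESC;
```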
In a two-level or multi-level data center scenario, such as a province-city or headquarters-branch deployment, users often need to query data across the provincial (headquarters) and municipal (branch) data centers. The bottleneck of such cross-domain queries is the network between the data centers: insufficient bandwidth, high latency, and packet loss lead to long query delays and unstable performance. openLooKeng provides a cross-domain, cross-DC solution designed for these queries, with an openLooKeng cluster deployed in each DC. After the openLooKeng cluster in DC2 completes its part of the computation, it transmits the result over the network to the openLooKeng cluster in DC1, which performs the final aggregation. Because only computation results are transmitted between the openLooKeng clusters, this avoids the problems caused by insufficient network bandwidth and packet loss, and solves the cross-domain query problem to a large extent.
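For illustration, a cross-DC query run from DC1 might look like the sketch below. It assumes the DC2 cluster is registered on the DC1 cluster through a DataCenter-type catalog, shown here with the hypothetical prefix dc2.hive; all schema and table names are likewise illustrative:

```sql
-- Run on the DC1 cluster. The dc2.hive prefix is assumed to expose the hive
-- catalog of the remote DC2 cluster (all names are illustrative). The DC2
-- cluster scans and processes its table locally; only that intermediate
-- result crosses the network to DC1, where the final join and aggregation run.
SELECT c.region, count(*) AS order_count
FROM dc2.hive.sales.orders o      -- data resides in DC2
JOIN hive.crm.customers c         -- data resides in DC1
  ON o.customer_id = c.customer_id
GROUP BY c.region;
```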
openLooKeng itself has no storage engine, but it can query data stored in different data sources. The system therefore has a typical storage-compute separated architecture, which makes it easy to scale computing and storage resources independently. This architecture suits dynamically growing clusters and enables quick, elastic scaling of resources.
Customers have large amounts of data. To use it, they usually build a dedicated data warehouse, which incurs extra labor costs for warehouse maintenance as well as time costs for ETL. For customers who want to explore data quickly, replicating and loading the data into a dedicated warehouse is time-consuming and labor-intensive. Instead, openLooKeng can use standard SQL to define a virtual data mart and connect to each data source through its cross-source heterogeneous query capability. The analysis tasks that users want to explore can then be defined at the semantic layer of this virtual data mart. With openLooKeng's data virtualization capability, customers can quickly build exploration and analysis services on top of diverse data sources without constructing complex, dedicated data warehouses.
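As a minimal sketch (all names hypothetical), such a virtual data mart can be expressed as SQL views over the live sources, which analysts then query like ordinary tables:

```sql
-- A view that presents a unified, cross-source semantic layer without
-- copying any data; catalog, schema, and table names are illustrative.
CREATE VIEW hive.mart.customer_orders AS
SELECT c.customer_id, c.customer_name, o.order_id, o.order_amount
FROM mysql.crm.customers c
JOIN hive.sales.orders o
  ON o.customer_id = c.customer_id;

-- Exploration then happens directly against the mart view:
SELECT customer_name, count(*) AS order_count
FROM hive.mart.customer_orders
GROUP BY customer_name
ORDER BY order_count DESC;
```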
A new connector connects directly to another openLooKeng cluster, enabling collaboration among multiple DCs. The key technologies are as follows:
In multi-table join scenarios where the join is highly selective, most probe-side rows are read only to be filtered out because they do not match the join condition, causing unnecessary join computation, I/O, and network transfer. Dynamic filtering generates filter conditions at query runtime from the join condition and the data actually read from the build-side table, and applies them during the table scan of the probe-side table. This reduces the volume of probe-side data that participates in the join, effectively cutting network transfer and improving performance by 30%.
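The following sketch shows the kind of query that benefits, using the TPC-H sample schema that ships with the tpch connector (enabling dynamic filtering may require a configuration switch; see the openLooKeng documentation):

```sql
-- supplier (build side) is small and heavily filtered; lineitem (probe side)
-- is large. With dynamic filtering, the set of suppkey values surviving the
-- WHERE clause becomes a runtime predicate pushed into the lineitem scan,
-- so non-matching rows are skipped before the join instead of being read
-- and then discarded.
SELECT count(*)
FROM tpch.sf1.lineitem l
JOIN tpch.sf1.supplier s
  ON l.suppkey = s.suppkey
WHERE s.name = 'Supplier#000000001';
```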
openLooKeng improves query efficiency by creating indexes on existing data and storing the indexes outside the data source. Bitmap is a bitmap index that records binary information; it is suited to AND operations and can be queried quickly through a dictionary. Bloom uses a bit array to represent a set and can quickly determine whether a value exists in the set (equality predicates only). Min-max records the maximum and minimum values in a file and is suited to greater-than or less-than predicates.
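A sketch of how such indexes are created, based on openLooKeng's CREATE INDEX syntax (exact options vary by version; the table and index names are illustrative):

```sql
-- Bloom index: speeds up point lookups (equality predicates) on customer_id.
CREATE INDEX idx_orders_customer
USING bloom
ON hive.sales.orders (customer_id);

-- Min-max index: lets scans skip files whose [min, max] range cannot satisfy
-- a greater-than or less-than predicate on order_amount.
CREATE INDEX idx_orders_amount
USING minmax
ON hive.sales.orders (order_amount);

-- Queries like this can then skip data that the index rules out:
SELECT * FROM hive.sales.orders WHERE customer_id = 12345;
```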
To improve query performance, openLooKeng maintains multiple caches, including a metadata cache, an execution plan cache, and an ORC row data cache.
All connectors that use JDBC connections cache metadata during the first query, improving the performance of subsequent queries.
For ORC files, the ORC row data cache provides an efficient way to cache frequently accessed data, reducing query latency. Administrators can create caches for specific tables and partitions.
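A sketch using openLooKeng's cache statements (statement forms assumed from the documentation; all names are illustrative):

```sql
-- Pin the ORC data of one hot partition in the row data cache.
CACHE TABLE hive.sales.orders WHERE order_date = date '2023-01-01';

-- List the table/partition caches that have been created.
SHOW CACHE;

-- Remove the cache when it is no longer needed.
DROP CACHE hive.sales.orders WHERE order_date = date '2023-01-01';
```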
The execution plan is cached after the first query rather than discarded after each request, which skips the time-consuming plan generation step and reduces the preprocessing time and resources required by subsequent queries.
Refer to the design document for more details.