Open kirkrodrigues opened 2 years ago
Is this limited to the JSON format, or can it handle any text?
The plugins would be limited to ingesting JSON records, but CLP's encoding can be applied to any text field within those records.
Our short-term goal is to use Pinot as a black-box columnar store. We'd apply CLP's encoding to decompose a text field into a columnar format and store the columns in Pinot; for logs, this should reduce the storage overhead of that field while still allowing it to be searched without resorting to a text index. Then, when a user queries the field, we'd use CLP to convert their wildcard query into a SQL query on the decomposed columns in Pinot. Since the query operates on the decomposed columns, it should be faster than a query on the original text, and only matching rows would need to be reconstructed from the columns using a UDF.
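To make the decomposition idea concrete, here's a minimal sketch. The tokenization rules, placeholder strings, and class names are simplified illustrations for this thread, not CLP's actual encoding:

```java
import java.util.ArrayList;
import java.util.List;

public class ClpDecompositionSketch {
    // Result of decomposing one log message into columns.
    record Decomposed(String logtype, List<String> dictVars, List<Long> encodedVars) {}

    // Split a message into a static "logtype" template plus variable columns:
    // integer tokens go to an encoded-variable column, other tokens containing
    // digits go to a dictionary-variable column, and the remaining static text
    // stays in the logtype with placeholders marking where variables were.
    static Decomposed decompose(String message) {
        StringBuilder logtype = new StringBuilder();
        List<String> dictVars = new ArrayList<>();
        List<Long> encodedVars = new ArrayList<>();
        for (String token : message.split(" ")) {
            if (token.matches("-?\\d+")) {
                encodedVars.add(Long.parseLong(token));
                logtype.append("\\i");
            } else if (token.matches(".*\\d.*")) {
                dictVars.add(token);
                logtype.append("\\d");
            } else {
                logtype.append(token);
            }
            logtype.append(' ');
        }
        return new Decomposed(logtype.toString().trim(), dictVars, encodedVars);
    }

    public static void main(String[] args) {
        Decomposed d = decompose("Task task_12 failed after 3 retries");
        System.out.println(d.logtype());     // Task \d failed after \i retries
        System.out.println(d.dictVars());    // [task_12]
        System.out.println(d.encodedVars()); // [3]
    }
}
```

Each of the three pieces would land in its own Pinot column, so identical message templates dedupe to the same logtype value.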
If this works well (and if the community gives us their blessing :), we hope to try integrating this more deeply into Pinot, perhaps as a special type of index that could be applied to any text column containing logs.
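The wildcard-query-to-SQL step above could be sketched roughly like this. The column names (`logtype`, `encodedVars`) and the `ARRAY_CONTAINS` predicate are placeholders for whatever the schema and store actually provide; this is a toy translator, not Pinot's query rewriter:

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardToSqlSketch {
    // Translate a space-delimited wildcard query into a SQL filter over the
    // decomposed columns: '*' becomes a LIKE wildcard in the logtype pattern,
    // and integer tokens become both a placeholder in the pattern and a
    // membership predicate on the encoded-variable column.
    static String toSqlFilter(String wildcardQuery) {
        StringBuilder pattern = new StringBuilder();
        List<String> predicates = new ArrayList<>();
        for (String token : wildcardQuery.split(" ")) {
            if (token.equals("*")) {
                pattern.append('%');   // matches any static text
            } else if (token.matches("-?\\d+")) {
                pattern.append("\\i"); // integer placeholder in the logtype...
                // ...plus a constraint on the variable column (illustrative syntax)
                predicates.add("ARRAY_CONTAINS(encodedVars, " + token + ")");
            } else {
                pattern.append(token); // static text must match the logtype
            }
            pattern.append(' ');
        }
        predicates.add(0, "logtype LIKE '" + pattern.toString().trim() + "'");
        return String.join(" AND ", predicates);
    }

    public static void main(String[] args) {
        System.out.println(toSqlFilter("Task * failed after 3 *"));
        // logtype LIKE 'Task % failed after \i %' AND ARRAY_CONTAINS(encodedVars, 3)
    }
}
```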
The CLP encoding should be applicable to multiple input formats, including JSON and plain text. Our first implementation of the stream decoder will be based on JSON log input. @kirkrodrigues can we extract the core logic so that other input stream formats can easily use it too? It may not make the first PR, but it could be a good follow-up.
@chenboat Yeah, I could lift the CLP-encoding logic into a class so that it's easy for other input formats to use it.
Does this change support both storing and querying JSON? Asking because the design doc doesn't include details on querying.
@kirkrodrigues @chenboat I was wondering whether searching/querying is supported now? Is it on the roadmap?
We want to be able to store JSON log events in Pinot so that they can be queried efficiently and so that we can reduce storage costs. Part of this involves encoding unstructured message fields in the log event using a new log compressor called CLP. The other part is to transform the log event to fit a table's schema (e.g., extracting nested fields and storing them in a column). We think this can be done with a custom `StreamMessageDecoder` and a few UDFs. We've written more about the motivation and proposal here.
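As a rough illustration of the schema-transformation half (extracting nested fields into columns), assuming log events arrive as maps after JSON parsing; this is a toy sketch, not Pinot's actual `StreamMessageDecoder` implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class FlattenSketch {
    // Flatten a nested log event into column-name -> value pairs, joining
    // nested keys with '.' (e.g. {"ctx": {"host": "h1"}} -> "ctx.host" = "h1").
    // In the proposal, the unstructured message field would additionally be
    // CLP-encoded into logtype/variable columns; here it passes through as-is.
    static Map<String, Object> flatten(String prefix, Map<String, Object> event) {
        Map<String, Object> columns = new HashMap<>();
        for (Map.Entry<String, Object> e : event.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            Object value = e.getValue();
            if (value instanceof Map<?, ?>) {
                @SuppressWarnings("unchecked")
                Map<String, Object> nested = (Map<String, Object>) value;
                columns.putAll(flatten(key, nested));
            } else {
                columns.put(key, value);
            }
        }
        return columns;
    }

    public static void main(String[] args) {
        Map<String, Object> event = Map.of(
            "timestamp", 1690000000L,
            "ctx", Map.of("host", "h1", "pid", 42),
            "message", "Task task_12 failed after 3 retries");
        System.out.println(flatten("", event));
    }
}
```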