
Apache DataFusion SQL Query Engine
https://datafusion.apache.org/
Apache License 2.0

Simple Functions #12635

Open findepi opened 2 days ago

findepi commented 2 days ago

Is your feature request related to a problem or challenge?

Verbosity

Currently, implementing a scalar function is a pretty involved process. For example, a simple function calculating the greatest common divisor looks like this (a lot of detail omitted for brevity):

impl ScalarUDFImpl for GcdFunc {  
    fn name(&self) -> &str {
        "gcd"
    }

    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {
        Ok(Int64)
    }

    fn invoke(&self, args: &[ColumnarValue]) -> Result<ColumnarValue> {
        make_scalar_function(gcd, vec![])(args)
    }
}

/// Gcd SQL function
fn gcd(args: &[ArrayRef]) -> Result<ArrayRef> {
    match args[0].data_type() {
        Int64 => {
            let arg1 = downcast_arg!(&args[0], "x", Int64Array);
            let arg2 = downcast_arg!(&args[1], "y", Int64Array);

            Ok(arg1
                .iter()
                .zip(arg2.iter())
                .map(|(a1, a2)| match (a1, a2) {
                    (Some(a1), Some(a2)) => Ok(Some(compute_gcd(a1, a2)?)),
                    _ => Ok(None),
                })
                .collect::<Result<Int64Array>>()
                .map(Arc::new)? as ArrayRef)
        }
        other => exec_err!("Unsupported data type {other:?} for function gcd"),
    }
}

/// Computes greatest common divisor using Binary GCD algorithm.
pub fn compute_gcd(x: i64, y: i64) -> Result<i64> {
   ...
}

This function is still "relatively simple" because:

It should be possible to express functions using simple row-level operations, because in query engines most function logic is structured like that anyway (compute_gcd in the example).

Local data structures

Currently DataFusion functions are singletons plugged into the execution engine. They have no way to store and reuse buffers, compiled regular expressions, etc. Thread-local storage is not an option because DataFusion, when embedded, does not control the creation of application threads.

Describe the solution you'd like

Simple

It should be possible to separate function logic from the mechanics. The example GCD function should only need to provide fn compute_gcd(x: i64, y: i64) -> Result<i64>; the rest of the boilerplate should not be hand-written for every function separately.
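For illustration only, a minimal sketch of such a separation; lift_binary_i64 is a hypothetical helper, not an existing DataFusion API:

use std::sync::Arc;

use arrow::array::{Array, ArrayRef, Int64Array};
use datafusion_common::{exec_err, Result};

/// Hypothetical helper: lift a row-level `fn(i64, i64) -> Result<i64>` into an
/// array-level kernel, so that function authors only write the scalar logic
/// and the columnar boilerplate is written once.
fn lift_binary_i64<F>(f: F) -> impl Fn(&[ArrayRef]) -> Result<ArrayRef>
where
    F: Fn(i64, i64) -> Result<i64>,
{
    move |args: &[ArrayRef]| {
        let (Some(a), Some(b)) = (
            args[0].as_any().downcast_ref::<Int64Array>(),
            args[1].as_any().downcast_ref::<Int64Array>(),
        ) else {
            return exec_err!("expected two Int64 arguments");
        };
        let result = a
            .iter()
            .zip(b.iter())
            .map(|(x, y)| match (x, y) {
                (Some(x), Some(y)) => f(x, y).map(Some),
                _ => Ok(None),
            })
            .collect::<Result<Int64Array>>()?;
        Ok(Arc::new(result) as ArrayRef)
    }
}

// Usage: the function author supplies only the row-level logic.
// let gcd_kernel = lift_binary_i64(compute_gcd);
// let array = gcd_kernel(&args)?;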

It should be possible to implement a function that accepts string values without having to deal with the five different runtime representations that can carry string values: StringArray, LargeStringArray, StringViewArray, DictionaryArray<Key>, RunArray<Key> (maybe more than five, because they are recursive in theory: can a DictionaryArray contain a DictionaryArray? can it contain a RunArray?).
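One possible (and deliberately naive) way a framework could hide those representations from the function author is to normalize the input first, for example with arrow's cast kernel. A sketch, with the caveat that casting every batch gives up the benefits of dictionary/view encodings, and that run-end encoded arrays may need extra handling depending on the arrow version:

use arrow::array::{Array, ArrayRef, StringArray};
use arrow::compute::cast;
use arrow::datatypes::DataType;
use datafusion_common::Result;

/// Sketch: normalize any string-carrying array to a plain `StringArray`
/// and hand the author `Option<&str>` values row by row.
fn for_each_str<F>(input: &ArrayRef, mut per_row: F) -> Result<()>
where
    F: FnMut(Option<&str>) -> Result<()>,
{
    let normalized = cast(input.as_ref(), &DataType::Utf8)?;
    let strings = normalized
        .as_any()
        .downcast_ref::<StringArray>()
        .expect("cast to Utf8 yields a StringArray");
    for value in strings.iter() {
        per_row(value)?;
    }
    Ok(())
}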

Focused

Because SQL is statically typed, the function overload must be selected at planning time, so that the return type is also known (the return_type function). The invoke function then needs to implement the same type dispatch again. This process should be less error-prone: once the function is bound to the query call site, its signature is known and the implementation should not need to do type checks.
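A sketch of what being "bound to the call site" could mean in practice (hypothetical shape, not an existing DataFusion API; it reuses compute_gcd from the example above): overload resolution happens once during planning, and execution receives an already-typed kernel that no longer matches on DataType.

use arrow::array::Int64Array;
use arrow::datatypes::DataType;
use datafusion_common::{plan_err, Result};

/// Hypothetical result of binding: the chosen return type plus a concrete,
/// already-typed kernel for this call site.
type BoundInt64Kernel = fn(&Int64Array, &Int64Array) -> Result<Int64Array>;

fn bind_gcd(arg_types: &[DataType]) -> Result<(DataType, BoundInt64Kernel)> {
    match (&arg_types[0], &arg_types[1]) {
        // The only type dispatch happens here, at planning time.
        (DataType::Int64, DataType::Int64) => Ok((DataType::Int64, gcd_int64)),
        other => plan_err!("gcd does not support argument types {other:?}"),
    }
}

/// Monomorphic kernel: no downcasts and no type checks at execution time.
fn gcd_int64(a: &Int64Array, b: &Int64Array) -> Result<Int64Array> {
    a.iter()
        .zip(b.iter())
        .map(|(x, y)| match (x, y) {
            (Some(x), Some(y)) => compute_gcd(x, y).map(Some),
            _ => Ok(None),
        })
        .collect()
}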

Performant / Efficient

It should be the compiler's / framework's job to provide vectorization, without hand-writing the same logic in every function separately.

It should be the compiler's / framework's job to do null checks, providing efficient tight loops for the all-non-null case, without having to write such logic in every function separately.
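As one existing data point, arrow-rs already provides generic "arity" kernels (arrow::compute::kernels::arity) that centralize exactly this null handling and the tight inner loop. A sketch of what delegating to them could look like, reusing compute_gcd from the example above; whether a DataFusion helper layer would actually build on these kernels is an open question:

use std::sync::Arc;

use arrow::array::{ArrayRef, Int64Array};
use arrow::compute::kernels::arity::try_binary;
use arrow::error::ArrowError;
use datafusion_common::Result;

/// Sketch: let arrow's generic `try_binary` kernel handle nulls and the loop;
/// the function author contributes only the row-level closure.
fn gcd_arrays(a: &Int64Array, b: &Int64Array) -> Result<ArrayRef> {
    let result: Int64Array = try_binary(a, b, |x, y| {
        compute_gcd(x, y).map_err(|e| ArrowError::ComputeError(e.to_string()))
    })?;
    Ok(Arc::new(result) as ArrayRef)
}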

It should be possible to write efficient functions that need local data structures, for example compiled regular expressions, without having to resort to thread locals and/or shared pools, which introduce contention.
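For illustration, one possible shape for such local state, assuming a hypothetical model where each query call site owns its own function instance (which is not how DataFusion works today, per the problem statement above); regex::Regex stands in for any expensive-to-build structure, and BoundRegexpMatch is not an existing DataFusion type:

use datafusion_common::{DataFusionError, Result};
use regex::Regex;

/// Hypothetical per-call-site state: the pattern is compiled once when the
/// function is bound to a query call site and reused for every batch that
/// call site executes, with no thread-locals or shared, contended pools.
struct BoundRegexpMatch {
    pattern: Regex,
}

impl BoundRegexpMatch {
    fn try_new(pattern: &str) -> Result<Self> {
        let pattern =
            Regex::new(pattern).map_err(|e| DataFusionError::Plan(e.to_string()))?;
        Ok(Self { pattern })
    }

    /// Row-level logic; a framework would lift this over string arrays.
    fn matches(&self, value: &str) -> bool {
        self.pattern.is_match(value)
    }
}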

Describe alternatives you've considered

However, direct use of the library is not possible because

The library could still be part of the solution, but it doesn't have to be, and it is a non-goal to use any particular library.

Additional context

findepi commented 2 days ago

cc @alamb, @andygrove, @jayzhan211, @ozankabak, @notfilippo, @comphead

findepi commented 2 days ago

FYI: I am doing some experiments on what this could look like.

alamb commented 1 day ago

I think making it easier to write functions that include specialized implementations for different types is a great idea. This would likely both make our code faster (more specializations, more uniform handling of constants) and reduce the replication (there are several different patterns for dispatch / specialization now -- consolidating them would be really nice).

alamb commented 1 day ago

Currently DataFusion functions are singletons plugged into the execution engine. They have no way to store and reuse buffers or compiled regular expressions, etc.

here is some prior work on precompiled regular expressions: https://github.com/apache/datafusion/issues/11146 (note that compiling the regex hasn't ever shown up in any performance trace I did, probably because the cost of actually matching is so much bigger than the cost of compiling, and the cost of compiling is spread over 8192 rows).

comphead commented 1 day ago

Thanks @findepi. I think this process has gone through iterations and is easier than it was before, but it is still far from perfect.

The common ScalarUDFImpl trait is already a huge help, but dynamic typing, for example MAX/MIN/GREATEST where the return type depends on the input, is going to be challenging; in Trino it also seems to be handled at runtime rather than statically.

Rust can benefit from macros to generate functions for all possible output type combinations, but this is just one of the problems.

I'm totally down if we can make it easier

alamb commented 1 day ago

Just to be clear, what I am imagining comes out of this work is:

  1. No changes to ScalarUDFImpl
  2. Some new way (generic functions, macros, etc.) that would make creating the specialized function implementations easier (perhaps following the pattern of declarative macros in https://github.com/apache/datafusion/issues/11413)
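For illustration only, a rough sketch of the declarative-macro direction; the macro name and shape are hypothetical and not taken from the issue linked above:

use std::sync::Arc;

use arrow::array::{Array, ArrayRef, Int64Array};
use datafusion_common::{exec_err, Result};

/// Hypothetical declarative macro: given a row-level `fn(i64, i64) -> Result<i64>`,
/// generate the array-level kernel that is currently written by hand.
macro_rules! binary_i64_udf {
    ($kernel_name:ident, $row_fn:path) => {
        fn $kernel_name(args: &[ArrayRef]) -> Result<ArrayRef> {
            let (Some(a), Some(b)) = (
                args[0].as_any().downcast_ref::<Int64Array>(),
                args[1].as_any().downcast_ref::<Int64Array>(),
            ) else {
                return exec_err!("expected two Int64 arguments");
            };
            let result = a
                .iter()
                .zip(b.iter())
                .map(|(x, y)| match (x, y) {
                    (Some(x), Some(y)) => $row_fn(x, y).map(Some),
                    _ => Ok(None),
                })
                .collect::<Result<Int64Array>>()?;
            Ok(Arc::new(result) as ArrayRef)
        }
    };
}

// Expands to `fn gcd_kernel(args: &[ArrayRef]) -> Result<ArrayRef>`,
// reusing the row-level `compute_gcd` from the issue description.
binary_i64_udf!(gcd_kernel, compute_gcd);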