Kotlin / dataframe

Structured data processing in Kotlin
https://kotlin.github.io/dataframe/overview.html
Apache License 2.0

Arrow Support #78

Open Lundez opened 2 years ago

Lundez commented 2 years ago

Hi, I can't find anything saying that dataframe supports Arrow as its internal serialization / backend.

Is this something which you're working on?

nikitinas commented 2 years ago

Hi, Lundez!

Currently DataFrame doesn't use Arrow as backend, but it's on the roadmap.

Until now we were mostly focused on the frontend part: a typesafe Kotlin API, code generation, schema inference and other tricks that provide a great experience when you work with data in Kotlin. But now the API and overall model are getting stable, so it's time to work on performance tuning and scalability, including Arrow support as a backend.
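For context, a minimal sketch of what that typed frontend looks like (the column names are illustrative, and the generated-property syntax assumes the code generation mentioned above):

    import org.jetbrains.kotlinx.dataframe.api.*

    // Build a small frame by hand; in practice data is usually read from CSV/JSON.
    val df = dataFrameOf("name", "age")(
        "Alice", 29,
        "Bob", 31,
    )

    // With code generation / schema inference, columns become typed properties
    // (e.g. df.filter { age > 30 }); without it, string-based access still works.
    val adults = df.filter { "age"<Int>() > 30 }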

Currently the project has only two active contributors, so any help will be very much appreciated!

Lundez commented 2 years ago

Hi, do you have any pointers on how to start?

Do you think the Java Arrow API can work with your "typing" (or whatever the typing used in data frames is called)? 😊

I think adding Arrow would give this project a big boost. A query optimizer, like pola.rs / Spark have, would be a huge follow-up bonus. Optimizing columns and other things when using Arrow makes a lot of sense! 😄

jimexist commented 2 years ago

I have some experience with Arrow (as an Arrow committer), so let me try to set this up.

The current plan is to split this into two parts:

  1. Arrow schema reading
  2. Arrow file / data loading and off-heap memory management

Subsequent features can take more tangible form once reading is done, e.g. Arrow file writing, streaming, predicate push-down, etc.
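As a rough illustration of step 1, reading an Arrow schema with the Java Arrow library could look like the sketch below; the function name and file handling are illustrative, not part of any existing DataFrame API.

    import org.apache.arrow.memory.RootAllocator
    import org.apache.arrow.vector.ipc.ArrowFileReader
    import java.nio.channels.FileChannel
    import java.nio.file.Paths
    import java.nio.file.StandardOpenOption

    fun printArrowSchema(path: String) {
        RootAllocator().use { allocator ->
            FileChannel.open(Paths.get(path), StandardOpenOption.READ).use { channel ->
                ArrowFileReader(channel, allocator).use { reader ->
                    // The schema carries each field's Arrow type and its nullable flag.
                    for (field in reader.vectorSchemaRoot.schema.fields) {
                        println("${field.name}: ${field.type} (nullable=${field.isNullable})")
                    }
                }
            }
        }
    }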

Lundez commented 2 years ago

@Jimexist incredibly excited to hear this!

Kopilov commented 2 years ago

Currently the project has only two active contributors, so any help will be very much appreciated!

Hello @nikitinas, what do you think about my last PRs?

Also, I have written some code for writing to Arrow, but it does not cover all the column types supported by DataFrame (it was originally made for Krangl).

Kopilov commented 2 years ago

Hello again. I am working on a more complex unit test for Arrow reading and will make a PR a little later. For now, you can look at the example data and the code it was generated with here

Kopilov commented 2 years ago

@koperagen, @nikitinas, I want your opinion on the following detail.

In an Arrow schema we have a nullable flag, but its value does not depend on the column content, so we may get a column that is marked as not nullable but actually contains null values. Here is an example.

So, we can:

  • Ignore the nullable flag in the file, read all data, and set the nullable flag in the DataFrame schema if and only if there are null values in the column;
  • Look at the nullable flag and always copy it to the DataFrame schema; reading data like the above will then produce an error;
  • Look at the nullable flag, copy it to the DataFrame schema by default, and then change not nullable to nullable if there are null values.

Which behavior is the best, and should we support several of them, in your opinion?
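(Not the linked example, but a minimal sketch of how this situation can arise with the Java Arrow API: the field metadata says not nullable, yet nothing stops a writer from leaving null slots.)

    import org.apache.arrow.memory.RootAllocator
    import org.apache.arrow.vector.IntVector
    import org.apache.arrow.vector.types.pojo.ArrowType
    import org.apache.arrow.vector.types.pojo.Field
    import org.apache.arrow.vector.types.pojo.FieldType

    fun main() {
        RootAllocator().use { allocator ->
            // The field is declared not nullable in the schema...
            val field = Field("a", FieldType(false, ArrowType.Int(32, true), null), null)
            IntVector(field, allocator).use { vector ->
                vector.allocateNew(3)
                vector.set(0, 1)
                vector.setNull(1) // ...but the data can still contain a null
                vector.set(2, 3)
                vector.valueCount = 3
                println(vector.isNull(1)) // true, despite nullable=false
            }
        }
    }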

Lundez commented 2 years ago

@koperagen, @nikitinas, I want your opinion on the following detail.

In an Arrow schema we have a nullable flag, but its value does not depend on the column content, so we may get a column that is marked as not nullable but actually contains null values. Here is an example.

So, we can:

  • Ignore the nullable flag in the file, read all data, and set the nullable flag in the DataFrame schema if and only if there are null values in the column;
  • Look at the nullable flag and always copy it to the DataFrame schema; reading data like the above will then produce an error;
  • Look at the nullable flag, copy it to the DataFrame schema by default, and then change not nullable to nullable if there are null values.

Which behavior is the best, and should we support several of them, in your opinion?

Could we support different read modes? Defaulting to the first or third makes sense, but a strict mode (the second) via a flag/read mode would be great, IMO.

koperagen commented 2 years ago

@koperagen, @nikitinas, I want your opinion on the following detail.

In an Arrow schema we have a nullable flag, but its value does not depend on the column content, so we may get a column that is marked as not nullable but actually contains null values. Here is an example.

So, we can:

  • Ignore the nullable flag in the file, read all data, and set the nullable flag in the DataFrame schema if and only if there are null values in the column;
  • Look at the nullable flag and always copy it to the DataFrame schema; reading data like the above will then produce an error;
  • Look at the nullable flag, copy it to the DataFrame schema by default, and then change not nullable to nullable if there are null values.

Which behavior is the best, and should we support several of them, in your opinion?

Hm, I would prefer 1 as the default, because in the REPL it can help avoid unnecessary null handling when there are no nulls. But we also need 3 for the Gradle plugin, which generates a schema declaration from a data sample.

Do I understand the second option right? Would something like this be possible?

    val df = DataFrame.readArrow()
    df.notNullableColumn.map { it / 2 } // null pointer exception

I think we shouldn't have this mode unless there is very strong evidence that it is very useful for someone :)

Or do you mean this?

    val df = DataFrame.readArrow() // Exception: notNullableColumn marked not nullable in schema, but has nulls

All of that reminds me of "Infer", which is used as a flag for some operations.
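(For readers unfamiliar with that flag, a hedged illustration of how it can be passed when building a column; the exact signatures and defaults here are assumptions and may differ between DataFrame versions.)

    import org.jetbrains.kotlinx.dataframe.DataColumn
    import org.jetbrains.kotlinx.dataframe.api.Infer
    import kotlin.reflect.typeOf

    val listWithNulls = listOf(1, null, 3)
    val typeNotNullable = typeOf<Int>()

    // Infer.None: trust the passed type as-is, even though the data contradicts it.
    val trusted = DataColumn.createValueColumn("a", listWithNulls, typeNotNullable, Infer.None)

    // Infer.Nulls: keep the passed type but recompute nullability from the actual values.
    val widened = DataColumn.createValueColumn("a", listWithNulls, typeNotNullable, Infer.Nulls)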

Kopilov commented 2 years ago

Thank you for highlighting the Infer enum. It can probably be used as a parameter.

Hm, I would prefer 1 as the default

OK, thanks for sharing. About 2, I expected something like

    val df = DataFrame.readArrow() // Exception: notNullableColumn marked not nullable in schema, but has nulls

when calling

    DataColumn.createValueColumn(field.name, listWithNulls, typeNotNullable, Infer.None)

but actually we have

    val df = DataFrame.readArrow()
    df.notNullableColumn.map { it / 2 } // null pointer exception

now. I will fix that.

Where can I read more about the Gradle plugin? How do you use it?

Kopilov commented 2 years ago

I suggest the following mapping if we use Infer as a parameter:

koperagen commented 2 years ago

Where can I read more about the Gradle plugin? How do you use it?

https://kotlin.github.io/dataframe/gradle.html
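(Roughly, per those docs, the plugin is configured in build.gradle.kts along these lines; the data path and schema name below are placeholders, and the exact DSL may have changed since.)

    plugins {
        // Version is a placeholder; see the linked docs for the current one.
        id("org.jetbrains.kotlinx.dataframe") version "<latest>"
    }

    dataframes {
        // Generates a @DataSchema declaration (here org.example.Repository)
        // from a sample of the data at the given path or URL.
        schema {
            data = "data/jetbrains_repositories.csv"
            name = "org.example.Repository"
        }
    }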

I suggest the following mapping if we use Infer as a parameter:

I'm not sure about it anymore, because Infer.Type does a different thing in other operations. Infer.Nulls means "actual data nullability" == "schema nullability", while in our case "set the nullable flag in the DataFrame schema if and only if there are null values in the column" is "narrow nullability if possible", and the third option is "widen nullability if needed".

What do you think about a new enum, let's say something like SchemaVerification? It would describe the variants of this operation: actual nullability (from data) + schema nullability (from file) -> nullability | error. Maybe some other name, idk.

Edit: colleagues suggested NullabilityOptions, NullabilityTransformOptions, NullabilityOperatorOptions, or NullabilityCompositionOptions. As for enum variants, they could be WIDENING, NARROWING, CHECKING.
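(To make the three variants concrete, here is a sketch of such an enum and the nullability resolution it implies, using the names suggested above; the actual implementation in #129 may differ, and the next comment indeed mentions a rename.)

    // Sketch only: maps (schema nullability, observed nulls) to the resulting
    // DataFrame nullability, or an error, for each proposed variant.
    enum class NullabilityOptions { WIDENING, NARROWING, CHECKING }

    fun resolveNullability(
        schemaNullable: Boolean,   // the nullable flag from the Arrow file
        hasNulls: Boolean,         // whether the column data actually contains nulls
        option: NullabilityOptions,
    ): Boolean = when (option) {
        // Option 1: ignore the schema flag, infer nullability from the data alone.
        NullabilityOptions.NARROWING -> hasNulls
        // Option 2: trust the schema flag and fail if the data contradicts it.
        NullabilityOptions.CHECKING -> {
            require(schemaNullable || !hasNulls) {
                "Column is marked not nullable in the schema but contains nulls"
            }
            schemaNullable
        }
        // Option 3: start from the schema flag, widen to nullable if nulls are present.
        NullabilityOptions.WIDENING -> schemaNullable || hasNulls
    }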

Kopilov commented 2 years ago

Implemented in #129. Narrowing was renamed to Keeping because, when the schema is ignored, we can get no nulls in a nullable column as well as some nulls in a not-nullable one.