elixir-explorer / explorer

Series (one-dimensional) and dataframes (two-dimensional) for fast and elegant data exploration in Elixir
https://hexdocs.pm/explorer
MIT License

Support for SQLContext #999

Open ceyhunkerti opened 1 month ago

ceyhunkerti commented 1 month ago

Request for Feedback

@billylanchantin, @josevalim, @philss, @anyone_maintaining_the_repo :). I would greatly appreciate it if you could review this and provide feedback, pointing me in the right direction for any improvements.

Notes

Thank you for your time and guidance!

billylanchantin commented 1 month ago

Hi @ceyhunkerti!

Quick question before I dig into the code. You say:

The goal is to enable SQL usage, which I believe could help address issues such as https://github.com/elixir-explorer/explorer/issues/969.

Would Explorer.DataFrame.sql/3 be sufficient for this?
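For readers following along, a single-frame call to the existing function looks roughly like this (a sketch; the registered table name `"df"` is assumed from the docs and may differ):

```elixir
# Sketch of the existing single-frame Explorer.DataFrame.sql/3.
# The frame is assumed to be queryable under the table name "df".
df = Explorer.DataFrame.new(a: [1, 2, 3], b: ["x", "y", "z"])
result = Explorer.DataFrame.sql(df, "SELECT a, b FROM df WHERE a > 1")
```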

ceyhunkerti commented 1 month ago

Hey, thanks for the quick feedback. I saw it, but I believe it's restricted to a single df, right? SQLContext operates on any number of dfs, as if they were tables.

billylanchantin commented 1 month ago

Ah I see. A function that only works on one frame certainly won't enable multi-frame joins! Thanks for clarifying the goal.

I'll take a look soon. Note: we were/are hesitant about supporting Polars-specific SQL interfaces. You can read the discussion here:

https://github.com/elixir-explorer/explorer/issues/818

We went with DF.sql/3 because it was backend-agnostic.

ceyhunkerti commented 1 month ago

hmm, thinking out loud 🤔

To understand backend independence:

# pseudo
df1 = DF.sql("select ...", backend: polars) |> compute
df2 = DF.some_action(df1, backend: my_new_backend)

I believe the above is not possible unless my_new_backend supports interacting with Polars dataframes, right? So in that sense, DF.sql, or any other function interacting with a df, is not actually backend-independent when backends are mixed.

I think by backend independence we really mean API independence, without mixing backends.

If we are worried about the SQL itself, I partially agree. For example, select ... from t1, t2 is not supported by polars-sql but works with DuckDB. But I believe it's a reasonable trade-off when developers know which backends they use. (And IMHO: any backend claiming to have a SQL interface should at least support ANSI SQL.)

PS. When thinking of another backend, the first that comes to my mind is DuckDB, which can operate on Polars dataframes. So the SQLContext in this PR could also be easily implemented in a DuckDB backend.

btw, I'm doing this for learning purposes and am happy to help where I can. We can park this one for now if you'd like, and I can focus on something else that you think might be more helpful or a higher priority.

josevalim commented 1 month ago

What if we allow DF.sql(df, "select", tables: [foo: df2, bar: df3])?

josevalim commented 1 month ago

Or this could be even better:

DF.sql([foo: df1, bar: df2, baz: df3], "select")?

billylanchantin commented 1 month ago

@josevalim You read my mind! Perhaps even:

DF.sql(%{foo: df1, bar: df2, baz: df3}, "select")

to prevent duplicates?

Also I think behind the scenes we'd need to individually register each table with SQLContext.

josevalim commented 1 month ago

If we use a map, we don't have the concept of "first".
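To make the proposals above concrete, here is a hypothetical usage of the keyword-list variant. None of this multi-frame API exists yet; the `people`/`orders` frames and the query are purely illustrative:

```elixir
# Hypothetical: the proposed multi-frame DF.sql. Each key in the keyword
# list would be registered as a table name in the underlying SQL context.
people = Explorer.DataFrame.new(id: [1, 2], name: ["ana", "bo"])
orders = Explorer.DataFrame.new(id: [10, 11], person_id: [1, 1])

result =
  Explorer.DataFrame.sql(
    [people: people, orders: orders],
    "SELECT p.name, count(*) AS n FROM people p JOIN orders o ON o.person_id = p.id GROUP BY p.name"
  )
```

The keyword-list form keeps an ordering (so "first" still means something), while the map form would reject duplicate table names by construction.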

ceyhunkerti commented 1 month ago

> @josevalim You read my mind! Perhaps even:
>
> DF.sql(%{foo: df1, bar: df2, baz: df3}, "select")
>
> to prevent duplicates?
>
> Also I think behind the scenes we'd need to individually register each table with SQLContext.

I also mentioned something similar here: lf_sql(sql: string, registry: [(df, table_name), ...]). A map is better though, like you suggested.

Back to the original issue: DF.sql vs SQLContext.

SQLContext is a wider and more flexible interface, I think; our current DF.sql also uses it on the Polars side. But it's your call, and we can do it whichever way you decide. I can only comment on the order of the parameters if we go with DF.sql(%{foo: df1, bar: df2, baz: df3}, "select") :)

DF.sql("select", %{}) 

seems more natural to me, like print("Hello %s" % param). :)

ceyhunkerti commented 1 month ago

Also, the issue referenced here makes me think about the following.

In the issue, the author mentioned that in order to achieve complex and/or logic, one can do the following with the API:

result =
  all_pairs
  |> DataFrame.join(all_relationships_1, how: :left,
      on: [
        {"table_name_1", "table_name"}, 
        {"column_name_1", "column_name"}, 
        {"table_name_2", "referenced_table_name"}, 
        {"column_name_2", "referenced_column_name"}
      ])
  |> DataFrame.join(all_relationships_2, how: :left,
      on: [
        {"table_name_2", "table_name"}, 
        {"column_name_2", "column_name"}, 
        {"table_name_1", "referenced_table_name"}, 
        {"column_name_1", "referenced_column_name"}
      ])
  |> DataFrame.mutate(has_relationship: coalesce(has_relationship_1, has_relationship_2))

I'm not exactly sure what execution plan Polars creates for this, but it makes me wonder whether it duplicates the dataframe in memory and does some hash-like join just to achieve a slightly complex and/or condition, with a probable performance degradation. It may be smarter than I think, but this makes me suspicious.

In the end, this case, IMHO, justifies having a powerful SQL interface in the library one way or the other.
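For comparison, under a multi-frame SQL interface the same or-logic could be expressed in a single query. This is a sketch against the proposed, not-yet-existing API; the table and column names are taken from the snippet above:

```elixir
# Hypothetical multi-frame DF.sql call expressing the or-join directly
# in SQL, instead of two left joins plus a coalesce.
result =
  Explorer.DataFrame.sql(
    %{pairs: all_pairs, rels: all_relationships},
    """
    SELECT p.*,
           (r.table_name IS NOT NULL) AS has_relationship
    FROM pairs p
    LEFT JOIN rels r
      ON (p.table_name_1 = r.table_name
          AND p.column_name_1 = r.column_name
          AND p.table_name_2 = r.referenced_table_name
          AND p.column_name_2 = r.referenced_column_name)
      OR (p.table_name_2 = r.table_name
          AND p.column_name_2 = r.column_name
          AND p.table_name_1 = r.referenced_table_name
          AND p.column_name_1 = r.referenced_column_name)
    """
  )
```

Whether the query planner handles the OR-join more efficiently than two hash joins is a separate question, but at least the intent is stated once rather than reconstructed with coalesce.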