TulipaEnergy / TulipaIO.jl


Desired data transformations in analysis workflow #11

Open suvayu opened 6 months ago

suvayu commented 6 months ago

Terminology

Desired input

It would be good to collect the kind of data transformations that a user might want to do before using the final result as TEM input.

We can create issues from this list, and mark them "Now" or "Soon".

Workflow wishlist

Future outlook

The idea is to provide canned Julia recipes (functions) that wrap the complexity of some of these steps. Then a future user can simply write a Julia script that looks somewhat like this:

import TulipaIO: source, @transform1, @transform2, @save, @read, @dump
import TulipaEnergyModel as TEM

source("<file>") |> @transform1 params... |> transform2 params... |> @save "name"

@read "name" |> TEM.create_model |> @dump "results"

This is of course an overly simplified example, think of this as "Eventually".
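To give a flavour of what such a canned recipe could look like under the hood, here is a minimal sketch (not the actual TulipaIO API): a hypothetical source function that loads a CSV file into a DuckDB table, so that later recipes can keep chaining on the same connection. The keyword arguments and the default table name are invented for illustration.

import DBInterface
import DuckDB

# hypothetical recipe: load a CSV into a DuckDB table and return the connection,
# so that later recipes can chain further transformations on the same database
function source(file; con = DBInterface.connect(DuckDB.DB), name = "source_data")
    DBInterface.execute(con, "CREATE OR REPLACE TABLE $name AS SELECT * FROM read_csv_auto('$file')")
    return con
end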

Unsolved issues

As a user works on a project and tries out different data processing steps, they will probably create many tables in DuckDB. This will easily turn into a mess. There are two mitigations:

Dev setup

The transformations mentioned earlier can be in SQL or Julia, or a mix of both. So it would be good to have SQL recipes for some of the simpler ones, use Julia for the more custom ones, and combine them to get the best of both worlds.

To develop the SQL recipes, the simplest setup is to use the DuckDB CLI, and try to work with CSV files.

The other option is to use the Julia client:
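For example, a minimal session (assuming DuckDB.jl and DataFrames.jl are installed; the CSV path is the one used in the test data) could look like this:

using DuckDB       # provides the DuckDB.DB type and the DBInterface methods
using DataFrames   # to materialise query results

con = DBInterface.connect(DuckDB.DB)  # in-memory database

# try out a SQL recipe directly against a CSV file
df = DataFrame(DBInterface.execute(con, "SELECT * FROM read_csv_auto('test/data/Norse/assets-data.csv') LIMIT 5"))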

CC: @clizbe

clizbe commented 6 months ago

I really like bullet 3 (doing a partial replacement) as this would reduce the amount of filtering necessary.

Do we always require the user to have a second file that they're merging with the main? Or can we give them the option to "type in a value" that is applied to a bunch of rows? (Aka, set all specified rows to "Active")

As for the Julia wrappers - I think first we should just document the SQL/Julia way of doing things. If people have that cookbook, they can do a lot. Then we can start looking into simplifying things they're going to type over and over again. I'm a little worried about starting with the simplification, since it adds a layer of abstraction and the user may not understand what's happening. What do you think?

suvayu commented 6 months ago

Do we always require the user to have a second file that they're merging with the main? Or can we give them the option to "type in a value" that is applied to a bunch of rows? (Aka, set all specified rows to "Active")

A file is not mandatory. For your example we need filter and set; set is something we can do now, but filter is not yet possible. So it's definitely another requirement.
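To make it concrete, the filter + set combination being discussed is roughly an UPDATE with a WHERE clause; a sketch via the Julia client (the table, column, and values are made up for illustration):

# set all rows matching a filter to "Active"; `con` is a DuckDB connection as before
DBInterface.execute(con, "UPDATE assets SET status = 'Active' WHERE region = 'Norse'")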

I'm a little worried about starting with the simplification, since it adds a layer of abstraction and the user may not understand what's happening.

Good point. I think it looks a bit confusing now because you don't see all the steps one after the other, and it's also a rather advanced feature. How about I first create a few example sessions based on the test datasets, and then you can create a tutorial out of it? The tutorial can have more conceptual explanations to make it easier to follow what's happening under the hood.

clizbe commented 6 months ago

With respect to saving the data files for later, I think we can come up with a decision and then build it into the workflow and documentation, explaining why we do it the way we do. But of course, taking whatever way they would do it naturally and automating it / adding meta-information would be nicest for the user.

clizbe commented 6 months ago

@suvayu This is bizarre: If I run this code in a script, I get an error:

# assuming DuckDB (which provides `DB`), DBInterface, and DataFrames (as `DF`) are loaded
con = DBInterface.connect(DB) # in-memory DB
res = DBInterface.execute(con, "SELECT * FROM read_csv_auto('test/data/Norse/assets-data.csv', header=true, skip=1)")
my_df = DF.DataFrame(res.tbl)

(screenshot of the error)

But if I run it by line in the Julia terminal, it works fine.

clizbe commented 6 months ago

I think C. should be an optional argument, so the user can choose whether to fill missing values from the original source or leave them blank (or set to 0 maybe).

suvayu commented 6 months ago

I think I've seen this error before. I would like to understand it better. Could you open a bug report and provide a recipe to reproduce? Even if it's not an actual bug, it would be best to understand the behaviour.

I think C. should be an optional argument, so the user can choose whether to fill missing values from the original source or leave them blank (or set to 0 maybe).

If they don't want to fill the missing values, can't they use the replace columns transform (B)?

clizbe commented 6 months ago

Sure I'll make a bug report.

Yes that's what I mean - to me this should be an option for B. Something like this maybe?

General behavior: Replace column X with column Y
Options (for values missing in Y):
- Missing Values = X
- Missing Values = 0
- Missing Values = [empty]
- Missing Values = Error/Warning
suvayu commented 6 months ago

Okay, that makes sense. I think the current implementation assumes (or maybe implements) an INNER JOIN. I think a LEFT JOIN is more useful than an INNER JOIN here.
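In SQL terms, the merged B/C behaviour could be a LEFT JOIN plus a choice of fallback via COALESCE, roughly like this sketch (table and column names are invented):

# LEFT JOIN keeps every row of `base`, even when `updates` has no match;
# COALESCE picks the first non-missing value, which gives the options above
DBInterface.execute(con, """
    SELECT b.id,
           COALESCE(u.value, b.value) AS fill_from_original,  -- Missing Values = X
           COALESCE(u.value, 0)       AS fill_with_zero,      -- Missing Values = 0
           u.value                    AS leave_empty          -- Missing Values = [empty]
    FROM base b
    LEFT JOIN updates u USING (id)
""")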

I guess we should uncheck B and convert it to an issue as well :-p

suvayu commented 6 months ago

Actually B should be merged with C. I'll do it

clizbe commented 1 month ago

@suvayu Another situation I discussed with someone from OPERA the other day: Two people are working on the same "core" data and will want to merge their changes eventually. One guy is working on the structure of the data - as in, breaking a region into smaller subregions and updating all the tables, etc., to reflect this increased detail. The other guy is changing numbers in tables, to do updates and things. Maybe the same tables the other guy is restructuring, but we don't really know.

This might not be a problem with the way we're separating raw data/model/scenarios, but it might still be an issue. I'm not sure.

suvayu commented 1 month ago

What is the difference between "updating all the tables" and "changing numbers in tables"?

Does the first mean changing the schema, i.e. changing columns, what constitutes a unique row, etc.? Whereas the second would mean updating the source dataset to the newest release, changing the weather year, or the inflation calculation, etc.?

clizbe commented 1 month ago

Yeah it's kind of hard to relate their structure to ours.

One guy is changing the structure of the data. So maybe making new rows and updating old rows. Maybe duplicating a table and filling it in with new information.

The other guy is updating the source data, maybe in the same tables where the structure is changing.

So the dumb method to merge them is to copy-paste the new structure tables into the updated source database. BUT that would lose any possible updates in that same table.

And as I understand it, the structural changes sort of break the data/scenario until they're complete, so it isn't really possible to do them simultaneously in the same database. (At least with their way of working.)


EXAMPLE ATTEMPT:

Like I said, with the way we're going to work, this might not even be a problem.

clizbe commented 1 month ago

Or maybe Person 1 is just adding rows, but Person 2 doesn't want those rows interfering with what he's working on as they appear over time. Basically a version control issue.

suvayu commented 1 month ago

Adding columns doesn't really interfere with what person 2 is doing, but removing a column would. As for adding rows, depending on the operation it may or may not interfere. Basically any kind of aggregation will break. But there's a technique where you add a date column that is your "version" (since it's a date, a common name is "asof"), then you can do your query as SELECT * FROM table WHERE asof = '2024-05-01'. New rows will have a different asof date, so they won't interfere. I guess if your data isn't frequently changing, it could also be just a version number.

Note that this technique assumes no data is ever removed. I also don't know what other knock-on effects it might have; e.g. it's possible normal queries become a bit more complex and you incur a small performance hit.
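A small sketch of the asof idea (invented table and column names; only the query pattern matters):

# every row carries the date it was introduced
DBInterface.execute(con, "CREATE TABLE demand (region VARCHAR, value DOUBLE, asof DATE)")
DBInterface.execute(con, """
    INSERT INTO demand VALUES
        ('NL', 1.0, '2024-05-01'),        -- the version person 2 started from
        ('NL_north', 0.6, '2024-06-01'),  -- rows added later by person 1
        ('NL_south', 0.4, '2024-06-01')
""")

# person 2 pins the version they are working against; the newer rows don't interfere
DBInterface.execute(con, "SELECT * FROM demand WHERE asof = '2024-05-01'")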

suvayu commented 1 month ago

Also, another technique would be for person 1 to do their changes as logic (transformations) instead of working directly on the tables. I don't think this will cover everything, though.
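For example, building on the sketch above, person 1's region split could live as a view on top of the shared table rather than as edits to it (names again invented):

# the structural change is expressed as a transformation, not as edits to `demand` itself
DBInterface.execute(con, """
    CREATE VIEW demand_split AS
    SELECT 'NL_north' AS region, value * 0.6 AS value FROM demand WHERE region = 'NL'
    UNION ALL
    SELECT 'NL_south' AS region, value * 0.4 AS value FROM demand WHERE region = 'NL'
    UNION ALL
    SELECT region, value FROM demand WHERE region <> 'NL'
""")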