Closed — skokenes closed this issue 5 years ago
Hi, at a quick glance this seems like something that should be possible. I am busy today, but might have time to draft an example tomorrow. Is the goal just to transform (augment) the data structure (one way or two way), or to treat it as if it were augmented and then do something with the data structure (modify it or query something from it)?
The goal is to transform it for reading by a charting library. I did get something working for this but had to essentially create 3 lenses, 1 for each scenario. Then I pick which one to use based on whether there are rows, columns, or both. I don't have that code with me but can post it here later today. Would love your feedback on it, thank you for looking at this!
Here's what I have currently, where `dataLens` is the final lens that will be used. Is there a better way to accomplish or approach this?
```js
const recurseLens = L.lazy(rec =>
  L.ifElse(
    d => d instanceof Array,
    L.collect([L.elems, rec]),
    L.pick({
      value: ["value", L.valueOr("")],
      nodes: ["nodes", rec]
    })
  )
);
```
```js
let dataLens = [];
if (hasRows === false && hasColumns === false) {
  // Insert row and column, then recurse
  dataLens = [
    L.collect(
      L.pick({
        value: ["value", L.define("DEFAULT_ROW")],
        nodes: L.collect(
          L.pick({
            value: ["value", L.define("DEFAULT_COLUMN")],
            nodes: recurseLens
          })
        )
      })
    )
  ];
} else if (hasRows === false) {
  // Insert row, then recurse
  dataLens = [
    L.collect(
      L.pick({
        value: ["value", L.define("DEFAULT_ROW")],
        nodes: recurseLens
      })
    )
  ];
} else if (hasColumns === false) {
  // Get the rows, then insert columns, then recurse
  dataLens = [
    L.collect([
      L.elems,
      L.pick({
        value: ["value", L.valueOr("")],
        nodes: L.collect(
          L.pick({
            value: ["value", L.define("DEFAULT_COLUMN")],
            nodes: ["nodes", recurseLens]
          })
        )
      })
    ])
  ];
} else {
  // If rows and columns exist, just recurse the whole structure
  dataLens = recurseLens;
}
```
Is there a way to tell whether a two-level-deep structure has rows or columns based on just the structure itself (rather than by using variables external to the data structure, like in the example)?
The `recurseLens`
```js
const recurseLens = L.lazy(rec =>
  L.ifElse(
    d => d instanceof Array,
    L.collect([L.elems, rec]),
    L.pick({
      value: ["value", L.valueOr("")],
      nodes: ["nodes", rec]
    })
  )
)
```
filters a nested structure so that only the `value` and `nodes` properties are left. Is that necessary? IOW, does the input data contain additional properties that need to be dropped, for example?
Also, the `recurseLens` handles arbitrarily deep nesting. Is that also intentional, or is the maximum depth always 3 (something like rows, columns, and data nodes)?
I assumed that neither of the above aspects is actually necessary. Here is a playground that does a kind of augmentation. If I understood the problem correctly, then the key is that when the input does not contain columns, the default column needs to be inserted inside every row. The code in the playground achieves this using the `insideLevelI` isomorphism constructor.
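(The playground's `insideLevelI` isn't reproduced here, but the "insert a default column inside every row" step it performs can be sketched in plain JavaScript — a one-way sketch only, assuming the `{value, nodes}` shape and `"DEFAULT_COLUMN"` label from the earlier snippets:)

```js
// Plain-JS sketch of the augmentation: when the input has rows but no
// columns, wrap each row's children in a single default column node.
const insertDefaultColumn = rows =>
  rows.map(row => ({
    value: row.value,
    nodes: [{ value: "DEFAULT_COLUMN", nodes: row.nodes }]
  }))

const rows = [{ value: "row1", nodes: [{ value: "cell1", nodes: [] }] }]
const augmented = insertDefaultColumn(rows)
// augmented[0].nodes[0].value === "DEFAULT_COLUMN"
```

Unlike the isomorphism in the playground, this sketch only goes one way; the lens version can also strip the inserted level back out on write.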
The data structure itself does not tell me whether rows or columns are included.
It's not necessary to filter the additional properties off. I don't show it in the example code, but I actually rename the properties as well in my real implementation. The reason I do this is to try to keep the data provider somewhat decoupled from my charting library; otherwise, if I switch to a different data provider, I will have to update my charting library to accept a new format. Does this step have negative performance implications?
The data can be arbitrarily nested after the rows and columns, so that was on purpose. Sorry, I should have mentioned that.
Thanks for coming up with an example solution, I'm working through it now. What is the global `I` that you are using in `I.seq()`, for example?
The `I.seq` function comes from the Infestines library. It just pipes a given value through the given unary functions. IOW, `I.seq(x, f_1, ..., f_N)` is equivalent to `f_N(...f_1(x)...)`.
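(A minimal plain-JS model of that behaviour, not the actual Infestines implementation:)

```js
// Minimal model of I.seq: thread a value through unary functions, left to right.
const seq = (x, ...fns) => fns.reduce((acc, fn) => fn(acc), x)

const result = seq(2, x => x + 1, x => x * 10)
// result === 30, i.e. (2 + 1) * 10
```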
BTW, one thing that is important to understand about optics is that the API is different (press `s` and go forward from that slide) compared to a traditional map/filter/reduce library. In particular, operations are separated from optics. An optic roughly specifies how the elements are selected from a data structure, and the operation specifies what to do with those elements. So, when using the library, one first constructs an optic using primitive optics and optic combinators and then passes that optic and a data structure to an operation to get a result.
operation(optic, data) ~> result
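(As a drastically simplified model of that separation — a toy getter/setter pair, not the library's actual optic representation:)

```js
// Toy model: an "optic" is just a getter/setter pair, and operations
// like get/set are separate functions taking (optic, data).
const prop = k => ({
  get: d => d[k],
  set: (v, d) => ({ ...d, [k]: v })
})

// Compose two optics: focus through the outer, then the inner.
const compose2 = (outer, inner) => ({
  get: d => inner.get(outer.get(d)),
  set: (v, d) => outer.set(inner.set(v, outer.get(d)), d)
})

// Operations, separated from the optics themselves.
const get = (optic, data) => optic.get(data)
const set = (optic, v, data) => optic.set(v, data)

const ab = compose2(prop("a"), prop("b"))
const viewed = get(ab, { a: { b: 1 } })    // 1
const updated = set(ab, 2, { a: { b: 1 } }) // { a: { b: 2 } }
```

The same `ab` optic serves both operations — that is the sense in which an optic is a "selector" and the operation decides what to do at the focus.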
When you look at `recurseLens`
```js
const recurseLens = L.lazy(rec =>
  L.ifElse(
    d => d instanceof Array,
    L.collect([L.elems, rec]),
    L.pick({
      value: ["value", L.valueOr("")],
      nodes: ["nodes", rec]
    })
  )
)
```
one thing that pops out is `L.collect`, which is an operation rather than an optic combinator. When you apply an operation to an optic, you get a function from data to a result rather than an optic. IOW, you get an ordinary unidirectional function. Of course, such ordinary functions are fine as read-only optics, but usually one rather wants to create bidirectional optics. In this case, instead of using `L.collect` one could use `L.partsOf` to create a bidirectional lens like this.
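(The linked example isn't reproduced here, but the difference can be illustrated with a toy stand-in in plain JS — not the partial.lenses API: a collect-style operation yields a read-only function, while a partsOf-style lens also lets you write the focused values back:)

```js
// "collect"-style: a one-way function extracting the leaves of a
// {value, nodes} tree.
const collect = node =>
  node.nodes.length === 0 ? [node.value] : node.nodes.flatMap(collect)

// "partsOf"-style: a lens viewing the same leaves as an array, which
// can also write a modified array back into the structure.
const leavesLens = {
  get: node => collect(node),
  set: (vals, node) => {
    let i = 0
    const go = n =>
      n.nodes.length === 0
        ? { ...n, value: vals[i++] }
        : { ...n, nodes: n.nodes.map(go) }
    return go(node)
  }
}

const tree = {
  value: "r",
  nodes: [{ value: 1, nodes: [] }, { value: 2, nodes: [] }]
}
const parts = leavesLens.get(tree)          // [1, 2]
const rewritten = leavesLens.set([3, 4], tree)
```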
Thanks, this helps a lot. Especially thinking of an optic as a "selector" and separating it from the operations. There are probably a few places in my code where I'm using `L.collect` in some weird/unnecessary ways, mainly to handle traversing arrays.
I'm struggling a bit with taking your original solution and modifying it. If I copy and paste it as is, it's great. But there are two modifications that I'm trying to make:
I'm pretty lost on where I would incorporate those changes.
Thanks for taking the time to put these together, they are really helping me learn.
I see what you are saying about not leveraging optics when just doing a transformation in one direction, which is the case here for sure. I do leverage optics for bidirectional things in several places in my application, but I think I then got used to using `L` and started reaching for it in places where it wasn't necessary. Probably a good opportunity for me to clean up some code. Thanks again for the feedback!
This isn't an issue but really a question about if/how partial.lenses could be used to solve a data transformation that I am working on.
I have a nested data structure that should be 3 levels deep; let's call the levels "rows", "columns", and "cells", like a table. So I might have a structure like:
Sometimes, however, the rows or columns levels might be missing, so I may just get back a single level or two levels. In those scenarios, I want to fill the gaps in the tree with a default value. So let's say that I received the following data structure:
I want to turn that into something like:
Is this kind of transformation possible with this lib? I've struggled to figure out how I could accomplish this dynamically. I do have flags that let me know whether columns and rows are present in the returned data or not. Thanks for any advice!
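(The example structures from this post aren't shown above, but assuming every level is a `{value, nodes}` node and the `hasRows`/`hasColumns` flags and default labels from the code later in the thread, the gap-filling itself can be sketched in plain JavaScript, independent of any lens library:)

```js
// Sketch: normalize data so it is always three levels deep
// (rows > columns > cells), inserting a default level wherever the
// flags say a level is missing.
const addColumnLevel = nodes => [{ value: "DEFAULT_COLUMN", nodes }]

const fillLevels = (data, hasRows, hasColumns) => {
  if (!hasRows && !hasColumns)
    return [{ value: "DEFAULT_ROW", nodes: addColumnLevel(data) }]
  if (!hasRows)
    return [{ value: "DEFAULT_ROW", nodes: data }]
  if (!hasColumns)
    return data.map(row => ({ ...row, nodes: addColumnLevel(row.nodes) }))
  return data
}

// Cells-only input gets wrapped in a default row and a default column.
const cellsOnly = [{ value: "cell1", nodes: [] }]
const filled = fillLevels(cellsOnly, false, false)
```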