quincyjo opened this issue 3 years ago
@verbetam I believe this feature is best implemented as an operator within json-logic, as it would otherwise be ambiguous whether an object is meant to be executed as an operation or to have its values mapped and then executed.
While it may be a bit aggressive to post this here, I've implemented this operation within my implementation of JSON Logic as a higher-order operator called eachKey
(I chose to use "higher-order operators" to describe methods in json-logic that hijack how the traversal of their input behaves):
https://totaltechgeek.github.io/json-logic-engine/docs/higher
Using your example above, I implemented the logic as
{
  "if": [
    { "var": "hasData" },
    {
      "eachKey": {
        "c": { "var": "a" },
        "d": { "var": "b" }
      }
    },
    null
  ]
}
And applied to the input you gave, the result was {"c":1,"d":2}
@TotalTechGeek This makes sense and has some benefits in explicitly controlling traversal. I assume you have it implemented as a unary operator so that:
{
  "eachKey": {
    "a": { "var": "foo" }
  }
}
is syntactic sugar for:
{
  "eachKey": [
    {
      "a": { "var": "foo" }
    }
  ]
}
in order to be compliant with the terse AST structure?
I think it is worth noting that this is inconsistent with this library's behavior, in that an array is traversed without the need for an operation, as in my examples. All of this boils down to a deeper question of whether or not the whole tree should be traversed.
In my implementation, a JSON object is only interpreted as an operation if it is a single-key object whose key matches a known operation and whose value is an array (or a non-array for a unary operator, e.g. var), so it is very unlikely (although technically not impossible) for an intended value to be interpreted as an operation (sketched below). It seems that either there should be a matching forValue
operator for arrays, as:
{
  "forValue": [
    { "var": "a" },
    { "var": "b" }
  ]
}
So that checking for logic within associations is always explicit via an operation, or that both should be traversed eagerly as I had implemented. It feels to me that explicit operations may be the more correct approach, as you suggest. I suppose the difference is how one views JSON Logic: either as an operation (meaning the top level of the JSON value will always be an operation) or as a JSON value that may contain logic within it. I had adopted the latter because of the Always And Never description, while explicit traversal operations seem to lean towards the former.
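A rough sketch of that check, written in JS here rather than Scala (the operation sets and helper below are illustrative assumptions, not my actual implementation):

const knownOperations = new Set(['var', 'if', '+']);
const unaryOperations = new Set(['var']);

function isOperation(node) {
  if (node === null || typeof node !== 'object' || Array.isArray(node)) return false;
  const keys = Object.keys(node);
  if (keys.length !== 1) return false;            // must be a single-key object
  const op = keys[0];
  if (!knownOperations.has(op)) return false;     // key must name a known operation
  // operands are arrays, except for unary operators such as var
  return Array.isArray(node[op]) || unaryOperations.has(op);
}

isOperation({ "var": "a" });                               // true
isOperation({ "c": { "var": "a" }, "d": { "var": "b" } }); // false: two keys, so a plain value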
While my implementation is compatible with the methods in json-logic-js, I did not opt to implement the automatic sugaring of operands into arrays. What you pass in as an operand in json-logic-engine is what it receives.
All operands are traversed eagerly in my engine, unless you've flagged the method not to be traversed (this allows developers to create higher-order control structures in the language that would otherwise be impossible).
"I suppose the difference is how one views JSON Logic: either as an operation (meaning the top level of the JSON value will always be an operation) or as a JSON value that may contain logic within it."
I suppose I opted for the former ideology.
I wanted to avoid ambiguity, so in order to allow for embedded JSON within the logic, json-logic-engine has support for a preserve method:
{
  "preserve": { "var": "a" }
}
which will return { "var": "a" }.
This is implemented in the engine quite literally with:
{
  traverse: false, // do not evaluate the operand before calling the method
  method: i => i   // identity: hand the operand back untouched
}
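Roughly, that flag behaves like this (a simplified, hypothetical dispatcher for illustration, not json-logic-engine's actual internals):

// Hypothetical dispatcher: a method registered with traverse: false
// receives its operand exactly as written; others get it evaluated first.
const methods = {
  preserve: { traverse: false, method: i => i },
  var: { method: (key, data) => data[key] },
};

function run(logic, data) {
  if (Array.isArray(logic)) return logic.map(node => run(node, data));
  if (logic && typeof logic === 'object') {
    const keys = Object.keys(logic);
    if (keys.length === 1 && methods[keys[0]]) {
      const { traverse, method } = methods[keys[0]];
      const operand = traverse === false ? logic[keys[0]] : run(logic[keys[0]], data);
      return method(operand, data);
    }
  }
  return logic;
}

run({ preserve: { var: 'a' } }, { a: 1 }); // => { var: 'a' }
run({ var: 'a' }, { a: 1 });               // => 1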
At a high level, the desire is to have the output of the JSON Logic function be an object. The use case is serializing a function that takes some data in and produces another data model as its output. This was natural to me, as I thought of JSON Logic as a JSON serialization for a function of type JSON => JSON. While you may have an object returned, say from a var or if statement, you cannot at present have JSON Logic spread through an object and have it resolved. This is odd, as it does work at a shallow level for arrays in this JS implementation: a rule that is an array of operations is resolved element by element when applied to the data.
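For example (the rule and data here are assumed for illustration, not copied from my original test):

const jsonLogic = require('json-logic-js');

const rule = [{ "var": "a" }, { "var": "b" }];
const data = { "a": 1, "b": 2 };

jsonLogic.apply(rule, data); // => [1, 2], each element is resolved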
However, it does not resolve the var operations if they are within an object. Instead, it simply returns the provided rule without resolving any of the logic contained within: a rule that is a plain object whose values are var operations comes back verbatim, where I would expect each value to be resolved against the data.
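For instance (again, an assumed rule and data for illustration):

const jsonLogic = require('json-logic-js');

const rule = { "c": { "var": "a" }, "d": { "var": "b" } };
const data = { "a": 1, "b": 2 };

jsonLogic.apply(rule, data);
// actual result:   { "c": { "var": "a" }, "d": { "var": "b" } } (the rule returned verbatim)
// what I expected: { "c": 1, "d": 2 }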
A more descriptive case is an if whose truthy branch is an object of var operations: applied to matching data, the rule produces the branch with its inner logic still unresolved, where I would expect the resolved object.
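For example (the data here is an assumption for illustration):

const jsonLogic = require('json-logic-js');

const rule = {
  "if": [
    { "var": "hasData" },
    { "c": { "var": "a" }, "d": { "var": "b" } },
    null
  ]
};
const data = { "hasData": true, "a": 1, "b": 2 };

jsonLogic.apply(rule, data);
// actual result:   { "c": { "var": "a" }, "d": { "var": "b" } } (the branch is returned unresolved)
// what I expected: { "c": 1, "d": 2 }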
Of course, the above example is trivial, but it demonstrates the behavior.
For my work I wanted to use JSON Logic to serialize rules which would take an input data structure and produce a different data model as the output. I work in Scala, so I built my own JSON Logic implementation for this use case. It functions by taking the JSON value given as the rule and traversing it, resolving any logical operations found throughout the tree. This is effectively thinking of an object or an array as a logical operation which produces itself by applying the given data to each of its elements. This means that the JSON value provided as the rule will lazily apply the given data to any logical elements throughout the entire tree. This takes advantage of the order of operations being explicit in an abstract syntax tree.
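A rough sketch of that traversal, written in JS here rather than Scala, with only var implemented, purely for illustration (not my actual implementation):

// Objects and arrays "produce themselves" by first resolving any logic
// found in their elements; a single-key object naming a known operation
// is evaluated as that operation.
function deepApply(rule, data) {
  if (Array.isArray(rule)) return rule.map(el => deepApply(el, data));
  if (rule && typeof rule === 'object') {
    const keys = Object.keys(rule);
    if (keys.length === 1 && keys[0] === 'var') {
      const path = String(deepApply(rule["var"], data));
      return path.split('.').reduce((acc, k) => (acc == null ? acc : acc[k]), data);
    }
    // every other object rebuilds itself with each value resolved
    return Object.fromEntries(keys.map(k => [k, deepApply(rule[k], data)]));
  }
  return rule; // primitives are literal values
}

deepApply({ "c": { "var": "a" }, "d": { "var": "b" } }, { "a": 1, "b": 2 });
// => { "c": 1, "d": 2 }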
When the structure of the output is known but the attributes may be derived in different ways and the substructure may vary, this is the easiest way to structure the JSON Logic. Additionally, it pairs well with JSON Schema, and the two are easy to consume together. I thought I had tested this (I was using the JS implementation's behavior as my guide for edge cases), but I must have only seen the shallow array case and assumed it would handle objects the same way. I was also surprised to find out that the logical traversal is shallow.
So my question really becomes: is this difference between my expectation and this implementation's behavior caused by me misunderstanding the spec and the high-level idea, or is it an oddity of this particular implementation? One of the benefits of using JSON Logic for my use case is that it is rather well understood and has support across many languages (both JS and Python applications will also be in play), so if this style of logical structure is not intended, it would make JSON Logic a much less desirable tool for this use case.
I suppose an advantage of the current implementation would be that the result of a single evaluation of a rule on some data could produce another rule, but that would mean the top-level JSON Logic is actually of type JSON => JSON => JSON, so the distinction becomes blurry in my eyes. When do you stop applying the result, and what data do you provide to the resulting function?