aig-upf / tarski

Tarski - An AI Planning Modeling Framework

Boolean-valued fluents and RDDL-style boolean & numeric interoperability #91


mejrpete commented 4 years ago

Related to #90

(Sorry for the amount of text here! I also hope I'm understanding the moving parts in tarski correctly, or else I might not be making much sense.)

RDDL has a number of language features and expected behaviors related to boolean and numeric-valued action and state fluents that do not appear to be fully supported by tarski. These include boolean-valued state and action fluents themselves, if/then/else expressions, RDDL-style sums over state/action values, and the arithmetic over boolean values that standard CPF and action-constraint definitions rely on.

One of the challenges in supporting all of these features, particularly the boolean-valued fluents, is the question of which tarski structures should be used to represent boolean-valued fluents (predicates or functions). When representing a state fluent as a function, for example, we write:

f = lang.function('f', somesort, lang.Boolean) #this requires the 'boolean' theory
p = lang.variable('p', somesort)
task.declare_state_fluent(f(p), 'true')

This puts us in a bit of a messy situation, however. Rather than getting out a boolean-valued state fluent (in the RDDL sense), the RDDL generated from this configuration will contain a new Boolean object type, with values True and False, on which no logical operators can be applied and for which none of the standard control structures above are defined. It is worth noting that for the comparable use of the Integer sort, tarski correctly treats integer-valued items as numeric and can generate appropriate arithmetic in RDDL without issue.

The obvious alternative is to use predicates to represent boolean-valued state/action fluents and non-fluents. In this case, we do not require the Boolean theory in the language and can write:

f = lang.predicate('f', somesort)
p = lang.variable('p', somesort)
task.declare_state_fluent(f(p), 'true')
# if some_action is an appropriate action fluent, this correctly generates
# f'(p) = f(p) | some-action(p) when written with io.rddl
task.add_cpfs(f(p), f(p) | some_action(p))

In this case, we do not end up with a new Boolean object type (which is good!), and we are able to treat values as true boolean values in RDDL for the purposes of logical formulae, etc. There is no problem, for example, declaring a CPF entry as in the last line above. There are a few bits of strange behavior doing it this way (like only being able to set instantiations of f(constant) to true in an initial state, because we can only use model.add() and not model.setx()), but I think that can probably be handled as a separate (focused) bug/feature issue if it turns out predicates are the right tool for representing state/action fluents and non-fluents.
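
To make that last point concrete (a rough sketch continuing the snippet above; I'm assuming the usual model-creation helper, and c is just some constant of somesort):

import tarski.model

m = tarski.model.create(lang)
c = lang.constant('c', somesort)
m.add(f, c)   # we can only ever assert that f(c) is true...
# ...there is no way to explicitly set f(c) to false; anything not added is simply absent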

These predicate-defined state and action fluents work great with logical operations, as we would expect! But RDDL expects more interoperability between boolean and numeric types than this. In particular, we expect to be able to do arithmetic on boolean values, where the typical 0/1 numeric values are assumed for false/true. However, in both the predicate-defined and the function-defined case it doesn't appear that this is currently possible. Figuring out how to handle this seems to be a prerequisite to fully supporting if/then/else, RDDL-style sums over state/action values, and standard CPF & action-constraint definition approaches (which often rely heavily on arithmetic over bool-valued items).

I want to note that the separation of boolean and numeric types for functions, and the lack of interoperability between numeric-valued things and "boolean-valued things that are actually predicates under the hood", makes complete sense, especially in the context of any PDDL-like language and given tarski's careful understanding of numeric values as numeric Sorts, etc. What is not obvious to me is the correct way to handle this kind of thing within the tarski framework.

The key observation here is that, either way, being able to support an RDDL definition like reward = CONSTANT * exists_{?v : some_type}[bool-state-fluent(?v)] requires either that Formula objects accept Function applications as subterms, or that operators in CompoundTerm objects be able to handle both a numeric sort and the underlying boolean type of a predicate as acceptable types (for Function objects and Predicate objects respectively).
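
To make that concrete, here is the kind of mixed expression that can't currently be built (a rough sketch in tarski-style Python; bool_state_fluent and some_type are just illustrative names):

import tarski
from tarski.syntax import exists

lang = tarski.language('demo')
some_type = lang.sort('some_type')
bool_state_fluent = lang.predicate('bool_state_fluent', some_type)

v = lang.variable('v', some_type)
body = exists(v, bool_state_fluent(v))   # a Formula, since bool_state_fluent is a Predicate
# 5 * body   # <- not expressible today: a Formula is not a Term, so the arithmetic
#            #    operators have nothing they can attach it to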

Thoughts? I don't think I'm missing anything about how Predicate/Formula and Function/Term objects are working in tarski that would make this trivial. Apologies if I am!

miquelramirez commented 4 years ago

Hi @mejrpete,

I will get back to you in more detail by next week, but here are some quick notes:

I think that our ite covers well what you are after, but probably needs to be tested more thoroughly :)

@gfrances and I have also been discussing whether or not we want to support "type conversions"; in fact, we were thinking of making some changes to how we manage sorts and how expressions are checked for equivalence.

mejrpete commented 4 years ago

@miquelramirez well, that is embarrassing! You are absolutely right that sumterm implements the summation functionality I was looking for. Additionally, on closer inspection I believe that the behavior I had seen with ite was actually just an instance of the lack of implicit Boolean/Integer type conversion in an example I was working with. I should have been more careful before casually saying that those features weren't fully implemented! My mistake.

Regarding "type conversions" --- I definitely understand that's a relatively significant undertaking regarding the structure of Sorts. In terms of RDDL specifically I would definitely be interested in Boolean/Integer interoperability for practical reasons. That is actually the main feature that I was looking into when I realized I should reach out to you all to see if you already had a direction you were interested in.

I noticed that the benchmarks you pointed to in #90 had primarily real-valued fluents modeled with functions, so I imagine that it would make sense to approach boolean-valued action and state fluents in a similar way. I think the main barriers to doing that would be, first, ensuring that Boolean values from the boolean sort interact properly with logical expressions, and, more generally, whether type conversion could be implemented in such a way that the arithmetic-with-booleans patterns that are standard in many RDDL domains would be supported.

Thanks a lot! I know that some of these questions are far from trivial. I appreciate the discussion! I'm really hoping that I'm going to be able to leverage tarski for RDDL generation given its power as a tool.

miquelramirez commented 4 years ago

No need to apologize @mejrpete ! None of this is documented :)

Feedback on these features is very welcome, as I pretty much got them in to support the modelling of RDDL domains in the style you suggest, but I haven't really had time to work on them very much. Looking forward to someone giving them a thorough workout.

One important thing is that in tarski we make a distinction between predicates and functions. This follows from a very specific tradition in how first-order logic is presented, where boolean-valued mappings are represented as relations (the set of tuples of arguments for which the predicate is true) rather than as functions. We don't currently support Boolean functions, but nothing stops you from defining a function over the integers with a domain constraint that excludes anything other than {0, 1}.
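
For instance, something along these lines (a quick sketch, assuming the interval-sort machinery works as I remember; clear and block are just example names):

import tarski
from tarski.theories import Theory

lang = tarski.language('demo', theories=[Theory.EQUALITY, Theory.ARITHMETIC])
block = lang.sort('block')

# a pseudo-boolean codomain: the integers restricted to {0, 1}
bool_t = lang.interval('bool_t', lang.Integer, 0, 1)

# a 0/1-valued function instead of a predicate
clear = lang.function('clear', block, bool_t)

b = lang.variable('b', block)
is_clear = (clear(b) == 1)     # boolean tests become equality atoms
count = clear(b) + clear(b)    # and arithmetic over the 0/1 values is legal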

@gfrances and I have had some back and forth about whether we should ditch the concept of predicate and stick with Boolean functions, and IIRC we concluded to stick with predicates for clarity when it comes to dealing with STRIPS and PDDL.

Let us know if the features we have at the moment work for you; any feedback you have, or improvements to the modeling interfaces you may want to propose, is greatly appreciated.

gfrances commented 4 years ago

Thanks @mejrpete for your comments. I am not 100% sure that the workarounds mentioned by Miquel would be enough to have proper support for the kind of RDDL expressions discussed above.

So a question here would be whether transitioning to a more SMT-like representation, where atoms just happen to be terms that take values within a {0, 1} domain, would solve most of the issues with representing RDDL domains. A related question would be whether there would be disadvantages to such a transition.

If the answers to the above are "Yes" and "Not too many", then it looks like supporting RDDL would be a reasonable argument in favor of switching to a representation that is agnostic about the textbook distinction between functions and predicates, more in line with the APIs of, say, most SMT solvers, and then treating True and False equivalently to 0 and 1, etc. As Miquel says, we've discussed this change a number of times (last of which here) but never found compelling enough reasons to do the refactoring within our limited set of (mostly PDDL-based) use cases. Of course I don't think this would be a trivial change, but I wouldn't be opposed to creating a new branch to explore this if there are good use cases for it.

mejrpete commented 4 years ago

@miquelramirez @gfrances thank you both for your detailed replies!

I believe (based on the information above and a bit of quick testing) that @gfrances is correct that the workarounds mentioned above fall short of allowing for modeling of the kind of RDDL domains I'm working with. @gfrances's observation that 0/1 functions can't be used with logical connectives to form logical formulas is the main challenge.

I completely understand that shifting to a more SMT-like representation internally is a very significant change (with the potential to have negative effects elsewhere in the system). To speak directly to question 1 from @gfrances: I do think that a more SMT-like representation would solve most (if not all) of the issues with representing RDDL domains, since it would allow for direct interoperability between numerically-valued functions and boolean-valued things, while still making it possible to treat everything boolean-valued as Predicates when appropriate (following the textbook distinction between predicates and functions). I believe this would mean that even things like if-then-else could then be handled in a relatively straightforward way.

There are, of course, advantages to having structural barriers against conflating predicates and functions, both in terms of traditional formal definitions and approaches (e.g. FSTRIPS being an extension of STRIPS where Functions augment the core functionality of predicates). I also realize that the current Tarski implementation leverages the distinction between Formula and Term to support functionality such as formula simplification. I imagine that's a major potential downside of considering a more SMT-like representation, or at least a major barrier to simply subclassing Predicate from Function, since it would be important to ensure that none of the useful logical tools (some of which I'm sure I'm not aware of) are broken by allowing expressions with mixed Function and Predicate types in their structure. The obvious question is whether it would be possible to do this in a "correct" way.

I think the above question is demonstrated pretty well by something like if-then-else, since the behavior must change depending on the types of the expressions (let's say "numeric-domain function" or "predicate") and the context within which the if-then-else resides. For example, the following are both perfectly legal RDDL (albeit not necessarily the most efficient representation of the behavior):

state-fluent'(?v) = state-fluent(?v)
    + (CONSTANT * if (action-fluent ^ other-action-fluent(?v))
                  then numeric-state-fluent
                  else (-1));
bool-state-fluent'(?v) = bool-state-fluent(?v)
                       | if (action-fluent == other-action-fluent(?v))
                         then true
                         else false;

What I don't know is whether it's possible to actually support this without losing any of the advantages of having a fully separate understanding of Predicate/Formula and Function/Term objects.

I imagine one possibility would be to have Predicate subclassed from Function, have all logical connectives require Predicate objects on either side of them, and have no implementation of logical connectives for non-Predicate Function objects. Expressions which mix numerically-valued Functions and Predicates would then have to do so in a very particular way: Predicates joined by logical connectives could appear as subterms of such an expression, where they would be treated as 0/1-valued Functions, but non-Predicate Functions could not appear as subterms of an expression involving logical connectives. I'm not sure it's possible to tell how feasible that is without some exploratory implementation, but I'd definitely like to know if you have any thoughts on the idea, generally.
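
To make the shape of that idea concrete, here is a toy sketch (purely illustrative Python, not tarski's actual classes):

class Function:
    """A symbol with some codomain; its applications can appear in arithmetic terms."""
    def __init__(self, name, codomain):
        self.name, self.codomain = name, codomain

class Predicate(Function):
    """A Function whose codomain is fixed to {0, 1}."""
    def __init__(self, name):
        super().__init__(name, codomain='bool')

def lor(lhs, rhs):
    # logical connectives would only accept Predicate-backed operands...
    if not (isinstance(lhs, Predicate) and isinstance(rhs, Predicate)):
        raise TypeError('logical connectives require predicate operands')
    return ('or', lhs, rhs)

def times(lhs, rhs):
    # ...while arithmetic would accept any Function, reading Predicates as 0/1 values
    return ('*', lhs, rhs)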

(I'm not sure if I'm stepping on the formal notion of Theory here, so please excuse me if I am.) One advantage this design would afford is the option of requiring a special language Theory in order for a logical formula to be interpreted as having a 0/1 numeric value when included as a subterm of an expression involving math-y operators. I could imagine that in the case where this "interoperability" (that's probably a bad name) theory was not added to the language, Tarski could fail early and loudly if Predicate and Function types were not being treated as formally separate things (much like it does now!).

I wonder if this --- a default strict separation of Predicate and Function objects in practice, with an option to explicitly add interoperability via a language theory --- might prevent the transition to a less separate internal representation from breaking intuition & functionality. The idea would be to preserve the conceptual separation of predicates and functions in FSTRIPS/PDDL/etc. as the default behavior, and to keep the guarantees expected by other tools, namely that formulas involving predicates are purely logical formulas with no funny stuff from functions.
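
Roughly, the gate I have in mind would look something like this (again just a sketch; BOOL_INTEROP and ZeroOneTerm are hypothetical names, not existing tarski features):

class ZeroOneTerm:
    """Hypothetical wrapper: a Formula read as a 0/1, Integer-sorted term."""
    def __init__(self, formula):
        self.formula = formula

def formula_as_term(formula, language_theories):
    # only languages that explicitly attach the interoperability theory may embed
    # a formula inside an arithmetic term as a 0/1 value; otherwise fail loudly
    if 'BOOL_INTEROP' not in language_theories:
        raise TypeError('attach the boolean-interop theory to use formulas inside terms')
    return ZeroOneTerm(formula)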

Does that seem like a plausible approach? I obviously don't have much familiarity with the functionality of Tarski that I haven't personally worked with yet, so I assume there's a decent chance I'm missing something very significant! This also doesn't address the likely backwards-compatibility problems arising from a change like this (which I guess is fine as long as this is considered to be a fork or separate branch, but I'm not sure how you all feel about it).

Thanks to you both for discussing this with me! I really appreciate it. I think it goes without saying at this point that if there is a direction here that you two find promising for an exploratory branch, etc I would be very interested in contributing heavily to it (whether it's as a direct contribution or a fork of the project that could later on be included as part of the upstream, depending on what you think of the approach).

Thanks a lot!

mejrpete commented 4 years ago

Sorry if this isn't the right way to do this, but @emilkeyder-invitae asked that I add him to this conversation when we last spoke, and an at-mention is the only way I know how to do it on a repo that I'm not a contributor for. Mentioning him here so he gets notified of where this is.

emilkeyder-invitae commented 4 years ago

Thanks for adding me Pete. I haven't looked at tarski for a while, but the last approach you outlined above that would (dis)allow casting between the two types based on the presence of a theory sounds like a reasonable approach to me. @miquelramirez, @gfrances, do you think it makes sense for the direction you want to go with tarski? Would it be helpful if Pete did a quick sketch of what the code would look like so we can review and see whether it's an idea worth committing to?

gfrances commented 4 years ago

Hi @mejrpete, Emil, and sorry for the slight delay in answering.

I think it'd definitely make sense to start an exploratory branch / PR / fork, whatever you guys are more comfortable with, and I would definitely try to help a bit with that.

As for the best design to start with: I fear that trying to keep everyone happy by having some "theory", or similar mechanism, change the way functions and predicates are conceptualized will end up meaning a lot of duplicated effort. If I were to spearhead this, I would perhaps start by trying:

Of course there's a certain tradeoff: these options might mean a bit more refactoring work at the beginning, but much less maintenance work in the midterm.

What do you guys think?

miquelramirez commented 4 years ago

I think that going for the full SMT-like representation is the way to go; as you say @gfrances, it will pay off handsomely in the mid-term. Maybe @mejrpete can generate those test cases in a dedicated branch, which we can use to harness the refactoring.

mejrpete commented 4 years ago

Thank you both for your responses! I think a full SMT-like representation would work great, especially if going that route is the preference for both of you. I can definitely see the value of reducing the complexity of the internal representation that way.

@miquelramirez I would be happy to generate a selection of test cases to drive the refactoring. My tendency would be to focus on RDDL/FSTRIPS-related integration and single-feature tests (in the same assertion-based pytest style that I see in the existing test suite) to start, if that seems reasonable to you guys.

Do you think that the Predicate identifier should be struck from the API altogether then? Or should it be an alias for boolean-valued functions? If the SMT-like representation involves actually striking "Predicate", etc from the API, I imagine that we will break API assumptions used for a significant chunk of the rest of your test suite. I imagine those tests could either be translated to the new function-based API as the refactor is happening, or could be a significant part of eventually (if things work properly!) merging the refactored code in off the dedicated branch. In the second case, I think the tests I could write myself during the refactor would be good enough to ensure correct support for the features I'm most familiar with in Tarski (mostly reading/generation/writing PDDL and RDDL), but I think additional work to ensure test coverage outside of the PDDL/RDDL reading and generation code on the new API would be needed to confirm the same level of test coverage that you have currently.

I will get started working on an initial set of tests. Perhaps for the first round of tests I could submit them as a PR on a new branch from a clean fork and we could go from there? Is there any particular naming convention you both prefer for the new branch I'll create on that fork? I'm assuming that the current HEAD for master is a reasonable place to source that branch, unless there's more tests or other features elsewhere?

Thanks!

emilkeyder-invitae commented 4 years ago

Would it be feasible to make Predicate an alias for a bool-valued Function? That way it might be possible to keep most of the test code the same.
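
Something as simple as a thin factory might do it (just a sketch, assuming the builtin Boolean sort from the boolean theory mentioned earlier; the real alias would probably need a bit more care):

def predicate(lang, name, *argument_sorts):
    # a "predicate" is just a function onto the builtin Boolean sort
    return lang.function(name, *argument_sorts, lang.Boolean)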

gfrances commented 4 years ago

Hi guys,

@mejrpete, I would branch from devel, not master. Next week I could have some time to help a bit with the refactoring, so if you think there's room for that and it fits your schedule, let me know and I could set aside some time.

I'm not opposed to keeping some "alias" for Predicate (not sure what kind of Python thing you have in mind for that). We can of course keep API methods such as Language::predicate(), which is used to create a new predicate, but there will still be things such as isinstance() checks, and similar, that will need to be refactored. Hopefully that won't be too painful; a quick grep shows:

(tarski) 09:21:59-frances@lea:~/projects/code/tarski/src$ grep -iIrn "isinstance(.*Predicate)"
tarski/syntax/formulas.py:306:        if not isinstance(head, Predicate):
tarski/io/rddl.py:80:                    elif isinstance(tsym, Predicate):
tarski/io/rddl.py:101:                if not isinstance(tsym, Predicate):
tarski/io/rddl.py:110:                elif isinstance(tsym, Predicate):
tarski/io/rddl.py:122:            if not isinstance(tsym, Predicate):
tarski/model.py:80:        if not isinstance(predicate, Predicate):
tarski/model.py:141:        if isinstance(symbol, Predicate):
tarski/dl/concepts.py:14:        assert isinstance(predicate, Predicate)

(tarski) 09:21:59-frances@lea:~/projects/code/tarski/src$ grep -iIrn "isinstance(.*Function)"
tarski/io/rddl.py:78:                    if isinstance(tsym, Function):
tarski/io/rddl.py:108:                if isinstance(tsym, Function):
tarski/model.py:54:        if not isinstance(term.symbol, Function):
tarski/grounding/common.py:18:        if not isinstance(symbol, (Predicate, Function)) or not all(isinstance(c, Constant) for c in binding):
tarski/dl/concepts.py:148:        if isinstance(predicate, (Predicate, Function)):
tarski/dl/concepts.py:384:        assert isinstance(predicate, (Predicate, Function))
tarski/dl/features.py:151:#         assert isinstance(fun, Function)

which looks quite manageable.

I'm not that concerned about removing Predicates meaning a lot of internal changes, but a bit more about breaking backwards compatibility for "external" users of Tarski, even though at the moment Miquel and I are probably 50% of the users of Tarski :-) But in any case, as long as we document properly how to adapt existing code, everything should be fine.

mejrpete commented 4 years ago

@gfrances Understood! I will branch from devel instead. I definitely have time (between this weekend and next week) to get the necessary tests implemented and work on the refactor. I'll get those tests implemented and get the PR up sooner rather than later so that it can all be in place.

I'm also not sure what Python mechanism would be best for that. I suppose the pythonic way to deal with it would be exactly to subclass Function --- which runs a bit contrary to the SMT-like representation --- so perhaps we can just deal with modifying the few isinstance() checks, etc. as they come up.

Understood completely on the concern about breaking backwards compatibility. Thanks for being so open to considering these changes; I really appreciate it (especially selfishly, since it enables my own use case!).

gfrances commented 3 years ago

Hi @mejrpete , just wondering if you ever had the chance to start with this? I could have some time these upcoming weeks to help on it if there's still interest.

mejrpete commented 3 years ago

@gfrances Thanks for checking in. Sorry I dropped off the face of the earth after this discussion.

This past week I just wrapped up some progress on this --- although not 100% in the direction that we had previously discussed.

Because of the scale of the SMT-like representation refactor (and the fact that there appeared to be some necessary API changes), I had encountered a bit of difficulty writing a test suite for the discussed features that would simply bolt into the existing tests. In order to avoid duplicating test infrastructure and to explore the changes that were going to be needed, I ended up adding to the existing test suite (while assuming minimal changes to the public API) in parallel with implementing an intermediate refactor that relies on a FormulaTerm wrapper class to enable interoperability between Term and Formula objects. This is not the refactor that we previously discussed, since both Term and Formula (& Predicate and Function) continue to coexist in the internal representation. It is, however, directionally closer to the SMT refactor, since the wrapper class FormulaTerm is treated as a boolean-valued Term, and the Boolean sort is now a builtin that is loaded by default.
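
Conceptually, the wrapper boils down to something like this toy version (a sketch of the idea only, not the code actually sitting on the branch):

class FormulaTerm:
    """Wraps a Formula so it can sit wherever a boolean-valued Term is expected."""

    def __init__(self, formula, boolean_sort):
        self.formula = formula
        self.sort = boolean_sort   # the Boolean sort, now a builtin loaded by default

    # arithmetic and comparison overloads then read the wrapped formula as a 0/1 value,
    # which is what lets it appear inside sums, products, ite(...), and so on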

I found this implementation to be an easier starting point for me, since it also gave me an opportunity to become much more familiar with the portions of the tarski codebase that I had not previously touched. I was able to avoid significant API changes, and was able to continue to leverage the existing test suite while making my own additions. That's not to say I'm not interested in (and interested in working on) the full SMT-like representation refactor --- just that this allowed me to work out some of the technicalities without having to immediately worry about API changes that would also break the existing tests.

The version I currently have passes all but one of the existing non-sdd tests (plus one more that was also failing on devel when I forked; I was not able to get the sdd tests configured). The (pretty minimal) changes that I needed to make to the existing tests were primarily related to a change to Tautology and Contradiction that warrants more discussion. I have also added tests that reflect the desired functionality that was discussed from the RDDL perspective that prompted our conversations about this refactor. In particular, I have tests that demonstrate appropriate interoperability between Predicate/Formula objects and Function/Term objects through the FormulaTerm wrapper class --- meaning that (as currently tested) numeric interpretation of Boolean-valued objects works as expected for all basic arithmetic, if-then-else, and sumterm operators. I've also added a large integration test in tests/io/test_rddl_writer.py that uses a significant chunk of the new functionality to generate one of the academic-advising RDDL domain instances used in IPPC-2018.

I'm currently finishing up

There are a few hacks involved in my current version (which are certainly not a good idea moving forward) that I used to preserve the existing API. Unless I'm missing something, I think removing these hacks for the full SMT-like refactor will likely require some API changes (which obviously warrants discussion with you guys).

In the very short term, I believe I am going to start using this intermediate refactor for the functionality for the project I'm working on with @emilkeyder-invitae. However, I'm very interested in implementing the full SMT-like refactor, and (once I add a few more specific unit tests) think that my testing additions (taken either alone, or along with my intermediate refactor) may be a relatively good starting point for that.

What would be your preferred path forward for this? One strategy could be for me to file a PR pointed at a new feature branch with my current version (along with the corresponding tests), where the PR text could give a detailed description of the internal changes made and of the current questionable design choices (or hacks) that let me preserve the API components I was concerned about breaking (in particular, non-conditional FSTRIPS *Effect construction syntax). From my perspective, it seems like the changes I've made so far could be used to jumpstart the full SMT-like refactor on that new feature branch, leveraging the RDDL-focused tests over Boolean and numeric-valued arithmetic, etc. as the driving test suite (as long as you are sufficiently comfortable with the code style and version history associated with the PR). Unless you have a different preference, it seems like that PR could give you guys the opportunity to take a look at what I've done so far and decide whether it's a reasonable starting point for a feature branch for the full refactor.

Let me know what your preferences are! I will get those descriptions finished up to help with the discussion (either for a PR or as a reply on this issue). Apologies again for my total radio silence on this since our previous discussion. I will be sure to be more communicative moving forward!

miquelramirez commented 3 years ago

Hi @mejrpete,

great to hear from you, and that you have been making significant progress. I am also going to start using (again) the parts of Tarski that interface with RDDL, so it is indeed good timing.

My preference would be a PR, as GitHub makes it very easy to track changes and keep the discussion in context.

One important bit that we haven't really discussed - because it is not entirely clear that it is relevant - is that currently in Tarski we don't have a "standard" way to deal with RDDL "actions". These are actually a more general concept than is the norm in planning languages, where decisions are given as discrete actions a(t); in RDDL the decision at a given time point t can be given as a high-dimensional vector of real numbers x(t), subject to constraints. RDDL has a clear influence from the literature in Operations Research and Stochastic Programming, and there is a gap here that needs to be navigated. In the yowie-benchmarks I shared we just worked around this issue, as the back end wasn't a PDDL-like MCTS planner requiring decisions to come "pre-packaged" into operators.

Regards,

Miquel.

gfrances commented 3 years ago

Hi,

I'd also love to see that PR and be able to contribute!

Interesting point, Miquel. What would you suggest? Is this easier to handle if we just stay at the representational level, i.e. don't care about what a planner might want to do with these actions, and just make a distinction between action variables and state variables, and represent the constraints between them? Aren't we doing that already? I'm not too familiar with the RDDL part of the code, as can be seen :-)

miquelramirez commented 3 years ago

I think that Scott didn't want to rile up people and decided to call "action variables" what he would have probably wanted to call "decision variables".

One of the elements in the gap is how you define the transition function. In planning languages very specific frame axioms are assumed, which then allows the transition function to be defined very efficiently.

In RDDL the transition function is fully specified by the modeller, and interpreted as a DBN.

Another gap is that RDDL assumes vector decisions to be chosen somehow from a constraint satisfaction problem. The planner is very much expected to compute its own actions on the spot.

A problem in RDDL, from my point of view, is that the above is incomplete: there should be the possibility both to define which actions are admissible (comply with constraints) and which are more suitable (minimise or maximise a cost function). Resolving this would allow us to consider problems like portfolio management, where we want to compute the stock order that maximises a given performance index, which depends on the volume of the orders (the magnitude of the values of the decision variables). This would also do away with the problem of having a potentially very large set of feasible instantiations of decision variables (say, the admissible decisions are inside a polytope).

It is very much a topic of tools research.

miquelramirez commented 3 years ago

As far as language support is concerned, we don't assume an interpretation; we just expose a nicer AST :)

mejrpete commented 3 years ago

Hi all --- apologies for the delay. I have just submitted the PR as discussed above! However, I realized when submitting it that GitHub doesn't appear to support requesting a new branch in the PR (I think Bitbucket used to, at least); it only allows me to submit a PR to an existing upstream branch. I submitted this as a PR against devel, but I think the smarter thing to do would be to open a new branch on the tarski upstream specifically devoted to this refactor and work there, given the scope of the changes. Of course I defer to you all, but just let me know if you want to open a new feature branch off of devel and then have me submit the PR there instead!

mejrpete commented 3 years ago

In terms of the action/decision representation --- I agree that the representation in RDDL is significantly different from PDDL/FSTRIPS representations. This hasn't been a problem for us on our project thus far, since our domains have only boolean-valued action fluents and the parser/planner(s) we are using ground the entire problem, including the action representation, so that at solve time there are operators corresponding to each possible decision. This is made easier by the fact that (at least currently) each "operator" is associated with only one true-valued action variable (limited by action constraints) at each time step, which means there is no blowup in the action space.

It seems like it would depend on what is needed on the tarski side. If domain generation at the RDDL representational level is the priority, I imagine one option is to keep things the way they are and just worry about action variables, CPFs, and constraints directly (rather than reasoning about the DBN). That's the most that I need right now, at least. Being able to describe actions as "admissible" and "suitable" as @miquelramirez describes would be very interesting --- although my understanding is that would require something with additional descriptive power when compared with RDDL.

miquelramirez commented 3 years ago

Hello @mejrpete.

Thanks very much for the PR @mejrpete, I will start reviewing it soon.

This is made easier by the fact that (at least currently), each "operator" is associated with only one true-valued action variable (limited by action constraints) at each time step, which means there is no blowup in the action space.

Thanks for sharing a bit more details about your use case.

Being able to describe actions as "admissible" and "suitable" as @miquelramirez describes would be very interesting --- although my understanding is that would require something with additional descriptive power when compared with RDDL.

That is something I have been working on in parallel to the main tarski repository for quite some time. After working on that for a couple of years I discovered that the modeling challenges I was trying to address have already been looked at in operations research. This paper is a good summary that offers a unified view of dynamic and stochastic programming:

https://pubsonline.informs.org/doi/abs/10.1287/educ.2014.0128

Nothing groundbreaking from a theoretical perspective, but I found some key methodological and practical insights.

RDDL allows modeling MDPs with "precomputed" decisions, so allowing decisions to be defined implicitly as an optimization or satisfiability problem would indeed go well beyond what RDDL can do at the moment. It is also a steep departure from one important theoretical assumption implicit in the literature on planning, namely, that the set of actions applicable in a given state can be computed in polynomial time. In practice, we kind of know that for current state-of-the-art forward search techniques to scale up, we probably need action applicability to be resolved in no worse than L (logarithmic space).

Note that P, or rather FP, accommodates Linear Programming if we impose a bound on the representation of the numbers involved in the computation. So I am looking forward to adding a capability like this to tarski really soon (I need it for some projects I am working on).

mejrpete commented 3 years ago

Hi all! I know it has been a long time, but I wanted to pop in and reconnect. A few of us, including @emilkeyder-invitae and @AnaghaK (I don't know if that mention is going to work correctly), have been discussing the features implemented here and as part of the associated PR from last year. Our discussion was in the context of what might be needed to get this branch (and the associated features) to a point where it might eventually be appropriate for the mainline tarski project (particularly when it comes to core support for the fragment of RDDL that is in relatively common use).

I wanted to drop in and connect everyone so we could start that discussion. I know you all have been doing a bunch on tarski recently. Because of that, and particularly given that this original refactor relied on wrappers to enable interop between Predicate/Formula and Function/Term structures (rather than moving to a full SMT-like underlying representation), I know that, almost no matter what, figuring out the right path forward on this will involve some careful discussion.

I think the core questions about what is appropriate and what would need to be settled are reasonably well laid out in my notes for PR #96 from last year. In particular,

  1. Whether the wrapper strategy for interop is appropriate, or whether appropriate implementation requires a full SMT-like core representation of boolean types within tarski (and essentially the merging of Formula and Term)
  2. How to reconcile the API quirks associated with STRIPS effects, etc (and core logical formulae) no longer being entirely separable from "languages" in either design approach
  3. How to enable the representations that are necessary from an RDDL-generation standpoint without undermining any of the other use cases for tarski (static analysis, etc) that I'm less familiar with.

Hope you're all doing well! Sorry for the long period of radio silence. Looking forward to the discussion.

Let me know whether the right way to do this is to open a new (clean) issue (or start something in the new "discussions" GitHub feature), or whether this is the right place to keep discussing this. I figured that this was a reasonable first step, given that we never fully closed this issue (and since the basic ideas still follow directly from those that we originally brought up here).

Thanks a lot!

miquelramirez commented 3 years ago

Hold on there @mejrpete! It's been a turbulent 2020 (and 2021 is throwing some curveballs at me at the moment as well!). Believe it or not, I have been looking at this on and off for the last year, but I hadn't had an excuse (being a post-doc who is "supposed" to work on certain things) to put time into tarski-based projects until last February.

I am happy to respond to you over here. The Discussions app is cool but I would like to keep this long thread preserved as it is.

miquelramirez commented 3 years ago

Hi Pete (@mejrpete),

let me reiterate our gratitude for your interest and the effort put into the refactor. I was quite surprised that the changes you needed to make to the core classes turned out to be relatively small. The side effects on the Effect classes and the like were less so, in my opinion.

Feedback on the rddl-refactor branch

1) References from syntactic elements to the language object

Once upon a time, every syntactic element in tarski had a reference to the language object. This was convenient for a number of reasons (type checking, resolving semantic attachments/denotation, etc.). But we had to get rid of it because of the inherent difficulty of properly defining a __deepcopy__ operator with correct semantics. I say "correct" as in "not breaking the Python standard library and built-ins". This was an issue conceptually similar (us breaking Python) to the one we had with formulas and terms versus containers when we overrode the operators __eq__, __lt__, etc. Getting rid of references to the language involved some significant refactoring, and the result is the way we define "theory bindings" at the moment (where things are externalized and managed with factories). This refactoring could not encompass Function and Term - the details are probably in some discussion in the Issues section, but I can't recall why we had to concede on copies of Function and Term always being shallow.

2) On the changes to Contradiction and Tautology

Funnily enough, the two simplest formulas have quite an important role to play (more so Tautology). I definitely do not like the idea of having the notions of true and false statements be language-dependent. It would be like having 1 and 2 defined specifically for the naturals and the integers. Besides the conceptual purity issue - which is something always negotiable with me - there is the programming interface. Having to carry around a reference to the language to create instances of them isn't very satisfying and looks error-prone. The repercussions on the Effect classes are not great either; it complicates something which should be quite straightforward (since effects are just a structure that glues together condition and effect formulas with state variables).

3) Some clarifications

Following Up

I think it is possible to achieve what was wanted - a more unified representation which is aligned less awkwardly with RDDL and SMT solvers. My original thought on this was that we could do what you need by having a Boolean theory that defines the domain as a subset of the naturals (see above), plus changes to the parser - or, in our case, the RDDL AST post-processor - to "intercept" atoms and redefine them as equality literals. For instance, if we have in our RDDL file something like

        time' = if (snapPicture)
                then (time + 0.25)
                else (time + abs[xMove] + abs[yMove]);

what we would get, in tarski terms, would be an entry in the cpfs attribute of tarski.rddl.task.Task like this:

( time(), ite( snapPicture() == 1, time() + 0.25, time() + abs(xMove()) + abs(yMove()) ) )

How would this work in practice?

For the last year and a half I have been working on a parser for the temporal fragment of Jussi Rintanen's NDL. This language has a very similar syntax (and a similar philosophy in some aspects) to RDDL, and for baselines I was, naturally, using Jussi's SMT encodings. Noting this connection was great stuff, because it meant there was another force shaping the solution to the issues raised.

So I tested the concept by basically implementing my notion of the Boolean theory myself; see this gist with the NDL parser. Note: this was a port into Python, using ply and tarski, of the SML code that Jussi generously shared with me. It was a bit of a b*stard to get working; notions like "sorts that are unions of other sorts" were the proverbial square peg for a board with round holes.

Let me walk you through what I did:

1/ Define a bool_t sort in the language associated with the NDL instance being parsed (lines 925 to 931)
2/ Coerce expressions over Boolean functions to become atoms (lines 896 to 906)
3/ Call coerce_into_boolean_expression strategically (see the semantic actions for the rule p_expr)

Doing this, we effectively bypass the Predicate-versus-Function dilemma. For NDL instances, the associated first-order language has a Boolean sort which is defined as a subset of the integers. There are no atoms made of predicates; instead we have equality atoms using the predicate with standard semantics ('='). This means that for NDL instances the first-order language requires the Equality theory to be attached.
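
In rough terms, step 2/ boils down to something like this (a simplified sketch, not the literal code from the gist):

def coerce_into_boolean_expression(term, bool_t):
    # a term of the {0, 1} sort becomes an equality atom, so it can be used wherever
    # a formula is expected (hence the Equality theory requirement)
    if term.sort == bool_t:
        return term == 1
    return term

# applied to the snapPicture example above, the cpfs entry is then built as
#   ( time(), ite(coerce_into_boolean_expression(snapPicture(), bool_t),
#                 time() + 0.25,
#                 time() + abs(xMove()) + abs(yMove())) )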

The planners are still not ready for prime time, but I am happy to show you the code that transforms tarski formulas into PySMT formulas, in this gist. As you can see, there are no dramas - this approach allows for seamless interoperability with the best (imo) Python library for modeling in SMT.

Obviously, there may be some assumptions which do not transfer well to your set of tests. Would it be possible to move your RDDL-oriented tests to a new branch (e.g. rddl-refactor-2) and work there to try this notion out?

mejrpete commented 3 years ago

@miquelramirez Thanks so much for your detailed comments and for sharing the representational approach you have been working on to support the temporal fragment of NDL! This is a great way to get this conversation moving. I need to spend a little more time looking through the two gists you shared, so apologies if any of the below is slightly off base re: portions of those files that I haven't fully processed!

I very much agree with the framing of the Boolean theory as the set {0, 1} (which also enables Boolean values to be descended from the Naturals if the Arithmetic theory is attached). That's actually exactly how I managed the FormulaTerm wrapper for Formula structures (by understanding them as implied Terms over Functions with Boolean (in {0, 1}) codomains).

Am I right in understanding that your proposal is to handle Boolean-valued items from both NDL and RDDL as Function/Terms directly, without touching Predicates at all in that case? That seems like a very elegant solution (and I'll admit I'm a bit surprised it didn't occur to me in the previous refactor attempt). I agree completely that the API side effects of my wrapper-based approach were extremely undesirable (particularly in terms of how they affected Effects and Tautology/Contradiction), and I think it's great that your approach would completely avoid those side effects.

My conceptualization of the refactor is additionally informed by the fact that our main use case is "constructive", in the sense of building domains that can then be output to clean RDDL. How do you imagine handling the construction of logical expressions over Boolean-codomain atoms in that case? While the if/then/else example you gave doesn't require it, in RDDL we often see CPFs that look like

exploded'(?b) = exploded(?b) | (Bernoulli( CONST) * exists_{?bp : block} [put_on(?b, ?bp)]);

in addition to patterns like

reward = -5 * sum_{?b : block} [exploded(?b)];

If I understand you correctly, we would understand the exists_{?bp : block} [put_on(?b, ?bp)] to be something like a quantified formula (existential) over equality literals like put_on(?b, ?bp) == 1 (parameterized in the quantified formula by ?bp, etc.). Am I right in understanding that doing this would require some duplication of functionality related to things like quantified expressions and basic logical operators for the entirely Function/Term-based notion of Booleans in this approach? That's not at all a criticism --- just raising it as an observation and a point to clarify.

One final note on the elegance of this solution (as opposed to the approach I had assumed when writing the wrapper-based refactor): I had noticed that, when defining instances for RDDL domains using Predicate-based atoms in that version, the closed-world assumption means it's impossible to define, for example, a particular state fluent to be False (== 0) in the initial state if it has a default value of True (== 1) in the domain. This of course makes complete sense, given the theoretical context & STRIPS-related formalizations, but it is a basic issue with my approach. Your alternative, where we use Function-based atoms to represent Boolean-valued items (state fluents in our example), completely eliminates that issue as a side effect!

Obviously, there may be some assumptions which do not transfer well to your set of tests. Would it be possible to move your RDDL-oriented tests to a new branch (e.g. rddl-refactor-2) and work there to try this notion out?

This sounds like a great idea. As long as I understand properly the intended API-interaction pattern differences when it comes to defining and constructing expressions, etc over Boolean valued items, I think getting the tests tweaked to match that pattern will be very easy and a nice starting point!

Thanks for your detailed response and engaging on this! tarski is a great project and I'm happy to contribute in any way I can. Even easier to do so when it's possible to work towards supporting my use cases in a robust way as a side effect of core contributions!

(As a final comment --- while interop with PySMT isn't directly related to my current use case -- that is an extremely cool feature. I will be excited to eventually see the planning infrastructure you're working on if/when it's eventually available for prime-time :smile:)

miquelramirez commented 3 years ago

Hi Pete,

Am I right in understanding that your proposal is to handle Boolean-valued items from both NDL and RDDL as Function/Terms directly, without touching Predicates at all in that case? That seems like a very elegant solution (and I'll admit I'm a bit surprised that didn't occur to me in the previous refactor attempt.

Thanks very much for the kind words, @mejrpete. Your approach made perfect sense too, and I agree with your notes that it had some side effects which would be best to avoid. But I have an unfair advantage, which is that 1) I co-designed tarski, so I know stuff like the attach_sort method, which is more of a "library developer" method than an "application developer" one, and 2) I use tarski a lot, and certain patterns come up time and again after a couple of years.

I think it is for the better to leave Predicates alone. In first-order logic there is some redundancy between functions and predicates. When I used to teach logic for artificial intelligence at the Australian National University, there were some students who got a bit worked up thinking that my slides were wrong and that we should get rid of predicates... But I think that as a representation they are very compact when describing discrete sets which are not "too big". They're also a key element in the construction of STRIPS models for classical planning. That doesn't mean that non-STRIPS descriptions of planning instances have to use them if they are not the most natural vehicle to represent properties of states.

While the if/then/else example you gave doesn't require it, we often see in RDDL CPFs that look like

Those are tricky ones mate

exploded'(?b) = exploded(?b) | (Bernoulli( CONST) * exists_{?bp : block} [put_on(?b, ?bp)]);

is an interesting construct, where we have a Boolean random variable - the Bernoulli(CONST) expression - with a multiplicative coupling with an existentially quantified formula - a compact way of writing a potentially large disjunction of ground atoms put_on(?b, ?bp). Let me say that the expressive power of RDDL can be quite a thing: that's a chance constraint that would melt the brains of most experts on Stochastic Programming out there!

My take would be to write the right-hand side in tarski as follows:

lor(exploded(b)==1, (Bernoulli(CONST) * sum(bp, put_on(b, bp))) > 0)

since put_on is now a function onto {0, 1}, it is legal to sum up the terms put_on(b, bp). If no put_on(b, bp) term is true (i.e. has value 1), the realization of the random variable is irrelevant: whatever value it takes, the truth of the formula will follow from the truth value of exploded(b).

Does the above make sense? Note that our current implementation of the sum term perhaps needs to be revised slightly, possibly casting to the sort of bp, the variable we are summing the terms over. The above is very close to the way you would see that constraint in a paper on Integer (Stochastic) Programming.

Thanks for your detailed response and engaging on this! tarski is a great project and I'm happy to contribute in any way I can. Even easier to do so when it's possible to work towards supporting my use cases in a robust way as a side effect of core contributions!

And thanks to you for your inputs Pete :) Looking forward to seeing how far my suggestion can go.

(As a final comment --- while interop with PySMT isn't directly related to my current use case -- that is an extremely cool feature. I will be excited to eventually see the planning infrastructure you're working on if/when it's eventually available for prime-time 😄)

These temporal planners will be out (I hope) in a few months at the latest.

miquelramirez commented 3 years ago

Hi again @mejrpete,

Note that our current implementation of the sum term perhaps needs to be revised slightly, possibly casting to the sort of bp, the variable we are summing the terms over.

I was reviewing what I wrote last night and now I realise that the above is actually nonsense!

The actual way forward now, I think, would be to coerce the expression to be of sort Integer if the arguments are some sub-sort of Integer, of sort Real if the arguments are a sub-sort of Real, and again of sort Real if we have a mixture of Integer and Real. Alternatively, we could have tarski bark and raise an exception to remind users that the library is still a work in progress :)
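
In other words, something like this promotion rule (a sketch; is_subsort stands in for whatever sort-comparison helper we end up using):

def promoted_sort(argument_sorts, is_subsort, Integer, Real):
    # Integer if every argument sort is a sub-sort of Integer; Real if any argument
    # is Real-valued, or if the arguments mix Integer and Real
    if all(is_subsort(s, Integer) for s in argument_sorts):
        return Integer
    return Real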

mejrpete commented 3 years ago

@miquelramirez Sorry for the slow reply this time! Thanks for the rigorous response. I completely agree that this makes sense from a representational standpoint. I also agree that it's perfect not to have to disturb the core FSTRIPS functionality in tarski by messing around with Predicate :)

I think migrating the tests I wrote for RDDL (although they're incomplete re: covering parsing from RDDL into tarski, which seems useful here as well) and taking a look at it that way seems like a great path forward. I've started moving the tests over (including the integration test that builds and writes instance 01 of the "academic-advising" domain from the IPC 2018 probabilistic track), and I realized that I already have a couple of questions.

  1. In the examples we've discussed previously, assuming that logical operators are resolved by using equality atoms takes care of most things (including that particularly ridiculous CPF example that we discussed above). However, in something like academic-advising, we have this beast

    passed'(?c) =
    // if ?c is taken and has no prerequisites, it's passed according to
    // a prior probability
    if ( take-course(?c) & ~( exists_{?c2 : course} [ PREREQ(?c2,?c) ] ) )
    then Bernoulli( PRIOR_PROB_PASS_NO_PREREQ(?c) )
    
    // if ?c is taken and has no prerequisites, it's passed according to
    // a prior probability and a bonus depending on passed prerequisites
    else if ( take-course(?c) )
    then Bernoulli( PRIOR_PROB_PASS(?c) +
            ( (1 - PRIOR_PROB_PASS(?c) ) * ( sum_{?c2 : course} [ PREREQ(?c2,?c) & passed(?c2) ] ) /
                                (1 + sum_{?c2 : course} [ PREREQ(?c2,?c) ] ) ) )
    
    // otherwise, the value persists
    else passed(?c);

    Most of that is just nonsense without the rest of the domain, but the part I'm interested in is:

    sum_{?c2 : course} [ PREREQ(?c2,?c) & passed(?c2) ]

    I'm fully convinced that the equality-atom approach takes care of the logical expression inside the brackets there (since it would turn into land(PREREQ(c2, c) == 1, passed(c2) == 1) as discussed). However, unlike before (when we only had to worry about the equality predicates making logical expressions usable when objects are of the {0,1}-defined "Boolean" sort), we now have to worry about a situation where we have a sum over a Formula built from those equality atoms.

What's your read on how to handle this? My intuition would say that I could just alternately write

sumterm(c2, PREREQ(c2, c) * passed(c2))

but since (unless I'm missing something) we don't have a 1-1 correspondence between logical operators and simple math operators on integers, I don't have an easy way to make this work with a version of that expression using "or" instead of "and", i.e.

sum_{?c2 : course} [PREREQ(?c2, ?c) | passed(?c2)]

I guess my point is that with the RDDL patterns we can't just always coerce the arguments of a logical expression into equality atoms; we have to keep the surrounding context in mind as well. In the & case above, that could be handled by keeping everything as Boolean types and identifying that {&} \equiv {*} for {0,1} Booleans. But given that there's no convenient mapping like that for logical "or", it seems like we're going to need to duplicate the logical operators for Terms, rather than being able to just rely on coercion to equality predicates. The main problem is that we don't just need to go from [{0,1} Booleans that can do math] -> [components that can be used in logical formulae]; we also need to be able to go in the other direction!
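
For what it's worth, the {0,1} identities in play here can be sanity-checked with a throwaway snippet (plain Python ints standing in for the {0,1} "Boolean" sort; this is only an illustration, not tarski code):

    # Quick check of the {0,1} arithmetic encodings discussed above.
    for a in (0, 1):
        for b in (0, 1):
            assert int(a and b) == a * b            # "&" maps cleanly onto "*"
            assert int(a or b) == min(1, a + b)     # "|" needs min/ite-style clamping...
            assert int(a or b) == a + b - a * b     # ...or a non-linear identity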

What's your read on this? I get that (in the standard way) the = predicates provided by Theory.EQUALITY handle things when we need to coerce {0,1} Booleans to be well-behaved atoms for logical expressions, but the RDDL pattern above seems to make the less reasonable demand of having logical formulae that can also be coerced back to {0,1} Booleans.

I hope I'm not missing something here! Realized while modifying the integration tests to this pattern that I just didn't know how this sort of thing should be formulated, and figured it would be worth clarifying.

miquelramirez commented 3 years ago

No worries @mejrpete - I am also a bit swamped at the moment :(

I couldn't help smiling :) that you were unfazed by the ~( exists_{?c2 : course} [ PREREQ(?c2,?c) ] ) - every time I have looked at these instances I have ended up with a headache.

IIRC PREREQ(?c2, ?c) is an input, a piece of data that we represent with a Boolean term/predicate, which does not appear on the left hand side of any cpfs, does it? That could complicate things.

Your first solution

sumterm(c2, PREREQ(c2, c) * passed(c2))

should work without issues. This construct, I guess, is counterfactual, as in you're trying to explore the side effects of the above - is that correct? So let's assume somebody really, really needs this in a precondition or an effect:

sum_{?c2 : course} [PREREQ(?c2, ?c) | passed(?c2)]

then I would say to write instead

sum_{?c2: course} [if [PREREQ(?c2, ?c) = 1 | passed(?c2) = 1] then 1 else 0]

that is, using our implementation of ite(formula, term-if, term-else), or alternatively

sum_{?c2: course} [min(1, PREREQ(?c2, ?c) + passed(?c2))]

Reifying the disjunction is non-trivial, e.g. you get constraints that are not linear.
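
For what it's worth, a minimal sketch of how the ite-based version above might be assembled on the tarski side (the import paths for sumterm/ite, the lang.interval call used to declare the {0,1} sort, and the bare 1/0 literals are all assumptions; treat it as pseudocode rather than working library code):

    # Hedged sketch of the ite-reified disjunction in tarski syntax.
    import tarski
    from tarski.theories import Theory
    from tarski.syntax import lor, ite, sumterm   # assumed import locations

    lang = tarski.language('academic-advising', theories=[Theory.EQUALITY, Theory.ARITHMETIC])
    course = lang.sort('course')
    # the {0,1} "Boolean" interval sort discussed earlier in the thread
    bool01 = lang.interval('Bool01', lang.Integer, 0, 1)

    PREREQ = lang.function('PREREQ', course, course, bool01)
    passed = lang.function('passed', course, bool01)
    c, c2 = lang.variable('c', course), lang.variable('c2', course)

    # sum_{?c2 : course} [ if (PREREQ(?c2,?c) = 1 | passed(?c2) = 1) then 1 else 0 ]
    disj = lor(PREREQ(c2, c) == 1, passed(c2) == 1)
    count = sumterm(c2, ite(disj, 1, 0))   # 1/0 may need explicit constant terms in practice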

mejrpete commented 3 years ago

Hey there! I've got some time to work on this again, and wanted to touch base.

After all this discussion, I'm pretty comfortable with what's needed from a usability standpoint for this pattern (for RDDL specifically). I think I've got my tests almost entirely sorted out (at least as a starting point), both for validating the approach and for driving any implementation that's actually needed.

The good news (as you've observed!) is that this pattern requires little modification of the core tarski representations. The implementation needed to make this usable for our case (if I understand correctly) is concentrated in the IO portions --- both the reader and the writer for RDDL. Our use case is focused on the "writer" portion --- making sure that the wrapped equality atoms, etc. get translated into "reasonable" RDDL patterns. Because boolean types are better behaved from a planner's standpoint (I'm thinking specifically of anything based on the PROST parser and representation code), it's important to take things that are represented internally with sumterms and, if they're identifiable as representations of existentials, etc., make sure they get written as such!
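
To give a flavour of that writer-side idea, something like the following is what I have in mind (every name here is a hypothetical placeholder rather than actual tarski IO internals; it only illustrates the "recognise a 0/1-reified sum and re-emit it as an RDDL existential" pattern):

    # Hypothetical sketch only: none of these helpers exist in tarski.io.rddl.
    def rewrite_counting_existential(formula, emit_rddl):
        """If `formula` matches  sum_{?x}[ if phi then 1 else 0 ] > 0,
        emit an RDDL existential instead of the literal sum; else return None."""
        if is_comparison(formula, '>') and is_literal_zero(rhs(formula)):   # placeholder tests
            agg = lhs(formula)
            if is_sum_aggregate(agg) and is_zero_one_reification(body(agg)):
                var = bound_variable(agg)
                phi = unwrapped_formula(body(agg))
                # PROST-style parsers digest a boolean existential better than
                # an arithmetic comparison over a reified sum.
                return f"exists_{{{var}}} [ {emit_rddl(phi)} ]"
        return None   # caller falls back to the plain sum translation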

I'm making a last pass through my tests, but I think I'm close to having a good starting point. @miquelramirez are you still on board with me creating a new branch (presumably from the current devel) with these tests added so that I could start working on any changes that need to be made (hopefully mostly concentrated on IO and the relevant changes to the Boolean sort)?

Thanks again for the detailed discussion!

miquelramirez commented 3 years ago

Hold on there buddy! I will get back in detail to you as soon as I can.

Yep, create a new branch with the target being our devel branch.

Thanks again and sorry for being so late to get back to you - your email has been sitting in my inbox for days looking askance at me...

mejrpete commented 3 years ago

No worries at all! I'm starting exploratory implementation locally to debug my tests and see what's necessary. I'll wait for your detailed response and keep doing what I'm doing in the meantime!

And absolutely no worries on the latency on replies. Everyone has a lot going on 😊

mejrpete commented 2 years ago

@miquelramirez I shared a draft PR (#120) with what I've done so far, along with some comments on what is still open/should be discussed (among other things 😊). I opened it as a draft PR on devel, although it should probably just be a PR on a new feature branch. I just didn't want to create new branches without any discussion, so I figured this was the reasonable pattern.

Looking forward to some additional discussion when you have time. Thanks for all the conversation to this point!

mejrpete commented 2 years ago

@miquelramirez I know it has been a while, but this came up on my radar again and I figured it would be worthwhile to check in! The draft PR mentioned above has the changes I implemented to get the basic interop we discussed in the most recent round of back-and-forth on RDDL support in tarski. Let me know if there's anything I can clarify or anything we should chat through!

Thanks again!

miquelramirez commented 2 years ago

Hello @mejrpete ,

I hope things are good up there in the States. Here they have been difficult. Melbourne has been locked down since pretty much the day you sent your penultimate message, and we're only reopening today. Home-schooling and difficulty accessing as much childcare as we need really put a squeeze on the time I had available - and on the mental bandwidth I could devote to anything not lying along the "critical path". Early September deadlines and other internal deadlines I have (and students, and reviews, etc.) haven't been particularly cooperative.

I have been looking through your PR on and off over the past 8 weeks or so, but I have found it very hard to sit down with it without being interrupted. I don't have any major issues with it, but I have yet to merge it locally into devel and test the various projects in which I have used tarski, to see if there are any problems. I was looking forward to doing that sometime next week, or the one after, if I don't get distracted.

Best regards from Melbourne,

Miquel.

mejrpete commented 2 years ago

@miquelramirez

I appreciate it. Things have moderated somewhat here, but definitely have not yet normalized. We have been luckier than many. Understood completely on the additional constraints because of COVID. Absolutely no time pressure on any of this - I appreciate your response.

Thanks for your comment on the PR. I will take a look and reply there (and will hopefully clarify re: the comments you highlighted). Thanks for all the time you've spent going back and forth on this. I really appreciate it!

Best wishes with reopening.