normativeai / frontend

"only if" doesn't intuitively express idealized deontic modes #57

Open Gauntlet173 opened 5 years ago

Gauntlet173 commented 5 years ago

Is there a syntactic reason that "permission if" is different from ( if then (permission ) )? Or does the visual interface just give access to P>? If the latter, I'd consider getting rid of all of the "if" deontic operator connectives. The cost in confusion is higher than the benefit of not having to annotate one additional connective.

There are currently 11 "if" and deontic connectives. They could be reduced to fewer than half that number with no loss of capability, and along with renaming the connectives to include blanks, the interface would be clearer.

I think this is a place where less is more.

shaolintl commented 5 years ago

@Gauntlet173 There is a reason for this choice. The implicative normative operators mean more than just an implication whose consequent is a normative operator. A P> B does not mean only A => Pm(B), but A => Pm(B) /\ Id(Pm(A) => Pm(B)).
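
Writing the two readings out in display form (just a restatement of the formula above, with Pm for permission and Id for the ideality operator; nothing new is claimed):

```latex
% Plain conditional permission:
A \rightarrow Pm(B)

% The connective P> as described above: the implication together with its
% idealized counterpart.
A \mathbin{P\!>} B \;\equiv\; \big(A \rightarrow Pm(B)\big) \wedge Id\big(Pm(A) \rightarrow Pm(B)\big)
```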

That said, we can discuss how to reduce the confusion.

Gauntlet173 commented 5 years ago

I'll need to play with the id() thing for a while to get a good idea of the syntactic meaning. It makes more sense to me in obligations than in permissions. But at the very least, the issue title should be changed, so I'll do that.

shaolintl commented 5 years ago

It just occurred to me what you might have meant by the redundancy of normative operators: that you would like to get rid of all the conditional normative ones. I said that, in the logic, there is a difference between A => Ob(B) and A O> B, but since nobody is using the former, the algorithm should treat every A => Ob(B) as A O> B.

At the same time, we have some doubts about the usability of ideals, as they currently appear, for concrete legal texts. We are considering removing them altogether and replacing them with something more useful.

Gauntlet173 commented 5 years ago

No, you understood correctly. I didn't realize that there was a different semantic meaning to them.

I have doubts about the need for deontic logic altogether, to be honest. I have never understood why I could not simply make obligation, prohibition, and permission predicates, define the interactions between them as I want them to be, and import that as a library. It may be six of one, half a dozen of the other, but I don't know what we gain by using modalities over predicates, and using predicates leaves the interface less complicated.

My two cents. But I still have not had a chance to actually play with the deontic modes or the id(), so I don't know what they are for, or how they might help in the real world.

I don't know what the right improvement would be, exactly, but my instinct is that treating A => Ob(B) as though it meant A O> B would deprive the user of the ability to choose between them. If idealized obligations are useful, I feel like they are unlikely to be universally useful, and including them by default might make the semantics of the language harder to understand. (I still don't quite get it, though, so I'm biased by that.)

shaolintl commented 5 years ago

Hi @Gauntlet173, these are very good points, and @matteo-si can probably give a better answer than I can.

In short, it depends on what semantics you want these normative predicates to have. If you require complicated axioms, then reasoning over them using a first-order theorem prover will be complicated, since the prover always has to assume these axioms. Treating these predicates as modalities means that, provided the semantics of the modalities coincide with your intended semantics for the predicates, you can take advantage of theorem provers which have these axioms built into their reasoning.

Now, we might ask which axioms we really need over the predicates obliged, permitted, and prohibited. Clearly, there are relationships between them, but these are easy to capture axiomatically.
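
For instance, one standard choice of such axioms (a sketch; other axiom sets are possible):

```latex
% Prohibition as obligation of the negation:
Fb(A) \leftrightarrow Ob(\neg A)

% Permission as the dual of obligation:
Pm(A) \leftrightarrow \neg Ob(\neg A)

% What is obliged is permitted (the D axiom):
Ob(A) \rightarrow Pm(A)
```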

The claim is that more are required. One requirement is deontic transitivity (or detachment). If I have Ob(a), a => Ob(b) and b => Ob(c), I sometimes would like to know that Ob(c) is something I am ideally supposed to do. This is where our ideal obligations come into play. You can clearly axiomatize this in first-order logic: \A x,y: (Ob(x) /\ (x => Ob(y))) => Ob(y). But this is yet another axiom which you need to add.
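
As a toy illustration in Python (not NAI code: norms are just (condition, obligation) pairs, and the detachment step is a closure rule we have to supply by hand):

```python
# Toy sketch: with Ob treated as a plain predicate, deontic detachment only
# holds because we add it ourselves as an explicit closure rule.

norms = [
    (None, "a"),   # Ob(a): unconditional obligation
    ("a", "b"),    # a => Ob(b)
    ("b", "c"),    # b => Ob(c)
]

def ideally_obliged(norms):
    """Close the set of obligations under deontic detachment:
    from Ob(x) and x => Ob(y), conclude that y is ideally obliged."""
    obliged = {body for cond, body in norms if cond is None}
    changed = True
    while changed:
        changed = False
        for cond, body in norms:
            if cond in obliged and body not in obliged:
                obliged.add(body)
                changed = True
    return obliged

print(ideally_obliged(norms))  # prints a set containing 'a', 'b' and 'c'
```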

The main reason to use modalities is that they give a nice semantic notion of what it means to be obliged to do something. The common understanding is that in the ideal cases you actually do it, but that in some cases you don't. If you always do it, then it is not normatively interesting and can easily be captured with predicate/propositional logic. If you adopt these semantics for obliged, they allow you to keep reasoning when obligations are violated. Of course, again, you can capture that using predicates and first-order axioms, but modal logic gives nice semantics here: using different parallel worlds lets you say that in all ideal parallel worlds adjacent to ours I did what I am obliged to do, but that there is at least one non-ideal world in which I didn't.
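
A toy version of that picture (illustrative only; the world names and the paid_tax proposition are made up, and this is not the NAI semantics):

```python
# Toy Kripke-style sketch: Ob(p) holds at a world w iff p holds in every
# ideal world accessible from w. A violation is an obligation that holds
# at w while p is false at w -- and we can still keep evaluating formulas.

worlds = {
    "actual": {"paid_tax": False},   # here the agent did not pay
    "ideal1": {"paid_tax": True},    # in the ideal alternatives, the tax is paid
    "ideal2": {"paid_tax": True},
}
# Accessibility relation: which worlds count as ideal alternatives to a world.
ideal = {"actual": ["ideal1", "ideal2"], "ideal1": [], "ideal2": []}

def holds(world, prop):
    return worlds[world].get(prop, False)

def obliged(world, prop):
    alternatives = ideal[world]
    return bool(alternatives) and all(holds(w, prop) for w in alternatives)

def violated(world, prop):
    return obliged(world, prop) and not holds(world, prop)

print(obliged("actual", "paid_tax"))   # True: the obligation is in force
print(violated("actual", "paid_tax"))  # True: in force, but not complied with
```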

Now, if you take first-order logic and add axioms to it so that it behaves like a modal logic, then on the one hand you reinvent the wheel, and on the other you prevent yourself from using modal theorem provers, which are more efficient for reasoning over modal problems than first-order theorem provers with the "modal" axioms added.

That said, one way to reason over modal logic problems is indeed to convert them to first- or higher-order logic. @lex-lex can tell you more about that.

To understand some of the complexities of deontic logic when applied to legal notions, you can consider looking at https://link.springer.com/chapter/10.1007/978-94-010-0387-2_4

Gauntlet173 commented 5 years ago

So it is possible to represent deontics as predicates and define their semantics as a FOL or HOL library, but you would be reinventing the wheel, because that has already been done in the modal solver, and the FOL/HOL solver is slower than the modal solver. So it takes time for no reason, and the result is less efficient.

If I'm summarizing that fairly, let me play devil's advocate.

The project is to bring the power of logical representation of rules to lawyers and others. One of the problems is defeasibility. But defeasibility is not an efficiency problem, or a logic problem. It is a user interface problem.

The problem is that the user thinks in exceptions. We need defeasibility in the interface layer, to allow them to express rules in the manner in which they think. But if the defeasible rules can in the back end be reduced to FOL, we don't actually need defeasible logic. We have no evidence whatsoever that lawyers are not using FOL solvers because they are too slow.
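
For example, one common way to encode a rule-with-exception in plain FOL in the back end (the tenant/rent predicates are invented for illustration, and the completion of the exception predicate is glossed over):

```latex
% Surface rule: tenants must pay rent, unless an exception applies.
\forall x\, \big( \mathit{tenant}(x) \wedge \neg \mathit{exception}(x) \rightarrow \mathit{mustPayRent}(x) \big)

% Each exception is just another rule establishing the exception predicate.
\forall x\, \big( \mathit{rentWaived}(x) \rightarrow \mathit{exception}(x) \big)
```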

Similarly, one of the problems is contrary-to-duty obligations. What kind of problem is this? A logical problem: the deontic logic fails to express what is intended. But that presumes a deontic logic. If there is only FOL, and the deontic concepts are predicates, the problem is solved by rewriting the first-order logic that represents the deontic concepts. It will be harder code, and slower, but it will work.

So we spend a great deal of time and effort to build a modal deontic logic that can represent CTD well, and a solver that knows its axioms. All to avoid the risk of "harder and slower."

What evidence do we have that "harder and slower" are the obstacles at play? "Harder" can be solved with libraries, and we have no evidence that "slower" is in any way causal. What evidence do we have that the absence of deontics matters in the real world?

I can tell you. The action is at the margin. And the margin of automating legal services right now does not require deontics. I don't need to model prohibitions in order to help a person file taxes, or apply for government supports.

Similar concerns are expressed about decidability. And similarly, we have no reason to believe that a lack of decidability is the reason the tools are not used. No one has run that experiment.

If we can create a user interface that makes logical formalization something lawyers will actually use, then we can swap different logics in and out and learn what people care about in the real world. We can experiment to see if efficiency is the problem. We can figure out whether having more connectives and faster solutions is more valuable than having a simpler interface and slower ones.

I know for a fact that people don't use programming languages because they are complicated to learn. When we have something that makes translating legislation into a formalization, in ANY logic, easy to learn and sufficiently useful to be attractive (minimum-viable-product style), we will have access to data that no one else has on what actually matters to people. And then we can make the logic faster, or the interface simpler, because we have evidence that that is what will help the project.

I'm confident that "explainable results" and "isomorphism to source material" will be high on the list. Defeasibility serves isomorphism. I would also be frustrated by the absence of predicates and quantifiers. But maybe I'm special in that regard.

I would just pick the simplest possible explainable, defeasible first-order logic, define what "learnable" means, define what "useful" means, pick a solver that covers the minimum feature set, figure out how to measure learnability and usefulness, do it, and spend time on only those two things until those objectives are achieved.

Keep developing the logics, but that's a different project, with different definitions of success. Just make NAI capable of swapping them out without completely re-engineering the interface.

What NAI is doing right, and what is unique about it, is that we are looking at usability: we have started doing experiments in which we teach people to use it and get feedback.

My advice is that we mercilessly focus on that until we know more about how lawyers actually learn and use these formalisms than anyone on the planet.

Then we can make better decisions about logics and efficiency than anyone else.

End rant.

shaolintl commented 5 years ago

This makes a lot of sense. One of the approaches we have considered is to enable another logical reasoning layer based on a higher-order theorem prover which @lex-lex has developed. Dealing with CTDs in this case is easy and we can probably find an intuitive way to deal with defeasibility. Once we have a higher-order encoding of the problem, we can also take advantage of existing model finders. Then, we can shift our attention to the user interface and add intuitive support for all of this.

I tend to agree that this might be the right path to take. Efficiency might still be an issue: imagine that the HOL prover times out 50% of the time; then automation is no longer achievable. @lex-lex had the idea to run all provers in parallel, which means we can give different but complementary tasks to each one. One will try to find a countermodel, another a proof of validity, yet another a defeating norm.
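
Something like the following shape, just for illustration (find_proof, find_countermodel and find_defeater are placeholders, not real NAI or prover calls):

```python
# Shape of the "complementary tasks in parallel" idea. The three task
# functions are placeholders only -- they do not correspond to real NAI calls.
from concurrent.futures import ThreadPoolExecutor, as_completed

def find_proof(problem):         # placeholder: ask a prover for a validity proof
    ...

def find_countermodel(problem):  # placeholder: ask a model finder for a countermodel
    ...

def find_defeater(problem):      # placeholder: search for a defeating norm
    ...

def analyse(problem, timeout=60):
    tasks = {"proof": find_proof,
             "countermodel": find_countermodel,
             "defeater": find_defeater}
    results = {}
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {pool.submit(fn, problem): name for name, fn in tasks.items()}
        for fut in as_completed(futures, timeout=timeout):
            results[futures[fut]] = fut.result()
    return results
```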

An interesting question is: do we need the ability to reason over deontic operators in order to see whether a norm is defeated? Intuitively, it seems to me that the answer is no, since defeat depends only on the conditions of the norms, and those contain only non-deontic operators. In that case, we can try to use, in parallel, a first-order defeasible theorem prover as you have suggested, and at the same time extend the UI to support intuitively marking which norm defeats which.

Proleg [1] is supposed to be such a theorem prover, based on Prolog.
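
To make the conditions-only point concrete (a toy sketch; the facts, norm names and the defeats relation are invented, and this is not how Proleg encodes things):

```python
# Toy sketch: whether a norm is defeated depends only on the norms'
# (non-deontic) conditions plus an explicitly marked "defeats" relation.
facts = {"is_tenant", "rent_waived"}            # invented, non-deontic facts

norms = {
    "pay_rent":    {"condition": {"is_tenant"}},
    "rent_waiver": {"condition": {"is_tenant", "rent_waived"}},
}
defeats = {("rent_waiver", "pay_rent")}         # marked by the user in the UI

def applicable(name):
    return norms[name]["condition"] <= facts

def defeated(name):
    return any(applicable(attacker)
               for attacker, target in defeats if target == name)

print(applicable("pay_rent"), defeated("pay_rent"))  # True True
```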

I will move this thread to email.

[1] https://dl.acm.org/citation.cfm?id=2177488.2177504