paulch42 opened 7 years ago
As someone whose primary interest in VDM is the use of the functional subset of VDM-SL to specify system behaviour, my view is that VDM should allow arbitrarily nested total function types. When I specify a function as total I am making an explicit statement to which the designer/implementer of the specification is required to adhere; that specification of totality is a valuable part of what I am communicating. As currently specified in the LRM my ability to achieve that goal is hampered.
The notion of a total function was controversial in the development of the ISO VDM-SL standard. It was felt that all functions should be defined total in relation to their pre-condition: when a pre-condition was defined, it was an obligation on the specifier to ensure that the function was always defined within it. However, the distinction between a partial and a total function was introduced because it was felt that it should be possible to describe an interpreter for an imperative programming language in a functional style, and the halting problem meant that the original understanding was no longer viable. The discussion then turned to which arrow should be the standard function arrow, and at that point in time the intention was that the ASCII syntax we use in virtually all tools today was just meant to be an interchange format enabling you to move VDM models between tools, since at that time everyone used a mathematical syntax. As far as I recall, the total function arrow had a small "t" (for total) on top of the arrow. I give you this historical overview so that you understand that the combination of partial versus total functions was given deep thought. I would say that this was much better treated in the Z standard. However, in neither of these languages were higher-order functions used extensively. Personally I don't have strong opinions about how it would be best to treat this, but I think it would be advantageous if the majority of "legacy" models can be kept without changes.
Hi Peter,
Thanks for the insight!
The point here is that Overture already allows mixing of +> / -> function types, so Paul is able to do what he wants. However, he recently noticed that the language manual says this is not permitted, and that +> is only allowed at the top level. So Overture and the LRM disagree.
I'm minded to update the LRM to match the tool (this would not break any legacy examples); however, we want to make sure this update is not a fundamental change that needs an RM with community discussion.
Is there a good reason why the LRM says +> is only allowed at top-level of a function type definition?
Cheers, Ken.
Only that this was the way it was done in the ISO VDM-SL standard.
VDMJ and Overture will allow a total or partial function type in any context, be that in a definition ("top level") or elsewhere. When a function definition is being read, it simply reads a type after the colon, and then checks that this is a total/partial function type.
As the example illustrates, specifying totality is a meaningful thing to do and adds value to a specification, so I think it should be allowed.
One interesting case is with lambda expressions. They are assumed to yield partial function types currently, and there is no way to indicate that you intend the function to be total.
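For instance (a hypothetical definition, just to illustrate the point): the lambda below is given the partial type nat -> nat, so there is no way to state that the returned function is total:

adder: nat +> (nat -> nat)
adder(a) == lambda x:nat & x + a;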
We should also consider what effect this has, if any, on the list of POs that we generate when total/partial functions are used - eg. a total function definition might require a PO to check the claim, and assigning a partial function value to a total function variable might also require a PO (or is this an error?)
We should probably also check the standard libraries. Currently, MATH functions with preconditions are partial (like fac) and those without are total (like sin). This seems reasonable, though we could make it an error to declare a total function with a precondition?
Nick,
when you say “we could make it an error to declare a total function with a precondition” do you mean the following would be considered an error
-- Insert a value into an ascending sequence preserving order.
insert[@a]: @a * seq of @a +> seq of @a
insert(x, s) ==
  cases s:
    []    -> [x],
    [y]   -> if x <= y then [x,y] else [y,x],
    [y]^t -> if x <= y then [x]^s else [y]^insert[@a](x, t)
  end
pre ascending[@a](s)
post ascending[@a](RESULT) and permutation[@a]([x]^s, RESULT)
measure len s;
If so then I disagree. As a result of RC #44, the following was added to the LRM:
When a function is defined to be total, that totality is with respect to the pre-condition. Consider an alternative definition of ‘f’
f: nat * nat +> nat
f(m,n) == m div n
pre n <> 0;
The definition states the function is total for all values of type nat*nat where the second element of the pair is non-zero. Therefore, even though the function is not defined for values such as mk_(4,0) it is still a total function. On the other hand, the definition
f: nat * nat +> nat
f(m,n) == m div n;
is incorrect because it is not defined for values such as mk_(4,0) hence not total.
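For completeness, one way to make that second definition genuinely total is to handle the zero case in the body (a sketch only; the result chosen for n = 0 is arbitrary):

f: nat * nat +> nat
f(m,n) == if n = 0 then 0 else m div n;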
Hmm, ok yes I remember that. I was just reacting to the way the functions in the MATH library were declared. Those with preconditions are partial and those without are total. So perhaps we can't conclude anything about totality/partiality from the presence of a precondition.
I agree that combining pre-conditions and totality can make sense. The insert example does demonstrate that. But it does seem counter-intuitive at first sight.
Regarding PO generation: if you assign a partial function value to a total function variable then, for consistency, we should probably generate a type compatibility PO (currently the tool does not do that), which is what you'd get if you do something like let a : nat = -1 + 0 .... We should probably also generate a PO to support the totality claim, but I'm not sure what it would look like.
OK, so I think we're saying that a precondition does not define the domain of a function, but just specifies some constraints on the arguments. The function is then total if all pre-matching arguments have defined function results, or partial otherwise. A function with no arguments is effectively total?
Could you write a PO using is_T for all of the argument values, where T is the return type? Something like "pre(args) => forall args:A & is_T(body)".
A function with no arguments is effectively total?
According to your definition I don't think we can say that a function with no arguments is effectively total. As an example, consider the following function ... f() == 1/0;
Could you write a PO using is_T for all of the argument values, where T is the return type? Something like "pre(args) => forall args:A & is_T(body)".
Yeah, something like that. I think that is_T needs to be is_(body,T), but other than that I think it looks good.
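So, for the earlier f with pre n <> 0, the proposed obligation would look roughly like this (a sketch only, using the tool's is_ notation):

forall m:nat, n:nat & pre_f(m, n) => is_(m div n, nat)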
Hmm. I would say that a function with no parameters is a constant. I was thinking of the parameters as a set of argument values, and the function as partial if its value is undefined for any of those argument values. So if the parameter set is empty, none of the argument values produce undefined results (since there are no argument values!). But I see your point.
Do we end up in trouble with POs if an undefined constant function is partial or total?
BTW, you can use is_T like that:
types
Type ::
x: nat
y: nat;
> p is_(123, Type)
false
> p is_(mk_Type(1,2), Type)
true
> p is_Type(123)
false
> p is_Type(mk_Type(1,2))
true
So if the parameter set is empty, none of the argument values produce undefined results (since there are no argument values!)
Yes, I see. That's probably more in line with the definition. If we think of it like that then the totality claim is trivially true. So should we raise an error if a function that has an empty parameter set is declared partial?
Well, I quite liked your counter-argument about a function returning an undefined value no matter how you call it, hence being partial. So I'm not sure which view is correct. That's why I was wondering whether there were any PO impacts. If they are total, then the totality PO would presumably reduce to something like "forall a in set {} & is_T(undefined-exp)", which I think is trivially true?
> p forall a in set {} & is_bool(undefined)
= true
Executed in 0.004 secs.
Well "forall a in set {} & false" is certainly true but an expression such as "is_bool(undefined)" needs to yield undefined. Otherwise the semantics about recursion will NOT work!
is_T(undefined) is currently false in the tools; that would be easy to change though.
But I'm wondering whether PJ's view of parameterless functions being partial is actually better. If a total function is called, then that is guaranteed to yield a defined value. But calling a partial function should generate a PO because it may not return a defined value (and the PO would state that it did in this context). So since calling a parameterless function could yield undefined, we need that PO, so perhaps they ought to be considered as partial after all?
I think it could be useful to allow nested total functions. At least I found it useful in RSL, where we allow it. In RSL, the use of a total function arrow (at any level) gives rise to a PO: for any value x in the argument type, it must hold that f(x) is defined and belongs to the range type. I agree with Nick that a precondition does not define the domain of a function, but just specifies some constraints on the arguments, and with Peter that a function without parameters may not always be total.
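Expressed in VDM-SL terms, for a hypothetical total function square, the RSL-style obligation would be roughly:

square: nat +> nat
square(x) == x * x;

-- obligation (informally): forall x:nat & is_(square(x), nat)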
This RC was discussed at today's Language Board net meeting. A summary of the discussion is available here. The following replacements are to be reflected in the LRM:

1. Replace "type = partial function type" with "type = function type" on page 29.
2. Replace "fmap[@elem]: (@elem -> @elem) -> seq of @elem -> seq of @elem" with "fmap[@elem]: (@elem +> @elem) +> seq of @elem +> seq of @elem" on page 49.
3. Replace "type = partial function type" with "type = function type" on page 190.

I'm looking at how/where we should have POs for this. I'm starting with this example:
functions
p: nat -> nat
p(a) == a - 1
pre a > 0;
t: nat +> nat
t(a) == p(a)
pre a > 0;
f: nat +> nat
f(a) ==
let g: nat +> nat = p in g(a);
Currently, this generates three POs:
Generated 3 proof obligations:
Proof Obligation 1: (Unproved)
p: subtype obligation in 'DEFAULT' (test.vdm) at line 2:5
(forall a:nat & (a > 0) =>
(a - 1) >= 0)
Proof Obligation 2: (Unproved)
t: function apply obligation in 'DEFAULT' (test.vdm) at line 7:13
(forall a:nat & (a > 0) =>
pre_p(a))
Proof Obligation 3: (Unproved)
f: function apply obligation in 'DEFAULT' (test.vdm) at line 12:34
(forall a:nat &
(let g:(nat +> nat) = p in
pre_(g, a)))
The use of p(a) in t does not raise a "totality" PO, just a check for pre_p(a). I'm not sure whether it should. The application of the partial p may not yield a value, even if pre_p is true(?), but perhaps the obligation is really for the whole of t rather than the use of p - eg. pre_t(a) => is_nat(t(a)) ?
In the third example, I think there would be a similar overall PO for the totality of f, but we also(?) need a PO because the assignment of p to g is a problem - or is this even illegal (p is not total, so how can this assignment "possibly" be correct)?
Are there any other usages of total functions that need POs?
.. but perhaps the obligation is really for the whole of t rather than the use of p - eg. pre_t(a) => is_nat(t(a)) ?
I was thinking that we generate a PO that covers the whole of t.
In the third example, I think there would be a similar overall PO for the totality of f...
Yes.
... but we also(?) need a PO because the assignment of p to g is a problem - or is this even illegal (p is not total, so how can this assignment "possibly" be correct)?
I agree that g:(nat +> nat) = p should not be allowed. For this example, can't we just raise a type error? I mean, I would expect the declared type to at least "include" the type of the expression that appears on the right-hand-side of the assignment. Or maybe we just keep things simple and require that the types are the same. Opinions?
Are there any other usages of total functions that need POs?
I was thinking about the following cases, which may need special treatment (the definition of p is given above):

1. dcl statements, e.g. dcl x : (nat +> nat) := p;
2. x : (nat +> nat) = p;
3. x : (nat +> nat) := p;
4. x := p
5. g : () -> nat +> nat
   g () == p;
6. g : (nat +> nat) -> nat
   g (f) == f(2);
@ldcouto What do you think?
So, I'd say assigning partial functions to total ones should not be allowed. This is decidable by the type checker so we just issue an error.
The PO should apply to the whole of the function, so the pre-condition doesn't come into play.
This flows from the definition of total function which is "defined for the entire domain" and the comments above that we agree that a pre-condition does not affect the domain.
So, the PO is generally forall x : T1 & is(T2, f(x))
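For instance, for the t defined above, that general shape instantiates roughly as (a sketch, in the tool's is_ notation):

forall a:nat & is_(t(a), nat)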
From Peter's cases above, I guess the ones that introduce a function (the first three) need a PO. I think this kind of PO only occurs for definitions.
Thanks for your input, Luis! This sounds reasonable to me. Can people agree on this?
Does this give us enough input to proceed with the implementation?
Yes, thanks Luis, and I think that's enough to make a start on the changes. I'll look at VDMJ first, as always :)
OK... first question that crops up while implementing. What POs do these produce?:
t1: nat +> nat +> nat
t1(a)(b) == a + b;
t2: nat -> nat +> nat
t2(a)(b) == a + b;
t3: nat +> nat -> nat
t3(a)(b) == a + b;
The first one is relatively easy, I think:
Proof Obligation 4: (Unproved)
t1: total function obligation in 'DEFAULT' (test.vdm) at line 10:5
(forall a:nat, b:nat &
is_(t1(a)(b), nat))
But the second one is partial in the "first" of the curried parameters, so how can you "forall" it as the first argument in order to check the second application - i.e. is_(t2(a)(b), nat) implies that t2(a) has to produce a result, but we know that it cannot for all "a:nat" since it is partial.
The third example is perhaps easier, since the "t3(a)" part must always produce a nat->nat, so that could be is_(t3(a), nat->nat), with nothing for the second parameter?
Here's the test suite so far, plus results...
functions
t: nat +> nat
t(a) == a + 1;
t1: nat +> nat +> nat
t1(a)(b) == a + b;
t2: nat -> nat +> nat
t2(a)(b) == a + b;
t3: nat +> nat -> nat
t3(a)(b) == a + b;
t4: nat +> nat -> nat
t4(a) == lambda x:nat & x + a; -- NB. lambda always partial?
t5: nat +> nat +> nat
t5(a) == t2(a);
Produces:
> pog
Generated 5 proof obligations:
Proof Obligation 1: (Unproved)
t: total function obligation in 'DEFAULT' (test.vdm) at line 2:5
(forall a:nat &
is_(t(a), nat))
Proof Obligation 2: (Unproved)
t1: total function obligation in 'DEFAULT' (test.vdm) at line 5:5
(forall a:nat, b:nat &
is_(t1(a)(b), nat))
Proof Obligation 3: (Unproved)
t3: total function obligation in 'DEFAULT' (test.vdm) at line 11:5
(forall a:nat, b:nat &
is_(t3(a), nat))
Proof Obligation 4: (Unproved)
t4: total function obligation in 'DEFAULT' (test.vdm) at line 14:5
(forall a:nat &
is_(t4(a), (nat -> nat)))
Proof Obligation 5: (Unproved)
t5: total function obligation in 'DEFAULT' (test.vdm) at line 17:5
(forall a:nat &
is_(t5(a), (nat +> nat)))
Note that there is nothing for t2 because, currently, the tool doesn't consider it to be total.
For t2, I'd expect (forall a:nat & (forall b:nat & is_(a+b, nat))), which is trivially provable.
For the definition below
t6: nat -> nat +> nat
t6(a)(b) == b + 1 - a
pre a <= 1;
I'd expect (forall a:nat & a <= 1 => (forall b:nat & is_(b+1-a, nat))).
Do we need to think about t7 below?
t7: nat -> nat +> nat
t7(a)(b) == b - a
pre a < b;
But I think for t2, your PO would mean that the partial application t2(x) would always return a function value. But t2(x) is partial: there are some values of x for which it is undefined. For those values of x for which it is defined, we can then require forall y:nat, t2(x)(y) is a nat. We really need is_defined(t2(x)) => is_(nat, t2(x)(y)).
I think your PO for t6 is depending on the precondition to define the "partiality" of the function. But I thought we had decided that a precondition does not define the domain of the function, just some constraints on the arguments?
Nick,
Yes, I agree that we need is_defined(t2(x)) => is_(nat, t2(x)(y)).
My understanding of the domain and the precondition is that pre_t(a) => defined_t(a) (the precondition should be stricter than the domain of the function), in order to ensure safety of execution.
But you're not obliged to provide a precondition for a partial function - or it defaults to true. Hence pre_t(a) can't imply is_defined(t(a))?
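For instance (q is a hypothetical function, just to illustrate): with no pre clause, pre_q(a) defaults to true for every a, yet q(0) is undefined:

q: nat -> nat
q(a) == 10 div a;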
Luis,
The function application f(x) in forall x : T1 & is(T2, f(x)) should be exempt from the restriction by pre_f, so I think the function body expression should take the place of f(x)?
Nick, I think a partial function definition without a pre-clause and a total function definition with the same body and no pre-clause are of different types (not interchangeable), but they should satisfy the same properties (yield the same PO). However, I'd like to hear more opinions from more people about this. Maybe I'm missing a point.
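For example, a sketch of the two kinds of definition being compared (names hypothetical):

pf: nat -> nat
pf(a) == a + 1;

tf: nat +> nat
tf(a) == a + 1;

-- the suggestion is that both ought to satisfy forall a:nat & is_(a + 1, nat)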
I'm sure I've seen POs where the body f(a) is assumed to mean "without applying preconditions again" since the precondition is explicitly tested outside:
∀ a ∈ P ∙ pre_f(a) ⇒ f(a) ∈ R ∧ post_f(a, f(a))
So we don't literally have to expand the f body?
Aha, okay, then I agree to leave f(a) as it is. Thank you for the clarification. 😀
I'm starting to think that we need a defined_ operator in these POs, corresponding to the 'delta' operator in the books about inference rules and LPF. That doesn't mean the language has to support it, but its meaning has to be clear to people (or future tools) trying to discharge the POs.
What do people think?
So with a few more tweaks, and the addition of defined_, the test suite is now giving:
> pog
Generated 6 proof obligations:
Proof Obligation 1: (Unproved)
t: total function obligation in 'DEFAULT' (test.vdm) at line 2:5
(forall a:nat &
is_(t(a), nat))
Proof Obligation 2: (Unproved)
t1: total function obligation in 'DEFAULT' (test.vdm) at line 5:5
(forall a:nat, b:nat &
is_(t1(a)(b), nat))
Proof Obligation 3: (Unproved)
t2: total function obligation in 'DEFAULT' (test.vdm) at line 8:5
(forall a:nat, b:nat &
defined_(t2(a)) && is_(t2(a)(b), nat))
Proof Obligation 4: (Unproved)
t3: total function obligation in 'DEFAULT' (test.vdm) at line 11:5
(forall a:nat, b:nat &
is_(t3(a), (nat -> nat)))
Proof Obligation 5: (Unproved)
t4: total function obligation in 'DEFAULT' (test.vdm) at line 14:5
(forall a:nat &
is_(t4(a), (nat -> nat)))
Proof Obligation 6: (Unproved)
t5: total function obligation in 'DEFAULT' (test.vdm) at line 17:5
(forall a:nat &
is_(t5(a), (nat +> nat)))
And...
t6: nat -> nat -> nat +> nat
t6(a)(b)(c) == a + b + c;
Proof Obligation 7: (Unproved)
t6: total function obligation in 'DEFAULT' (test.vdm) at line 20:5
(forall a:nat, b:nat, c:nat &
defined_(t6(a)) && defined_(t6(a)(b)) && is_(t6(a)(b)(c), nat))
Ha! That should be => rather than && with the defined_ clauses, so...
Proof Obligation 3: (Unproved)
t2: total function obligation in 'DEFAULT' (test.vdm) at line 8:5
(forall a:nat, b:nat &
defined_(t2(a)) => is_(t2(a)(b), nat))
Proof Obligation 7: (Unproved)
t6: total function obligation in 'DEFAULT' (test.vdm) at line 20:5
(forall a:nat, b:nat, c:nat &
defined_(t6(a)) => defined_(t6(a)(b)) => is_(t6(a)(b)(c), nat))
Agree that the precondition shouldn't appear in the PO. The precondition is irrelevant for checking totality.
A defined operator would be nice, but I'm concerned about POs that use operators the language doesn't officially define or support.
So how can we create a PO that reasons about undefined values using a language that can't? Overture has a value called "undefined", but unfortunately its semantics don't give us what we want: nothing is equal to undefined, for example (not even "undefined" itself!).
Well, how would you discharge such a PO if the language cannot reason about undefinedness?
I'm happy to support a defined() function. I don't think there's anything in VDM that prevents us from introducing it.
I'm not sure how much of the proof theory in VDM (as a method) has to be represented in VDM-SL though. For example, many of the inference rules for LPF use the "delta" operator for definedness, but there is no such operator in VDM-SL. That may be because none of the deltas in the inference rules turn into a requirement for definedness in the POs... though we seem to have discovered one here, so I'm not sure.
It might be worth running this past one of our more experienced members... I'll give them a nudge :)
(Note sent to PGL and JF)
The delta operator in the VDM proof theory is a meta-operator, only used in the proof theory (and not one that users can use inside their VDM models). Thus, I agree that it makes sense to include it in the generated proof obligations and the associated proofs of these. This delta operator comes from the LPF proof rules and it makes sense to use it here in the way suggested. The only thing to look out for is to ensure that we have monotonicity at the VDM level (i.e. f(⊥) = ⊥), so that things do not get more defined than the formal arguments of a function/operation. At the meta level the delta operator is acceptable (but it will not be computable).
OK, thanks Peter. I think this means that the changes in the master branch of VDMJ are now complete. We just have to move the changes over to Overture...
Would anyone be interested in trying to apply the VDMJ changes to Overture? It amounts to creating a new TotalFunctionObligation and generating the necessary AST stuff to represent the PO. The defined_ operator ought probably to be a new AST node type, though to start with it could perhaps be built as a function apply node?
The VDMJ changes are in one commit, with diffs here: https://github.com/nickbattle/vdmj/commit/5a782a494cbf65d5054f6cecd750c55c2f203f42#diff-306111f18dbed89a8ff5dfefc2366a29
Gentle reminder before tomorrow's LB NM: I was asking whether anyone else would be interested to help port the VDMJ changes for total-function POs over to Overture? Then I think this RM is complete.
As this has been open for some time, and since the intention is to move from Overture to VSCode as the primary development tool for the project, this issue will be suspended, pending a review and closure once VSCode has successfully replaced Overture. The "VSCode Review" tag is added to remind us.
According to the Language Reference Manual, a function can be specified as total (operator '+>'), but that specification of totality can only occur at the top level. Consider a filter function over sequences.
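A sketch of the kind of definition intended (the body here is assumed, for illustration only):

filter[@a]: (@a -> bool) * seq of @a -> seq of @a
filter(p, s) ==
  cases s:
    []      -> [],
    [x] ^ t -> if p(x) then [x] ^ filter[@a](p, t) else filter[@a](p, t)
  end
measure len s;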
The function can be specified as total with type
filter[@a]: (@a -> bool) * seq of @a +> seq of @a
Such a function is not realisable; it can't be total unless we can guarantee the test function is total. We need
filter[@a]: (@a +> bool) * seq of @a +> seq of @a
This is not valid according to the LRM because the totality operator is only allowed at the top level of a type (i.e. the operator that would appear at the root of the syntax tree). Having allowed functions to be specified as total, a restriction is then imposed that severely constrains its use with higher order functions.
filter has a function argument, but the constraint equally applies to function results. Consider a curried addition function add (a sketch is given below). If we want this to be total, the best we can do is
add: nat +> nat -> nat
The desired type
add: nat +> nat +> nat
is invalid (the function type operator is right associative). Again we are not able to specify totality.
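For concreteness, a sketch of the curried add referred to above (the body is assumed, for illustration only):

add: nat -> nat -> nat
add(m)(n) == m + n;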
Overture allows total function types to be arbitrarily nested, so it is more flexible than the LRM, but as a consequence it accepts specifications that do not adhere to the standard.
Clarification is required: should the language support arbitrarily nested total function types, or should Overture be modified to adhere to the LRM?